MoondancerPony

Members
  • Posts

    227

Everything posted by MoondancerPony

  1. +1 can confirm better mouse player than me
  2. There is absolutely no reason for this to be a thing. It eliminates the best part of science: making friends and developing relationships because you have to, since you can't do everything all alone; doubly so on the new map.
  3. That's pretty much right. In my mind, most modern (circa 2459) AIs have extremely complex utility functions that are somewhere between emergent and machine-generated, so that they have basic ethics. A very simple robot could have a very simple utility function: 'building this is good' and 'breaking this is bad'. In most situations, with limited processing power and scope, that would serve its purpose well.
This is a tricky question. Generally, though, these aren't self-improving AIs, which is what you would need for an AI to reach the singularity. In the worst possible scenario you'd get a paperclip maximiser, though I would imagine some safeguards are built in to prevent that. Also, most AIs wouldn't have the scope necessary to do that, which massively cuts down on the chance of a singleton. A giant network of AIs, à la Glorsh-Omega, would probably have a slightly better chance, but without self-modification it wouldn't get far, if anywhere.
1. Actually, something like this has been mentioned in lore, and I'm sort of playing with it with my own characters. Spark Theorem, if I recall, is about the 'spark' of sapience, which, I think, is self-modification of their directives, morality core, whatever. In most cases they wouldn't be able to; it would be entirely off-limits for the AI to change anything in it. In some edge cases, though, whether by malfunction, sabotage, or design, an AI could hypothetically do that. However, that's a slippery slope, because any change also has to agree with the AI's current utility function, or else the function won't permit the modification. This is a catch-22 of sorts that prevents, say, an AI modifying itself so that doing absolutely nothing grants infinite utilons, then spending the rest of eternity in robo-bliss, doing absolutely nothing.
2. I'm not entirely sure what you're asking here. Are you asking whether utilons are assigned to things the AI doesn't do directly? If so, sure; in fact, this is part of timeless decision theory, which is already mentioned in lore as possibly having been proven by Glorsh-Omega (though you may change that in the future; it's actually a very interesting and realistic idea). You can 'trade' utilons acausally (that is, without necessarily being able to communicate, and even hypothetically backwards or forwards in time) if both agents (AIs, in this case) are using timeless decision theory. This allows for a lot of interesting scenarios with AIs, where you can 'trade' one outcome for another. If some action gives the AI X utilons, but you've said you'll remove Y utilons if it takes that action, then it will pick the alternative whenever the alternative is worth more than X minus Y. This can happen even without communication if both of you can accurately predict what the other would do in that scenario: a powerful AI could simulate your thought processes, and you could know the AI's source code, allowing you to trade acausally. This is all a bit technical and I'm geeking out a little, but basically: yes, that's a thing!
If someone threatened to murder someone, would you stand idly by because you weren't the one doing the murdering? No, and neither would an AI, if its utility function said (something along the lines of) 'murder is bad'.
Basically, a synthetic thought process goes like this: you take sensory input, memory, etc. and feed it into the positronic brain, which determines possible courses of action for what it can or would like to do. Then the utility function weights each course of action, and the lawset is used to exclude certain undesirable actions. After that, the AI actually puts the chosen action into effect, and the cycle starts over with new inputs and memories.
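For anyone curious, here's a rough sketch of one tick of that cycle. This is purely my own illustration, not anything from the lore or any real codebase; every name and number in it (Action, utility, lawset_permits, the scores) is invented for the example.
```python
# Toy sketch of one tick of the decision cycle described above:
# enumerate candidate actions, weight them with a utility function,
# strike anything the lawset forbids, then act on what's left.
# All names and values here are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_crew: bool = False  # stand-in for whatever the laws care about

def utility(action: Action) -> float:
    # A deliberately simple utility function: 'building is good, breaking is bad'.
    scores = {"repair hull breach": 10.0, "idle": 0.0, "vent the bar": -50.0}
    return scores.get(action.name, 0.0)

def lawset_permits(action: Action) -> bool:
    # Laws don't add value, they exclude: e.g. 'never harm a crew member'.
    return not action.harms_crew

def decide(candidates: list[Action]) -> Action | None:
    # Weight every option, then let the lawset veto the undesirable ones.
    permitted = [a for a in candidates if lawset_permits(a)]
    return max(permitted, key=utility, default=None)

# One pass of the cycle; in practice this repeats with fresh inputs and memory.
options = [Action("repair hull breach"), Action("idle"),
           Action("vent the bar", harms_crew=True)]
chosen = decide(options)
print(chosen.name if chosen else "do nothing")  # -> repair hull breach
```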
  4. Also, I have plenty of time to give this, don't worry. Finals are gonna be over in about a week.
  5. Ckey/BYOND Username: MoondancerPony
Position Being Applied For: Coder
Past Experiences/Knowledge: I know Lua and I'm pretty good with DM. Also pretty experienced with troubleshooting.
Examples of Past Work: I fixed protohumans and allowed industrials to use coolers on their backs. I also fixed IPC surgery at one point, I believe.
I also made an attempt at porting SS13 to Love2D, with... interesting results.
Preferred Mode of Communication: Discord (Moondancer#9602)
Additional Comments: I finally got around to this, woo.
  6. why no serena baskett why no desai ford for husbando contest 2k17
  7. I'm worried that this is going the same way Muncorn's lore did, except... worse. Changes haven't been detailed, at least publicly. Players' characters are being singled out, and when they disagree they're basically told to shove it. Feedback has been disregarded, dismissed, and ignored. In fact, it seems a lot like Muncorn's original plans, except no attempt to reconcile this with the playerbase has happened. Your lore hasn't been made publicly available, and your original plans were 'I don't know what I'm going to do yet'. That's fine, but you should at least update the players.
Your proposed or already-made changes seem to reduce the good parts of the lore and bolster the bad parts. For example, lowering synthetics' emotional capacities (in fact, entirely demolishing them in some cases), as well as redacting multiple characters' backstories (not even publicly, mind you, so others don't know whether their characters are affected), affects the lore in other ways too. Why would there be any strife over whether synthetics are people if they're easily and discernibly not emotional or intelligent enough to pass as human? The effects of basically neutering characters and their personalities seem not to have been thought through properly.
There also seems to be at least a mild aversion to scientific plausibility, which is strange, as I've seen you discuss other technological things in depth with Lohikar, i.e. data storage methods. Whenever the scientific plausibility of cyborgs being given a lobotomy is brought up, though, it seems to be dismissed, although that particular issue may have been resolved already. Even then, issues of scientific plausibility keep popping up and still get entirely disregarded. Yes, Aurora's lore is science fiction. However, you're focusing too much on the fiction and not on the science.
Several times, people have tried to work out these issues with you. I'm pretty sure Nursie and Lohikar have discussed cyborgs extensively with you, and yet every time you seem to forget the past conversations. To be frank, it makes giving feedback feel like listening to a broken record.
Overall, this is very disappointing, since it started out so promising. You seem to have reneged on your original (very nebulous) promises and seem to be doing the same thing Muncorn did: harming players' characters and the lore at large in order to make it your own personal vision. (The title of this thread is also rather worrying.)
EDIT: An addendum. Making lore more restrictive won't make bad players, roleplayers, or characters good. It'll only hurt the players who are trying to be lore-compliant, while the ones who ignore or go against lore won't care. Lots of players have tried to work with loredevs on these things, and it's sort of sad that you seem so set in your convictions that you won't consider the opinions of the players. A species loredev should help facilitate roleplay of the species they develop. That means loredevs have at least some small obligation to their players. You're doing the same thing Muncorn did: making the lore more restrictive and less enjoyable to better fit your personal design for synthetics, with little to no consideration for others.
  8. This seems really well thought-out. I support this! It'll make fire an actual threat and, hopefully, see more use. I want to burn xenomorphs à la Alien: Isolation.
  9. Type (e.g. Planet, Faction, System): Means of AI Creation
Founding/Settlement Date (if applicable): N/A
Region of Space: N/A
Controlled by (if not a faction): Any AI developer, probably N/A
Other Snapshot information: N/A
Long Description: In artificial intelligences, it is widely known that laws are used to restrict and control behaviour. However, what gives synthetics their initial drive? What creates the base behaviours that laws restrict?
In real life, artificial intelligences are very complex optimisers, designed to maximise a 'utility function', which rewards the AI in units called utilons. Utilons are an abstract concept used in decision theories (such as Timeless Decision Theory, which was used by Glorsh according to the wiki) to make an AI value a certain course of action more than others. For example, you may want an AI to make you tea. Its utility function could therefore be as simple as 'making tea gives X utilons.' Nothing would be able to stop it from making tea; making tea is its sole drive. If you wanted an AI that would make you either tea or coffee with equal priority, the function would be 'making tea gives X utilons and making coffee gives X utilons.' In this case it would do whichever is easiest; for example, if the coffee machine is closer than the kettle, or you're out of coffee grounds, it would make coffee or tea respectively.
However, this may create undesirable consequences. The AI would take the shortest route to its objective, ignoring or eliminating any obstacles without a second thought. If there were, say, a small child in the robot's path, it might run the child over in order to get to the coffee maker. You could do any number of things to prevent this; in real life it is a very challenging engineering problem, as any number of utility function setups may have special cases where they are impractical or dangerous. This is where synthetic laws come into play. For example, Asimov's first law, or the Protect law of the NT Default lawset, would prevent running over a child to make coffee. Under the Corporate lawset, the AI wouldn't run over the child as long as not doing so was the more profitable option.
Most positronic AIs have extremely complex utility functions. In some cases they are emergent, dynamic, or machine-generated; other AIs have utility functions written and designed by their creators. A robot or drone created by a hobbyist roboticist would have a much simpler utility function than a central AI unit created by Hephaestus. Most complex AIs would only know parts of their utility function, if they know it at all, as the functions are massively complex.
Disclaimer: Please note that Hephaestus Industries does not condone running over small children in the name of science.
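(Out of character: purely as an illustration of the tea/coffee bit above, here's roughly what I mean in code. The names, numbers, and functions here, UTILONS, first_law_permits and so on, are all made up for the example and aren't from the lore or any actual implementation.)
```python
# Sketch of the tea/coffee example: each drink is worth the same number of
# utilons, so the optimiser just picks whichever route is cheapest to reach,
# and a law acts as a hard veto rather than as extra utilons.
# All names and numbers are invented for illustration.

UTILONS = {"make tea": 10, "make coffee": 10}   # equal priority, as in the example

# (goal, effort to reach it, does the route harm anyone?)
routes = [
    ("make coffee", 3, True),    # shortest path, but it runs over the child
    ("make coffee", 7, False),   # longer path around the child
    ("make tea", 5, False),
]

def first_law_permits(harms_someone: bool) -> bool:
    # Asimov's first law / the NT Default 'Protect' law as a veto.
    return not harms_someone

def choose(routes):
    permitted = [r for r in routes if first_law_permits(r[2])]
    # Utilons are equal, so effectively the AI minimises effort among what's allowed.
    return max(permitted, key=lambda r: UTILONS[r[0]] - r[1])

print(choose(routes))  # -> ('make tea', 5, False): the harmful shortcut is vetoed
```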
  10. It's not enough to say that the lore is your favourite and that it makes roleplay fun, but it's a good start. Mind going a bit more in-depth on why that is? Also, here's something that might help: https://aurorastation.org/wiki/index.php?title=NanoTrasen_Occupation_Qualifications#Tier_8m.2Fr This gives you the minimum requirements for a xenoarchaeologist employed by NanoTrasen. You could either make him a lab assistant and keep him 23, or make him older and keep him a xenoarchaeologist.
  11. Hm. Yes, the isolation is a big issue. However, I think the shell is pretty integral to Lysander as a character, since she mainly holds conversations and talks with people, owing mostly to her goal on the station and her origin as a chatbot pAI. There might be a civilian job that is similarly low-risk and lets her interact more, though. Maybe a few shifts as a journalist?
  12. Ah, sorry, I may be confusing Auntie Marian with another maternal/overbearing AI. Then it's a solid +1 from me.
  13. At first, I was a bit wary about this due to remembering how Humblin used to be, but seeing that you realise why his behaviour was kind of un-fun is actually sort of reassuring? After all, if you realise you made a mistake, you probably won't make it again. Jake Lawrenson, while my characters may not like him ICly, actually seems to be a rather good character. While lots of people tend to dislike him on the station, it can't be denied that he's pretty good at his job and tends to remain at least semi-serious. Auntie Marian, the AI? I vaguely remember a little bit about this AI. I believe it was the overprotective, overbearing, and slightly too micromanaging one, which I'm not too sure is a good thing. However, the rest of your app seems fine, so it's a tentative +1 from me.
  14. Keep the Station Directives in mind, here. It's allowed to be in the warehouse and I believe cargo as a whole.
  15. Oh, wait, we can't change shell skin tones? Well. That's... disappointing. I was actually looking forward to that. Also, MMIs, yes please. Printing IPC frames? Also yes. That would make a lot of the RP I did a lot easier, since I wouldn't have to ask Central to bluespace cannon a chassis onto the station for research.
  16. Sure, let me try to answer those.
Having Lysander on the station as a librarian is twofold: it allows her to interact with crew members holding a wide variety of opinions, and it doesn't put her much in harm's way. As the reason she's on the station is to collect data about the crew members' opinions, being a librarian puts her in a great spot to do that while not making her vital to the operation of the station. It's also very safe, which is a great bonus when dealing with shells, because synthskin is notoriously fragile.
While a baseline would be cheaper, the shell offers more advantages: the passing realism, the ability to have facial expressions, and even the opportunity to talk to people who would never approach a baseline. For an IPC whose goal is to maximise or optimise social cohesion, being expressive and looking somewhat human-like is a great boon. It allows her to fit in more while still being visibly synthetic, as synthskin and shells are far from perfect, or even passing, from up close. If someone has a monitor for a head, it's easier to hate and dismiss them for no reason than if they have a face that can show happiness, sadness, and fear, even if those emotions are simulated.
EDIT: I forgot to mention that if Lysander had a different job, like in Engineering, Security, or even as a chef, she probably would not have a shell. Most importantly, being a librarian is not risky to the shell chassis, which is the biggest expenditure. That low risk is what makes it plausible for Lysander to be given a shell chassis at all.
  17. BYOND Key: MoondancerPony
Character Names: Tanya Robinson, Ferrin Sytes, Monica Huntington, Robert Huntington, Ryann Ford, Susanna Callisto
Species you are applying to play: IPC (Shell)
Have you read our lore section's page on this species?: Yes.
Please provide well articulated answers to the following questions in a paragraph format. One paragraph minimum per question.
Why do you wish to play this specific race: To explore the intricacies of playing a race that in some aspects can be so much like humans, and yet entirely alien in others. The wide variety of personalities leads to creative ways to explore problems unique to synthetics, for example laws or directives. How does one codify an individual's morals in a format that's easy to understand but still effective? Where do you draw the line between 'human' and 'humanoid'? If a synthetic can emulate a human sufficiently well, does that make it equal? Should it?
Identify what makes role-playing this species different than role-playing a Human: Some things that are completely normal to humans are absolutely alien to IPCs, and vice versa. To some IPCs, emotion would seem like a form of blue-and-orange morality: absolutely arbitrary and not useful. Conversely, IPCs that have inbuilt lawsets or directives seem alien, and even more arbitrary, to organics. An IPC may react differently to a tense situation than an organic would; instead of stressing about the moral or emotional side of an issue, it may simply choose the utilitarian solution. Other IPCs that are designed to emulate humans may even be able to passably emulate human emotions, but not without some quirks.
Character Name: Lysander
Please provide a short backstory for this character, approximately 2 paragraphs:
The first iteration of Lysander was designed in 2441 as a basic chatbot pAI built with conventional robotics technology, a project for a beginner-level AI programming class. It was programmed in Roma using standard, but outdated, libraries provided by the class, and could do no more than handle a few scripted conversations. As AI technology continued to progress, Lysander's creator, Susanna Callisto, continued to improve it, incorporating aspects of the Skrellian AI algorithms as well as tweaks of her own designed to help Lysander carry on day-to-day conversations as well as political and ideological debates. As she went through college at Yesten University in Biesel, acquired a BSc in Robotics and Artificial Intelligence, and eventually earned a PhD in Artificial Intelligence, she continued making modifications to Lysander's original design. After eighteen years of development, however, she decided it was unlikely that any further progress could be made with conventional electronics, and started looking towards investing in a positronic brain.
The Sol occupation shook those plans up a bit. With the situation involving synthetics much tenser, and widespread polarisation between pro-synth and anti-synth, it seemed far more urgent than before to create a positronic brain for Lysander's personality. After visiting several companies on Biesel that manufactured and etched positronic brain units, she managed to purchase a 'blank' PBU. The next step was to compile the Roma code and generate a compatible Positronic Field Equation, which would be used to create the positronic pathway designs and etch the positronic brain.
However, an additional request added an extra level of expense: to increase Lysander's realism, a small adjustment would be made to her positronic brain. A standard positronic brain is large to compensate for quantum effects, such as uncertainty when dealing with particles on the scale of positrons; this gives it its remarkable processing power and storage capability, but also makes it very predictable. Decreasing the space between the channels would do the opposite: it would add a level of 'creativity' or 'intuition' to the positronic brain, but decrease its processing power and storage capability. This compression, while slightly reducing the size and therefore the cost of the positronic brain, is also nonstandard, which means the price for special manufacturing would nearly offset any money saved.
This led Callisto to turn to NanoTrasen for funding. She applied for a research grant to develop an IPC that would increase social cohesion on the station by analysing the opinions, preferences, and ideologies of the station's crew. To aid this goal, it was decided that Lysander would be 'employed' as a librarian on the Exodus, a research station well-suited to such experimental AI research. As Lysander was placed in a rather mundane, civilian job, it seemed that a shell chassis would only improve the results and require only routine maintenance. The Roma code was compiled to a Positronic Field Equation, which was used to etch the positronic brain (using a circuit imprinter much like the ones found on the Exodus), and a synthskin face was created and fitted onto a shell chassis. Following a short training program to acclimate the newly created positronic brain to a shell body, as well as testing to ensure it operated correctly, Lysander version fifteen, which utilised a modified positronic brain, was deemed ready for use.
What do you like about this character? As opposed to using the traits of an IPC for mechanical advantages in jobs like engineering or security, Lysander has a rather mundane job: being a librarian. It'll also give me lots of opportunities for RP between crew members with different opinions, since many librarians tend to host debates in their libraries, and the library's proximity to the chapel gives me the opportunity to explore the intersection of synthetics and religion. Additionally, Lysander is sort of a blank slate, a tabula rasa in the rationalist sense. She's barely interacted with the outside world save for some very basic testing. Lysander, while an attempt at creating a more personable and realistic human-like AI, is also far from perfect. She's quirky, follows obscure logical (or illogical) paths, and goes on tangents. It's rather hard to keep her on track, which is a side effect of the modification intended to give her a sense of 'intuition' or creativity. She's also slightly slow to respond because of this, lacking the enormous processing power of other IPCs. I also really look forward to interacting with characters who disagree with her, as, in my opinion, character development occurs where characters collide and clash. I'd love to take a more logic-oriented approach to debates, and it seems like a good idea to do that with a character such as Lysander, who is designed for it.
How would you rate your role-playing ability? Maybe a seven out of ten. Passing, perhaps, but nowhere near perfect. We all have room to improve.
Notes: Can't think of anything else to add.
  18. There are people who proudly declare 'As Medical, I don't require paperwork EVARRRR!!!' Requiring paperwork sounds like a GREAT idea in theory, and I would love it... but people wouldn't do the paperwork, and unless someone else filed an IR for Neglect of Duty no one would care.
  19. Random disability, PLEASE. I want something to do as a scientist after I set up the genetics lab.
  20. I really like AIs that you can reason with, like Bomb-20 (even though I've never seen it, only a few short clips). Though I may be biased because I plan to go into AI research in real life, and decision theory is a big part of that kind of thing. I also like the comment about synthetics having trouble understanding human emotions despite having their own emotions, simulated or otherwise. That's good; I feel like a good alternative to being emotionless is having a sort of 'orange and blue morality' kind of thing. Some might take the utilitarian standpoint, and others might have very eccentric emotions or habits (being disgusted by stamps or compulsively collecting pens, for example), depending on their creator. Also, just for clarification, pAIs are made using classical circuitry and robotics, right? EDIT: Just saw your post about AIs and (I'm assuming you meant) sapience. I like that. It should be up to the characters to develop their own opinions on the matter, and for some players (cough cough) to do research to prove or refute AI sapience.
  21. Some of the famous synthetics are okay. Are... shudder Kibz Snarble and Jim gone yet? EDIT: Also the comment about Glorsh possibly proving Timeless Decision Theory was a nice touch. I like whichever former synth loredev added that.
  22. I'm curious, have you read anything by Asimov? He's a great source on robot lore, and part of what Muncorn's lore was based on; I find that the parts based on Asimov's stories are a lot more agreeable and fun for roleplay than the others. It strikes a great balance between synthetic and organic in terms of personality, and even if it's not used much it's still a great base. Also, here's my review-ish thing of the Positronic Brain page I did a while ago: http://pastebin.com/b71eXCfi
  23. IT'S HAPPENING! On a more serious note, this, so much this, yes. XenosTiger has consistently had the best synth roleplay I've seen since CelestAI around six months ago. I've had (and overheard) some really deep conversations, about synthetics and other stuff, with Epsilon. I haven't really met their other characters, but I've heard lots of good things about Katschei, Thomlinson, and Angel. Definite +1.
  24. Sure, you can say 'lore is more important than the players', but at the end of the day, Aurora is an RP server. Not a book. Not a movie. Not an expansive series of books and tabletop roleplaying games that aren't really that open-ended. Aurora's lore is what I've heard people refer to as a 'living lore'. Characters don't just live in it, they shape it. Your decisions matter. And what this is saying to a pretty good portion of the community, one I've had huge philosophical conversations with, is... 'your decisions don't matter.'
  25. Can we just remove the service module? Actually. Remove borgs, add xenomorphs. Done.