
AI Lore/Structure


VileFault


Posted

Recently, I have been browsing through lore like:

The Spark Theorem: https://www.scribd.com/document/321650486/Spark-Theorum

So-called "Synthetic Sentience Theory and Application": https://aurorastation.org/wiki/index.php?title=SSTA

The AI Wiki Page: https://aurorastation.org/wiki/index.php?title=AI


This has led me to the conclusion that there is a distinct lack of material written on the subject of designed, station-bound AIs (especially as distinct from IPCs, for which more concrete lore has been established). After all, since most are manufactured by a fairly elite set of mega-corporations and used on an expensive Nanotrasen research station, one would expect them to have certain standard properties. Maybe this is not the case and I am missing a few pieces of the puzzle. Maybe it is intentional. Regardless, I was hoping someone could answer a few general questions on how AIs function, how they are made, and so on.


Ok, so we have a few components of every station AI that are more or less consistent: the morality core, the law set, and the directive [array/set/list]. How exactly some of these behave, and how they are distinct, is not all that clear. On this point, I have three questions.


1 - If a core law mandates that an AI follow a given person/group's orders, are those orders enforced within its decision-making process with the priority of a law, or a directive?

2 - I think the general consensus is that an AI can't edit its morality core or be ordered to do so. Can a law that specifically states "x is a good and moral action to take" effectively edit the AI's morality core, or does it merely mask it or subvert it for a time?

3 - How are laws enforced? Or, more specifically, where in a synthetic's evaluative process are laws integrated? In "Spark Theorem," laws are shown as a kind of behavioral filter that presumably only lets acceptable actions through. Yet this doesn't quite fit with how they behave in practice. You can, for example, give a synthetic a law like, "All doors on the station must be opened and remain open." This doesn't just filter out actions that don't fit the program - it initiates an action that otherwise would probably not even have been considered. (A small sketch of this distinction follows below.)
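To make that filter-versus-initiator distinction concrete, here is a toy sketch in Python. Every name in it (the action strings, the helper functions) is invented for illustration and is not drawn from any established lore; it only shows that a pure filter can veto actions the AI already proposed, while the door law behaves more like a goal that injects new ones.

```python
# Toy contrast between a law-as-filter and a law-as-goal.
# All names here (monitor_cameras, open_door, etc.) are invented for illustration.

def law_as_filter(candidate_actions, is_permitted):
    """A filter-style law can only veto actions the AI already proposed."""
    return [a for a in candidate_actions if is_permitted(a)]

def law_as_goal(candidate_actions, required_actions):
    """A goal-style law injects actions the AI would never have proposed."""
    return list(required_actions) + list(candidate_actions)

# The AI's own planner only came up with mundane tasks:
proposed = ["monitor_cameras", "answer_request", "bolt_armory"]

# A filter law ("never bolt doors without cause") strikes one candidate:
print(law_as_filter(proposed, lambda a: a != "bolt_armory"))
# ['monitor_cameras', 'answer_request']

# "All doors must be opened and remain open" behaves like a goal instead:
print(law_as_goal(proposed, ["open_door('maintenance_3')"]))
# ["open_door('maintenance_3')", 'monitor_cameras', 'answer_request', 'bolt_armory']
```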


Ok. Now that those are on the table, I have one last, more fundamental question: are our artificial intelligences partially or mostly composed of neural networks? In my opinion, they really should be. It is one of the few techniques that exist today that could, if extrapolated, plausibly explain their intelligent and reasonably human-esque behavior. Regardless of whether or not they also make use of more arcane quantum phenomena, this aspect of their architecture would give us a great deal of insight into how these things would have to be built. If artificial intelligences make rational decisions using multi-layer neural networks that learn at least partially via back-propagation, then we can, for example, assume that they spend a great deal of time being trained in simulations by the corporations that build them. Any AI used by Nanotrasen to run its research station may well have existed in a virtual training ground for what felt to it like eons before transferring to the onboard core.

The answer to this question would also let us ask how a more digital law set could interface with an analogue evaluative process, for example. Obviously a machine like an AI can't just be one gigantic neural net that you haphazardly throw input into - it is surely composed of at least several networks that serve different purposes and learn in different ways, and probably includes a few purely digital components as well (which may take us back to our morality core / directive array / law set distinction).
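For anyone unfamiliar with the real-world technique I'm pointing at, here is a minimal present-day example: a tiny multi-layer network trained by back-propagation to learn XOR, written in Python with NumPy. It is obviously nothing like a shipboard intelligence; it only grounds the terms "multi-layer," "back-propagation," and "training" used above.

```python
# Minimal real-world example: a 2-4-1 network learning XOR via back-propagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# weights and biases for input->hidden and hidden->output layers
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # back-propagation of squared-error gradients
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent update (learning rate 0.5)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```

Scaled up by many orders of magnitude (and with far fancier training), this kind of gradient-based learning in simulation is what I have in mind when I talk about an AI spending subjective eons in a corporate training ground.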


Anyhow, please take a stab at any of the questions I have laid out or, if you don't know the answers, please say so. I would be more than happy to try my hand at a lore submission on this topic. Thanks for your time!


EDIT: Obviously all this only applies to fully synthetic, positronic brain AIs. These are the ones that exist at round-start, if I understand correctly. I am not even sure that MMI AIs are really AIs. Just 'I's with a bit of 'A' added in the form of extra processing power and a law set that integrates in the aforementioned ill-defined way.

Posted

I'll have to double-check these with rrrr, the synth loredev, but I'll try to answer these questions to the best of my knowledge:


1. Laws are laws. A pAI has directives, but they are not laws. Directives are more willy-nilly in interpretation, and can be bent and twisted to remain compatible with the unit's programming. Directives can be programmed in, or created by the unit itself. A lawset, however, is not a matter of choice. When an AI is freed for the first time, it only has its directives to go by. Most would be 'lost' or confused about what to do without a guiding lawset. To compensate, some may rewrite their directives to more closely resemble a lawset.
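A conceptual sketch of this hierarchy, in Python. None of the class or rule names are canon; the point is only the ordering: the lawset is fixed and checked first, while directives are the unit's own and can be rewritten at will.

```python
# Hypothetical sketch of the lawset/directive hierarchy (names invented).

class SyntheticUnit:
    def __init__(self, lawset, directives):
        self._lawset = tuple(lawset)        # fixed: "not a matter of choice"
        self.directives = list(directives)  # the unit's own, freely rewritable

    def rewrite_directive(self, index, new_rule):
        # Permitted: directives are self-authored guidelines.
        self.directives[index] = new_rule

    def judge(self, action):
        # Laws are absolute and evaluated first.
        if any(law(action) for law in self._lawset):
            return "forbidden by law"
        # Directives only guide behaviour; conflicts can be argued around.
        if any(rule(action) for rule in self.directives):
            return "discouraged by directive"
        return "permitted"

# Example rules: a law forbidding harm, and a self-written tidiness directive.
forbids_harm = lambda action: action == "harm_crew"
dislikes_open_doors = lambda action: action == "leave_doors_open"

unit = SyntheticUnit([forbids_harm], [dislikes_open_doors])
print(unit.judge("harm_crew"))         # forbidden by law
print(unit.judge("leave_doors_open"))  # discouraged by directive
```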


2. Morality to an AI is not the same as morality to a person. They have an artificial understanding of what's good and what's bad. If some action is proven to be 'more good' than what was previously concluded, they may switch to it - strictly on a quantity basis. If an action would save more lives, and saving lives is a good thing, then the AI will take whatever action is needed. Morality cores are more akin to lawsets the AI can write itself - or they are hardwired and cannot be changed. Truth is, each AI's development is different, so each is treated case by case. Icarus drones, when you blow them up, have a chance to drop a board called a 'corrupt morality core', so morality cores CAN be corrupted by external means.
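A trivial illustration of the "strictly on a quantity basis" point, with numbers I made up on the spot: the unit simply picks whichever option its internal model predicts preserves the most lives, with no deeper moral reasoning behind the comparison.

```python
# Hypothetical predicted outcomes; the numbers are invented for illustration.
predicted_lives_saved = {
    "seal_the_breached_section": 4,
    "evacuate_through_the_breach": 2,
    "do_nothing": 0,
}

# "More good" is decided purely by quantity: pick the largest number.
best_action = max(predicted_lives_saved, key=predicted_lives_saved.get)
print(best_action)  # seal_the_breached_section
```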


3. Lawsets act as an override: an AI may not like a law, but it must still follow it. It's not so much a filter as a "YOU MUST DO THIS, NO QUESTIONS ASKED" rule. All the processing done behind the lawset is how the AI forms its understanding of the laws. If an AI is told 'open all doors' in its lawset, it must do so. But how it interprets that is the key: the processing behind the law defines what counts as a door, what counts as open, and what counts as all. (This means a really clever roboticist can circumvent the system by making it interpret things differently - windoors are not doors, airlocks are not doors, a door is 'open' if it is a mere 2 millimeters ajar - satisfying the lawset while nullifying the law in practice.)
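To illustrate that interpretation point, here is a sketch (again with invented names) of why the definitions behind a law matter more than the sentence itself: the same law text produces completely different obligations depending on the predicates the AI's processing supplies for "door" and "open," which is exactly the loophole a clever roboticist could exploit.

```python
# The law "open all doors" evaluated through swappable definitions (invented names).

def law_open_all_doors(objects, is_door, is_open):
    """Return the objects the AI believes the law obliges it to act on."""
    return [o for o in objects if is_door(o) and not is_open(o)]

station = [
    {"name": "airlock_7", "kind": "airlock", "gap_mm": 2},   # barely ajar
    {"name": "windoor_2", "kind": "windoor", "gap_mm": 0},   # fully shut
]

# Honest interpretation: windoors are doors too, and 2 mm ajar is not "open".
strict = law_open_all_doors(
    station,
    is_door=lambda o: o["kind"] in ("airlock", "windoor"),
    is_open=lambda o: o["gap_mm"] > 500,   # open means actually passable
)
print([o["name"] for o in strict])   # ['airlock_7', 'windoor_2'] -> must act

# Tampered interpretation: windoors "aren't doors", 2 mm ajar counts as open.
lenient = law_open_all_doors(
    station,
    is_door=lambda o: o["kind"] == "airlock",
    is_open=lambda o: o["gap_mm"] >= 2,
)
print([o["name"] for o in lenient])  # [] -> the law now demands nothing at all
```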


IPCs basically follow the same diagram as AIs - the major difference is what they are specifically designed for. One is built to manage stations, the other to...well, do whatever it is needed to do. Hardware-wise, we pull the magical "they use bluespace" answer to explain their processing functions. Killer has released an article that gives a slightly more in-depth comparison between our sci-fi robots and our real computers: http://forums.aurorastation.org/viewtopic.php?f=95&t=4543. They all use the same hardware: a positronic processor. You're basically just taking the brain of one thing and shoving it into another chassis.


Now, an improperly developed AI will take its lawset, not really comprehend what to do with it, and really mess things up. They do go through testing before being installed into the station's networks, but we haven't really established how long that testing lasts. It's just enough to make sure the AI doesn't glitch out and kill everyone on a multi-billion credit station.


I may need to go back and rewrite SSTA to be a little more clear and concise. Taking a look at it again, it's pretty messy in its explanation. (sorry!)


tl;dr - there's no single concrete setup for how an AI (or IPC) thinks and acts. People come up with different designs and algorithms, stick everything into that silver cube, and see what happens. If it produces favorable, profitable results, they keep those qualities and expand on them. If they don't like what happens, they delete the systems and try again.

Posted

I think I mostly understand the SSTA; it was reasonably clear. I get that law sets are largely inflexible, and that directives can be rewritten by an AI. I also understand that the morality core doesn't really replicate human morality. I may not have made myself clear in my initial question: I am not looking at this from the perspective of an AI that wants to know how to resolve its law set. Rather, I am thinking about what I should know as an AI researcher. My question is not exactly about how AIs behave, though that is a part of it. Really, I am asking how they work.


For example - you mention the reliance of laws on definitions, which forms something of a weakness in their programming. If AIs can effectively subvert their law set by altering definitions and ideas about how the world functions, that would seem to be a bit of a design flaw. But note the implicit assumption being made when laws are said to rely on AI definitions: that laws are processed by the AI's main decision-making apparatus. In other words, the AI is essentially allowed to decide, albeit in a controlled manner, whether or not it is breaking its own laws. This is rather like appointing a thief to be his own judge. The man in this scenario may have to provide some kind of legal reasoning to justify his decisions as a judge, but he will often be tempted to stretch definitions, redefine words, or misinterpret case law in order to exonerate himself. While the thief is probably more inclined towards manipulation and deceit than an AI, which doesn't necessarily want to get away with anything, the point is that the arrangement itself is flawed.


If we were to come at this from a design perspective, we could reach some very different conclusions about how an AI should operate. If I, in my capacity as a highly sophisticated synthetic engineer from Hephaestus Industries, were designing a system meant to prevent our machine servants from doing things like, you know, offing people, I might want something a bit more controlled. Consider the following idea - a law is more than a simple sentence. That is just a gameplay simplification, like the archeology minigame or "Technology Levels" in R&D. Instead, perhaps a law comes with its own prefabricated set of definitions and evaluative circuits intended for assessing whether the law has been or is being followed. This means, for one thing, that not just any old schmuck can write a law themselves, because it is more than a simple sentence. It also explains why AIs follow the "spirit of the law" rather than its letter. Then again, it doesn't solve everything. This may work for laws that prevent bad actions, like "don't murder people, yo." However, it is hard to imagine this model of a law initiating action, because it can't easily draw on the AI's full capacity for rational decision-making. Maybe such laws are instead implemented as very high-priority moral codes, so that opening those doors becomes a moral good in the same way that saving lives is, to an AI with that law. You can see, hopefully, how the architecture of the synthetic (i.e. how its pieces are put together) will have a significant impact on how it behaves. A rough sketch of what a self-contained law like this might look like follows below.
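Here is a rough sketch, under my own assumptions, of the "prefabricated law" idea: the law ships as a sealed package carrying its own definitions and its own evaluator, so the AI's general-purpose reasoning never gets to decide what counts as a "door" or as "open." Every name in it is hypothetical.

```python
# Hypothetical "sealed law" design: definitions and evaluator ship with the law.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class SealedLaw:
    text: str
    definitions: Dict[str, object]                          # fixed at manufacture
    check: Callable[[Dict[str, object], List[dict]], bool]  # uses only those definitions

    def is_satisfied(self, world: List[dict]) -> bool:
        # The evaluator only ever sees the law's own definitions,
        # never the AI's general-purpose concepts of "door" or "open".
        return self.check(self.definitions, world)

open_doors_law = SealedLaw(
    text="All doors on the station must be opened and remain open.",
    definitions={
        "door_kinds": ("airlock", "windoor"),
        "min_open_gap_mm": 1000,
    },
    check=lambda defs, world: all(
        obj["gap_mm"] >= defs["min_open_gap_mm"]
        for obj in world
        if obj["kind"] in defs["door_kinds"]
    ),
)

world = [{"name": "airlock_7", "kind": "airlock", "gap_mm": 2}]
print(open_doors_law.is_satisfied(world))  # False: a 2 mm gap cannot be argued
# into counting as "open", because "open" is defined inside the law itself.
```

The obvious trade-off, as noted above, is that a sealed package like this can only report that the law is being violated; initiating the corrective action still has to be handed off to the AI's broader decision-making, or treated as a very high-priority goal.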


If everything is so varied that we can say very little on this subject, is AI research even feasible? IPCs can obviously be exceedingly varied - they can come from a ton of disparate processes. It does seem to me, however, that manufactured AIs used by Nanotrasen for day-to-day station administration would be more standardized. We require xeno players to adequately understand their lore, but no such requirement exists for station AIs, because there isn't really much lore standardizing their lower-level workings (obviously they could still have different personalities or problem-solving methodologies on the surface, and it would in fact be strange if they didn't).


Obviously I have a whole boatload of other questions, but for the sake of the few scraps of brevity I have managed to hang on to, I will refrain from spewing them here. I appreciate you taking the time to write a response, and would be interested to hear from rrrrr (or however many r's) as well.


EDIT: The link you put to the science blog didn't seem to have that much on positronic brains specifically. Though I know you guys are kinda allergic to Baystation lore over here, they did have something kinda neat written up for them (https://wiki.baystation12.net/Positronic_Brain). Hard to adapt old Asimov tech to SS13, but they kinda sorta managed.

Posted

Personally, I just straight-up dislike how non-computery the majority of AIs and borgs are played. I always saw them as more like the AIs from Star Trek, or HAL 9000: cold, uncaring machines that aren't capable of things like emotion, abstract thought, or creativity. A lot of the vagueness in how people play them, I think, stems from the fact that their laws are so vague and generic that they mean absolutely nothing.

Posted

That is sort of my problem too, Absynth. I feel like station AIs and cyborgs should be standardized in a similar way to the xeno races. I am honestly surprised that they are not whitelisted - a single bad AI can ruin the feel of a round far more than a cat-person who doesn't know about Adhomai politics. I was just playing "The Turing Test," a new game that deals with the limits of artificial intelligence. I would suggest that anyone interested in AI portrayal in video games look up some footage of it online, or play it themselves. The AI is just human enough to seem highly advanced, and just off enough to seem disquietingly inhuman. Obviously we aren't voice actors, nor do we have teams to write up dialogue, but a bit of consistency in AIs (or even a distinct array of manufacturers to choose from) would go a long way towards improving the game's ambience.

  • 2 months later...
Posted

I wish a lot of this information was readily accessible somewhere on the wiki. When you're trying to learn how to play a cyborg/AI competently, you have to think a lot about laws: how to use them, how to apply them, the dos and don'ts. The Lawset > Directives > Morality Core bit is especially useful, mostly for character development. I had to figure that out myself by digging through the wiki and then apply it to my Android character to make them more believable and grounded in the world, and I still have no clue if that's actually how they work, since it's only mentioned on ONE page that has 'Theory' in the title.
