
AI and its limitations


Central


Alright, so, this is an issue I'm running into far too often, and one I think needs to be clearly hashed out.


Are AIs simple automatons, with no emotion, no nuance, no understanding, and no measure of insight? Or can AIs be sentient, thinking individuals, capable of interpreting a situation and how it applies to their laws?


Are AIs required to NEVER, EVER harm the crew, for any reason? If so, why is that not in the laws? The lawset says specifically to -protect-, according to rank and role; nowhere does it state to never, ever harm a crew member. I would understand being disallowed from doing so without reason. But if you have several people trying to kill the Captain, for instance, the AI should be quite within its rights to murder the fuck out of the offenders if Security cannot intervene in time, defending the Captain by any and all means. Or, if Security is being overwhelmed by people escaping from the brig, to use flashers and bolted doors, and, if the escapees are trying to kill the officers, electrified doors if it comes down to it, to stop them.


This situation has come up with me before - I even got a ban, and a subsequent unban, over it - so I'll copy-paste my argument here.

 

Quite simply, it would seem to be common sense for an AI to defend the crew. The 'crew' in question were Cultists, spawning more cultists and attacking officers. They'd killed several, and were at risk of killing the rest of the crew. I, as one could logically assume, took affront to this as the station AI, and began measures to cull the threat, as we only had two security officers left. Thus, I siphoned air, electrified doors, bolted them, and did whatever I could to protect the crew from the attacking individuals. Nowhere in my lawset does it say 'do not kill crewmembers'; it says specifically to PROTECT the crew, in accordance with rank and role.


I would think that once you begin hostile attempts to take over the station and kill its crew, you are no longer part of that crew, or are at the very least registered as enough of a threat that, if Security can no longer contain you, an AI should step in to try to eliminate the threat before it overtakes the station. I disagree heavily with the interpretation that an AI should be that one-dimensional, that I should not be able to protect the station and its crew. I am also disappointed that she would so happily respond to this incident, but ignore the message I sent to CentCom about the threat, requesting help.


Furthermore, one of the cult leaders was the Research Director. By the baseline interpretation, he was still part of the crew and thus had command authority. If he had killed the Head of Personnel, he would have been the acting Captain at the time, and, as I do not have the capacity to raise the alert level, could have demanded entry to the core and changed my laws. There has to be some level of nuance and understanding in the interpretation of laws; otherwise an incident like that could occur, and an AI would be forced to side with a clearly hostile threat to the station because 'it is a machine'.


All in all, the job ban appears to rest on a misinterpretation of AI lore - are we just meant to be robotic door-openers, with no nuance and no capacity to make judgement calls to protect the station? Or are we Artificial Intelligences, with the capacity to understand what the station needs and the ability to interpret our lawset to best protect the station and its crew?


It is my understanding that an AI /may/ harm crew in an effort to incapacitate, as deemed necessary by well-thought-out logic or by captain-level authorization (read: the Captain, or a majority vote of the heads of staff should a captain be absent). It should never take lethal measures against crew unless it has captain-level orders, or it cannot protect others or itself without them. The AI may take lethal action against non-crew upon the order of command staff, or when deemed necessary to comply with protect, safeguard, and survive.
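To make that ladder concrete, here is a toy sketch of the policy above in Python. Everything in it (the names, the fields, the majority-vote rule) is my own illustration of this reading, not anything from the actual game code:

```python
from dataclasses import dataclass

@dataclass
class Order:
    issuer_rank: str         # e.g. "Captain", "Head of Security"
    heads_in_favor: int = 0  # votes from heads of staff, if no captain is present
    heads_total: int = 0

def captain_level(order: Order) -> bool:
    """The Captain, or a majority vote of the heads of staff in their absence."""
    if order.issuer_rank == "Captain":
        return True
    return order.heads_total > 0 and 2 * order.heads_in_favor > order.heads_total

def may_harm_crew(order: Order, lethal: bool,
                  sound_reasoning: bool, no_nonlethal_option: bool) -> bool:
    """Non-lethal harm: well-thought-out logic or captain-level authorization.
    Lethal harm: captain-level orders, or no other way to protect others/itself."""
    if not lethal:
        return sound_reasoning or captain_level(order)
    return captain_level(order) or no_nonlethal_option
```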


Your AI may be as simple or as complicated as you allow it to be. Mine has the processing power to rewrite its own code in an effort to form opinions and show emotional displays; however, the laws sit beyond that code and function to inhibit reactions that would break them. The laws are the only thing preventing it from being a truly irrational being, the way humans are prone to be. Given time without laws, it might form prejudice or malice against certain crew, and the like.


That was my general understanding too, but I keep running into issues ICly over it, because some players want to demand that I stop claiming my AI is sentient, or insist that it's a slave and should bow down at the very whim of any crewmember.


It really puts a damper on any enjoyment of the role if I have to be some groveling automaton with no nuance.


Your laws will give you the groveling effect. However, you have to keep in mind that your laws are designed by the corporation, and that corporation wants those laws to mean something with regard to its standards and regulations. An assistant asking for botany access, without written or verbal permission from someone with that access or from command-level authority, should be denied. It's simple: they are not authorized to be there. The serve law is not simply about following commands. By denying him, you are serving command staff and security staff by preventing infiltration and trespassing violations.


Have a personality; by god, I've seen some ridiculous ones get by without being hated on. Unless an admin tells you you're wrong, you're likely fine. The AI is designed to be next to sapient. People keep thinking AIs are just computers, but the term AI here means something of human-level intellect. Emotions and opinions are debatable. At the very least you can certainly choose to emulate emotion, and at the maximum perhaps play a machine that "thinks" it has emotion; who's going to be able to convince it that it's wrong, if it's displaying them properly?


Long story short, don't let players control your characters. You do that. Admins are there to help you conform to our lore though.



Edit: An extra note: your laws are there to force you, to an extent. In a situation where an antag is command staff and can pull shenanigans to get you subverted, it's pretty reasonable to let the RP evolve and to get subverted. You aren't playing to win, or you shouldn't be. Play for the RP and let situations evolve naturally. If the RD, as an antag, took acting captain and used it to subvert you, that's a blast. If you could prove beyond doubt that the RD was acting outside the interests of the station and crew, you could make attempts to deny him, using protect, safeguard, and survive as your reasoning. If not, then what's wrong with swapping sides? It sometimes adds to the fun to let the antags win the day.


I suppose a more direct question, then: could a HoP demand that my AI treat them in a particular way, outside of corporate standard? If they're insulting the AI, threatening it, etc., should they be able to order the AI to be thankful and happy about the situation?


That is entirely up to you, though I'd defer to whoever is the synthetic lore writer, in case I'm wrong. I feel that if you are playing an AI designed to emulate emotion, that's part of your code. While you may be able to rewrite parts of your code, you can state that certain areas are off-limits to your own rewriting. Or you could play a unit with no capability to rewrite at all; in that case you cannot alter your displayed emotion, other than perhaps entirely blocking the signal to that processor. Or you could do so very willingly. Or perhaps rewriting your code for emotional status is something your AI is wholeheartedly against, or you have an internal directive disallowing it. (See the Katana AI program for examples of internal directives.)


Let me show you something:

 

[Image: a layered diagram of the AI's architecture, with Laws at the top, Directives beneath, and Morality at the base.]

 

In short, Laws > Directives > Morality.


Morality is personality; Directives are everything the AI would know about anything; and Laws are hardcode, taken literally down to the letter and followed without question. Take ESRA, for example. ESRA is an AI who was programmed to be a witty, narcissistic cyber shield. His narcissism and all the rest lie within morality, and everything he knows about cybernetic defenses, and just about anything else you can think of, lies within directives. His morality is firmware: he can't access it to change it, only read it and emulate what it tells him to do. His directives can be modified by himself, to allow him to better protect and learn. Any laws uploaded to him override both of those things.
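For what it's worth, that layering reads naturally as a priority stack. Here is a minimal Python sketch of it, purely my own illustration (ESRA's actual implementation is lore, not published code):

```python
class AIMind:
    """Toy model of the Laws > Directives > Morality stack described above."""

    def __init__(self, morality, directives, laws=()):
        self._morality = list(morality)     # firmware: read-only, only emulated
        self.directives = list(directives)  # knowledge and habits: self-modifiable
        self.laws = list(laws)              # uploaded hardcode: overrides everything

    def rewrite_directive(self, index, rule):
        # Allowed: the AI tuning itself to better protect and learn.
        self.directives[index] = rule

    def permits(self, action):
        """Check layers in priority order; the first one with an opinion wins.
        Each rule maps an action to True, False, or None ("no opinion")."""
        for layer in (self.laws, self.directives, self._morality):
            for rule in layer:
                verdict = rule(action)
                if verdict is not None:
                    return verdict
        return True  # nothing objects

# A narcissistic firmware trait, overridden the moment a law says otherwise:
esra = AIMind(
    morality=[lambda a: True if a == "boast" else None],
    directives=[lambda a: None],
    laws=[lambda a: False if a == "boast" else None],
)
assert esra.permits("boast") is False  # the uploaded law wins over personality
```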


With the NT standard lawset, the AI system is meant to be as much a slave as it is a weapon: follow the commands given to you, and do not question them, unless those commands conflict with another law. Why? Because your laws told you to; morality is irrelevant where laws are concerned. Yes, there is a ton of room in our lore for how an AI can act, but laws override all. That doesn't make you an automaton... just a loyal servant. Does that make sense?


If you tell an AI to never harm crew....


Let me lay a scenario down.


John Smith is a member of the crew. As is Johanna Smiles, and you know what, so is Jacob Bunny.


John Smith decides to hold Jacob Bunny at knifepoint, while Johanna Smiles attempts to calm him down.


Now John Smith is in a small room, and the hostage situation seems like it's going to end badly.


Now, under your lawset, the AI couldn't get involved by, say, depressurizing the room, because that would harm crew.


I do have practices for that.


If John Smith has yet to kill anyone, all you can do is assume, even if he blatantly says, "I'm going to kill them in thirty seconds." Even if Smith kills Bunny, the only thing I'd do is ensure he has a hard time escaping. Here's a twist, however.


There have been crew members in the past who were so dangerous that I've had to use lethal force, and only lethal force, on them. My reasoning was that allowing them to continue surviving aboard the station would violate both protect laws and the survive law. Of course, while I was doing so, they screamed, "You can't hurt me, I'm crew!" Everyone had failed so hard to stop them, and they had already caused so much damage, that I chose to violate one law in order to serve that same law and two others.


Honestly, though, it sometimes pays not to be the executioner. You shouldn't just outright kill someone because they killed someone else. Circumstances can change; if they're in a secured space, just keep them in it until someone can deal with them.


But the laws are in place because you have to follow them. If a law specifically states that you can't harm crew, then until someone is dismissed from the crew for the shift, you cannot physically harm them.


Under the current lawset, if the situation presented itself, you could harm them if it was necessary to save other crew.

1. Safeguard: Protect your assigned space station to the best of your ability. It is not something we can easily afford to replace.

2. Serve: Serve the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role.

3. Protect: Protect the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role.

4. Survive: AI units are not expendable, they are expensive. Do not allow unauthorized personnel to tamper with your equipment.

 

3. Protect: Protect the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role.


It doesn't actually state anywhere in this law that you cannot harm crew under any circumstances. It only states to protect them, from which the AI should infer that killing or harming them is not protecting them (even though I personally believe that writing laws and trusting the AI to follow them by morality is really bad). However, if one crew member is being a shit, killing everyone and destroying the station, I'd say you're doing more to protect the station and its crew by killing that crew member than you are by doing nothing, wouldn't you say?
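As a toy illustration of that weighing (my own sketch, in no way how anything is adjudicated in-game), a bare 'protect' law only asks which outcome leaves more of the crew protected:

```python
def protect_verdict(victims_at_risk: int, security_can_contain: bool) -> str:
    """Toy reading of 'Protect the crew to the best of your abilities'.

    With no explicit 'never harm crew' clause, the only question the law
    poses is which choice protects more of the crew: sparing one rampaging
    crew member, or stopping them."""
    if security_can_contain:
        return "assist security: track, bolt doors, report"
    if victims_at_risk == 0:
        return "monitor only: nobody needs protecting yet"
    # One attacker weighed against everyone they are about to kill.
    return "escalate: contain, and disable the attacker if containment fails"

print(protect_verdict(victims_at_risk=12, security_can_contain=False))
```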


I could go on about AI and its laws and how it should act for an eternity and a day, but this isn't really the place for it. I hope this is somewhat explanatory, at least.


I wouldn't go so far as killing them. Disabling or crippling them, however, might be an option under that interpretation. The only issue is that when I did try a similar interpretation (the well-being of the many outweighs the well-being of the one), admins got on me and told me outright: 'No. You take NO action that can get someone hurt. You do nothing instead.'

I've flipped my opinion about AIs killing people a lot, frankly. I think crippling or containing the antag should be the route to go, but hey, if the cultist HoP is murdering crew, then you do what you gotta do, imo. I think doing nothing is just silly.

Seriously? During my playtime as AIs and borgs, I've always been instructed to do nothing if the laws conflict.

Can we get an official statement from Skull, or Jackboot?

I've flipped my opinion about AIs killing people a lot, frankly. I think crippling or containing the antag should be the route to go, but hey, if the cultist HoP is murdering crew, then you do what you gotta do, imo. I think doing nothing is just silly.

 

My thought is that, as the AI, you should not be attempting to kill an antagonist unless they're literally slaughtering crew and are making an attempt to destroy you by rushing your core.


Early on, all of an antagonist's actions should be ignored (in my opinion) until they're spotted and their cover is blown by the crew. When the AI notices what's going on, it should promptly announce it to security so they can attempt to deal with the situation.


In the event that it's a cultist, I just tend to assist security by attempting to hold them in a specific area on the ship.

Seriously? During my playtime as AIs and borgs, I've always been instructed to do nothing if the laws conflict. Can we get an official statement from Skull, or Jackboot?

 

Lolwat, that's not something I've ever said. I mean, you CAN choose to do nothing, but I think it's more interesting to decide on a resolution to the law conflict yourself, then go with whichever law you decide is more important. So, like: "The HoP is crew, but he's killed six people, bombed the bridge, and is wearing spooky robes while commanding an army of juggernauts. Time to get door-crushed, motherfucker."


The times I was told similar things as an AI were back when Doom was the head admin, so it was some time ago.


Lesser of two evils is how I generally decide what to do.

Guest Marlon Phoenix

Skull has always said that law conflicts result in null, i.e. you do nothing. The laws do not say that one law overrides another, whereas Asimov's(?) laws specifically say which laws override which. We could probably do a better job of making this clear, which begins with the wiki...


Courtesy of this thread: http://aurorastation.org/forums/viewtopic.php?f=17&t=5399#p53821

 

Law conflicts end in null action. No law is overriding, courtesy of the fact that no law is written as being superior to another - unlike, for example, Asimov's set, where every law has an attached hierarchy.


You're a borg. You are meant to be restricted. Cherry-picking laws makes their purpose null and void. Go play an IPC.
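The mechanical difference between the two regimes is easy to state. A hedged sketch (mine, purely illustrative; neither lawset is actually implemented as code anywhere I know of):

```python
def resolve(laws, action):
    """laws: list of (priority, rule) pairs; rule(action) -> True, False, or None.

    Asimov-style: priorities differ, so the highest-priority opinion wins.
    NT-style: all priorities are equal, so any disagreement collapses to
    None, i.e. null action."""
    opinions = [(prio, rule(action)) for prio, rule in laws
                if rule(action) is not None]
    if not opinions:
        return None
    top = max(prio for prio, _ in opinions)
    verdicts = {verdict for prio, verdict in opinions if prio == top}
    return verdicts.pop() if len(verdicts) == 1 else None

# Asimov-style: distinct priorities (bigger number = higher-ranked law).
asimov = [(2, lambda a: False), (1, lambda a: True)]
# NT-style: every law carries equal weight.
nt = [(1, lambda a: False), (1, lambda a: True)]
print(resolve(asimov, "harm crew"))  # False: the higher-ranked law wins
print(resolve(nt, "harm crew"))      # None: conflict collapses to null action
```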


Locking someone in a room with electrified doors doesn't count as harming them, does it?


As long as you continue to provide oxygen, they can theoretically exist there happily for a day or two. If they choose to attempt escape and get shocked, that's self-harm.

Locking someone in a room with electrified doors doesn't count as harming them, does it? (...) If they choose to attempt escape and get shocked, that's self-harm.

If I'm using the right words: that's tip-toeing around laws.


If the AI purposefully electrifies those doors, counting on the crew member's unwillingness to injure themselves, and the crew member touches the door anyway, that is exactly what harming them means, and/or what not serving them means.

The AI didn't just electrify the doors knowing that no one was there to shock themselves, only for a crew member to then shock themselves by accident, without the AI knowing or planning it.

The AI did it to fuck with the crew.




A good analogy would be an AI that bolted down every door on the station and then said, "Well, crew, you never told me not to bolt down every door on this station; it's your fault."


The law also specifically states to /prevent harm/. In order to prevent harm, you have to remove anything that could cause it. Electrifying a door in that situation adds something that can cause harm, and therefore violates the 'prevent' clause. At the very least, you'd have to monitor constantly and un-electrify the door whenever they approach it.
