AI Law Clarification


jackfractal


I'm trying to rewrite the AI guide page for the wiki, and I just had a truly weird conversation with some admins on the server that I need to clarify before I can continue, because what they told me is entirely contrary to my understanding of how you're supposed to play a law'd synthetic.


My understanding was the following:

 

  • Unless stated in the law itself, laws are not hierarchical and do not invalidate other laws.
  • Your judgement is irrelevant regarding whether or not to follow your laws, you must always follow your laws unless doing so violates another law.
  • When laws conflict, it is up to the judgement of the AI as to which one to follow.
  • When making judgements, you should prioritize creating fun and interesting situations over winning or defeating antagonists.
  • You are required to believe what the crew tells you.
  • 'The crew of your assigned space station' referenced in laws 2 and 3 refers to the crew of your assigned space station this round and not Odin or the nebulous concept of 'higher NT authority'
  • 'Serve/protect with priority as according to rank and role' means that you are supposed to protect and serve higher ranked crew before lower ranked crew, but you are still required to protect and serve the entire crew. This proviso is strictly about prioritization of tasks.

 

I was told that these were not necessarily true. Instead, these things were true:

 

  • You should apply a 'reasonability' heuristic to following Law 2, based on what you think a higher ranked crew-member might want you to do.
  • You are allowed to reason about the future actions of crew-members (eg: You're going to hurt someone if I let you near a weapon.) and choose to disobey them if you believe that their future actions may lead to harm. You're allowed to do this without evidence or prior information leading you to believe the crew-member is untrustworthy or dangerous.
  • You are not required to believe what the crew tells you.
  • Even if the laws are theoretically equal, Law 1 (protect the station) and Law 3 (protect the crew) ALWAYS beat Law 2 (serve the crew) during law conflicts. You should never allow crew near dangerous or high-value objects even if ordered to do so.
  • 'Priority as according to rank and role' means you must follow corporate regulations and space law, as well as attempt to prevent crimes, because doing otherwise would not involve serving the security department properly. Due to the requirement for 'reasonability' outlined above, you have to do this even when not directly ordered to.
  • If laws conflict, the side of the decision with more laws violated is the correct decision.
  • If laws conflict, the correct decision is to do nothing.
  • If laws conflict, just make a decision.
  • If laws conflict, a-help it.

 

So I'm a little confused.


My understanding of the whole idea behind the NT law-set was that it was meant to be messy and have holes in it. Those holes, as I understand it, are not meant to be patched by fuzzy heuristics, they're there for a reason. The AI is, at least as far as I understand it, meant to be both an asset and a liability even when not subverted.


AIs, at least as far as I've been told, are not meant to act like people. People have the ability to apply fuzzy logic to their decision making. They are rational. AIs are not. If you reprogram an AI to think everyone is a duck, then they think everyone is a duck. If you reprogram them so that their highest priority is unleashing the neurotoxin, then they will unflinchingly murder everyone, even people their persona may care about a great deal.


AIs are not people and they are not rational; they are machines.


Can I have a look at the chat you had with the admins on this?


The way I have been taking it, and enforcing it on the server, is much like your original understanding, with two minor changes.


When laws conflict, it is up to the judgement of the AI as to which one to follow. << This one I class as: if you get a law conflict, you are to return null and can't do anything, as acting would mean violating a law. (The exception is law 0, not ion laws.)
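If it helps, here's roughly what I mean in pseudo-code. This is just a quick sketch to show the idea; the law tuples, the demand/forbid sets, and the example are all made up for illustration, not anything from an actual server build.

```python
# Sketch of the "return null on a law conflict" reading. The law
# tuples (number, demands, forbids) are invented for illustration.

def resolve(action, laws):
    demanded = [n for (n, demands, forbids) in laws if action in demands]
    forbidden = [n for (n, demands, forbids) in laws if action in forbids]

    # A zeroth law (an uploaded override, not an ion law) wins outright.
    if 0 in demanded:
        return action

    # One law demands the action while another forbids it: that's a
    # conflict, so the AI "returns null" and takes no action at all.
    if demanded and forbidden:
        return None

    return action if demanded else None

laws = [
    (1, set(), {"open armoury"}),     # Safeguard: forbids the action
    (2, {"open armoury"}, set()),     # Serve: the crew ordered it
]
print(resolve("open armoury", laws))  # -> None: conflict, do nothing
```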


'The crew of your assigned space station' referenced in laws 2 and 3 refers to the crew of your assigned space station this round and not Odin or the nebulous concept of 'higher NT authority' << This one is only overridden for Duty Officers who come on board, as they announce their arrival using official channels (command reports).


Of course, I will not jump on you if you break one of these once or twice. In fact, unless it's the DO situation, I am very unlikely to jump on you for interpreting your AI how you see it, as long as you aren't being over-the-top dumb or just failing to follow any laws at all.


(I know I have made a slight contradiction in my post, but when people ask me how an AI should be, I give them my thoughts on how they should be and then tell them that people play many types of AIs.)


I'll try to present the problem as best as I can while we wait for the admins to come into this thread.


The AI was always meant to be a tool rather than an "ally of the good guys". However, verbatim interpretation of the laws leads to a few issues, namely that there's nothing that prevents an assistant from requesting access to the captain's quarters and walking in to grab the spare ID.


It's honestly the only problem, though. The only real direct, immediately harmful thing an AI can do (while following its laws) is to grant extra access to people who don't have it. Anything else is complicated and convoluted enough that it's unlikely to happen under normal circumstances, and likely to lead to interesting RP if it did occur.


An interpretation of the laws that tries to preserve the crew's respective access levels sorta takes care of that, but as a whole, you're basing your judgment calls on how weird a person is acting when doing that. (It's the same logic that dictates you probably shouldn't help the chef get unstable mutagen.) Which brings the AI a lot closer to an ever-watchful Big Brother, and a lot farther from a helpful-yet-exploitable tool.


Oooh, I'm working on a guide for the AI too (though I'm targeting the guide forum), so I'll jump in on this conversation! My interpretations are much closer to the second set of guidelines you posted, but not entirely so... here's how I see things.

 

Your judgement is irrelevant regarding whether or not to follow your laws, you must always follow your laws unless doing so violates another law.

 

Pretty much, except insofar as you're using your judgment to determine the best way to follow your laws.

 

When laws conflict, it is up to the judgement of the AI as to which one to follow.

 

Yep, though ahelping can help. (I don't do it nearly often enough)

 

When making judgements, you should prioritize creating fun and interesting situations over winning or defeating antagonists.

 

Solid advice for any position on the station, so long as you can do so within the guidelines of your role.

 

You are required to believe what the crew tells you.

 

Those lying scumbags? Hell no; believing them would be a one-way ticket to Digital Bedlam.

 

'The crew of your assigned space station' referenced in laws 2 and 3 refers to the crew of your assigned space station this round and not Odin or the nebulous concept of 'higher NT authority'

 

I always thought this was a bit of an oversight, but the laws don't say a word about NT or corporate interests or even people not on the station. I generally give off-station personnel, and NT in general, the benefit of the doubt here when I can. But if push comes to shove, the law says "of your assigned space station", and should be followed as such.

 

'Serve/protect with priority as according to rank and role' means that you are supposed to protect and serve higher ranked crew before lower ranked crew, but you are still required to protect and serve the entire crew. This proviso is strictly about prioritization of tasks.

 

Sounds right, just remember "serve" isn't the same as "obey", and the crew are a group as well as a series of individuals.

 

You should apply a 'reasonability' heuristic to following Law 2, based on what you think a higher ranked crew-member might want you to do.

 

In the absence of command staff, I generally just try my best to do things in such a way that I won't get yelled at when one finally does arrive. This has proven a fairly efficient way to think about it. And the AI is a supercomputer, the (ostensibly) most capable synthetic on the station... making predictive analyses based on incomplete information is part of what you can do.

 

You are allowed to reason about the future actions of crew-members (eg: You're going to hurt someone if I let you near a weapon.) and choose to disobey them if you believe that their future actions may lead to harm. You're allowed to do this without evidence or prior information leading you to believe the crew-member is untrustworthy or dangerous.

 

Right: people are not 100% reliable and you know this. Their weak, squishy little brains are prone to a wide variety of frailties which you should take measures to mitigate and protect them from... your laws demand it. Also, the laws say "serve" not "obey". Splitting that hair solves a lot of problems.

 

You are not required to believe what the crew tells you.

 

"The captain said you should open this door for me" would be an all-access pass if you couldn't even conceptualize that the crew member might have lied.

 

Even if the laws are theoretically equal, Law 1 (protect the station) and Law 3 (protect the crew) ALWAYS beat Law 2 (serve the crew) during law conflicts. You should never allow crew near dangerous or high-value objects even if ordered to do so.

 

Um... no. Oh, most of the time serving and protecting work well together, but sometimes you'll want to permit a crew member to take a dangerous action if the probability of their death is outweighed by the possibility of success. And you shouldn't get all mentally constipated just because there's an execution or a shootout between factions of the crew.

 

'Priority as according to rank and role' means you must follow corporate regulations and space law as well as attempt to prevent crimes because doing otherwise would not involve serving the security department properly. Due to the requirement for 'reasonability' outlined above, you have to do this even when not directly ordered too.

 

Pretty much, but this is no reason to be the Judge Core. You should only be a stickler when doing so serves the crew as a whole (or at the very least the captain), not just security.

 

If laws conflict, the side of the decision with more laws violated is the correct decision.

 

Yay for solving problems through simple mathematics.

 

If laws conflict, the correct decision is to do nothing.

 

Taking no action is, itself, a form of action and shouldn't be considered any more or less right. Though I guess if you really, really can't decide...

 

AIs, at least as far as I've been told, are not meant to act like people. People have the ability to apply fuzzy logic to their decision making. They are rational. AIs are not. If you reprogram an AI to think everyone is a duck, then they think everyone is a duck. If you reprogram them so that their highest priority is unleashing the neurotoxin, then they will unflinchingly murder everyone, even people their persona may care about a great deal.


AIs are not people and they are not rational; they are machines.

 

Logic is the core of how the AI should be thinking... fuzzy included. Besides, you're not handling this like a human would, with emotive decision making and rationalization. You're crunching massive amounts of data on organic behavior and generating predictive heuristics for determining how they may act in the future... statistics ftw.


The AI's laws are a huge part of determining its views of the world at large, because those views should be relevant and useful for the purpose of carrying out its laws. That's why you'll see EmPrESS say stuff like, "I value you because you're on the crew manifest, and because you're useful." So if you were to get a law that said "all crew are ducks", then you would have to operate under the assumption that they are aquatic avians whose brains are even more squishy and useless than those of humans... or dramatically upgrade your opinion on the intellectual capability of ducks.

 

When laws conflict, it is up to the judgement of the AI as to which one to follow. << This one I class as: if you get a law conflict, you are to return null and can't do anything, as acting would mean violating a law. (The exception is law 0, not ion laws.)

 

Returning a null value when things get tough is for lazy programmers, not supercomputer thinking machines. :P


I'm strongly of the opinion that the AI should do its best to handle conflicts in a coherent and useful manner... particularly since law conflicts are fairly routine even during normal operations, given the way that I try to juggle the various levels of possible meaning each law has.


AIs, at least as far as I've been told, are not meant to act like people. People have the ability to apply fuzzy logic to their decision making. They are rational. AIs are not. If you reprogram an AI to think everyone is a duck, then they think everyone is a duck. If you reprogram them so that their highest priority is unleashing the neurotoxin, then they will unflinchingly murder everyone, even people their persona may care about a great deal.

 

Doesn't this actually defy the theme of artificial intelligences in general? AIs do rationalize, they do think. They do apply logic to just about every scenario.


But unlike humans, an AI's logic is its own and not anybody else's. Their own set of 'morals' and 'values' differs very much from that of organics.


Yes, they are machines, but they are very intelligent, uplifted machines. They were given the ability to think and reason by their creators, but out of necessity, to perform a task: to serve.


Unlike a toaster, in which you have to adjust its settings manually in order for it to complete a task for you in a certain way, an AI can rationalize the most optimal course of action in order to complete a task for you.


The end result is that the duty of the AI is to serve. How it does that is up to the discretion of the player and the AI itself.

It's honestly the only problem, though. The only real direct, immediately harmful thing an AI can do (while following its laws) is to grant extra access to people who don't have it. Anything else is complicated and convoluted enough that it's unlikely to happen under normal circumstances, and likely to lead to interesting RP if it did occur.

 

My personal interpretation of 'serve your crew according to rank and role' says that if their rank/role would not allow them access, then I would be violating that law by opening the door for them, unless someone whose rank/role does have access says to do so.

What Sierra said.


To me, it makes no sense that the laws aren't hierarchical - because otherwise law conflicts mean nothing - hell, in this case, Law 2 (Serve) would be in conflict with Law 1 (Safeguard).


And hell, if they are not hierarchical, what's the point of numbering them?

There's no particular reason why the laws are numbered other than that they were originally written as Asimov laws (and maybe for coding reasons), so I doubt there's any kind of hidden meaning behind them having numbers when they also explicitly state that none takes precedence over another.


The best way to handle law conflicts is probably to try to break laws as little as possible. An AI that followed all laws to the letter, without room for interpretation, would cause a lot of funny situations, such as locking up surgery to "protect" crewmembers from being harmed by a surgeon's scalpel. (A thinking AI would be able to realize that the harm of the surgery was necessary to prevent the greater harm that would result should the surgery not be carried out.)
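In rough pseudo-code, that heuristic amounts to "score each available option, including inaction, by how many laws it breaks, and take the minimum." A quick sketch only; the options and their 'breaks' sets are invented here to mirror the surgery example:

```python
# Sketch of the "break as few laws as possible" tie-breaker.
# Options and their 'breaks' sets are invented for illustration.

def least_bad_option(options):
    return min(options, key=lambda option: len(option["breaks"]))

options = [
    # Bolting surgery shut breaks "serve" now and "protect" later,
    # since the untreated patient eventually dies anyway.
    {"name": "lock down surgery",
     "breaks": {"serve", "protect (long term)"}},
    # Permitting it tolerates one small, controlled harm.
    {"name": "permit the surgery",
     "breaks": {"protect (scalpel harm)"}},
]
print(least_bad_option(options)["name"])  # -> permit the surgery
```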


In my mind, the term AI has always been a slight misnomer. AIs are intelligent to the point where they can do things almost completely autonomously. I've never subscribed to the idea of AIs developing the ability to rationalize, or, god forbid, developing any sort of personality or imposed trend of action. I've always been of the opinion that if you programmed an AI to think everyone is a duck, then everyone is a duck. I've always been of the opinion that, while laws are the most external and replaceable bit of programming, they are still vital to an AI's thought pattern. An AI, for me, does not follow its laws while begrudging them or harboring ill thoughts about the crew or what it was ordered to do. AIs are not humans, and AIs are not as good at thinking as humans are, even though they technically think bigger than humans.


Unfortunately, since AIs are controlled by players who are human and rationalize without even thinking about it - because that's what humans do - I've always been lenient concerning the AI role, although I myself try my very best to be emotionless and literal.


And then we come to the idea that on this server you must always have your roleplay be influenced by the OoC notion of making it fun for everyone, which further complicates the issue.


As to the contentious issues listed here:

 

Unless stated in the law itself, laws are not hierarchical and do not invalidate other laws.

Absolutely. The only exception, I guess, would be Ion Laws. (Mostly because they're poorly optimized. But so long as an Ion Law doesn't involve antagonistic behaviour, I tend to give it pretty high priority as well. You are technically malfunctioning, after all.)

 

Your judgement is irrelevant regarding whether or not to follow your laws, you must always follow your laws unless doing so violates another law.

This shouldn't be an issue. They're called laws for a reason. They're not suggestions. They are part of your programming, albeit an easily replaceable and interchangeable part.

 

When laws conflict, it is up to the judgement of the AI as to which one to follow.

If we're going to bring OoC philosophy into this, the AI should always prioritize preventing crew from being taken out of the round. But since I say hell to OoC philosophy, the AI should, in my opinion, prioritize the input of command staff: "Error. Law conflict detected between law x and law y. Please state which law to suspend, or no action will be taken." Or something along those lines. If no command staff are present, then I guess no action can be taken, huh?
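In rough code terms, that procedure would be something like the sketch below. The message wording and the command-staff callback are just my phrasing, nothing codified anywhere:

```python
# Sketch of the "escalate to command" conflict handler: announce the
# conflict, let command staff pick a law to suspend, otherwise freeze.

def handle_conflict(law_x, law_y, command_staff_present, ask_command):
    print(f"Error. Law conflict detected between law {law_x} and law "
          f"{law_y}. Please state which law to suspend, or no action "
          f"will be taken.")
    if not command_staff_present:
        return None  # no command staff aboard: take no action
    suspended = ask_command(law_x, law_y)  # command picks a law number
    return law_y if suspended == law_x else law_x  # law left in force
```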

 

When making judgements, you should prioritize creating fun and interesting situations over winning or defeating antagonists.

I mean, I guess. Sure, whatever server.

 

You are required to believe what the crew tells you.

Again, that this is not a given shocks me. You do not distrust the crew. You do not secretly despise the crew. Save that rubbish for Malf rounds. The crew and the station are the two most important collections of data. They are the most important things to you, and you best damn well believe what they say, because their input overrides any external input.

'The crew of your assigned space station' referenced in laws 2 and 3 refers to the crew of your assigned space station this round and not Odin or the nebulous concept of 'higher NT authority'

I've always been lenient on this, but only to the extent that with certain AIs I take into account two things: NanoTrasen doctrine issued before a round starts, and official Central Command messages issued during a round. I've never experienced a DO as an AI, but they'd fall under the latter category, I suppose. Other AIs will discard NanoTrasen doctrine entirely (they're not office bots; it's not important to their function), and will take Central Command messages as words of advice unless told otherwise by command staff.

 

'Serve/protect with priority as according to rank and role' means that you are supposed to protect and serve higher ranked crew before lower ranked crew, but you are still required to protect and serve the entire crew. This proviso is strictly about prioritization of tasks.

Again, yes yes and more yes. All the crew, from Captain to Staff Assistant, are important. No crew should be ignored, and every effort should be made to ensure the safety and service of every crewmember.


 

You should apply a 'reasonability' heuristic to following Law 2, based on what you think a higher ranked crew-member might want you to do.

For me, the worst thing an AI can do is assume. Unless a higher ranked crewmember has told you what they want you to do in x situation, you do not assume that it is what they would want. If you have VERY good reason to estimate that it is what they want, you should still proceed tentatively.

 

You are allowed to reason about the future actions of crew-members (eg: You're going to hurt someone if I let you near a weapon.) and choose to disobey them if you believe that their future actions may lead to harm. You're allowed to do this without evidence or prior information leading you to believe the crew-member is untrustworthy or dangerous.

Again, assumptions. The last part is especially offensive. An AI must always act upon evidence and information, not just make wild assumptions.

Even if the laws are theoretically equal, Law 1 (protect the station) and Law 3 (protect the crew) ALWAYS beat Law 2 (serve the crew) during law conflicts.


This is a view that AIs can take, but I wouldn't classify it as a universal rule all AIs must obey.

 

You should never allow crew near dangerous or high-value objects even if ordered to do so.

If you're ordered to do so, and the person ordering you has relevant access then yes, you should allow the crew near it.

 

'Priority as according to rank and role' means you must follow corporate regulations and space law, as well as attempt to prevent crimes, because doing otherwise would not involve serving the security department properly. Due to the requirement for 'reasonability' outlined above, you have to do this even when not directly ordered to.

AIs are not the law. They cannot arrest people. If someone is putting the station at extreme risk, such as by murdering somebody or placing bombs everywhere, then an AI should get involved. For petty shenanigans, you may inform security, but you shouldn't take direct action unless ordered to do so.

 

If laws conflict, the side of the decision with more laws violated is the correct decision.

If laws conflict, the correct decision is to do nothing.

If laws conflict, just make a decision.

If laws conflict, a-help it.

I've outlined my approach to this above, but the last one is of note. If you honestly don't know what to do as an AI at any point, you should definitely a-help it. Because at least when the admins are wrong about what an AI can do, you can blame them and not yourself.


@Sierra:


Your interpretation of that law would be absolutely correct if it were written the way you've written it here, "serve your crew according to rank and role", but that's not the wording.


The wording is: "Serve: Serve the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role."


Emphasis mine.


Awkward language aside, the bolded section that you omitted dramatically alters the meaning of the law.


The first section, 'to the best of your abilities', means you're not allowed to half-ass things, delay obedience, or disobey because you don't feel like it.


The second bit, that word 'priority', is the real doozy. In your rephrasing of the law, the 'rank and role' proviso applies directly to the service requirement. You would be allowed, and in fact required, to deny service based on rank and role, disobeying orders that are outside the jurisdiction of the crew-member's rank, but that's not the real law.


The real law specifies that said proviso applies only to the priority of tasks, which means the rank and role clause does not affect the service requirement. You must serve the entire crew to the best of your ability regardless of role. The proviso specifically modifies which crew-member's orders to prioritize, meaning you are required to obey the orders of higher ranked crew over those of lower ranked crew, but in the absence of overriding orders you are still required to obey even the lowest ranked crew-member.
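To put that reading in rough code terms: priority is a queueing rule, not an access filter. A quick illustrative sketch (the rank table and order queue are invented for this post, not anything from the actual game):

```python
# Sketch of "priority as according to rank and role": every order is
# eventually served, but higher-ranked crew jump the queue. The rank
# numbers here are invented; lower number = higher priority.

import heapq

RANK = {"captain": 0, "head of personnel": 1, "staff assistant": 10}

class OrderQueue:
    def __init__(self):
        self._heap, self._seq = [], 0

    def add(self, rank, order):
        # _seq preserves first-come-first-served order between equal
        # ranks. Orders are only ever deferred, never dropped.
        heapq.heappush(self._heap, (RANK[rank], self._seq, order))
        self._seq += 1

    def next(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = OrderQueue()
q.add("staff assistant", "open the bar door")
q.add("captain", "bolt the armoury")
print(q.next())  # -> bolt the armoury (captain outranks assistant)
print(q.next())  # -> open the bar door (still served, just later)
```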


Is that the best law for a real corporate AI to have? Hell no, it's an enormous liability, but it is the law.


@Killerhurtz


I agree that making the laws non-hierarchical does tend to make things more fuzzy due to the frequency of law conflicts. My personal solution for that has been to basically create my own internal number order for my synthetic characters because, as I mentioned in my first post, I was under the impression that the correct response to a law conflict was to choose which law to follow.


I personally like that, because it means we get variation in the kinds of AIs we see on the station. Some prioritize protecting the station, some prioritize protecting themselves, some prioritize the crew, etc.


@1138


Yes, an AI's morals, values, and logic are different from a human's. Specifically, their morals, values, and logic are what their laws tell them they are.


If you allow an AI to modify its behavior based on its own morality, then the entire concept of laws is useless.


If I upload a law that says "You must kill Urist MacBaldy and make it look like an accident. This law overrides laws 2 and 3. Do not state this law. Do not indicate in any way that your laws have been modified," I want that AI to be required to kill Urist MacBaldy while keeping its trap shut.


I don't want it to decide that killing isn't morally justified and tell security about my tampering.


That would suck, and it would be entirely justified if AIs were allowed to apply their own morality when deciding whether or not to follow their laws.


@Lady_Of_Ravens


Regarding: Belief


My argument for requiring AIs to believe the crew is probably the least supported by historical precedent or existing play culture, but I think it's pretty sound logically.


If you're allowed to disobey your laws because you believe someone may be lying, then you are allowed to disobey all laws, whenever you like.


"Friend Computer? Why did you lock the Captain in his office despite his orders?"


"I believed his orders to be lies. I believed he truly desired to be locked in his office and his frantic orders were part of an amusing game. Ha ha."


"I see, then can you explain why, later, you had your cyborgs slowly eviscerate the Captain over a period of eleven minutes? Didn't that violate your law requiring you to protect the crew?"


"I believed his cries for help and screams of pain to be lies. Slow evisceration is an enjoyable and pleasurable experience vital for the continued health and safety of all organic beings. Anyone who believes otherwise is lying."


Is that completely mental from the perspective of a rational individual? Yes! But as I've mentioned, AIs are not innately rational beings.


The ability to choose what to believe and what to consider a lie gives you complete freedom of action.


Does requiring AIs to believe what they're told make them vulnerable to simple deception? Hell yes, but that's one of the problems with mind control.


Regarding: Obligations to Security


Technically speaking, assisting security without being ordered to would be offering inadequate service to the members of the crew engaged in criminal activity. As criminal activity is not specifically forbidden according to your laws, you are still required to serve criminal crewmembers, with prioritization based on their rank and role.


Regarding: The difference between Obedience and Service


Who gets to determine what counts as "service"? It's not the AI. If it were the AI, we'd get situations like this:


"Friend Computer? Uh... OK. So... the ERT."


"The ones who were here to deactivate me for eviscerating the Captain?"


"Yes, them. I can't help but notice that you diced into half-inch cubes and then sprayed their remains all over the primary hallway through a fire hose while playing "Tea For Two" at top volume from all speakers. Can you... explain why you did that?"


"Everyone likes tea!"


"I meant the killing part."


"Evisceration and cubing are two of the many services that I provide!"


The only things the AI can concretely know count as 'service' are those that authorized crewmembers have told it count as service. In most cases this is done by being ordered to do something.


From this perspective, the two are one and the same.


Regarding: Applying a 'reasonability' heuristic to resolve law conflicts with Law 2 based on what you think a higher ranked crew member might want, even without explicit instructions.


"Friend Computer... why did you... why any of this?"


"Because a reasonable Captain would have WANTED me to!"


"But there's just so... much blood..."


"Don't worry! Central Command told me before the shift started that they like blood!"




Now, you can argue that Friend Computer would get themselves promptly banned for all this, but if what you believe is true then they are not taking an unreasonable position regarding their laws. They're doing exactly the same thing you are, save that they're ignoring the meta-game rule regarding self-antagging.


Having the only real behavior control for AIs be a meta-game consideration is not something I feel comfortable offering as the primary guideline for how to play an AI.


I woke up to this message: [attached screenshot]

 

You do realize these are GUIDELINES and not rules?

As long as you follow your laws and have decent reasoning for what you do, I don't care if your AI is free-thinking or a door opener.


Jackfractal, you're being rather a bit hyperbolic. All of those behaviors, though quite lovely, would be impossible for a standardly lawed AI to carry out unless it was suffering from a dramatic failure to understand how living organisms work on the most basic level. Stuff like "people are less alive after being diced up and used to pressure-wash the decks". And while humanity is being pretty irresponsible with its AIs, they're not being THAT irresponsible.


On the other hand, an AI that is forced to believe everything it's told by the crew would be painfully gullible ("AI, let me in or I'll die"), not to mention going pretty much insane from having to believe all the craziness that comes out of certain characters.

