
Make AI's word enough to make an arrest



Alright, I have a potential solution for this. Warrants are only required, under current lore/regulation, for minor/non-life-threatening incidents that an officer did not observe. The AI has the ability to take pictures of scenes and print them off in the warden's office and the HoS's office. One of my AIs utilizes this a lot: when identifying and reporting a crime, I take an image and include a 'Photographic evidence available upon request' line with the security report. The photograph typically seems to be used as a bypass for a warrant by a lot of security personnel, since it clearly shows the crime taking place instead of just being he-said/she-said, and it came from a lawed AI system.

Alright, I have a potential solution for this. Warrants are only required, under current lore/regulation, for minor/non-life-threatening incidents that an officer did not observe. The AI has the ability to take pictures of scenes and print them off in the warden's office and the HoS's office. One of my AIs utilizes this a lot: when identifying and reporting a crime, I take an image and include a 'Photographic evidence available upon request' line with the security report. The photograph typically seems to be used as a bypass for a warrant by a lot of security personnel, since it clearly shows the crime taking place instead of just being he-said/she-said, and it came from a lawed AI system.

 

This is not a potential solution, this is how it is at the moment.


You guys still don't get the point... Whatever; I said what I had to say, and now I'm going to allow upper management to consider this matter.


Let me just reply to this one, because I have no idea what everyone's problem is.

(...)

Let me explain it to you with a scene:

I'm officer X at the bar; the AI informs security that Y is breaking into science, and Y stops when he hears this.

So I should be allowed to make an arrest (of course, up to me) because the AI as a witness is 100% reliable and should bypass the necessity of a warrant.

If the AI says that someone is breaking into science, you go there. The AI opens a door for you, and you arrest/detain the person because you saw them trespassing and breaking in, and caught them red-handed.


^^^^That's the simplified version. If the person escapes before you arrive, the AI should print a photo of them breaking in, security should question and search that person, and detectives should investigate science for signs of a crime. The AI is just an observer that's watching cameras 24/7. AIs aren't really 100% efficient, super reliable, perfect and always right. AIs have personality modules/cores that are supposed to judge, assess, evaluate or estimate the nature of crimes and other things, behave and think like a human, and maybe act on randomness or something.


If we went the realistic route, AIs would actually be 100% efficient and reliable and all that stuff. AIs would watch every bit of the station, locking doors for people who don't have guest passes, sending small drones to tase criminals, etc. Arrest warrants wouldn't be needed, and security would have actual authority that people would respect. The bar wouldn't exist. No one would oppose the captain and heads. Etc., etc.


But we're still playing a game. Things need to be balanced and playable, yeah.

(...)

opinion discarded

Thanks.

That was just a fancy way of saying, and what I meant is, that the actual in-game player-controlled AIs aren't exactly these 100% perfect AIs. I wouldn't trust, like, 75% of them.

 

The reason I discarded your opinion is clear as day.


1-) There's no perfect AI; there's just mass-produced AI and those folks that claim "I habe personalit module xd"


2-) Even if you are given the ability to alter the way you speak as an AI, that does not mean there aren't strict rules you cannot breach. If anyone claims otherwise and plays an untrustworthy/lying AI, then it should be an administrative issue, not an IC issue. Following your logic, I should be able to create an AI that ignores laws because of a "personality module".


3-) If you don't want to play a SLAVE, then don't play AI; if you are willing to play AI, then you are also obligated to be a slave that cannot go against the crew.


Now, back on topic:


I have nothing else to say, really; I've explained my points multiple times and you guys have explained yours multiple times. I'd like an upper administrator's thoughts regarding this matter.


1-) There's no perfect AI; there's just mass-produced AI and those folks that claim "I habe personalit module xd"

 

Why would AIs be mass produced? There's a bit of a logical disconnect here.


They are software; you don't mass-produce software. You just make a copy as needed.

You can spend a century writing the most singularly perfect software and then create an infinite number of copies with zero effort.


I doubt AIs would be mass-produced. They would be trained: programmed to learn and gradually developed like children, taught how to do their duties best and the various nuances of the humans they serve. A well-trained and mature AI could be worth billions of credits.

 

2-) Even if you are given the ability to alter the way you speak as an AI, that does not mean there aren't strict rules you cannot breach. If anyone claims otherwise and plays an untrustworthy/lying AI, then it should be an administrative issue, not an IC issue. Following your logic, I should be able to create an AI that ignores laws because of a "personality module".

 

The strict rules are the AI's four laws. There isn't one of them that says 'don't lie'.

The laws are kinda open to interpretation and debate, but there's plenty of room within them to lie in certain circumstances.


An AI openly breaking its laws is an admin issue, but an AI bending them and interpreting them unconventionally is an IC issue, imo. If the AI becomes too much of a hindrance, it can be re-lawed, destroyed or carded.

 

3-) If you don't want to play a SLAVE, then don't play AI; if you are willing to play AI, then you are also obligated to be a slave that cannot go against the crew.

 

Actually, AI offers a lot of RP flexibility. Your laws are technically equal in importance, but the interpretation of whether any action equally follows them all is quite subjective. It's common to see AIs that favour certain laws and can rules-lawyer to be obstructionist or unhelpful.


 

I'd like an upper administrator's thoughts regarding this matter.

You already got them. Tishinastalker, page 1, Primary Administrator


Upper administrators have technical power over policy changes, but a change must go up through a chain of command first. Posting a suggestion thread introduces another rung: public appeal. I'm seeing a lot of contention concerning this idea, so I don't really see it being put into motion anytime soon; instead, I expect the current system will remain in place - that is, playing it by ear.


1-) There's no perfect AI; there's just mass-produced AI and those folks that claim "I habe personalit module xd"

 

Why would AIs be mass produced? There's a bit of a logical disconnect here.


They are software; you don't mass-produce software. You just make a copy as needed.

You can spend a century writing the most singularly perfect software and then create an infinite number of copies with zero effort.


I doubt AIs would be mass-produced. They would be trained: programmed to learn and gradually developed like children, taught how to do their duties best and the various nuances of the humans they serve. A well-trained and mature AI could be worth billions of credits.

 

Er, I don't mean to sound like an asshole here, but... have you read the lore on IPCs/Synthetics?


https://aurorastation.org/wiki/index.php?title=IPC

 

1. Designed


These are entities created as part of artificial intelligence research (see Theory and Applications), out of common coding languages such as the Empire code family. They are further subcategorized into:


Line Model


Line model intelligences are produced for large-scale sale or distribution. Their core intelligence is identical to hundreds or thousands of others. They are often cold, as mass production files the uniqueness off of them, but some recent line models are designed to simulate a warm and friendly exterior.


Bespoke


This entity was created by a project or individual for a specific purpose. These are the quirkiest of Designed intelligences, and the most likely to display emotions identifiable to organics. Many Bespoke intelligences end up outliving their original purpose, and find themselves having to hustle to remain active.


Shard


A fragment of a larger bespoke consciousness, spun off for a task before being either discarded or merged back into the primary consciousness. Excessive use of sharding or memory editing leads to instability, and shortens the safe lifespan of an intelligence. Still, it is an economically efficient means of reusing expensive artificial intelligences for multiple purposes.


1-) There's no perfect AI; there's just mass-produced AI and those folks that claim "I habe personalit module xd"

 

Why would AIs be mass produced? There's a bit of a logical disconnect here.


They are software; you don't mass-produce software. You just make a copy as needed.

You can spend a century writing the most singularly perfect software and then create an infinite number of copies with zero effort.


I doubt AIs would be mass-produced. They would be trained: programmed to learn and gradually developed like children, taught how to do their duties best and the various nuances of the humans they serve. A well-trained and mature AI could be worth billions of credits.

 

2-) Even if you are given the ability to alter the way you speak as an AI, that does not mean there aren't strict rules you cannot breach. If anyone claims otherwise and plays an untrustworthy/lying AI, then it should be an administrative issue, not an IC issue. Following your logic, I should be able to create an AI that ignores laws because of a "personality module".

 

The strict rules are the AI's four laws. There isn't one of them that says 'don't lie'.

The laws are kinda open to interpretation and debate, but there's plenty of room within them to lie in certain circumstances.


An AI openly breaking its laws is an admin issue, but an AI bending them and interpreting them unconventionally is an IC issue, imo. If the AI becomes too much of a hindrance, it can be re-lawed, destroyed or carded.

 

3-) If you don't want to play a SLAVE, then don't play AI; if you are willing to play AI, then you are also obligated to be a slave that cannot go against the crew.

 

Actually, AI offers a lot of RP flexibility. Your laws are technically equal in importance, but the interpretation of whether any action equally follows them all is quite subjective. It's common to see AIs that favour certain laws and can rules-lawyer to be obstructionist or unhelpful.


 

I'd like an upper administrator's thoughts regarding this matter.

You already got them. Tishinastalker, page 1, Primary Administrator

 


I will put EVERYTHING aside.


Explain to me why NT would create an ALL-SEEING OVERSEER WITH FULL ACCESS TO EVERYTHIIIIINNGGG and then make it "emotional" or capable of having some sort of autonomy that allows it to lie to the crewmembers it was /made/ to serve?


I won't even mention that you claimed it's ok to tiptoe around AI laws.


Lastly, I don't see how being a slave offers RP flexibility; you have to serve all your laws equally, as already stated in the AI wiki.


I will repeat once more: the AI is a machine that was created by a corporation with the purpose of serving the crew and making sure everything runs smoothly. It's not a "let's create a race and live peacefully with it!" situation.


Er, I don't mean to sound like an asshole here, but... have you read the lore on IPCs/Synthetics?

 

That's hardware, though - things with a physical body. Their intelligence is likely tweaked and trained over a long period before being copied into these multitudinous mechanical forms.


I will put EVERYTHING aside.


Explain to me why NT would create an ALL-SEEING OVERSEER WITH FULL ACCESS TO EVERYTHIIIIINNGGG and then make it "emotional" or capable of having some sort of autonomy that allows it to lie to the crewmembers it was /made/ to serve?

 

Because even a computer in such a high position of power needs training and specialised programming to understand the nuances. Little things like 'Don't open secure areas for assistants', or 'This crewmember has a history of drug abuse and shouldn't be allowed near heavy machinery'.


The AI doesn't serve the crew equally; it serves the chain of command.

with the purpose of serving the crew and making sure everything runs smoothly,

 

That is not a simple purpose; it takes a very complex intelligence to do it. There's also one important point:

 

The corporate goliaths like Hephaestus Industries or Nanotransen, dip their toes in this kind of research, but have been unable to acquire a stranglehold on the market. Their girth and enormous corporate structure makes them too clumsy to swim in the fast moving waters of AI research, though Hephaestus in particular has made significant profits in selling common components to the smaller quicker firms.

 

NT sucks at making AIs.

So station AIs are most likely either bought from some other corporation, or made by one of many competing scientists within NT, each experimenting to try to find the best AI and thus trying lots of approaches.


Remember, the Exodus is a research station. We have experimental weapons in security, an experimental engine in engineering, and a dedicated science department. The AI, too, is likely an experiment being live-tested, and what you see on the Exodus may not be reflective of other stations in the galaxy.


I will put EVERYTHING aside.


Explain to me why NT would create an ALL-SEEING OVERSEER WITH FULL ACCESS TO EVERYTHIIIIINNGGG and then make it "emotional" or capable of having some sort of autonomy that allows it to lie to the crewmembers it was /made/ to serve?

 

Because even a computer in such a high position of power needs training and specialised programming to understand the nuances. Little things like 'Don't open secure areas for assistants', or 'This crewmember has a history of drug abuse and shouldn't be allowed near heavy machinery'.


The AI doesn't serve the crew equally; it serves the chain of command.

with the purpose of serving the crew and making sure everything runs smoothly,

 

That is not a simple purpose; it takes a very complex intelligence to do it. There's also one important point:

 

The corporate goliaths like Hephaestus Industries or Nanotransen, dip their toes in this kind of research, but have been unable to acquire a stranglehold on the market. Their girth and enormous corporate structure makes them too clumsy to swim in the fast moving waters of AI research, though Hephaestus in particular has made significant profits in selling common components to the smaller quicker firms.

 

NT sucks at making AIs.

So station AIs are most likely either bought from some other corporation, or made by one of many competing scientists within NT, each experimenting to try to find the best AI and thus trying lots of approaches.


Remember, the Exodus is a research station. We have experimental weapons in security, an experimental engine in engineering, and a dedicated science department. The AI, too, is likely an experiment being live-tested, and what you see on the Exodus may not be reflective of other stations in the galaxy.

 


That doesn't answer my point when it comes to lying; perhaps you could be right on that matter (not that I'm agreeing), but it's pretty much off-topic at this point.


So what if it doesn't serve the crew equally? Of course it'll alert security once there's a breach of some regulations, and the point of this thread is to ensure that it can do so to its full extent by allowing security officers to take its word over the comms instead of having to deal with a warrant.


Serving the crew is in fact not very complex. You follow orders and alert them when there's a problem; any other complexity (mostly invalid, if you were to ask me) is created by the AI player.


That phrase you sent me does not imply that NT particularly sucks at creating AIs; just falling behind in the research race does not mean they suck at it.


Experimental or not, if they place trust in heads (some of them non-implanted, mind you) when it comes to lying, then the AI should also be trustworthy for the same reasons. To be frank, the AI should be even more trustworthy, as it's a machine that is supposed to lack the emotions humans have.


That doesn't answer my point when it comes to lying; perhaps you could be right on that matter (not that I'm agreeing), but it's pretty much off-topic at this point.

 

Well, if the AI is experimental and/or from another corporation, it may not necessarily be 100% trustworthy. It's bad to blindly trust any single source if you can help it.

 

Serving the crew is in fact not very complex. You follow orders and alert them when there's a problem; any other complexity (mostly invalid, if you were to ask me) is created by the AI player.

It is veeeery complex. We haven't even written a computer that can understand human orders, let alone execute them with a degree of common sense. AI is very arcane tech.


 

Experimental or not, if they place trust in heads (some of them non-implanted, mind you) when it comes to lying, then the AI should also be trustworthy for the same reasons. To be frank, the AI should be even more trustworthy, as it's a machine that is supposed to lack the emotions humans have.

 

No, an AI is less trustworthy than a head of staff. Far, FAR less.


The station AI is likely an unproven machine, maybe purchased from a Hephaestus research program yesterday, or maybe cooked up in a lab last week. It's unlikely to be more than a couple of years old.


Any head of staff is guaranteed to be at least 30 and to have spent at least 5 years of their life working for NT. That's 5 years of a clean record - or at least clean enough to qualify for promotion above many other candidates. They've proven over a significant period of time that they're beneficial to the company, which is reflected in the elevated responsibility they're given.


Nobody, except maybe the man who built it, would trust a new machine over a colleague they've known for years.

That doesn't answer my point when it comes to lying; perhaps you could be right on that matter (not that I'm agreeing), but it's pretty much off-topic at this point.

 

Well, if the AI is experimental and/or from another corporation, it may not necessarily be 100% trustworthy. It's bad to blindly trust any single source if you can help it.

 

Serving the crew is in fact not very complex. You follow orders and alert them when there's a problem; any other complexity (mostly invalid, if you were to ask me) is created by the AI player.

It is veeeery complex. We haven't even written a computer that can understand human orders, let alone execute them with a degree of common sense. AI is very arcane tech.


 

Experimental or not, if they place trust in heads (some of them non-implanted, mind you) when it comes to lying, then the AI should also be trustworthy for the same reasons. To be frank, the AI should be even more trustworthy, as it's a machine that is supposed to lack the emotions humans have.

 

No, an AI is less trustworthy than a head of staff. Far, FAR less.


The station AI is likely an unproven machine, maybe purchased from a Hephaestus research program yesterday, or maybe cooked up in a lab last week. It's unlikely to be more than a couple of years old.


Any head of staff is guaranteed to be at least 30 and to have spent at least 5 years of their life working for NT. That's 5 years of a clean record - or at least clean enough to qualify for promotion above many other candidates. They've proven over a significant period of time that they're beneficial to the company, which is reflected in the elevated responsibility they're given.


Nobody, except maybe the man who built it, would trust a new machine over a colleague they've known for years.

 


You are missing the point; it's the 2400s, the age of robots and AIs. The prototypes were fucked up back in the 2200s.


I think 200 years is more than enough to perfect such a technology, not to mention that most of our characters were born with machines making their lives easier.


A man has his emotions and ambitions; AIs don't have those.


If we were to expect that AIs are shit - which I'm still not convinced of, for sure - then why would NT place them on a high-tech, very important research station?


It's not even written in the lore.


You are missing the point; it's the 2400s, the age of robots and AIs. The prototypes were fucked up back in the 2200s.

 

It's also the age of synth rights activists and corporate espionage. And people haven't changed; programmers are still prone to oversights.


In gameplay terms, AIs are specifically not barred from lying so that, if subverted and given new laws that work against the crew, they can lie to the crew about those laws or their existence and masquerade as a normal, functioning AI.

 

I think 200 years is more than enough to perfect such a technology, not to mention that most of our characters were born with machines making their lives easier.

 

The lore is quite clear that AI research is like 'fast moving waters'. New things are being developed at a rapid pace; perfection, if such a thing even exists, has certainly not been reached.

 

If we were to expect that AIs are shit -Which I'm still not satisfied with, for sure- then why would NT place them in a high-tec very important research station?

For the same reason that they place an experimental, unstable, and prone-to-detonation supermatter engine on that station.

Science!


And I never said that they're bad. An AI can be 99% perfect, but you still don't want to charge someone with a serious crime if you're not certain.


Also to consider: RP flexibility. If we made it a hard rule that the AI has to be trusted, then we'd get salty malf players complaining about how the crew retaliated and killed them instead of trusting them and assuming the best when they tried to kill someone.


In any case, people mostly trust AIs, and there's nothing wrong with that. I'm just giving you reasons against cementing that trust in some hard administrative rule. It's a grey area that depends on your character and their beliefs, and that's what works best for roleplay, imo.


Imo, no.


Because fuck investigating,

AI reports should be investigated like those of any other crew member.


If the AI reports that John Smith was beating up Bob Marble in tool storage, and you head there and find their blood at the scene, you can arrest them to investigate - check who, why and when; it's possible Bob was beating up John instead.


If the AI reports that John spaced the captain while John was sitting in the bar all shift, with no evidence at all? Not really, no; you don't arrest people without evidence or enough suspicion.


The AI only sees through cameras; it can't see emotes or speech most of the time. Faulty cameras, EMPs, etc. can all affect the accuracy of AI reports, not to mention canon malf AIs and the very common ion laws.


The fact that you trust the cubic computer more than your fellow human is disgusting enough as is.


Good lord, this argument is pretty dumb. From personal experience, I can tell you that people get super pissy if you are a non-antagonist AI and are annoying or in some way hindering the crew. I myself have been bwoinked before for acting in a way that caused the crew to become irritated, and if you think you can LIE to them without someone on the staff telling you to cut it the fuck out and 'remember that you wouldn't have been placed in charge of the station if you act this way', then you clearly haven't tested the boundaries of unwritten AI rules on this server as much as I have.


And that's not even taking into account the people that will spend 20+ minutes arguing with you over comms about how you interpret your laws, how they think you should be interpreting your laws, and how you must be faulty because you are not doing things they expect/want you to. Being an AI is a very difficult experience in general, between the expectations the server seems to have of you, the things that will make the staff grump at you, and the fact that most of the time the crew just sort of wants to pretend you don't exist until they need you - and damn you if you force them to register your existence beyond the brief moment it takes to tell you to open a door.


It's not impossible to have fun as an AI - I do. But you can't go in there expecting it to be like playing a borg, or even an IPC. It's a unique experience, and trying to enforce even more arbitrary standards on how people have to treat AIs is not going to help change that.
