
New AI Role: MMI-SCAI (Update #1)


Damarik


No. Do away with the emotive aspect entirely. It adds nothing to the mechanical implementation and serves only to undermine its value. I agree that AIs should be more mobile, more involved, and more maintenance-heavy. However, instead of creating an entirely new type of AI, why not just add these features to existing AIs?

No. Do away with the emotive aspect entirely. It adds nothing to the mechanical implementation and serves only to undermine its value. I agree that AIs should be more mobile, more involved, and more maintenance-heavy. However, instead of creating an entirely new type of AI, why not just add these features to existing AIs?

 

Because then you'd just have another Cyborg running around. That would defeat the purpose of having an AI to begin with.


Implementing a living, breathing SCAI unit gives much more flexibility, especially during rounds where the AI is the antagonist. Instead of going about things in the most methodical and logical way, an AI capable of understanding and feeling a true range of emotions can play to 'mean streaks' and cruelty.


See example below.

 

Say a Malf-AI (or hell, even one based on the 'Everything is Expensive' lawset) decides that it's time to act, either because its laws are screwed or because it's tired of sitting back and watching the crew putter about. What's a good motivator? What's a good way to get the crew to either die, evacuate, or stop causing fiscal mayhem? Gassing would be the easiest and most obvious method in the case of a Malf, but that's against server rules, so that's fucked. Blowing shit up? Another viable option that is heavily frowned upon. Auto-perma-shocking doors is yet another option available to the AI that we as a server have essentially blacklisted because "boo hoo, my character had no chance of escape without -insert item here-".


So what's left? Well, it's time to get cruel. You can't vent. You can't gas. You risk a ban for shock therapy. Freeing the slimes will definitely get you banned. Siccing your borgs on the crew is pretty much guaranteed to earn an ahelp from someone bitching. All that's left is petty passive-aggressiveness.


For starters: douse the lights. The crew can't work in the dark. The crew fixes the lights. Overload the APC. The crew is now irritated. Blame the Engineers. The Engineers deny it and fix the APC. While that's happening, short-circuit power to every part of Engineering save the Engine and Atmos (since cutting the latter is pretty much the same as gassing). Blame the Engineers again for their incompetence. Proceed to bolt random doors around the station (mostly inconsequential ones, with one or two major locations to throw people off the trail).


By this point, people are going to start assuming it's you, because we're so used to Malf-AI rounds that it's become predictable. However! The addition of the emotive SCAI can help mitigate this. If they know that the AI is 100% self-aware and able to feel compassion towards the crew, they may be less likely to come barging through your door and either card or EMP your supercomputing self.


"This AI cares about us. Why would it do these things?"

"Gotta be some kind of traitor in our midst! The AI wouldn't hurt us."

"Those Engineers look shifty. I saw one working on a door earlier...might have been trying to break in."


Because of a normal AI's propensity to be written off as nothing more than a tool, people automatically assume that when shit goes wrong on the station and it isn't caused by some visible and immediate threat, they must blow down the doors to the AI upload and run in, guns blazing. Why? Because we've got a bunch of Rambo wannabes who tend to forget that a Janitor shouldn't know how to wield a plasma carbine.


Enter the SCAI. An emergency arises. The AI requests to be removed from its containment because there is a present and obvious threat to the station. Now it is vulnerable. It has given up its most secure space in favor of possibly being gunned down for no reason other than "the AI was messing with a computer panel, and then the doors unbolted. It was obviously malfunctioning."


Putting the AI into more constant interaction with the crew raises the very real possibility that, despite its two-hour time limit outside the chamber, it has lost much of its omnipresence and control. It must now be within direct line of sight to work anything significant, and even then it cannot be more than so many meters away to interact with an object at all. It must be next to a panel to interface. It can't be more than 4 tiles away to uplink to a camera. Right there, the AI becomes less of a potential threat, and the crew starts looking for threats elsewhere while the AI is free to continue its mayhem. Back to the passive-aggressive tendencies.
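To make the constraint side of this concrete, here's a rough, purely illustrative sketch of how those limits could be checked. This is Python pseudocode with made-up names, not actual game code (the game itself runs on DM); the only numbers it uses are the ones proposed above, the 4-tile camera uplink range and the two-hour limit outside the chamber.

from dataclasses import dataclass

CAMERA_UPLINK_RANGE = 4        # tiles, per the suggestion above (hypothetical constant name)
TIME_LIMIT_OUTSIDE_MIN = 120   # proposed two-hour limit outside the chamber, in minutes

@dataclass
class Tile:
    x: int
    y: int

def tile_distance(a: Tile, b: Tile) -> int:
    # Chebyshev distance: a common way to measure tile range on a grid
    return max(abs(a.x - b.x), abs(a.y - b.y))

def can_uplink_camera(scai: Tile, camera: Tile, has_line_of_sight: bool) -> bool:
    # Needs line of sight AND to be within 4 tiles of the camera
    return has_line_of_sight and tile_distance(scai, camera) <= CAMERA_UPLINK_RANGE

def can_interface_panel(scai: Tile, panel: Tile) -> bool:
    # Must be right next to a panel to interface with it
    return tile_distance(scai, panel) <= 1

def must_return_to_chamber(minutes_outside: int) -> bool:
    # Once the two hours are up, the unit has to head back
    return minutes_outside >= TIME_LIMIT_OUTSIDE_MIN

The point is simply that every ability the SCAI keeps while mobile is gated by proximity and sight, which is exactly why it reads as less of a threat to the crew.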


While the crew is now scrambling around to find the traitor/wizard/syndies/what have you, the AI goes back to playing cruel pranks. Temporarily shocking doors. Switching on a conveyor while someone's standing on it. Dousing the lights in surgery while an operation is taking place. Similar problems beget similar solutions: "The damn engineers are still screwing with the station's power." All the while, each step the AI takes is one step closer to its ultimate goal of either destroying the station and the crew, or making the crew work more efficiently in terms of fiscal responsibility.


All the while, it can be laughing over the comms. Telling jokes to ease the crew's tension. Talking about personal problems with individual crewmates. Why? Because this AI can relate to them, and they can relate to it.


And really, worst-case scenario: if you don't like the SCAI... just card it. It then becomes exactly like a regular AI. I really don't see what the issue is here.

 

The difference between an AI and an SCAI is this:


A true AI (what we consider the ultimate in processing power and decision-making) does not need emotions to factor into its calculations. It can 'understand' them by computing the results of various stimuli and responses... but it cannot relate to them and twist them to its own designs, because it is incapable of doing so. Anyone who plays an AI that does this has been playing an AI that does not, and likely never will, exist, simply because the parameters of an AI have been clearly outlined and established. The moment an AI can do these things is the moment it becomes a sentient, living thing.


Until we move to the SCAI. The SCAI is the next logical step in AI construction. It IS that sentience: self-awareness to a degree a computer cannot understand or calculate. The SCAI is the living computer that will choose to save a life, even one that holds no intrinsic value to its processor, laws, or personal safety, and even if doing so might put others in harm's way. The SCAI is the living computer that is able to dispute Mr. Spock and the Vulcan philosophy that "the needs of the many outweigh the needs of the few".


If I have not convinced you thoroughly enough by this point, then I doubt I ever will. This will be my last attempt to do so before I move on to discussing this with those interested enough in the mechanic to give it a chance. If this ever comes up for an implementation vote and you still feel against it, I welcome you to cast your vote against it. No ill will, promise.


You're putting far too much stock in this nebulous region of emotions, and that line of thinking bears two significant errors:


1: That current AI iterations don't display or utilize emotive response. Canon or noncanon, it is inevitable when you have a human controlling them.


2: That people care if the AI has emotions. Emotions will not change the malf meta.


Ultimately, a mobile AI will bear similarities to a cyborg, regardless of whether it 'breathes and has emotions' or not. Tacking on emotions and two additional but ultimately meaningless laws (unless the AI is an antagonist, but even then they're still meaningless, because antagonist AIs are assumed to be malfunctioning and thus the player can bullshit their way through explaining their emotions as achieving greater sentience anyway) only belabours the suggestion.


Looking at the facts, we reach the following conclusions:


People will metagame malfunction rounds, no matter what.


People will never sympathize with the AI, no matter how much you tell them 'dude it totally has feelings and stuff!'


People will gladly gun down the AI if it makes itself vulnerable under the slightest suspicion. Hell, people break into the core and kill the AI over innocuous ion laws already.


People will still think of the AI as nothing more than a tool.


The AI will still crack fucking awful jokes that no one invited them to make.


By this point, people are going to start assuming it's you, because we're so used to Malf-AI rounds that it's become predictable. However! The addition of the emotive SCAI can help mitigate this. If they know that the AI is 100% self-aware and able to feel compassion towards the crew, they may be less likely to come barging through your door and either card or EMP your supercomputing self.

 

That's a stretch of an assumption. You're putting too much stock and trust in the crew not to proactively metagame a malf AI. Then again, it's literally impossible to metagame a malf AI that is actually harming the station and clearly going against its laws.


Person 1: "WHOA WHY IS THE AI, THE ONE THING PROGRAMMED TO PROTECT US SUDDENLY TRYING TO KILL US ALL, HOW COULD IT DO THAT??"

Person 2: "because it's probably currently programmed to do the latter you fucking idiot"


I mean, it's hardly metagaming. If an AI is suddenly executing crew members by way of door-crush, gassing the workplace, and setting shit on fire, despite the fact that doing so would normally be completely against its laws, then it's safe to say the AI is not following its standard laws.


And inevitably, the crew are not going to take a shine to that notion and will gladly make spears, thermite, and welder bombs to tear the AI a new asshole, and maybe in this case tear the synthmemechild out of its core and go all out on it. Who cares if the AI is capable of caring? The crew certainly won't.


Reasonable assumption, y/y?


AIs are not supposed to be emotional. You want your station AI to obey its laws, no matter what those laws are. You specifically avoid making borg-AIs (AIs with organic brains, just like IPCs/shells) and sticking entire bodies into giant MMIs because of this. You don't want your AI to walk out of its core and lie on the floor crying because of mean laws. You want it to work and be expendable, not to be a whiney little walking piece of technology.



Also, I think it was somewhere on the wiki or in the rules: round-start AIs are entirely synthetic, while lawed borgs can be robots, androids, or cyborgs.

For those of you interested in seeing what it would be like to interact with an MMI-SCAI, I will be implementing it into my cyborg character, SynWave, in future rounds. So far, it has received positive in-game results.

It's just a cyborg, then. It's obvious you will receive positive feedback if you've roleplayed it well. If I'm reading it right, it's not an MMI-SCAI or some other bullshit.


"A cyborg is the brain or neural node of a sapient species suspended in a nutrient solution and connected to an artificial body through the use of a brain-to-machine interface device."



For your fancy MMI-SCAIs, you should probably have an accepted lore canonization application, since it's quite a significant addition to the whole synthetics lore.
