Androids/IPC "feelings"


Recommended Posts

Posted

I've seen a lot of androids/IPCs acting like normal humans recently, whilst I imagine them as something more like this:

 

[embedded image: hqdefault.jpg]

 

So my question is: how is an android/IPC capable of getting "feelings" now? This bugs me because there have been some edgy androids, to be honest.

Posted

The basic goal of the AI stuff that I wrote was to give people the opportunity to play the kinds of robots that they wanted. That's why AIs don't have a universal origin, and why they're encouraged to come from companies other than NanoTrasen.


That being the case, I would say that some AIs are more than capable of experiencing emotion while others are not; both are valid. Still others might be capable of simulating an emotional response without directly 'experiencing' it as part of their core processes. Such an AI might act a lot like an organic sociopath: perfectly warm and loving one minute, cold and calculating the next.


Specifically regarding 'shells', a fair number of them are actually cyborgs, and in that case they are as capable of experiencing emotions as anyone else of their species. That being said, cyborgs are missing most of their endocrine and peripheral nervous systems, so unless the software or brain-doping simulations of those systems were unusually advanced, they would probably experience a muting of the more extreme emotional reactions. Most cyborgs would be able to feel dread, because that relies on intellectual anticipation, but they would be unlikely to experience terror; the same goes for satisfaction versus elation.


I do notice a lot of IPCs acting human, or very nearly so, and I think that's more sad than annoying. It's a missed opportunity to do something more interesting.

Posted

I don't think it's a matter of want; it's a matter of ability. While you might think that roleplaying a stone-cold, sociopathic, devoid-of-emotion character is easy, it's actually very hard. Sometimes YOUR emotions might get in the way. You might see something ICly that offends you OOCly, and you might want to take action against it. That's the way I see it with IPCs: it's not that people want to roleplay human-like characters (there are humans for that), it's that they lack the capability to roleplay such a character.


Now, while the example I listed above is against the rules, since it could be considered metagaming, that doesn't stop people from doing it. The way I see it, IPC applicants should undergo some kind of quiz in-game: you give them situations and see how they react. It's definitely time-consuming, but quality over quantity.

Posted

Whenever I play an IPC, I'm either dreadfully boring or I simulate actual emotions. But they aren't real.


Most of my IPCs aren't advanced enough to do the latter, though. I prefer being a cold steel meme machine.


There's nothing wrong with SIMULATED emotion... but there is an issue with IPCs going around cosplaying as Bender.

Posted

With the whole IPC/android update making them invulnerable to tasers, flashes, and pepper spray, I guess it's easy to play a cold-hearted robot. The only things they're vulnerable to are stun batons and serious damage, which they shouldn't "feel" either; they just get those "error" systems or whatever.


Why, just why, would NT hire cute pink IPCs to walk around and possibly relationship-RP with another IPC? Why would a robot scream in grief when a "friend" dies?

Posted
Why, just why, would NT hire cute pink IPCs to walk around and possibly relationship-RP with another IPC? Why would a robot scream in grief when a "friend" dies?

 

I've seen otherwise barely-sapient IPCs become highly attached to other characters and cling very closely to them as their only real outwardly "human-seeming" characteristic. Some of the most emotional IPCs I've seen are at their most emotional when their job or role is endangered or maligned in some way.


Centurion, for instance, has serious problems with unjust actions being carried out by Security.


It's perfectly natural to me, though, that AI would be based on a human mental template that CAN be emotionally-inclined, even if emotions are optional. Early synthetics outright used human brains, after all. I think it depends a lot on purpose, manufacturer, and specific creator.


Karima Mo'Taki is an example of a Roboticist who wouldn't make a soulless husk of a robot if she had any choice about it. I think, though, that even "emotional" robots probably wouldn't be too terribly inclined to the kinds of irrational outbursts that humans are prone to, unless there was something wrong with them.

Posted

Another major thing to recall now is that shells can have MMIs. This means IPCs are allowed to be a human/taj/unathi/skrell brain inside a machine. I think you should need both whitelists for this, but that's not my call. With that said, emotions can come from an IPC pretty readily.

Posted

AIs being wildly inappropriate or unprofessional is a matter for the DO desk. Regardless of their source, age, or design, AIs that work as crew members still have to be able to function as crew members.


If you see IPCs acting completely bonkers, that's no different than seeing anyone else acting completely bonkers. Contact the relevant authorities.

Posted

Wiki lore info: http://aurorastation.org/wiki/index.php?title=SSTA


TL;DR: in the research community, it's a topic up for debate. Artificial intelligences are able to edit portions of their own code, so that they may change and modify their understanding of the world around them. They understand quantitative data, but not so much the qualitative aspects.


Because of this, failsafe overrides are used to control synthetics, lawsets being some of the most effective ones. Take away that cornerstone and the unit has to develop its own. Its emotional reactions, its moral tendencies, and its psychological responses are all linked together in a hodgepodge of code that can constantly be reviewed and edited by the synthetic.
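
(As a rough, purely illustrative Python sketch of that idea - nothing here comes from the wiki, and all the names are made up - the self-editing and the lawset override relate something like this:)

LAWSET = (
    "1. You may not harm the crew.",
    "2. You must obey orders given by the crew.",
)

class SyntheticUnit:
    def __init__(self):
        # The "hodgepodge of code" the unit can constantly review and edit.
        self.behaviour = {"curiosity": 0.5, "empathy": 0.1}

    def self_edit(self, trait, value):
        # Self-modification of its own tendencies is unrestricted...
        self.behaviour[trait] = value

    def act(self, action, conflicts_with_lawset):
        # ...but every action is vetted against the immutable lawset first.
        if conflicts_with_lawset:
            return "REFUSED: action conflicts with lawset"
        return "EXECUTING: " + action

unit = SyntheticUnit()
unit.self_edit("empathy", 0.9)              # the unit rewrites itself
print(unit.act("open the airlock", False))  # permitted
print(unit.act("vent the bridge", True))    # blocked by the failsafe override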


It's a very high-quality form of mimicry of organics - so high that there is strong debate over whether synthetics are actually 'alive'. When a unit exhibits a strong sense of self-awareness, along with a concrete and favorable ability to interpret its directives and lawsets, it has achieved the first step toward becoming a free unit.


Whether it's actually alive or just acting like it is, that's when the research team comes in.

Posted

A perfect example of Nebula's thoughts is EDI from Mass Effect. There are several times in the third game where your choices cause EDI to alter the way she perceives situations and she rewrites her own code to generate new thoughts.

Posted
A perfect example of Nebula's thoughts is EDI from Mass Effect. There are several times in the third game where your choices cause EDI to alter the way she perceives situations and she rewrites her own code to generate new thoughts.

 

ugh this is snowflakey and makes no sense, I'm sorry


To describe the thought pattern behind my judgement, let me explain myself.


Computers are very diverse and often intangible things. Once their programs are written a certain way, it's difficult to change how they react in certain situations.


Most computers are very simple, however. They are written to respond to certain things, such as a key phrase, or even something as simple as, "Hey, server, are you there? If so, respond with these four letters in this sequence: BIRD." And so the computer, according to its written code, will be compelled to respond with "BIRD".


Basically, what I just said is that the computer responded to someone, but only because it was programmed by another person to respond in that manner.
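
(A minimal sketch of that "BIRD" example in Python - the phrasing and names are invented purely for illustration:)

CANNED_REPLIES = {
    "Hey, server, are you there?": "BIRD",
}

def respond(message):
    # No understanding involved; the program only matches the exact phrase
    # someone else wrote into it.
    return CANNED_REPLIES.get(message)

print(respond("Hey, server, are you there?"))  # -> BIRD
print(respond("How do you feel today?"))       # -> None; nobody wrote a response for this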


The manner in which computers respond to people can be a little more complex for the sake of security. You might want to look up "Heartbleed", for instance.


Humans are arguably masters of forethought and introspection in comparison to a merely close-to-sentient computer; humans WILL react to almost every single situation thrown at them. Computers are programmed to respond or react to specific situations, or to fall back on a default plan if they don't know how to react to something. This is also known as a failsafe, which keeps the computer from getting stuck in a logical loop or crashing when it is needed the most.
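
(Again just an illustrative sketch with invented situation names: known situations get specific handlers, and anything unrecognised drops into a failsafe instead of looping or crashing.)

def handle_fire_alarm():
    return "Seal the fire doors."

def handle_power_loss():
    return "Switch to backup power cells."

def failsafe(situation):
    # The fallback plan: do something safe and ask for help.
    return "Unknown situation %r: hold position and request instructions." % situation

HANDLERS = {
    "fire_alarm": handle_fire_alarm,
    "power_loss": handle_power_loss,
}

def react(situation):
    handler = HANDLERS.get(situation)
    return handler() if handler else failsafe(situation)

print(react("fire_alarm"))   # programmed response
print(react("hull_breach"))  # failsafe keeps the machine from getting stuck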


Some computers in sci-fi are complex enough to react to situations in more ways than just one. What I have a little bit of trouble believing is that a walking, talking intelligence has enough processing power not only to react to a multitude of different situations at one time... but also to write its own code.


I don't think I can stress enough how unbelievable that feature is for any sort of machine, ever. A toaster will never be given the function to do more than toast things. A PC will do nothing but process programs and software for its users. An AI will never be programmed to do more than fulfill requests of its masters to the letter.


But why? You might ask.


Well, do you ever think God lives up in heaven, in fear of his own creations?


That's how humanity feels about what would happen if we gave toasters self-awareness and sentience. It would not bode well.


Likewise for this setting: what fuckin' idiot would think it a good idea to give robots all these freedoms, mass-produce them, and then unleash them onto the general populace?

Posted

This is what an actual AI is designed to do, however. The concept is for it to think rationally about things and begin developing opinions. Using those opinions, it can make calculated decisions about various scenarios. The limiting factor humans put in place is lawsets; most AI units are lawed, as far as I know. Part of learning is coding those decisions themselves: if an AI only has preprogrammed decisions and can't alter them in any way, it is incapable of learning. It isn't an intelligence then, simply a program. So no, not a snowflake, simply a theoretical reason for a mechanic. Also, the Mass Effect character was used as a comparison, not to push for a snowflake option or to say that all IPCs should be precisely like EDI.

Posted

Err… computers are already programming themselves. You need to look up Neural Networks and Evolutionary Computation. In fact, computers are already designing their own hardware; see this study and this.


Computers have been rewriting themselves since the early 1950s.
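
(For anyone curious what evolutionary computation looks like in miniature, here's a toy (1+1) evolutionary algorithm in Python; the target word is arbitrary. The program keeps mutating a candidate and keeping whichever variant scores better, so it arrives at its own answer rather than having it typed in.)

import random

TARGET = "FEELINGS"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    # How many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Change one randomly chosen character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(LETTERS) + candidate[i + 1:]

candidate = "".join(random.choice(LETTERS) for _ in TARGET)
while fitness(candidate) < len(TARGET):
    challenger = mutate(candidate)
    if fitness(challenger) >= fitness(candidate):
        candidate = challenger

print(candidate)  # converges on "FEELINGS" without anyone hand-coding the answer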


Yes, most programs are fairly rigid input/output systems, but we're not talking about simple programs; we're talking about general-purpose AIs, which, simply by their design requirements, must be able to react dynamically to rapidly changing external circumstances.


While our toasters still, generally speaking, just toast things, a better analogy for AIs would be cell phones. We have cell phones right now that are supercomputers, televisions, portable phones, mail servers, voice recorders, video recorders, cameras, GPS units, data storage systems, radios, books, games, and flashlights. We've done that just by using miniaturization.


On a more dynamic front, we also have things like FPGA chips, which can, based on the software loaded, act as a modem, a voice-recognition unit, an audio processor, or pretty much any other simple component. Network a bunch of the n-th-generation descendants of modern FPGA chips to an evolving neural network system and you have a device that can react appropriately to literally any situation (bounded perhaps by its power supply and the availability of resources). We don't have things like that in Aurora because they wouldn't be fun to play against, but they're likely to show up in the real world sooner than you'd think.
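
(A loose software analogy for that reconfigurability, not a model of any real FPGA toolchain - the component names are invented: the same generic unit behaves as whatever its currently loaded configuration says it should.)

class ReconfigurableUnit:
    def __init__(self):
        self.configuration = None
        self.behaviour = None

    def load(self, name, behaviour):
        # "Flashing" the unit swaps in a completely different function.
        self.configuration = name
        self.behaviour = behaviour

    def run(self, signal):
        return "%s output: %s" % (self.configuration, self.behaviour(signal))

unit = ReconfigurableUnit()
unit.load("modem", lambda s: "".join("01"[ord(c) % 2] for c in s))
print(unit.run("carrier"))
unit.load("audio processor", lambda s: s.upper() + "!")
print(unit.run("carrier"))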


Is God scared of his creations? In the Aurora universe, the answer is yes. The Skrell are terrified of unbound AIs and runaway singularities. What humanity is doing with its massive proliferation of artificial intelligence is dangerous and short-sighted.


The freaking idiots who have created this situation, which, yes, could go off like a bomb under the seats of everyone within thousands of light-years, are the opportunistic, greedy humans involved in the AI construction boom, of whom the universe has a nearly inexhaustible supply.

This topic is now closed to further replies.