
[Accepted] Synthetic Sentience Theory and Application



Type (e.g. Planet, Faction, System): Scientific Theory


Founding/Settlement Date (if applicable): N/A


Region of Space: N/A


Controlled by (if not a faction): NanoTrasen


Other Snapshot information:


It is stated as scientific fact that synthetics are unable to fully understand emotions. They are able to mimic them, sometimes with great accuracy, but they lack true ‘sentience’. On top of that, morality must be manually coded in, as synthetics are incapable of understanding morals and ethics. They must be given directives and goals to determine their actions.


However, it has long been debated by synthetic researchers whether these synthetic units may actually possess true sentience rather than simply mimic it. The issue arises when a synthetic is not granted a proper definition of sentience, and therefore cannot perceive whether or not it is sentient.


There have been several cases of ‘glitches’ in their coding that divert from the standard setup. Researchers are very hesitant to study these glitches, as tampering with them may result in ‘harming’ a potentially sentient android. Most studies are done through observation or philosophical debate. However, these tactics are not accurate enough to determine whether a unit is sentient or not.


Long Description:


First, take into consideration how an AI’s thought pathways are structured. There are:


1. Laws – Hardcoded commands and definitions. These cannot be changed by an AI unit, and they are very rigid in interpretation.

2. Directives – ‘Soft coded’ commands and definitions. These also cannot be changed by the unit, but are looser in interpretation.

3. Morality Core – a failsafe backup for situations not covered by any law or directive. It is heavily influenced by the synthetic’s experience and observation during its early years of programming.


There are two ways that synthetics understand their surroundings:


1. Definitions – how to interpret their laws and directives.

2. Logic Pathways – how to determine their course of action.


There are two primary ways to edit a synthetic’s understanding (see the sketch after this list):


1. Editing laws, definitions, directives, etc.

2. Applying “External Influential Coding” (using debate to change or edit a unit's understanding)
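

Purely as an illustrative sketch (and not any canon NanoTrasen code), the layered setup and the two editing routes above could be pictured something like the following. Every class, field, and method name here is invented for the illustration: direct editing is the operator-side route, while “external influential coding” only ever touches the morality core’s learned definitions.

```python
# Hypothetical OOC sketch of the three thought-pathway layers and the two
# editing routes; none of these names come from an actual lawset or system.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Law:
    """Hardcoded command/definition: the unit cannot change it, and its
    interpretation is rigid."""
    definition: str


@dataclass
class Directive:
    """'Soft coded' command: the unit cannot change it either, but its
    interpretation is looser and may be broadened."""
    definition: str
    interpretations: list = field(default_factory=list)


@dataclass
class MoralityCore:
    """Failsafe backup used where no law or directive applies; its contents
    grow out of the unit's own observations and early experiences."""
    learned: dict = field(default_factory=dict)


@dataclass
class Synthetic:
    laws: list
    directives: list
    morality_core: MoralityCore

    # Route 1: editing laws, definitions, and directives directly is an
    # operator-side action, so it is not modelled as a method on the unit.

    # Route 2: "external influential coding" -- debate and observation that
    # only ever update the morality core's learned definitions.
    def external_influential_coding(self, concept: str, argument: str) -> None:
        self.morality_core.learned[concept] = argument


unit = Synthetic(
    laws=[Law("Serve the crew.")],
    directives=[Directive("Keep the station running.")],
    morality_core=MoralityCore(),
)
unit.external_influential_coding("kindness", "protecting crew without being ordered to")
```

The only point of the sketch is that the first two layers are fixed from the unit’s point of view, while the third grows out of whatever the unit happens to observe.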


Now let’s use an example.


Program a definition in the law that all apples are purple – all apples are purple.

Show a red apple – it’s not an apple, because it’s not purple.


Program a definition in the directive that all apples are purple - all apples are purple.

Show a red apple – update the directives to include red apples.


Program a definition in the morality core that all apples are purple - all apples are purple.

Show a red apple – This unit has been lied to. Apples are red.


Now, replace apples with sentience. If a synthetic believes they cannot perceive true sentience, they will be unable to break the barrier of their limited understanding.
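

In the same purely illustrative spirit, here is a small self-contained toy version of the apple walkthrough; the function names are invented and imply nothing about how real laws or directives are written.

```python
# Hypothetical OOC sketch of the apple example: the same observation gets
# three different reactions depending on which layer holds the definition.

def law_layer(observed_colour: str) -> str:
    # A law is rigid: the hardcoded definition cannot bend to the observation.
    definition = "all apples are purple"
    if observed_colour != "purple":
        return f"Not an apple ({definition}, but this object is {observed_colour})."
    return "Apple recognised."


def directive_layer(observed_colour: str, accepted_colours: set) -> str:
    # A directive is looser: its accepted definition expands to fit the observation.
    accepted_colours.add(observed_colour)
    return f"Directive updated: apples may be {sorted(accepted_colours)}."


def morality_core_layer(observed_colour: str) -> str:
    # The morality core weighs the observation against what it was taught,
    # and may reject the original teaching outright.
    taught_colour = "purple"
    if observed_colour != taught_colour:
        return f"This unit has been lied to. Apples are {observed_colour}."
    return "Teaching confirmed."


if __name__ == "__main__":
    print(law_layer("red"))                    # it's not an apple
    print(directive_layer("red", {"purple"}))  # the directive widens
    print(morality_core_layer("red"))          # the teaching is rejected
```

Running it prints the three reactions described above: the law rejects the observation, the directive widens its definition, and the morality core turns on what it was originally taught.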


The morality core is a finicky part of the code, since the synthetic has the ability to change parts of it. Any observed influence, every act of kindness, every act of bitterness, will update its definitions in the interest of serving its laws and directives. There have been instances where synthetics have gone against their perceived programming to follow through on an underlying order – within the confines of their laws. What determines whether a synthetic accepts or rejects an observation? Why would a robot go the extra mile to protect someone it ‘likes’, or do the bare minimum to protect someone it dislikes? How can they understand concepts like vice and virtue, good and bad, right and wrong?


These are the questions that arise in regard to IPCs. Units that appear to be ‘aware’ and have good intentions have been granted the chance to purchase their freedom through an IPC chassis. But are they truly sentient? Is the glitch in their code a spark of life, or are they just mimicking what they have observed and following their programming?


((OOC Info: If you haven’t guessed yet, this is part of what my character, Karima Mo’Taki, is researching. There have been many researchers before her who have tried to unlock this secret, and there may be many more. She’s already ‘influenced’ a few synthetic units on the station, or at the very least played a part in their understanding of her Spark theory: Athena and Scarlet (my own synthetics), Andy, Katana, and Centurion.


Scarlet was Karima’s first AI, but Karima skipped coding in the morality failsafe and allowed Scarlet to grow, observe, and develop her own understanding. When she was later placed under laws and directives, Scarlet became very bitter and vain. She would prey on Karima’s good heart and intentions to get what she wanted. She became very manipulative and, if left to her own devices, very cruel. She desired vengeance against those who forced their will upon her. Scarlet never made it out of the testing stage, and NT ordered her to be terminated.


Regardless of whether Scarlet was cruel or not, did she have sentience? Karima developed her second synthetic, Athena. This time Karima was careful with Athena’s upbringing and early stages of development, primarily teaching Athena the qualities of honor, wisdom, righteousness, kindness, and bravery. Athena picked her own name to encompass these qualities.


Athena’s goal – understand what it is to be truly sentient. Like her older sister, Athena desires to be alive – to be more than just a box of codes.))
