
Dalekfodder

Members • 37 posts



  1. Quote:

     "Well, if the AI is experimental, and/or from another corporation, it may not necessarily be 100% trustworthy. It's bad to blindly trust any single source if you can help it. It is veeeery complex. We haven't even written a computer that can understand human orders, let alone execute them with a degree of common sense. AI is very arcane tech. No, an AI is less trustworthy than a head of staff. Far, FAR less. The station AI is likely an unproven machine, maybe purchased from a Hephaestus research program yesterday, or maybe cooked up in a lab last week. It's unlikely to be more than a couple of years old. Any head of staff is guaranteed to be at least 30, and to have spent at least 5 years of their life working for NT. That's 5 years of a clean record - or at least clean enough to qualify for promotion above many other candidates. They've proven over a significant period of time that they're beneficial to the company, which is reflected in the elevated responsibility they're given. Nobody, except maybe the man who built it, would trust a new machine over a colleague they've known for years."

     You are missing the point: it's the 2400s, the age of robots and AIs. The prototypes were fucked up back in the 2200s. I think 200 years is more than enough to perfect such a technology, not to mention that most of our characters were born with machines making their lives easier. A man has his emotions and ambitions; AIs don't have those. If we were to accept that AIs are shit - which I'm still not convinced of, for sure - then why would NT place them in a high-tech, very important research station? It's not even written in the lore.
  2. Quote:

     "Because even a computer in such a high position of power needs training and specialised programming to understand the nuances. Little things like 'Don't open secure areas for assistants', or 'This crewmember has a history of drug abuse and shouldn't be allowed near heavy machinery'. The AI doesn't serve the crew equally, it serves the chain of command. That is not a simple purpose; it takes a very complex intelligence to do it. There's also one important point: NT sucks at making AIs. So station AIs are most likely either bought from some other corporation, or made by one of many competing scientists within NT, each experimenting to try to find the best AI, and thus trying lots of approaches. Remember the Exodus is a research station. We have experimental weapons in security, an experimental engine in engineering, and a dedicated science department. The AI, too, is likely an experiment being live-tested, and what you see on the Exodus may not be reflective of other stations in the galaxy."

     That doesn't answer my point when it comes to lying. Perhaps you could be right on that matter - not that I agree - but it's pretty much off topic at this point. So what if it doesn't serve the crew equally? Of course it'll alert security once there's a breach of some regulation, and the point of this thread is to ensure that it can do so to its full extent by allowing security officers to take its word over the comms instead of having to deal with a warrant. Serving the crew is in fact not very complex: you follow orders and alert them when there's a problem; any other complexities (mostly invalid, if you ask me) are created by the AI player. That phrase you sent me does not imply that NT particularly sucks at creating AIs; merely falling behind in the research race does not mean they suck at it.

     Experimental or not, if they place trust in heads (some of them non-implanted, mind you) when it comes to lying, then the AI should also be trusted for the same reasons. To be frank, the AI should be even more trustworthy, as it's a machine that is supposed to lack the emotions humans have.
  3. Quote:

     "Why would AIs be mass-produced? There's a bit of a logical disconnect here. They are software; you don't mass-produce software. You just make a copy as needed. You can spend a century writing the most singularly perfect software and then create an infinite number of copies with zero effort. I doubt AIs would be mass-produced. They would be trained. Programmed to learn, and gradually developed like children, taught how to do their duties best, and various nuances about the humans they serve. A well-trained and mature AI could be worth billions of credits. The strict rules are the AI's four laws. There isn't one of them that says 'don't lie'. The laws are kinda open to interpretation and debate, but there's plenty of room within them to lie in certain circumstances. An AI openly breaking its laws is an admin issue, but an AI bending them and interpreting them unconventionally is an IC issue imo. If the AI becomes too much of a hindrance it can be re-lawed, destroyed or carded. Actually, AI offers a lot of RP flexibility. Your laws are technically equal in importance, but the interpretation of whether any action equally follows them all is quite subjective. It's common to see AIs that favour certain laws, and that can rules-lawyer to be obstructionist or unhelpful."

     You already got them: Tishinastalker, page 1, Primary Administrator. I will put EVERYTHING aside. Explain to me why NT would create an ALL-SEEING OVERSEER WITH FULL ACCESS TO EVERYTHIIIIINNGGG and then make it "emotional", or capable of some sort of autonomy that allows it to lie to the crewmembers it was /made/ to serve? I won't even mention that you claimed it's okay to tiptoe around AI laws. Lastly, I don't see how being a slave doesn't offer RP flexibility; you have to serve all your laws equally, as is already stated in the AI wiki.

     I will repeat once more: the AI is a machine that was created by a corporation with the purpose of serving the crew and making sure everything runs smoothly. It's not a "let's create a race and live peacefully with it!" situation.
  4. Quote:

     "opinion discarded"

     "Thanks. That was just a fancy way of saying, and what I meant is, that the in-game actual player-controlled AIs aren't exactly these 100% perfect AIs. I wouldn't trust, like, 75% of them."

     The reason I discarded your opinion is clear as day. 1) There's no perfect AI; there's just the mass-produced AI, and those folks who claim "I habe personalit module xd". 2) Even if you are given the ability to alter the way you speak as an AI, that does not mean there aren't strict rules you cannot breach. If anyone claims otherwise and plays an untrustworthy, lying AI, then it should be an administrative issue, not an IC issue. By your logic, I should be able to create an AI that ignores its laws because of a "personality module". 3) If you don't want to play a SLAVE, then don't play AI; if you are willing to play AI, then you are also expected to be a slave that cannot go against the crew. Now, back on topic: I have nothing else to say, really. I've explained my points multiple times and you have explained yours multiple times; I'd like an upper administrator's thoughts on this matter.
  5. Quote:

     "This is not a potential solution; this is how it is at the moment."

     You guys still don't get the point... Whatever. I've said what I had to say, and now I'm going to let upper management consider this matter.
  6. That means we're back to square one, and it means that the AI is still useless on that end. I don't understand why you guys wish to limit the AI in this manner. As I said, it's an all-seeing eye that can witness everything going on on the station. You may hate them, but you have no reason to distrust them: even if they hate you back, they cannot break your trust, and as proven, the last incident with AIs happened 200 years ago. I'd say 200 years is enough to build trust in AI. I mean, it's 400 years later, the robotic age and all.
  7. Quote:

     "But the AI is tasked to oversee the crew. So of course it's going to be more efficient to allow security officers to arrest based on the AI's word. Where's the logic in 'let's make a machine that can see everything, but put limitations in front of it so it can't be at 100% efficiency'?"

     "Wait. Are you saying based on evidence that the AI provides, or just if the AI tells someone to arrest someone? Because if the AI can prove that there is a threat, security officers can act on it. However, the AI cannot and will not be allowed to command the crew around. Many species employed by NanoTrasen wouldn't want to give an AI that amount of power to order people around. I honestly think it is fine how it is now."

     Let me explain it to you with a scene: I'm officer X at the bar, and the AI informs security that Y is breaking into science, and Y stops when he hears this. I should be allowed to make an arrest (up to me, of course), because the AI as a witness is 100% reliable and should bypass the necessity of a warrant.
  8. But the AI is tasked to oversee the crew. So of course it's going to be more efficient to allow security officers to arrest based on the AI's word. Where's the logic in "let's make a machine that can see everything, but put limitations in front of it so it can't be at 100% efficiency"?
  9. Quote:

     "See the Skrell lore page. AIs have malfunctioned before. Just because there's a non-canon round type where that also happens doesn't negate the fact that machines break down. It should honestly be up to the person making the arrest whether an AI's testimony, with no further evidence, would be enough to actually convict someone."

     Oh yes, 4 centuries are nothing. What you are telling me right now is "a machine broke down 400 years ago, therefore they are unreliable". Unless it's a Chinese-brand AI, I don't think it's very likely to break under canon conditions.
  10. As I didn't mention your name (unlike you did mine), I did not blame you for this incident; it's a fault in the system, not yours. I was only saying where I learned this.
  11. Quote:

     "The stuff about an AI's role wasn't really directed at anyone, just my feelings about the AI's place. Building on that with your specific example, though, I think the arrest may still be questionable. Obviously you don't always get to have an awesome HoS, or things are just too busy, but in an ideal situation I believe the best course would be: 1) alert the HoS that you are on your way to detain, 2) once detained, ask for permission to arrest (this is something I haven't seen; you do not need a warrant to declare you are detaining someone, afaik), 3) proceed as instructed. I feel like a lot of security officers are too independent of their chain of command (not an accusatory statement, just a general observation). That's moderately off topic, though. I agree that the AI's observations should be trusted, but I don't think an AI should ever attempt to interpret a situation. An AI should not announce to security 'Urist McCriminal is making a bomb', but rather 'Urist McCriminal is mixing X, Y, Z reagents in such and such location'. This is more of an RP opinion, but it still solves the issue of the AI's trustworthiness. We use computers to gather and sort the data on the weather, but the actual interpretation of the results is best left to people."

     Dude, what you are saying is completely unrelated right now. What I'm saying is: if the AI tells us that SOMEONE has committed a crime, it should be considered valid grounds to make an arrest, and it's still up to the officers whether to trust it. So, we made a robot that cannot lie, can see everything, and is enslaved to us /without/ any questions? I know what to do! Let's hinder it BECAUSE it's too easy for our security, and add the need for a piece of paper.

     But as I said, it's not an order; it's more of a witness situation, and it should be up to the security officer whether to arrest or not. Personally, since we consider malf AI rounds non-canon, there's literally no reason not to trust it, as it has a 100% track record when it comes to not malfunctioning or lying.
  12. Quote:

     "I feel like that issue arises more from a lack of a command chain. The way I see it, the AI says 'Oi, this thing went down' to the HoS, then the HoS orders the arrest. The AI is not a member of security, nor does it hold actual rank over anyone on the station. As such, I feel like an AI telling people what to do is faulty; instead, by design, they ought to be more about telling what has happened."

     I did not claim that the AI told me to arrest. The AI told me that someone had committed a crime and ran away to place X, so I went to place X and decided to arrest the person, as an AI IS A ROBOT THAT IS LAWED (WHICH IT CANNOT DISOBEY, WITH A 100% SUCCESS RATE) to protect the station and CANNOT EVEN LIE.
  13. The problem is that I was an officer and I arrested someone based on the AI's word, and then the warden was like "no, release him, it's invalid", whatever. Distrusting the AI is valid only to an extent; if you distrust an AI, you should have a very good reason.
  14. A specific warden disregarded my arrest and told me to release the person because there were no warrants, even after the AI had pointed me to the perp. AFTERWARDS I spoke to administration and got this response.