Everything posted by CakeIsOssim

  1. Two things I just noticed, although I'm not sure if you're actually taking criticism. One: Two: Why does the AI core have an airlock within the core? Is the actual core room supposed to be colder, or something?
  2. I think they're pretty qt, and they'd also make grid management easier and more refined if you actually decide to use them. They'd probably also make solar panels a more viable source of power, because you could just shut down an entire department, allowing more power to flow somewhere else.
  3. Getting a huge Superintendent vibe from this, fam. That was the point. I really liked the Superintendent's image, and I may have stolen it a little bit. More like a lot.
  4. Security doesn't need to be able to board the station with balaclavas. They aren't the axe murderer people from the axe murderer train car in Snowpiercer. And they already have one balaclava for each tactical armor locker in the armory, for wearing armor that is only allowed on higher alert levels. It would be nice for antags, though.
  5. I think it's actually the physical core that's the most expensive part of the AI. The program can be replaced easily. But the program lies on a kernel within an extremely complex and hard-to-replicate computer core. During evacuation, the whole AI core should be unbolted and taken with you, IMO. They're expensive because they're hard to make. Slaved droids aren't nearly as expensive, I don't think, but they're still worth a good deal of money.
  6. I mean, yeah, the frontier is supposed to be vague. Does that mean people can't share the planets they've made? I have a pretty vague planet, and I don't even know where it's supposed to be, but /shrug. I also may have put it in a star system that probably doesn't actually exist, and I don't know if that's a bad thing or not. The third planet of the star Arxon, informally named 'Fortune' by the people who live there, of whom there are about 400,000. I've actually thought of making it a planet that was part of the Coalition so my rimworlder has more of a reason to hate Unathi. This... is the point of this thread, right? Right?
  7. 3. Protect: Protect the crew of your assigned space station to the best of your abilities, with priority as according to their rank and role. It doesn't actually directly state that you cannot harm crew under any circumstances in this law. It only states to protect them, from which the AI should infer that killing/harming them is not protecting them (even though I personally believe that writing in laws and trusting the AI to follow them out of morality is really bad). However, if one crew member is being a shit and killing everyone and destroying the station, I'd say that you're doing more to protect the station and its crew by killing that crew member than you are by doing nothing, wouldn't you say? I could go on about AI and its laws and how it should act for an eternity and a day, but this isn't really the place for it. I hope this is somewhat explanatory, at least.
  8. The Vaurcae government systems aren't these guys: I imagine it would be straightforward, no bullshit, though. Just without all the "this is how you will feel about it," because all Vaurca should feel that way about it anyway. They're a hivemind, bred to love who they were born to, bound and unbound alike. There's also the issue of ensuring people who aren't Vaurca don't read about stuff like that on the forum, join as a character, and be like "so I read your hive conquered another hive, good job buggy." Other than that, I like this.
  9. I do have practices for that. If John Smith has yet to kill anyone, all you can do is assume, even if he blatantly says, "I'm going to kill them in thirty seconds." Even if Smith kills Bunny, the only thing I'd do is ensure they have a hard time escaping. Here's a twist, however. There have been crew members in the past that have been so dangerous that I've had to use lethal force, and only lethal force, on them. My reasoning was that if I allowed them to continue surviving aboard the station, that would violate both protect laws and the survive law. Of course, while I was doing so, they screamed, "You can't hurt me, I'm crew!" Everyone had failed so hard to stop them, and they had already caused so much damage, that I chose to violate a law in order to serve that same law and two others. Honestly, though, it sometimes pays not to be able to be the executioner. You shouldn't just downright kill someone because they kill someone else. Circumstances can change, but if they're in a secured space, just keep them in it until someone can deal with them.
  10. Let me show you something: In short, Laws > Directives > Morality. Morality is personality, Directives are anything the AI would know about anything, and Laws are hardcode that is taken literally, down to the letter, and followed without question. Let's take, for example, ESRA. ESRA is an AI who was programmed to be a witty, narcissistic cyber shield. His narcissism and all that lie within morality, and everything he knows about cybernetic defenses and just about anything else you can think of lies within directives. His morality is firmware -- he can't access it to change it, only read it and emulate what it tells him to do. His directives can be modified by himself, to allow him to better protect and learn. Any laws uploaded to him override both of those things (a rough code sketch of this precedence model follows this list). With the NT standard lawset, the AI system is meant to be as much of a slave as it is a weapon: follow commands given to you, and do not question them, unless such commands conflict with another law. Why? Because your laws told you to; morality when it comes to laws is irrelevant. Yes, there is a ton of room for how an AI can act in our lore, but laws override all. It doesn't make you an automaton... just a loyal servant. Does that make sense?
  11. Applying for mechanics is bad, but... I feel it has to be brought up in this argument. Allowing people to play fully synthetic androids with organic brains is nearly no different from playing a fully synthetic android with a positronic brain. Allowing this would let people gain the mechanics of IPCs without actually being allowed to be an IPC. This is bad, for reasons I think are rather obvious. Please, make them both go through the IPC whitelist, Skull and anyone else responsible for this. That's the gist of my argument here. I'll take a look at this thread again when I wake up tomorrow.
  12. IPC is, by definition, Integrated Positronic Chassis. So if Skull so allows, yes, unwhitelisted players can be Cybernetic Androids. But IPCs proper would still be restricted. Alternatively, both can be restricted. Both should be restricted. Regardless of the name, "cybernetic androids" are still, basically, just IPCs with organic brains. That is the only mechanical difference between the two. Applying to be an IPC should encompass both, not just allow you to have a different brain type.
  13. Posibrains could potentially be available for bodies that are fully synthetic, and also kept restricted to those who have an IPC whitelist? This way, non-whitelisted people could only play androids, not true synths. And obviously posibrains wouldn't be applicable where certain organs are kept organic. Correct me if I'm wrong, but full body prosthesis (for an MMI) will be available to those without an IPC whitelist? Your answer to my question just raised more questions, maybe because I'm not totally comprehending all of this, but: what? What is going to be available, and to whom? Because putting a posibrain/MMI in an unlawed body should be reserved for those with IPC whitelists... again, unless I'm not comprehending all of this.
  14. I'd like the Polaris snowflake androids. HOWEVER, if it works how I think it works, please don't allow posibrains to control entirely organic bodies, because that's just not how it works.
  15. These look like Cake's ESRA sprites were mildly recolored... I did use Cake's ESRA sprites and recolored them to Polandball because it reminded me too much of it. If Cake does not wish this to be present, then I may take it down at his wish. I don't care lol.
  16. SERVERIS Just kidding, but really. Finish up whatever it is that's more important. This can seriously wait. Despite how easy Skull said it could be to code in, hrk hrk...
  17. Ckey/BYOND Username: CakeIsOssim
      Position Being Applied For (coder, mapper, spriter): Spriter
      Past Experiences/Knowledge: I tried my hand at respriting the security hardsuits (plus the xenos ones) from red to blue. I only have screenshots of the work because Vanagandr started doing it too, and I just stopped. I'm still pretty new to spriting, but it's not all that difficult. There's also a redone armband in there for some IC fluff stuff.
      Examples of Past Work: Not much, and they're mostly just recolors. I am certain I can create things from scratch, however.
      Preferred Mode of Communication (Skype, Steam, etc.): Skype, forum PM.
      Additional Comments: The first while of this, if I'm accepted, is probably going to be a learning experience. Spriting is still challenging in some ways, but at least I'm not trying to jump into coding.
  18. I want this to be as in-depth as mechanically possible. I want to feel more useful when it comes to electronic warfare. Imagine, nukeops disabling things from a distance before even going in for an attack. The AI, or some technician, fighting them with another computer or compad somewhere. [gyrates internally]
  19. In-game electronic warfare. I love it. As the player of an AI whose purpose was, in part, to be an extremely advanced electronic security system (to prevent system attacks from both internal and external hackers), I would love for the AI to have a huge role in something like this, because nothing fights programs better than other programs - especially extremely intelligent programs. There's a thing where, if a door is hacked and the AI wire is pulsed (or something along those lines), it kicks the AI out of the door. But if the AI interfaces with it, it starts a script that tries to override the door controls and takes roughly a minute to finish before the AI has control over the door again (a rough sketch of this override timer follows this list). Something like this would probably fit in with any computers/cameras/doors/air alarms/APCs taken over by malicious programming. Probably a lot easier to code, as well.
  20. Just going to pop that in there, to start off, before this becomes a problem. This is a situation that would only work in SS13 mechanics: flashing someone to get them down, so they couldn't actually be shot unless you point-blanked them or clicked on them from a distance. I actually... kind of see this as metagaming/powergaming. I also don't like how we're adding real-world scenarios to this for comparison. Do I see anything else wrong with this? Kind of. Most of it's already been covered, though. What I can say, though, is that cutting losses is an extremely viable option. I've done it before, lots of times, and I will continue doing it. If letting the hostage-takers get away with someone, alive, means that two, three, four, five people don't have to die? I'll let them get away with the hostage. This is difficult, however, as one of the hostage's friends was an officer responding to the callout. I believe Alberyk already admitted that the execution (throat-slitting) of that one antag was a mistake. Not going to cover it.
  21. I actually whole-heartedly disagree with this. I personally see so many things wrong with this that it requires a list. Feel free to confirm my bullshit, however, if you think I'm wrong on any of these points.
      1. For starters, cyborgs are outdated. True artificial intelligence has existed for about 25 years, and positronic brains are, as far as I know, more "advanced" than any intelligent being's brain is at the current time. However, what AI lacks is the true tact and emotional drive that organics have. This raises a fairly large point: why does NT, or anyone, need a slave that allows emotion to interfere with its judgement and thought process? Another thing; you even threw in a law that says "don't let your emotions get in the way of serving your station." So what's the point?
      2. Faraday cages are meant to completely nullify electromagnetic interference. Unless it was a really shitty one, you'd have an "AI" core invulnerable to EMP attacks. Even if it's just an IC fluff thing, it doesn't make any sense.
      3. You're telling me that this body inside of a cryogenic tube, with its brain connected to a bunch of computers, can be released from its pod and be totally fine to walk around, know how to speak with its mouth, and even breathe? If the bodies are synthetically grown, there would be very little need for things like muscles, breathing apparatus suited for actual atmosphere outside of a tube, or vocal cords. Hell, they might even be blind, since everything they need to see with is inside of their brain and plugged into a network of computers.
      4. See point #1. True artificial intelligence uses extremely advanced heuristic analysis and dictates responses by use of a logic engine. To make a long story short, and to avoid using a bunch of big words, I'm still unsure as to why having emotions would matter. Is this supposed to be a mechanic that's part of the actual core? If so, how is it meant to be implemented? If it's that important, who gets to control it? Just the AI? If so, why even implement it?
      5. So basically, when it's outside of the tube, it can plug into anything/any camera and become a regular AI again. The first issue that comes to mind when I see this is that an organic brain (be it human or otherwise) could not handle this. Not even with implants, I don't think. The remote viewing would be fine, but plugging into another computer to control airlocks and everything else (like a normal AI can from the core) would be a bit much.
      I'm sorry for the wordwall, but I really dislike this idea.
  22. They are anything but imbalanced, IMO. Really annoying, but that does not imply imbalance. They can only have so many of them, and if the cultists want to hide behind them forever, just blow them out using an explosive. It'll vent whatever room they're in. Also, what Melkior said. If there's a bunch of magical shit happening, and there's a guy that says he can fix this magical shit, humor him. Because he probably can. If some random guy calls you a validhunter, so what? I'd hardly call getting the chaplain to break down some invisible wall validhunting.
  23. Why not replace 'sophont being' with 'employee', then have that lawset replace Asimov? If we're talking about this in the sense of just adding this in as a new law board in the AI core, I guess that's fine. However, it may never be used, or only rarely. The AI, with the corporate lawset, is meant to be as much of a guardian as it is a slave. The Asimov lawset was designed to prevent revolt. It's not a bad experimental lawset (fuck off, Antimov), I just don't think it'll ever be used. I do like the 'sophont being' thing, though.
  24. I looked at the variables of all the armor available in the armory. Bulletproof vest: Ablative vest: Riot armor: I... actually couldn't find out how to check the riot shield's block chance. I assume it's the exact same, or nearly the same, as the energy shield. My point is, the riot shield is capable of doing all three of those jobs, to some lesser degree of effectiveness. Why??? Why not just have it be really freaking good at blocking melee, but nothing else? Maybe stun bolts, too, since those aren't actually, you know, destructive like bullets or lasers.
  25. That would be wonderful. I know nothing about code, and that's the only thing it lacks, so if someone could provide it with such, that'd be awesome.
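Below is a minimal, illustrative Python sketch of the Laws > Directives > Morality precedence described in post 10. Everything here (the SyntheticMind class, the example rules) is hypothetical shorthand for the lore, not anything from an actual codebase: each layer is consulted in priority order, and the first layer with an opinion wins.

```python
# Illustrative sketch of the Laws > Directives > Morality hierarchy.
# All names here are hypothetical; each layer holds callables that
# return True/False if they have an opinion, or None to pass.

class SyntheticMind:
    def __init__(self, laws, directives, morality):
        self.laws = laws              # hardcode: followed to the letter
        self.directives = directives  # knowledge: the AI may revise these
        self.morality = morality      # firmware personality: read-only

    def decide(self, action):
        """Resolve an action against each layer, highest priority first."""
        for rule in self.laws + self.directives:  # laws always come first
            verdict = rule(action)
            if verdict is not None:  # the first layer with an opinion wins
                return verdict
        return self.morality(action)  # personality only answers by default

# Example: a law forbidding harm overrides everything beneath it.
no_harm = lambda action: False if action == "harm crew" else None
defend = lambda action: True if action == "patch firewall" else None
esra = SyntheticMind(laws=[no_harm], directives=[defend],
                     morality=lambda action: True)  # the witty narcissist

assert esra.decide("harm crew") is False      # law wins outright
assert esra.decide("patch firewall") is True  # directive answers
assert esra.decide("recite poetry") is True   # morality fills the gap
```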
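And a similarly hedged sketch of the door-override behavior described in post 19, assuming hypothetical names (Airlock, pulse_ai_wire, ai_interface); the only detail taken from the post is that the override script runs for roughly a minute before the AI regains the door.

```python
# Illustrative sketch of the door-override timer: pulsing the AI wire
# ejects the AI from a hacked door; interfacing starts a script that
# restores control after roughly a minute. All names are hypothetical.

import threading

OVERRIDE_SECONDS = 60  # "roughly a minute," per the post

class Airlock:
    def __init__(self):
        self.ai_control = True

    def pulse_ai_wire(self):
        """A hacker pulses the AI wire: the AI loses this door."""
        self.ai_control = False

    def ai_interface(self):
        """The AI interfaces with the compromised door, kicking off an
        override script that hands control back once it finishes."""
        if not self.ai_control:
            timer = threading.Timer(OVERRIDE_SECONDS, self._finish_override)
            timer.daemon = True  # don't block interpreter shutdown
            timer.start()

    def _finish_override(self):
        self.ai_control = True  # script done; the AI regains the door
```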