Did ST ever address any AI alignment mechanism to stop Data if they went rogue?
I mean really getting into the weeds on the issue - something like a watchdog check or a mixture-of-experts redundancy override.
Most of the humans and aliens aboard could also kill everyone there if they wanted to - e.g. Capt. Picard, or Geordi La Forge, or Beverly Crusher. Data is a bit of an exception in that, with no one else around, he could still fly the ship on his own. He could also just float around in the cold empty wastes of space for a few hundred years first, to avoid being caught by any authorities.
What stopped him wasn’t some “override” - and if one existed, he demonstrated multiple times that he could override the override - but the simple and plain fact that he did not want to. They knew him, they trusted him, they understood his motivations. And, e.g. if he ever did kill everyone, his career in Starfleet would most definitely be over. Plus, they were his friends. That is a powerful blocker :-).
I feel like everyone else would struggle to actually vent the ship or to manually kill everyone. Data could do both at the same time.
True, but he was built for a purpose: to emulate humans. So ask yourself: would a human ever do that? Uh oh… you’re right, they’re screwed, aack!? :-D
To be safe, robots need common sense. Turns out consciousness is a necessary component for that.
The problems that robots face are not unique to them.
Data went rogue in “Brothers” and hijacked the whole ship to go get an upgrade from Soong, which Lore ended up stealing. Near as I can tell, they took no precautions to prevent it from happening again, since Soong was surely actually dead this time.
What do you mean by “alignment mechanism”?
Data went rogue at least twice: when Soong activated his homing signal, and at the start of Insurrection. I don’t recall the specifics on the Data/Lore one with the Borg (“Descent”).
Honorable mention for the time he was possessed by an energy being pretending to be a ghost.
I’m writing with a similar type of AI, saw the meme, and thought I’d ask the ST nerds. I was a bit too young to take in TNG on a deeper level, and I’m not a big show watcher. I wouldn’t have had a clue about the alignment problem until the last couple of years anyway.
I have read most of Asimov’s robot stuff and a bunch of summary-level theory on the issues with AI. Humans are a basket case of contradictions, just under the surface and just outside of most people’s awareness. This is one of the biggest issues causing problems with LLMs, and it only gets worse the more integrated AI becomes with the analogue world.
I think there must be an external AI whose only job is spotting alignment problems, acting like a silent observer in a mixture of experts.
The ship’s computer in ST is likely an AI, although I never framed it that way in my head while watching, and I’m not sure how it was presented on the show. That would be the obvious management entity that could have supervised Data. The question in my mind that needs further exploring is how to make that connection and control work in a way that the controlling entity isn’t just a bigger alignment problem with minions.
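To make the silent-observer idea concrete, here is a toy sketch in Python. Every name in it (Agent, Watchdog, propose_action, review, the harm score) is invented for illustration; it’s a sketch of the concept, not a real alignment mechanism from the show or from any library.

```python
# Toy sketch of the "silent observer" watchdog idea:
# one model proposes actions, a separate model with no ability to act on its
# own reviews each one and can only approve or escalate to the crew.
# All names and the harm score below are invented for illustration.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    estimated_harm: float  # 0.0 = benign .. 1.0 = catastrophic, as judged by the watchdog


class Agent:
    """The acting system - the android actually flying the ship."""

    def propose_action(self, goal: str) -> Action:
        # Placeholder policy; in a real system this would be a learned model.
        return Action(description=f"take steps toward: {goal}", estimated_harm=0.1)


class Watchdog:
    """A separate observer that never acts on its own; it only approves or vetoes."""

    def __init__(self, harm_threshold: float = 0.5):
        self.harm_threshold = harm_threshold

    def review(self, action: Action) -> bool:
        # Approve only when the estimated harm stays below the threshold.
        return action.estimated_harm < self.harm_threshold


def run_step(agent: Agent, watchdog: Watchdog, goal: str) -> None:
    action = agent.propose_action(goal)
    if watchdog.review(action):
        print(f"executing: {action.description}")
    else:
        print(f"vetoed, escalating to the crew: {action.description}")


if __name__ == "__main__":
    run_step(Agent(), Watchdog(), "deliver supplies to the colony")
```

Of course, the sketch just relocates the problem: the watchdog’s harm estimate has to come from somewhere trustworthy, which is exactly the “bigger alignment problem with minions” worry above.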
Star Trek tries to avoid using fully sapient AI; in fact, it’s illegal for fully integrated ship components to be sapient on Starfleet ships. That’s why the ship’s computer can solve nearly any problem you give it, but it won’t give you anything unless you ask the correct question.
Interesting! So the ship is like present-day LLMs: static, not AGI.
Pretty much. The ship’s main computer doesn’t even have the ability to learn user preferences.