this post was submitted on 17 Jan 2026
35 points (90.7% liked)

Fuck AI


So I'm curious. We're all here because we at least hate the current state of AI: the hallucinated facts, its use to undress women and children, and all the fuckery that goes along with it.

I grew up watching Star Trek: The Next Generation, which takes place on a ship with a perfect AI. It never hallucinates information; it's always right. It has never been used to undress people against their will, though the Holodeck is kind of an extension of it and was used for exactly that on Deep Space Nine, when operated by a Ferengi (a capitalist alien race in a world where humans are communist). The Enterprise holodeck would never do that. The shipwide AI also does not traditionally carry on conversations. The one time it does, the human was the one hallucinating, sort of: the doctor was trapped in a pocket universe, people were disappearing, and at one point the AI told her she was the only crew member on the Enterprise. That made no sense, but it was still how things were, because in her pocket universe it was true.

So the question is... would you want a perfect AI that was incapable of lying or harbouring anything untrue? Basically you could ask it anything and it would give you the correct answer.

The one fault I can find with that fictional AI is when Data (the android), dressed as Sherlock Holmes, asked the computer to "create an enemy which rivals my intelligence." He meant Sherlock Holmes's intelligence, since that was whom he was cosplaying, but the computer made a self-aware, malicious AI that got out of the Holodeck and tried to destroy the ship... because it was told to do so. Other than that, though.

...I'm not trying to mislead anyone, so I'll drop the other shoe and address the obvious follow-up now. I've always felt that to get to that level of AI, we need to wade through the shit we're in now. So yeah, before you ask, that's kind of the point of the thought exercise. However, I also don't think we will get to Star Trek AI; I think we will get to Terminator AI, destroying the world rather than lifting people up. Maybe in the Star Trek universe, AI didn't really take off until people realised that war wasn't the answer, after WW3 and the Eugenics Wars, and so they were building AI to make things better, not worse. We are not in that timeline. I look at what is happening now, IRL, and at the timeline in the Terminator franchise, and it's clear to me which one is more realistic.

That said, I still wonder if anyone would want AI if it did not have any of the problems.

top 27 comments
[–] CarbonIceDragon@pawb.social 24 points 2 months ago

The thing with comparing sci-fi AI to modern LLMs is that virtually no science-fiction AI I know of actually acts the way LLMs do. They tend to be good at things modern AI is bad at, like logical reasoning or advanced math, but bad at things that AI can already do, like generating images that at least look like they could be human art, or writing text that appears emotionally charged. They also tend to be directly programmed, in ways a singular (usually genius, but still) individual can pick through and understand, rather than trained in a black-box manner that is very difficult for a human to reverse-engineer.

That isn't surprising; sci-fi writers aren't oracles, after all, and just having AI of some kind probably makes a story more plausible than assuming the technology never gets invented even far into the future. But in my view these kinds of sci-fi AI are basically a different, hypothetical technology aiming for the same end result. As such, I don't really expect even a very advanced iteration of what we have to look like Star Trek AI, any more than modern cars fly or run off miniature nuclear reactors the way sci-fi of decades ago imagined the cars of the future. I don't think it will look like Skynet either. I do think we might get some interesting science fiction in the coming decades exploring what a very advanced version of the technology we do have might end up like. It probably won't be terribly accurate either, but I'd bet it will be closer than works where the basis for extrapolating AI tech is "what if the calculator could talk and think."

[–] shirro@aussie.zone 10 points 2 months ago* (last edited 2 months ago)

Star Trek, at least before Roddenberry's vision was corrupted, was a fictional post-scarcity socialist utopia. The AI served the crew and society. It wasn't there to exploit and replace them on behalf of a handful of Ferengi billionaires.

Star Trek's ship AI is nothing like the huge financial scams, exploitation and concentration of power, wealth and political influence we are currently seeing. AI hate mostly isn't about machine learning and its applications. It's about the people behind it.

When we have warp drives, fusion, replicators, universal basic income and free universal health care we may have a different view.

[–] gustofwind@lemmy.world 9 points 2 months ago (1 children)

Starfleet has rigorous education and training.

AI does not replace their critical thinking.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 2 months ago (1 children)

True, but Federation worlds do use AI in education. We saw the Vulcans doing it in Star Trek (2009) (though, to be fair, a lot of older fans don't like the Abramsverse). I feel like we saw some of it in Discovery, and we will see it in Starfleet Academy. For education I can only think of Wesley Crusher, and then Nog on DS9 when he enlisted. We didn't see much of Nog's education (except the practical stuff), and I don't recall much with Wesley.

[–] gustofwind@lemmy.world 1 points 2 months ago

Again, it doesn't replace their critical thinking in any way, and they are all trained to do the science and math themselves despite the existence of AI.

So it doesn't matter.

[–] Kirk@startrek.website 7 points 2 months ago* (last edited 2 months ago) (1 children)

I thought I was in !startrek@startrek.website for a moment...

My take is that even if you consider LLMs to fall under the umbrella of "AI" (I don't), they appear to be a completely different technology than the Enterprise-D computer, which is more like highly advanced natural language processing.

would you want a perfect AI that was incapable of lying or harbouring anything untrue?

It's not really possible for an AI to know what's true with 100% accuracy, but I do think it's technically possible to invent an AI that is honest. It's important to remember that LLMs are actually "hallucinating" 100% of the time. The only reason they are ever correct is because the training data was correct.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 2 months ago (1 children)

They have said that the Enterprise computers contain the whole of human knowledge.

The text of English Wikipedia alone is something like 16 GB compressed, so you can carry most of that human knowledge on any smartphone. Most phones can simply access it online, but there are devices sold with Wikipedia EN downloaded, plus a bunch of survival material, running on a Raspberry Pi. I doubt the microSD card is bigger than 32 GB; it might be just 16 GB.
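The back-of-envelope math, with every size here a rough assumption rather than a measured figure:

```python
# Rough storage arithmetic for an offline Wikipedia mirror.
# All sizes are approximate assumptions, not exact dump sizes.
wiki_en_text_gb = 16   # compressed text-only English Wikipedia dump, roughly
survival_docs_gb = 4   # assumed room for survival guides and the like
card_gb = 32           # a common microSD card size

spare = card_gb - (wiki_en_text_gb + survival_docs_gb)
print(f"{spare} GB to spare on a {card_gb} GB card")
# prints "12 GB to spare on a 32 GB card"
```

Even on the smaller 16 GB card it would be a tight but plausible fit for the text alone.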

[–] Kirk@startrek.website 6 points 2 months ago

Sure, but Wikipedia does not care about "truth". "It's true" is not a valid citation on Wikipedia (and "knowledge" is not the same as "truth"). Wikipedia is built on references from experts, people who can be honest while still being factually incorrect.

It's an important distinction because an LLM can be correct but it can never be honest. The hypothetical Enterprise-D computer appears to be able to be honest, even when incorrect.

[–] Windex007@lemmy.world 7 points 2 months ago (1 children)

Calling the ship voice command interface an AI is quite a stretch... even with the much more lenient definitions getting thrown around these days.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 2 months ago (1 children)

So what is the ship's computer, if not AI? It's shown it can think for itself. It's more advanced than the AI we have now. Are you saying it's lesser?

[–] Windex007@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

Always struck me as a rich command interface with a natural language processor slapped on the front.

And, taking the technobabble for what it's worth, it's always described as having deterministic outputs. I don't think it's fair to say it's ever evidenced as having "thought for itself". Any time one might be tempted to suggest it had, I'd argue it was still following a deterministic algorithm, written and designed for whatever it was doing... rather than relying on a black-box model to generate outputs for an unanticipated input.

You can have generative algorithms without things like LLMs or diffusion models.
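A minimal sketch of what I mean by a deterministic generative algorithm; the function and the word lists here are made up for illustration, not anything from the show:

```python
import random

def generate_room_names(seed: int, count: int = 3) -> list[str]:
    """Deterministic 'generative' content: same seed, same output, every time."""
    rng = random.Random(seed)  # seeded PRNG, no ML model involved
    adjectives = ["sterile", "dim", "humming", "vaulted"]
    rooms = ["corridor", "holodeck", "shuttlebay", "mess hall"]
    return [f"{rng.choice(adjectives)} {rng.choice(rooms)}" for _ in range(count)]

# Identical inputs always yield identical outputs: auditable and reproducible,
# unlike sampling from a black-box model.
assert generate_room_names(1701) == generate_room_names(1701)
```

That reproducibility is the point: you can inspect the algorithm and predict its behaviour for any input, which fits how the ship's computer is portrayed.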

Categorizing it as "lesser" is extremely subjective. Lesser in what way? Do I think that it's functionally superior as a source of information than an LLM? Yes. As an operational interface for a machine (the ship)? Yes. Do I think it has the flexibility of an LLM? No.

[–] mrmaplebar@fedia.io 6 points 2 months ago (3 children)

I don't think there is much evidence of the Enterprise's computer being used to do much more than provide or process basic information. You basically never see the characters in Star Trek rely on the computer for creativity or solutions at all, from what I remember. On top of that, we don't know how the computer on the Enterprise was created or what the ethical implications of it may be.

Star Trek is a show that delves into ethical dilemmas often, and the problem with today's generative AI is an ethical and legal one, not necessarily a technological one.

Today's generative "AI" is much more like the Borg than Commander Data or the Enterprise-D: it is powered by the forceful assimilation of human culture for the benefit of those that own and control it. We are also quite literally being told (by the stakeholders) that resistance against this new technology is futile and that we must adapt to a new reality in which our work will be assimilated whether we like it or not. There is no consent, let alone compensation. They are simply on a neverending mission to take everything within reach for their own benefit...

[–] SharkAttak@kbin.melroy.org 2 points 2 months ago

omg I like the Borg-AI example, it's so fitting.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 2 months ago (1 children)

Creativity, no. Solutions? I feel like the computer was asked for solutions quite a bit, though; so was Data.

Data has to read an instrument panel, though, while the ship's computer is tapped directly into the sensors, so asking Data in the hallway about a planet would require him to have taken the initiative to read that information ahead of time. The ship would just look and tell you. (Data at his console near the viewscreen would tell you as well, since he's at his console.)

[–] mrmaplebar@fedia.io 1 points 2 months ago

I don't remember there being many (if any) episodes where the crew faces a major dilemma which they solve by asking the computer to solve it for them.

[–] glasratz@feddit.org 1 points 2 months ago (1 children)

The Holodeck has generative AI and is often used as such. You can give it a voice prompt and it will create a full storyline, characters, and scenery from it. You don't seem to need any kind of special training, as crew members can easily create their own programmes. That this may be problematic has been the storyline of several episodes.

[–] mrmaplebar@fedia.io 1 points 2 months ago (1 children)

How was the holodeck's AI trained? Was it trained like today's models, on the non-consensual assimilation of all art and culture? And are there laws around its use?

I don't know.

[–] glasratz@feddit.org 1 points 2 months ago

"Hollow Pursuits" suggests that there's no awareness of any kind of social problem surrounding the holodeck and the content generated there, which is kind of silly. So I think the correct answer is probably that the writers did not think about it at all. But humans in Star Trek live in a quasi-communist society, so it would probably just be common practice that creative works are owned by the public; you probably don't have much of a choice if you want to publish your work. However, you practically never see any contemporary human literature, or anything like "holo novels". So my personal ad-hoc theory is that gen AI at this level has killed the literary process as a whole among the human race.

[–] smiletolerantly@awful.systems 5 points 2 months ago

Mate, I'd live in Banks' Culture in a heartbeat if I could.

The problem isn't AI as a concept, it's the underlying societal disregard for ethics in the face of profits.

[–] Tattorack@lemmy.world 4 points 2 months ago (1 children)

In Star Trek, AI is used in many applications that a humanoid cannot reasonably do entirely manually (not without serious augmentation, which is heavily frowned upon).

Consider the amount of heavy lifting the ship's computer does to navigate the galaxy at warp speed, or to process incredibly advanced calculations almost instantly. All of this would go significantly slower if a humanoid had to do it manually.

And yet...

The crew of the Enterprise-D aren't merely prompting the ship's AI. Yes, there is that too, but they're typically doing it while also being very hands-on with the ship's systems. Starfleet crews have deep computer-programming expertise, to the point of physically rearranging computer systems if need be.

There is no "vibe coding" in Starfleet.

But then there is the Holodeck.

I'm OK with certain applications of the Holodeck, like spontaneously creating virtual activity or recreation areas. These aren't considered works of art, and they aren't considered worthy substitutes for the real thing either. You find them on ships and stations because they're the best available substitute, the alternative being crew members slowly going mad from seeing nothing but sterile corridors all day. I don't think I've heard of recreational Holodecks for ordinary individuals planetside; that's unlikely to be due to some regulation, and more likely because fulfilment is achieved in other ways when you're surrounded by a real planet.

However, Voyager sort of fucks with this idea by having the crew "create holonovels". This is essentially a vibe coder's dream: being able to create fully interactive, narrative-driven videogames with nothing but prompts. That said, even in Voyager, using someone's likeness is heavily frowned upon, so there is still an expectation of originality rather than a mere rearranging of a reference dataset. I can maybe forgive Voyager on the basis of its premise; they're stuck on the other side of the galaxy, far from home. What else would they even do? But that's about as lenient as I can be.

On a societal level, however, real skill in a craft is still greatly appreciated, perhaps even preferred, over something computer-generated. Despite there being replicators and no money, there are still bars, pubs, restaurants, wine makers, beer brewers, jewellers, you name it; people still go to these places to get a real cooked meal and have a real experience in a real location. Craft, skill, and things created by humanoids instead of generated are still valued enough for whole streets of shopkeepers and hospitality providers to exist on Federation worlds.

"AI" as it exists now is deliberately built to replace human skill, to take from it without offering credit or appreciation, and to make humans obsolete in producing immediate end results that can be sold as products or deliver instant gratification. The Federation as a society has shifted massively away from that mentality.

[–] cerebralhawks@lemmy.dbzer0.com 2 points 2 months ago

To be fair, Voyager's holo-novelist is the Doctor, who is also an AI. So he could be writing very quickly since he lives in the computer and isn't constrained by human limits.

Or was Paris doing it as well? I don't quite remember.

[–] brucethemoose@lemmy.world 4 points 2 months ago* (last edited 2 months ago)

There's nothing realistic about Star Trek.

This needs to be hammered home more. It's an awesome setting for exploring contemporary social issues; that's the point. But no matter how technologically advanced we get, it's just not based on plausible physics or engineering. Neither is Terminator.


I think the most plausible 'extrapolation' I've seen is Orion's Arm:

https://www.orionsarm.com/eg-article/486e75a54a1ae

https://www.orionsarm.com/xcms.php?r=oa-timeline

And, as an aside, I adore this Mass Effect story: https://archiveofourown.org/works/42006774/chapters/105462066

They extrapolate alteration of human biology, nanotechnology, planetary engineering, STL space travel, and AI, and that future looks nothing like more-or-less unaltered baseline humans walking around on a space boat with glorified voice assistants. Consciousness is uploaded and downloaded. Artificial realities are vast. Whole celestial bodies are manipulated for all sorts of purposes. People can inhabit an array of "bodies" and realities that make things like starship bridges, hallways, and colonies on planets seem silly. There are "strata" of consciousness literally orders of magnitude apart, states of being incomprehensible to each other, all coexisting in an expanding bubble of civilisation that's still younger and (in some ways) more primitive than the TNG Federation.

And we aren't that far from that. Including the "techpocalypse" it predicts.

Terminator and Star Trek, and classic sci fi, don't depict this because they're stories aimed at humans living right now, and their interpersonal relationships and contemporary social/political issues. More realistic extrapolations are tougher settings for that.


So the question is… would you want a perfect AI that was incapable of lying or harbouring anything untrue? Basically you could ask it anything and it would give you the correct answer.

Where I'm going with this is that, to me, this is not a realistic question. Practically, it'd be silly to relegate a "true" AI to being a dumb voice assistant on a space boat; such AIs are conscious beings, even if shackled or highly specialized.

They're not so different from human beings at that point. In a more realistic scenario, you could be uploaded and repurposed as the Enterprise's voice assistant, your consciousness duplicated and meshed with complex systems; pondering whether you'd want to be Siri seems kind of silly.

[–] teft@piefed.social 3 points 2 months ago (1 children)

AI that was incapable of lying

What does that actually entail? An AI programmed with only right-wing talking points is going to treat those as objective truths even when they aren't, so if you put guardrails on it to only say the "truth", you're going to get lies.

I think the best we can hope for is minimal bias in the AI we develop.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 2 months ago

I don't think bias was part of the equation on Star Trek because we didn't see humans/Earthlings fundamentally disagreeing about basic science like we have in the real world.

But, yes, the AI does have bias on Star Trek. I'm sure it would describe the Founders from DS9, or the Borg from multiple series, in a less-than-objective manner.

A better example would be if, post-Voyager, you were to ask it about how Janeway handled the Omega Directive. It would correctly tell you that Janeway destroyed the warp technology of a whole civilisation, setting them back a few generations, because that civilisation used a kind of warp that created omega particles that made traditional warp travel impossible. The Omega Directive, used only that one time AFAIK, overrides the Prime Directive and says a captain/crew can violate the Prime Directive to stop a civilisation from using omega particles. The recounting of the incident would be very pro-Starfleet. The civilisation Janeway sabotaged would have a very different account of it.

I'd also bet on the ship computer not telling you half the shit Sisko got up to on DS9. I'm not talking about In the Pale Moonlight where he deleted the entire log (thus, the event was never recorded). How about when he killed a whole planet to get at the Maquis? Starfleet probably classified that in a big hurry.

[–] glasratz@feddit.org 3 points 2 months ago* (last edited 2 months ago) (1 children)

I have a theory here:

There is definitely problematic gen AI in the Star Trek universe, but it's only ever addressed through the holodeck. It's made pretty clear that anyone can create programmes there with simple voice prompts. It has also been shown that there are no formal rules for using the likeness of living people in those programmes, which is an oversight in my opinion, because that problem would be a common concern. The existence of this kind of technology suggests that any kind of entertainment media can be easily created on a prompt, even through the ship's computers.

On the other hand, we rarely hear about contemporary human-made literature. When literature is mentioned, it's usually alien or from the 20th century. Wouldn't this suggest that it plays no role anymore? Maybe there are still human writers, but the general public isn't interested, since they can get whatever they want from a computer.

So my bottom line is that, concerning generative AI, Star Trek actually shows how problematic it is, probably by accident. I wouldn't want Star Trek-level AI, but at least it doesn't kill everyone.

[–] cerebralhawks@lemmy.dbzer0.com 1 points 2 months ago (1 children)

Again (unrelated to your comment sorta), Star Trek avoids stuff that isn't in the public domain, which is why Sherlock Holmes was commonly used in the Holodeck.

As for Holodeck recreations of existing people, they did exactly that on DS9. Quark was offering pornographic holo-vids of Kira, the station's second-in-command, and when she found out, she demanded he delete the program. I think he also paid someone to scan her for it.

[–] glasratz@feddit.org 3 points 2 months ago

Next Generation already had Barclay recreating caricatures of Enterprise officers for stress relief in "Hollow Pursuits". There they stressed that there were actually no regulations against recreating officers on the holodeck. Kind of bad writing.