this post was submitted on 07 Feb 2026
129 points (97.8% liked)

Ask Lemmy

37764 readers
1505 users here now

A Fediverse community for open-ended, thought provoking questions


Rules: (interactive)


1) Be nice and; have funDoxxing, trolling, sealioning, racism, and toxicity are not welcomed in AskLemmy. Remember what your mother said: if you can't say something nice, don't say anything at all. In addition, the site-wide Lemmy.world terms of service also apply here. Please familiarize yourself with them


2) All posts must end with a '?'This is sort of like Jeopardy. Please phrase all post titles in the form of a proper question ending with ?


3) No spamPlease do not flood the community with nonsense. Actual suspected spammers will be banned on site. No astroturfing.


4) NSFW is okay, within reasonJust remember to tag posts with either a content warning or a [NSFW] tag. Overtly sexual posts are not allowed, please direct them to either !asklemmyafterdark@lemmy.world or !asklemmynsfw@lemmynsfw.com. NSFW comments should be restricted to posts tagged [NSFW].


5) This is not a support community.
It is not a place for 'how do I?', type questions. If you have any questions regarding the site itself or would like to report a community, please direct them to Lemmy.world Support or email info@lemmy.world. For other questions check our partnered communities list, or use the search function.


6) No US Politics.
Please don't post about current US Politics. If you need to do this, try !politicaldiscussion@lemmy.world or !askusa@discuss.online


Reminder: The terms of service apply here too.

Partnered Communities:

Tech Support

No Stupid Questions

You Should Know

Reddit

Jokes

Ask Ouija


Logo design credit goes to: tubbadu


founded 2 years ago
MODERATORS
 

It always feels like some form of VR tech comes out with some sort of fanfare and with a promise it will take over the world, but it never does.

you are viewing a single comment's thread
view the rest of the comments
[–] Perspectivist@feddit.uk 0 points 1 week ago (4 children)
[–] Goldholz@lemmy.blahaj.zone 18 points 1 week ago

The cost to maintain it? The enviormental impact? The impact its enormouse energie consumption on everyday people (rising costs imensly)?

[–] Rothe@piefed.social 15 points 1 week ago

It can't really reliably do any of the stuff which it is marketed as being able to do, and it is a huge security risk. Not to mention the huge climate issues for something with so little gain.

[–] Tar_alcaran@sh.itjust.works 9 points 1 week ago (1 children)

AI is great, LLMs are useless.

They're massively expensive, yet nobody is willing to pay for it, so it's a gigantic money burning machine.

They create inconsistent results by their very nature, so you can, definitionally, never rely on them.

It's an inherent safety nightmare because it can't, by its nature, distinguish between instructions and data.

None of the company desperately trying to sell LLMs have even an idea of how to ever make a profit off of these things.

[–] Perspectivist@feddit.uk -4 points 1 week ago (1 children)

LLMs are AI. ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that's 8 million paying customers. That's not "nobody."

That sheer volume of weekly users also shows the demand is clearly there, so I don't get where the "useless" claim comes from. I use one to correct my writing all the time - including this very post - and it does a pretty damn good job at it.

Relying on an LLM for factual answers is a user error, not a failure of the underlying technology. An LLM is a chatbot that generates natural-sounding language. It was never designed to spit out facts. The fact that it often does anyway is honestly kind of amazing - but that's a happy accident, not an intentional design choice.

[–] Tar_alcaran@sh.itjust.works 12 points 1 week ago* (last edited 1 week ago)

ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that's 8 million paying customers. That's not "nobody."

Yes, it is. A 1% conversion rate is utterly pathetic and OpenAI should be covering its face in embarrassment if that's. I think WinRAR might have a worse conversion rate, but I can't think of any legitimate company that bad. 5% would be a reason to cry openly and beg for more people.

Edit: it seems like reality is closer to 2%, or 4% if you include the legacy 1 dollar subscribers.

That sheer volume of weekly users also shows the demand is clearly there,

Demand is based on cost. OpenAI is losing money on even its most expensive subscriptions, including the 230 euro pro subscription. Would you use it if you had to pay 10 bucks per day? Would anyone else?

If they handed out free overcooked rice delivered to your door, there would be a massive demand for overcooked rice. If they charged you a hundred bucks per month, demand would plummet.

Relying on an LLM for factual answers is a user error, not a failure of the underlying technology.

That's literally what it's being marketed as. It's on literally every single page openAI and its competitors publish. It's the only remotely marketable usecase they have, because these things are insanely expensive to run, and they're only getting MORE expensive.

[–] Chais@sh.itjust.works 5 points 1 week ago (1 children)

It's quite bad at what we're told it's supposed to do (producing reliably correct responses), hallucinating up to 40% of the time.
It's also quite bad at not doing what it's not supposed to. Meaning the "guardrails" that are supposed to prevent it from giving harmful information can usually be circumvented by rephrasing the prompt or some form of "social" engineering.
And on top of all that we don't actually understand how they work in a fundamental level. We don't know how LLMs "reason" and there's every reason to assume they don't actually understand what they're saying. Any attempt to have the LLM explain its reasoning is of course for naught, as the same logic applies. It just makes up something that approximately sounds like a suitable line of reasoning.
Even for comparatively trivial networks, like the ones used for written number recognition, that we can visualise entirely, it's difficult to tell how the conclusion is reached. Some neurons seem to detect certain patterns, others seem to be just noise.

[–] Perspectivist@feddit.uk -1 points 1 week ago (2 children)

You seem to be focusing on LLMs specifically, which are just one subcategory of AI. Those terms aren't synonymous.

The main issue here seems to be mostly a failure to meet user expectations rather than the underlying technology failing at what it's actually designed for. LLM stands for Large Language Model. It generates natural-sounding responses to prompts - and it does this exceptionally well.

If people treat it like AGI - which it's not - then of course it'll let them down. That's like cursing cruise control for driving you into a ditch. It's actually kind of amazing that an LLM gets any answers right at all. That's just a side effect of being trained on a ton of correct information - not what it's designed to do. So it's like cruise control that's also a somewhat decent driver, people forget what it really is, start relying on it for steering, and then complain their "autopilot" failed when all they ever had was cruise control.

I don't follow AI company claims super closely so I can't comment much on that. All I know is plenty of them have said reaching AGI is their end goal, but I haven't heard anyone actually claim their LLM is generally intelligent.

[–] Chais@sh.itjust.works 5 points 1 week ago* (last edited 1 week ago)

I know they're not synonymous. But at some point someone left the marketing monkeys in charge of communication.
My point is that our current "AI" is inadequate at what we're told is its purpose and should it ever become adequate (which the current architecture shows no sign of being capable) we're in a lot of trouble because then we'll have no way to control an intelligence vastly superior to our own.

So our current position on that journey is bad and the stated destination is undesirable, so it would be in our best interest to stop walking.

[–] Tar_alcaran@sh.itjust.works 5 points 1 week ago* (last edited 1 week ago) (1 children)

If people treat it like AGI - which it's not - then of course it'll let them down.

People treat it like the thing it's being sold as. The LLM boosters are desperately trying to sell LLMs as coworkers and assistants and problemsolvers.

[–] Perspectivist@feddit.uk -5 points 1 week ago (2 children)

I don't personally remember hearing any AI company leader ever claim their LLM is generally intelligent - and even the LLM itself will straight-up tell you it isn't and shouldn't be blindly trusted.

I think the main issue is that when a layperson hears "AI," they instantly picture AGI. We're just not properly educated on the terminology here.

[–] Tar_alcaran@sh.itjust.works 4 points 1 week ago* (last edited 1 week ago) (1 children)

I don't personally remember hearing any AI company leader ever claim their LLM is generally intelligent

Not directly. They merely claim it's a coworker that can complete complex tasks, or an assistant that can do anything you ask.

The public isn't just failing here, they're actively being lied to by the people attempting to sell the service.

For example, here's Sammy saying exactly that: https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/

And here's him again, recently, trying to push the "our product is super powerful guys" angle with the same claim: https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/sam-altman-ai-agents-hackers-best-friend

[–] Perspectivist@feddit.uk -3 points 1 week ago* (last edited 1 week ago)

But he is not actually claiming that they already have this technology but rather that they're working towards it. He even calls ChatGPT dumb there.

and ChatGPT (which Altman referred to as “incredibly dumb” compared with what’s coming next)

[–] Repelle@lemmy.world 4 points 1 week ago

"GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert." - Altman

During the launch of Grok's latest iteration last month, Musk said it was "better than PhD level in everything" and called it the world's "smartest AI".

https://www.bbc.com/news/articles/cy5prvgw0r1o.amp

“PhD level expert in any topic” certainly sounds like generally intelligent to me. You may not have heard them saying it, but I feel like I’ve heard a bunch of these statements.