this post was submitted on 27 Jan 2026
-15 points (29.7% liked)

Technology

all 13 comments
[–] nyan@lemmy.cafe 9 points 20 hours ago

If we actually had superintelligent AI, I might be concerned. But what we have instead is stochastic parrots with no innate volition. In and of themselves, they aren't dangerous at all—it's the humans backing them that we have to be wary of.

[–] verdi@tarte.nuage-libre.fr 8 points 1 day ago (1 children)

These fucking grifters really don't know shit about the snake oil they're selling, huh...

[–] Perspectivist@feddit.uk -3 points 1 day ago* (last edited 1 day ago) (1 children)

This comes across more as a warning than a sales pitch.

If only I could convince myself to be as dismissive about the threats of AGI as the average user here seems to be...

[–] verdi@tarte.nuage-libre.fr 3 points 1 day ago (1 children)

There is factually 0 chance we'll reach AGI with the current brand of technology. There's neither the context size nor the compute to even come close to AGI. You'd have to be either selling snake oil or completely oblivious to the subject to even consider AGI as a real possibility. This tells me the average user really doesn't know shit...

[–] Perspectivist@feddit.uk -1 points 23 hours ago* (last edited 23 hours ago) (1 children)

It's perfectly valid to discuss the dangers of AGI whether LLMs are the path there or not. I've been concerned about AGI and ASI for far longer than I've even known about LLMs, and people were worried about exactly the same stuff back then as they are now.

This is precisely the kind of threat you should try to find a solution for before we actually reach AGI - because once we do, it's way, way too late.

Also:

There is factually 0 chance we'll reach AGI with the current brand of technology.

You couldn't possibly know that with absolute certainty.

[–] verdi@tarte.nuage-libre.fr 3 points 18 hours ago (1 children)

You couldn't possibly know that with absolute certainty.

I recommend you read Cameron's very good layman's explanation.

Adding to that framework: there is not enough data, compute, or context size for the current level of technology to get anywhere near AGI.

[–] Perspectivist@feddit.uk -1 points 17 hours ago* (last edited 17 hours ago) (1 children)

Nobody knows what it actually takes to reach AGI, so nobody knows whether a certain system has enough compute and context size to get there.

For all we know, it could turn out way simpler than anyone thought - or the exact opposite.

My point still stands: you (or Cameron) couldn't possibly know with absolute certainty.

I'd have zero issue with the claim if you'd included even a shred of humility and acknowledged you might be wrong. Instead, you made an absolute statement, and that's what I disagree with.

[–] verdi@tarte.nuage-libre.fr 2 points 15 hours ago* (last edited 15 hours ago)

This is science, not religion.

Do take refuge in form when you can't dispute content, though. While you're at it, remember to pray too, because I can tell you god doesn't exist, so that's another fear you can add to the fray.

[–] LordMayor@piefed.social 10 points 1 day ago (1 children)

Fucking delusional twat. Either he’s on some serious drugs, or this is some PR bullshit made to sound like he’s genuinely worried that Nobel laureate-level AI is right around the corner so that people will throw more money into his ~~pocket~~ company. Or both; it could be both. But I think he’s simply an asshole.

[–] DoctorNope@lemmy.world 7 points 1 day ago* (last edited 1 day ago) (1 children)

I think you're 100% right, and boy, this piece made me big mad. Yet another outlet breathlessly publishing fucking nonsense for a ghoul; by uncritically publishing said ghoul’s dire warning of the imminent birth of a superintelligent, malign(?) entity, the outlet serves as his unpaid marketing firm. Axios should be embarrassed. If anyone who wasn’t the head of an LLM company spouted this drivel, they’d be locked away in a padded room and Axios would rightly be called out for exacerbating the mental health crisis of a paranoid schizophrenic.

The whole essay reads like, "Here at Anthropic, we're doing our best to create the Torment Nexus, but if anybody else were to successfully create the Torment Nexus, that would represent an existential risk for humanity. We're doing our best to create it first, so please give us more money. To save humanity. From the Torment Nexus that we created." It would be utter lunacy if he actually believed it.

[–] Perspectivist@feddit.uk 0 points 1 day ago

If anyone who wasn’t the head of an LLM company spouted this drivel, they’d be locked away in a padded room and Axios would rightly be called out for exacerbating the mental health crisis of a paranoid schizophrenic.

Like Eliezer Yudkowsky, Roman Yampolskiy, Stuart Russell, Nick Bostrom, Yoshua Bengio, Geoffrey Hinton, Max Tegmark and Toby Ord?

[–] JailElonMusk@sopuli.xyz 1 points 1 day ago

Spoiler Alert: We don't.