Sure you do. It's not at all a transparent attempt to prolong the bubble.
I only have a rather high level understanding of current AI models, but I don't see any way for the current generation of LLMs to actually be intelligent or conscious.
They're entirely stateless, once-through models: any activity in the model that could be remotely considered "thought" is completely lost the moment the model outputs a token. Then it starts over fresh for the next token with nothing but the previous inputs and outputs (the context window) to work with.
That's why it's so stupid to ask an LLM "what were you thinking", because even it doesn't know! All it's going to do is look at what it spat out last and hallucinate a reasonable-sounding answer.
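The statelessness being described can be sketched with a toy autoregressive loop. Everything here is illustrative (the bigram table stands in for a real forward pass); the point is that nothing survives an iteration except the token list itself:

```python
# Toy stand-in for an LLM's forward pass: a bigram lookup table.
# The structure of the loop is the point -- each step is a fresh,
# stateless pass, and the only "memory" is the growing context window.
BIGRAM = {"the": "cat", "cat": "sat", "sat": "down"}

def generate(prompt_tokens, n_new):
    context = list(prompt_tokens)      # the context window: tokens only
    for _ in range(n_new):
        # Prediction depends solely on the context passed in this step.
        next_token = BIGRAM.get(context[-1], "<eos>")
        context.append(next_token)
        # Nothing else survives this iteration -- no hidden state,
        # no record of any internal "thought" that produced the token.
    return context

print(generate(["the"], 3))  # ['the', 'cat', 'sat', 'down']
```

Ask this loop "what were you thinking" after the fact and there is literally nothing to consult but the tokens it already emitted.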
LLMs aren't AI, let alone AGI.
They're fucking prediction engines with extra functions.
The best description I've ever heard of LLMs is "a blurry jpeg of the internet". From the perspective of data compression and retrieval, they're impressive... but they're still a blurry jpeg. The image doesn't change, you can only zoom in on different parts of it and apply extra filters, and there's nothing you can truly do about the compression artifacts (what we call "hallucinations"). It can't think, it can't learn, it just is, and that's all it will ever be.
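The analogy can be made concrete with a toy lossy compressor: once detail is thrown away, decompression can only fill the gaps with plausible values, never recover the original. This is a deliberately crude illustration, not how JPEG or LLMs actually work:

```python
# Toy lossy "compression": keep every 3rd sample, then reconstruct by
# repeating the kept samples. The restored data looks roughly right,
# but the fine detail is invented, not recovered -- the "artifacts".

def compress(data, step=3):
    return data[::step]            # discard everything between kept samples

def decompress(compressed, step=3):
    out = []
    for v in compressed:
        out.extend([v] * step)     # fill gaps with the nearest kept sample
    return out

original = [1, 2, 3, 4, 5, 6, 7, 8, 9]
restored = decompress(compress(original))
print(restored)  # [1, 1, 1, 4, 4, 4, 7, 7, 7]
```

No amount of re-running `decompress` gets the 2s, 3s, 5s, or 6s back; the information simply isn't there anymore.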
So why do we need Jensen Huang?
Exactly. CEO is maybe the easiest job for an AI to take over, so an AGI is possibly the most perfect candidate for that role.
Put up or shut up, tech bro CEOs. Replace yourself if it's so fucking amazing.
AIs can't play golf.
Just replacing one eco horror with another.
Why do we need any of them? They've completed the job. All future plans cancelled.
Fridman, the podcast’s host, defines AGI as an AI system that’s able to “essentially do your job,” as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real — asking if it’s, say, five, 10, 15, or 20 years away — and Huang responds, “I think it’s now. I think we’ve achieved AGI.”
So we've achieved AGI in the sense that it could replace a nonsensical fart-sniffing clown, hyping a horde of morons into valuing a company at orders of magnitude above its actual worth?
I think you're a bullshitting con artist.
Grifter gonna grift
Geez. You can almost smell the desperation on this guy.
Well, he wears the same leather jacket 24/7 so he can't smell good.
If I was a NVDA investor, I'd be worried. This clown is doing nothing but gaslighting and lying these days.
But you're wrong, you're all wrong!

Average Gaslighting Idiot.
AKA "a CEO."
Oh yes we have achieved AGI! But what we really need is Artificial General Super Intelligence! Just another trillion and it will be useful bro!
No... you haven’t.
Literally the story above this in my feed is OpenAI shutting down expensive services 😂
You goofy goobers
These fuckers will claim whatever nonsense to keep themselves relevant enough to take on more debt before they collapse.
This guy has completely lost the plot. I don't think it's possible to be even more disconnected from reality.
The Turing thing again: how good is a system at mimicking a human? Like, lots of dog owners could swear the dog is smarter than a cat. But dogs are only better at reading their human.
I'll believe him if he lets the LLM do his job.
Cats may be able to read their human just as well or better, but as they don't give a shit, there's no feedback to base anything on.
if agi then why still jobs?
Fun fact: if true AGI were a thing, those AI programs would be people and not paying them for their work would be slavery.
This is honestly one of the scarier parts about the rhetoric, they're basically implying they would happily enslave a sentient being.
I just dropped an AGI down the toilet AMA
>You think you've achieved AGI
>I know you haven't
We are not the same
Guys, I think I just found AGI in my gramps' old stuff.
fart sniffer
"my chatbot told me so!"
How can we take this idiot seriously? First slop DLSS, then telling us we're wrong about it (this guy telling me what I prefer), and now we've achieved AGI...
How low can he fall?
Worth a read if anyone is interested: https://www.newyorker.com/magazine/2026/02/16/what-is-claude-anthropic-doesnt-know-either
My favorite part is Anthropic has a bot in the cafeteria that orders what staff request and if the bank balance goes to zero or negative, then it loses and has to close up shop.
Thus far, nearly all employees have a 1” tungsten cube on their desk that some managed to get for free with a fake 100% off coupon.
It’s a fun experiment in what happens when these agents start doing things in the real world and I commend Anthropic for putting it on display. A real hype train killer.
As a technologist, I work with them all day, every day. I wouldn’t trust them to do my laundry without oversight, let alone run a business.
How many R's are in strawberry?
Doubt
AI is Wack
I'll believe him when he tears off his skin suit.