Ah man, what an absolute moron. History will remember this guy betting $4 trillion on a dark horse and losing.
AI as it currently exists is a bust. It's less accurate than an average literate person, which is basically as dumb as bears. LLMs will never be able to reach human accuracy, as detailed in studies published by OpenAI and DeepMind years ago: it would take more than infinite training.
As it trains on its own output, it will get worse. LLMs and similar generative AI are not the future; they are already the past.
Could we start calling it 'degenerative AI'?
LLMs are actually really good at a handful of specific tasks, like autocomplete. The problem arises when people think that they're on the path to AGI and treat them like they know things.
Nah mate, it's shit for autocomplete. Before LLMs, autocomplete was better with a simple dictionary weighted by usage frequency.
I've found it better than the weighted dictionary for prose, and way better for code. Code autocompletion was always really limited, but now every couple dozen lines it suggests exactly what I was going to type anyway. Never on anything particularly clever, mind you, but it saves some tedium.
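For a concrete sense of what that looks like, here's a hypothetical sketch (the function and dictionary keys are invented for illustration, not from any real project): you type the signature and the first line, and the completion engine suggests the rest of the obvious pattern.

```python
# Hypothetical illustration: after typing the signature and the first
# assignment, a completion engine suggests the remaining lines, which
# follow an obvious pattern. Nothing clever, just saved keystrokes.
def point_from_dict(d: dict) -> tuple[float, float, float]:
    x = float(d["x"])
    y = float(d["y"])  # suggested by the completion engine
    z = float(d["z"])  # suggested by the completion engine
    return (x, y, z)
```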
It also sometimes hallucinates entire libraries and documentation, and is single-handedly responsible for a massive sector-wide increase in vulnerabilities.
Did you make sure to subtract all of that negative value before you even considered it as "good"?
Oh, it's fucking horrible at writing entire codebases. I'm talking specifically about tab completion. You still have to read what it's suggesting, just like with IntelliSense and other pre-LLM autocomplete tools, but it sometimes finishes your thoughts and saves you some typing.
Hard agree. A whole codebase written by AI is a nightmare. I think MS's 25% is even WAY too much, based on how shitty their products are becoming. But for autocompleting the line of code I'm writing? It's fucking amazing. Doesn't save any thought, but saves a whole bunch of typing!
I don't think the aforementioned vulnerabilities were caused by the AI writing entire codebases.
Just because a hammer makes for a lousy screwdriver doesn't mean it's not a good hammer. To me, AI is just another tool. Like any other tool, there are things it's good at and things it's bad at. I've also found it can be pretty good as a code completion engine. Not perfect, but there's plenty of boilerplate and repetitive stuff where it can figure out the pattern, and I can bang out the lines of code pretty quickly with the AI's help (see the sketch below). On the other hand, there are times it's nearly useless and I switch back to the keyword completion engine, as it's the better tool for those situations.
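The kind of repetitive pattern meant here, sketched hypothetically (the table and helper are invented for illustration): after the first entry or two, a completion engine can usually extrapolate the rest.

```python
# Hypothetical example of boilerplate a completion engine handles well:
# once the first entry is typed, the remaining entries follow a pattern
# it can continue on its own.
UNIT_TO_METERS = {
    "mm": 0.001,
    "cm": 0.01,     # pattern continued by the completion engine
    "m": 1.0,       # pattern continued by the completion engine
    "km": 1000.0,   # pattern continued by the completion engine
}

def to_meters(value: float, unit: str) -> float:
    """Convert a length in the given unit to meters."""
    return value * UNIT_TO_METERS[unit]
```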
If you invent a hammer which reduces average structural stability by anywhere from 5% to 40%, then it should be banned.
Dunno why the downvotes. I think it's useful for menial stuff like "create a JSON list of every book of the Bible with a number for the book and a true or false for whether it's Old or New Testament", which it can do in seconds (see the sketch below). Or to quickly create a template.
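A sketch of the kind of output meant here, truncated to a few entries; the field names are illustrative, not from any real schema.

```python
# Illustrative only: the sort of menial data-generation task described
# above, which would be tedious to type out all 66 entries by hand.
import json

books = [
    {"number": 1, "name": "Genesis", "old_testament": True},
    {"number": 2, "name": "Exodus", "old_testament": True},
    # ... 63 more entries ...
    {"number": 66, "name": "Revelation", "old_testament": False},
]

print(json.dumps(books, indent=2))
```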
This is such a delusional and uninformed take that I don't know where to start.
The people behind LLMs are scientists with PhDs. The idea that they don't know how to uncover and repair biases in the models, which is what you're suggesting, is patently ridiculous. There are already plenty of benchmarks to disprove your stupid theory. LLM tech is evolving at an alarming rate, to the point that almost anything 1-2 years old is considered obsolete.
LLMs are useful tools, if you actually know what the fuck you're doing. They will continue to get more useful as more research uncovers different ways to use them, and right now there's a metric shitton of money being poured into that research. This is not blockchain. This is not NFTs. This is not string theory. These are actual results with measurable impacts.
I'm not trying to defend this rich asshole CEO's comments. Satya can go fuck himself. But I'm not so delusional that I'm going to dismiss the tech as some NFT-like gamble.
Seethe. Cope. Mald.
Skill issue, of the cognitive type.