this post was submitted on 31 Mar 2026
1 points (54.5% liked)
Change My View
41 readers
29 users here now
A place to learn something new, or strengthen your own position. Progress is impossible without a willingness to change.
# Rules
- Remain civil and friendly. Personal attacks, excessive snark, or similar will not be tolerated. Downvoting based on disagreement (rather than quality of discourse) may also be bannable.
- All posts should contain a view as the title, and should have an explanation of the reasoning in the body.
- All top level comments should address the original viewpoint, either challenging it, or seeking clarification.
founded 2 days ago
Deep neural networks didn't work until quite recently. The theory was there, and single-layer models existed, but they were limited to toy applications like single-character OCR. Now there's a whole ecosystem to go from 'what's Python?' to a working prototype within the week. The durable product of this trillion-dollar bubble will be a mountain of whitepapers on how to efficiently design and train models of bewildering complexity.
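That single-layer ceiling is easy to make concrete: a lone perceptron provably cannot learn XOR, but one hidden layer handles it. A minimal NumPy sketch, with layer size, learning rate, and seed all being arbitrary illustrative choices:

```python
import numpy as np

# XOR: the classic task a single-layer perceptron cannot learn,
# but a network with one hidden layer fits easily.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    out = sigmoid(h @ W2 + b2)
    # backward pass: works only because sigmoid is differentiable
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out).ravel())            # predictions for the four XOR inputs
```

The whole trick that unlocked "deep" learning is in that backward pass: chain gradients through as many layers as you like.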
If the big boys stop releasing local versions, they will cease to matter. They've already created the tools for interested randos to continue development after the bubble bursts. If they'd like to become irrelevant while they still have funding, that's their prerogative. Qwen Image 2 might never come out at this rate, but we already know it's a fraction of the size of prior models and outperforms all of them... so there's no point pursuing big-iron mainframe models once the community has to roll its own.
Pessimistic studies are mostly overblown. 'Doctors using a detector get worse at eyeballing things,' with no mention of their accuracy while using that detector. 'Expert programmers slowed down by virtual amateur,' yeah, I'll bet, same as with a real amateur. 'Artist unimpressed by automated version of a thing he's good at,' okay, seriously: why do we keep asking professionals about these tools? They already learned things the hard way, at the highest level humans can reach. If they were being shown up, there'd be nothing to discuss.
I'm seeing videos where 'and then I vibe-coded the mechanical integration' is mumbled like a punchline. If you truly understand what you want then existing models can probably just do that. It turns doing things the normal way into a fallback. Like whining that you have to do the dishes by hand, when the dishwasher breaks.
The gaming industry has been a hellscape for decades. (Same with buying gizmos that spy on you.) This hype cycle obviously has not helped, but shit's been fucked since before that. If civilization on the whole is turbo-fucked then it's not primarily attributable to spicy autocomplete.
Oh, and to be clear, it's not spicy autocorrect that is the disease; it's just a symptom. It's the combination of late-stage capitalism and the incipient death of the USA as a hegemonic force. I'm almost certain China will throw a bunch of safeguards and restrictions on "AI" once they are assured of their economic, cultural, and political dominance.
What this all comes down to is that we are talking about tech that has existed for decades. The big difference is that we now have the capability to run massive parallel computation. Yes, there have been advancements in technique and efficiency, but one of the major reasons we aren't seeing widescale software patents on all this stuff is that it's fundamentally existing art, done much faster and wider than we could before.
The biggest thing that is being enabled by all these technologies is the grift economy. Name me any other technology where we would accept circular "investments" to make up a significant proportion of the world's economic activity.
I am less interested in the anecdotal "evidence" on either side of the argument. I consider individual artists not liking the output of LLMs to be about as worthwhile as the former big-business lackeys turned AI startup founders who insist employees should be using agentic workflows or get left behind. What concerns me is the teachers who keep telling me they have kids allowing chatbots to entirely replace their critical faculties, and the managers who are frothing to sack all their human staff and replace them with barely, if at all, functional agents. I worry about how many people are letting themselves get caught up in the fantasy that there is some sort of intelligence in this high-speed Chinese room.
I also worry that worldwide we are going whole hog on a technology that just isn't what people are being tricked into thinking it is.
I personally think that the generative AI bubble is this generation's leaded petrol or radium. Eventually we will realise how hideously corrosive it is, and by that stage the generational damage will be done. As bad as it is for the economy, it's going to be far worse for our kids, and I don't think it's OK that we are throwing them under the bus so that Nvidia can be the richest company ever and the US can pretend not to be in the grip of a recession. I don't think a bunch of virtual junior devs is worth that cost.
Parallelism was a big deal several times before this boom, and what those earlier efforts lacked, vis-à-vis neural networks, was differentiable activation functions. That's why OCR was a thing for ages but Not Hotdog was recent and sudden.
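To make the differentiability point concrete: backpropagation needs the activation's derivative to carry signal, and the classic step threshold has zero derivative almost everywhere. A toy NumPy sketch, where the function names and the finite-difference check are purely illustrative:

```python
import numpy as np

def step(z):
    # classic perceptron threshold unit
    return (z > 0).astype(float)

def sigmoid(z):
    # smooth replacement that made backprop workable
    return 1 / (1 + np.exp(-z))

def numeric_grad(f, z, eps=1e-6):
    # central finite difference, standing in for an analytic derivative
    return (f(z + eps) - f(z - eps)) / (2 * eps)

z = np.linspace(-2, 2, 8)          # sample points, none exactly at 0
print(numeric_grad(step, z))       # all zeros: no gradient to propagate
print(numeric_grad(sigmoid, z))    # nonzero everywhere: gradients can flow
```

Stack a few of those flat-gradient units in layers and gradient descent has nothing to chew on, which is roughly why multi-layer training stalled until smooth activations took over.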
No kidding this is the world's most obvious bubble. And yet: the tech does the thing. You can in fact generate photorealistic video, in seconds, on consumer hardware, even if all you have is a description. Three years ago "Will Smith eating spaghetti" produced amusingly shite gloopy nonsense. As of a year ago all you could nitpick was the shape of his chin. Five years ago the cutting edge bragged about icon-sized images of an avocado chair. Everything has moved at a breakneck pace, and will continue grinding forward after whatever fresh hell follows several predictable collapses.
In LLMs specifically, they're still dumb, but they're smart enough that we can say they're dumb. They have a measurable IQ. These chatbots vastly exceed anything made through human cleverness alone, and they've disproved many assertions that a computer could never [blank] unless it was truly conscious. Simply typing 'rewrite this in Rust' might Just Work and provide significant performance benefits. Like compilers slowly obviating assembly hackers, we have to contend with the rising capabilities of software that writes software.
John Searle was a troll. The Chinese Room should never be taken seriously, because he pointed at a hard drive and said "processor." Demanding a blind idiot instruction-follower must understand the whole intent of the software happening to it... is just Cartesian dualism. Except instead of a soul, you get a Steve, and he better be paying attention! If he gets the same results while zoning out, they don't count. As if some guy emulating a calculator app would follow the low-level floating-point bit-banging necessary to find the area of a circle.
Sorry. Pet peeve.
We do need to distinguish intelligence, comprehension, consciousness, and sapience. LLMs definitely aren't alive, in any sense. But saying they lack all intelligence veers toward saying calculators only simulate math. Intelligence is a process, and if it's necessary for certain observed decisions in humans... we also observe those decisions in some models. Like figuring out you can't walk to a car wash.
I cannot get excited over yet another moral panic where kids these days have a crutch for... whatever. The internet, calculators, slide rules, the printing press, the written word. Some little shits are always giving teachers a hard time. Surely they're still routinely judged with nothing but pencil and paper in a silent room.