It doesn't matter. Management wants this and will not stop until they run into a wall at full speed. 🤷
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
The output from tools infected with LLMs can intrinsically only ever be imprecise, and should never be trusted.
As an unemployed data analyst / econometrician:
lol, rofl, perhaps even... lmao.
Nah though, it's really fine. My quality of life is enormously better barely surviving off of SSDI than it was explaining data analytics to thumb-sucking morons (VPs, 90% of other team leads) and either fixing or covering for all their mistakes.
Yeah, sure, just have the AI do it, go nuts.
I am enjoying my unexpected early retirement.
Before anything else: whether the specific story in the linked post is literally true doesn’t actually matter. The following observation about AI holds either way. If this example were wrong, ten others just like it would still make the same point.
What keeps jumping out at me in these AI threads is how consistently the conversation skips over the real constraint.
We keep hearing that AI will “increase productivity” or “accelerate thinking.” But in most large organizations, thinking is not the scarce resource. Permission to think is. Demand for thought is. The bottleneck was never how fast someone could draft an email or summarize a document. It was whether anyone actually wanted a careful answer in the first place.
A lot of companies mistook faster output for more value. They ran a pilot, saw emails go out quicker, reports get longer, slide decks look more polished, and assumed that meant something important had been solved. But scaling speed only helps if the organization needs more thinking. Most don’t. They already operate at the minimum level of reflection they’re willing to tolerate.
So what AI mostly does in practice is amplify performative cognition. It makes things look smarter without requiring anyone to be smarter. You get confident prose, plausible explanations, and lots of words where a short “yes,” “no,” or “we don’t know yet” would have been more honest and cheaper.
That’s why so many deployments feel disappointing once the novelty wears off. The technology didn’t fail. The assumption did. If an institution doesn’t value judgment, uncertainty, or dissent, no amount of machine assistance will conjure those qualities into existence. You can’t automate curiosity into a system that actively suppresses it.
Which leaves us with a technology in search of a problem that isn’t already constrained elsewhere. It’s very good at accelerating surfaces. It’s much less effective at deepening decisions, because depth was never in demand.
If you’re interested, I write more about this here: https://tover153.substack.com/
Not selling anything. Just thinking out loud, slowly, while that’s still allowed.
How much do you want to bet they also rolled out bonuses based on this bogus data? The one saving grace is they started using the new LLM tooling mid-Q4, so any quarterlies would at least be partially based on real data.
I hope they sue whoever sold it to them. It's not artificial intelligence, it's a machine-learning chatbot. They may as well be running their company with a Magic 8 Ball.
I was trying to figure out why the stock market is so high.
This would suggest the leadership positions aren't required for the function of the business.
I have been saying for years now that the kind of work LLMs are best suited to replacing, and by far their most cost-effective use case from a business standpoint, is...
Well, it's the most expensive employees, the ones who basically just spend most of their time having meetings or writing emails about things they only understand at a bird's-eye-view level.
You know, C Suite, upper management.
This has always been the case, in every industry.
Why did mods remove the OP?
Can you literally not badmouth AI on Reddit?
Nice. Really, I like it when management is dumb as fuck. It's a world of never ending joy.
Jesus Christ, you have to have a human validate the data.
But that would mean paying someone for work. The CEOs want to replace humans.
LLMs can't really do math, so if there is any analysis being done, the numbers will typically be junk. Unless the LLM is writing the code to do the math, but then you have to validate the code.
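To make it concrete, here's the kind of check I mean. A minimal sketch with made-up numbers (`raw_sales` and the LLM's reported figure are both hypothetical): recompute the total yourself instead of trusting whatever the model wrote in its summary.

```python
# Minimal sketch: never take an LLM's arithmetic at face value.
# All numbers below are invented for illustration.

raw_sales = [1200.50, 980.00, 1310.25, 875.75]  # the source data the report was based on
llm_reported_total = 4370.00                    # the total the LLM put in its summary

actual_total = sum(raw_sales)                   # recompute independently: 4366.50

if abs(actual_total - llm_reported_total) > 0.01:
    print(f"Mismatch: report says {llm_reported_total}, data says {actual_total}")
```

Three lines of independent arithmetic catch a total that otherwise reads as perfectly plausible.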
Exactly, this is like letting Excel's autofill finish the spreadsheet and going "looks about right."
And that's a good analogy, as people have posted screenshots of Copilot getting basic addition wrong in Excel.
Whoever implemented this agent without proper oversight needs to be fired.
Except the CEO and executives ultimately responsible will blame their underlings, who will be fired, even though it was an executive-level decision. They didn't get to the pinnacle of corporate governance by admitting mistakes. That's not what they were taught at their Ivy League schools; they were taught to lie and cheat to steal, and to slander their victims further to excuse it.
It was bad before the current president set his outstanding example for the rest of the country. See what being a lying cheating piece of shit gets you? Everything. Nothing matters. We have the wrong people in charge across the board, from business to government to institutions.
I've said it time and time again: AIs aren't trained to produce correct answers, but seemingly correct answers. That's an important distinction and exactly what makes AIs so dangerous to use. You will typically ask the AI about something you yourself are not an expert on, so you can't easily verify the answer. But it seems plausible so you assume it to be correct.
Even worse is that over time, the seemingly correct answers will drift further away from actually correct answers. In the best case, it's because people come to expect the wrong answers, since that's all they've been exposed to. Worse cases would be the answers skewing toward whatever conclusion the AI maker wants people to reach.
AIs aren't trained to produce correct answers, but seemingly correct answers
I prefer to say “algorithmically common” instead of “seemingly correct” but otherwise agree with you.
My own advice for people starting to use AI is to use it for things you know very well. Using it for things you do not know well will always be problematic.
The problem is that we've had a culture of people who don't know things very well control the purse strings relevant to those things.
So we have executives who don't know their work or customers at all and just try to bullshit, while their people frantically repair the damage the executive does in order to preserve their jobs. Then they see bullshit-generating platforms, recognize a kindred spirit, and set a goal of replacing those dumb employees with a more "executive"-like entity that can also generate reports and code directly. No talking back, no explaining that the request needs clarification or that the data doesn't support their decision, just a "yes, and..." result agreeing with whatever dumbass request they thought would be correct and simple.
Finally, no one talking back to them and making their life difficult and casting doubt on their competency. With the biggest billionaires telling them this is the right way to go, as long as they keep sending money their way.
The problem is, every time you use it, you become more passive. More passive means less alert to problems.
Look at all the accidents involving "safety attendants" in self-driving cars. Every minute they let AI take the wheel, they become more complacent. Maaaybe I'll sneak a peek at my phone. Well, haven't gotten into an accident in a month, I'll watch a video. In the corner of my vision. Hah, that was good, gotta leave a commen — BANG!
They are designed to convince people. That's all they do. True, or false, real or fake, doesn't matter, as long as it's convincing. They're like the ultimate, idealized sociopath and con artist. We are being conned by a software designed to con people.
Thankfully, AI is bad at maths for exactly this reason. You don't have to be an expert on a very specific topic to be able to verify a proof and - spoiler alert - most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop's claims that it is vastly superior to previous models.
I've been through the cycle of AI companies repeatedly saying "now it's perfect," then only admitting it was complete trash when they release the next iteration and claim "yeah, it was broken, we admit, but now it's perfect," so many times now...
Problem being there's a massive marketing effort to gaslight everyone, so if I point it out in any vaguely significant context, I'm just not keeping up and must only have dealt with the shitty ChatGPT 5.1, not the more perfect 5.2. In my company they're all about the Anthropic models instead, so it's Opus 4.5 versus 4.6 now. Even demonstrating the limitations by trying to work with 4.6 gives Anthropic money, and at best I earn an "oh, those will probably be fixed in 4.7 or 5 or whatever."
Outsiders are used to traditional software that has bugs, but those are straightforward to address, so software that's close but imperfect can hit the mark in updates. LLMs not working that way doesn't make sense to them. They use the same version-number scheme, after all, so expectations should be similar.
most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop's claims that it is vastly superior to previous models
Both of those can be true.
I mean yeah, but they specifically mentioned its amazing performance in tasks requiring reasoning
The person who posted this, along with everyone else who knew they were using LLMs like this, is an incompetent idiot who should lose their job.
Dumbasses. Mmm, that's good schadenfreude.