This would suggest the leadership positions aren't required for the function of the business.
"We did it, Patrick! We made a technological breakthrough!"
This has always been the case, in every industry.
Jesus Christ, you have to have a human validate the data.
Exactly, this is like letting Excel auto-fill finish the spreadsheet and going "looks about right".
And that's a good analogy, as people have posted screenshots of Copilot getting basic addition wrong in Excel.
Whoever implemented this agent without proper oversight needs to be fired.
Except the CEO and executives ultimately responsible will blame their underlings, who will be fired, even though it was an executive-level decision. They didn't get to the pinnacle of corporate governance by admitting mistakes. That's not what they were taught at their Ivy League schools; they were taught to lie, cheat, and steal, and to slander their victims to excuse it.
It was bad before the current president set his outstanding example for the rest of the country. See what being a lying cheating piece of shit gets you? Everything. Nothing matters. We have the wrong people in charge across the board, from business to government to institutions.
Fair points all around.
And you're not wrong. I work for a law firm, and we were tracking his EOs until mid-2025. They were so riddled with typos, errors, and URLs pointing to the wrong EO that we ended up having to hide the URLs in the database we built so clients wouldn't think we were the ones making the errors.
Excel already has advanced math functions, and recently even a Python integration. It is entirely possible to set up a spreadsheet that calculates all primes up to 10,000. There is no need to integrate genAI.
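To illustrate the point, here is a minimal sketch of the kind of deterministic computation meant above: a plain-Python sieve that lists every prime up to 10,000. A script like this (the sort of thing Excel's Python integration can run) gives the same exact answer every time, with no model involved.

```python
def primes_up_to(limit: int) -> list[int]:
    """Sieve of Eratosthenes: return all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Mark every multiple of n (starting at n*n) as composite.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

primes = primes_up_to(10_000)
print(len(primes))  # 1229 primes below 10,000
```

No LLM can beat this on correctness: the sieve is exhaustive by construction, while a language model is only ever pattern-matching toward a plausible-looking list.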
Yup, but stupid people can't be bothered to go read a five-minute tutorial. Story of our species.
Dumbasses. Mmm, that's good schadenfreude.
I've said it time and time again: AIs aren't trained to produce correct answers, but seemingly correct answers. That's an important distinction and exactly what makes AIs so dangerous to use. You will typically ask the AI about something you yourself are not an expert on, so you can't easily verify the answer. But it seems plausible so you assume it to be correct.
My own advice for people starting to use AI is to use it only for things you know very well. Using it for things you do not know well will always be problematic.
AIs aren't trained to produce correct answers, but seemingly correct answers
I prefer to say “algorithmically common” instead of “seemingly correct” but otherwise agree with you.
Thankfully, AI is bad at maths for exactly this reason. You don't have to be an expert on a very specific topic to be able to verify a proof and - spoiler alert - most of the proofs ChatGPT 5 has given me are plain incorrect, despite OpenSlop's claims that it is vastly superior to previous models.
I use it to summarize stuff sometimes, and I honestly spend almost as much time checking its accuracy as I would if I had just read and summarized the thing myself.
It is useful for 'What does this contain?' so I can see if I need to read something. Or rewording something I have made a pig's ear out of.
I wouldn't trust it for anything important.
The most important thing, if you do use AI, is to not ask leading questions. Keep them simple and direct.
It is useful for ‘What does this contain?’ so I can see if I need to read something. Or rewording something I have made a pig’s ear out of.
Skimming and scanning texts is a skill that achieves the same goal more quickly than using an unreliable bullshit generator.
What dumbass decided to implement an experimental technology and not test it for 5 minutes to make sure it's accurate before giving it to the whole company and telling them to rely upon it?
Joke's on you, we make our decisions without asking AI for analytics. Because we don't ask for analytics at all.
I don't need AI to fabricate data. I can be stupid on my own, thank you.
I feel like no analytics is probably better than decisions based on made-up analytics.
Yep, without analytics you're at least likely going on an anecdotal feel for things, which, while woefully incomplete, is at least based on actual indirect experience: the number of customers you've spoken with, how happy they have seemed, how employees have been feeling, etc.
Could be horribly off the mark without actual study of the data, but it is at least roughly directed by reality rather than just random narrative made by a word generator that has nothing to do with your company at all.
I'm not sure, because I'm not C-level by far, but I feel the decisions in such cases are made based on an imaginary version of the clients and what the tops feel the clients want (that is, what the executives think they would want if they were the clients).
And they may guess right or wrong, though I agree that they may be more likely to guess right than an LLM, being humans and all.
I somehow hope this is made up, because doing this without checking and finding the obvious errors is insane.
As someone who has to deal with LLMs/AI daily in my work in order to fix the messes they create, this tracks.
AI's sole purpose is to provide you with a positive solution. That's it. That positive solution doesn't even need to be accurate, or even to exist. It's built to present a positive "right" solution without taking the steps to get to that "right" solution, which is why, most of the time, that solution is a hallucination.
You see it all the time: ask it something tech-related, and in order to reach that positive "right" solution it'll hallucinate libraries that don't exist, or programs that don't do what it claims they do. Because to the LLM, that is the positive right solution, arrived at without any steps to confirm the solution even exists.
So in the case of OP's post, I can see it happening. They told the LLM they wanted three months of analytics, and rather than take the steps to produce an accurate answer, it skipped those steps and provided a positive-sounding one.
Don't use AI/LLMs for your day-to-day problem solving; you're wasting your time. OpenAI, Anthropic, Google, etc. have all built these things to provide you with "positive" solutions so you'll keep using them. They just hope you're not savvy enough to call out their LLMs when they're clearly and frequently wrong.
Probably the skepticism is around someone actually trusting the LLM this hard rather than the LLM doing it this badly. To that I will add that based on my experience with LLM enthusiasts, I believe that too.
I have talked to multiple people who recognize the hallucination problem but think they have solved it because they are good "prompt engineers". They always include a sentence like "Do not hallucinate" and think that works.
The gaslighting from the LLM companies is really bad.
"Prompt engineering" is the astrology of the LLM world.
It has happened in the open, so I don't see why it wouldn't happen even more behind closed doors:
Deloitte will provide a partial refund to the federal government over a $440,000 report that contained several errors, after admitting it used generative artificial intelligence to help produce it.
Use of AI in companies would not save any time if you were checking each result.
This is probably real, as it isn't the first time it happened: https://www.theguardian.com/technology/2025/jun/06/high-court-tells-uk-lawyers-to-urgently-stop-misuse-of-ai-in-legal-work
Yeah.
Kinda surprised there isn't already a term for submitting / presenting AI slop without reviewing and confirming.
Negligence and fraud come to mind.
Slop flop seems like it would work. He’s flopped the slop. That slop was flopped out without checking.
I suspect this will happen all over the place within a few years. AI seemed good enough at first, but over time reality and the AI started drifting apart.
They haven't drifted apart, they were never close in the first place. People have been increasingly confident in the models because they've increasingly sounded more convincing, but the tenuous connection to reality has been consistently off.
AI is literally trained to produce the right-looking answer, not to actually perform the steps that get you to the answer. It's like those people who trained dogs to carry explosives and run under tanks: they thought they were doing great until the first battle they used them in, when they realized the dogs would run under their own tanks instead of the enemy's, because those were the tanks they'd been trained on.
Holy shit, that's what they get for being so evil that they trained dogs as suicide bombers.
And then, the very same CEOs that demanded the use of AI in decision making will be the ones that blame it for bad decisions.
while also blaming employees
Of course, it is the employees who used it. /s

But don't worry, when it comes to life or death issues, AI is the best way to help