this post was submitted on 05 Nov 2025
106 points (96.5% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


Just heard this today - a co-worker used an LLM to find him some shoes that fit. He basically prompted it to find specific shoes that fit wider/narrower feet, and it just scraped reviews and told him what to get. I guess it worked perfectly for him.

I hate this sort of thing - however, this is why normies love LLMs. It's going to be the new way every single person uses the internet. Hell, on a new Win 11 install the first thing that comes up is Copilot saying "hey, use me, I'm better than Google!!"

Frustrating.

[–] ryanvgates 81 points 4 days ago (8 children)

I don't think people realize how quickly this will happen: the companies making the models will start extorting brands for better rankings. You want our model to recommend your brand? Then pay us $X. At that point any perceived utility of having it read reviews vanishes, kind of like with fake reviews today.

[–] wizardbeard@lemmy.dbzer0.com 23 points 4 days ago

Most AI company executives have already spoken openly about how that's their plan for future financial growth: advertisements delivered naturally in the output with no clear division between ads and the content.

[–] bridgeenjoyer@sh.itjust.works 24 points 4 days ago

Oh, 100%, we already see what fElon programmed HitlerBot to do. It's going to be an ultra-capitalist's wet dream once the internet is destroyed and people only have access to Corpo LLM for the cost of 3 pints of blood a month!

[–] makeshiftreaper@lemmy.world 8 points 4 days ago

Arguably this is already happening. AIs are trained mostly on web scrapes, and specifically on Reddit, which has a known astroturfing problem. So they're already being fed non-genuine inputs, and likely aren't being paired with any tools that flag reviews as fake.

[–] driving_crooner@lemmy.eco.br 8 points 4 days ago

Already happening. I was using ChatGPT to write a script to download my liked YouTube Music videos, and it kept giving me a pop-up with the message "use Spotify instead".

[–] obsoleteacct@lemmy.zip 3 points 4 days ago

Worse than that, people and brands are going to enshittify the internet in an effort to get their products and brands into the training data with a more positive context.

Just use one AI to create hundreds of thousands of pages of bullshit about how great your brand is and how terrible your competitors brands are.

Then every AI scraping those random pages, trying to harvest as much data as possible, folds that into its training data set. And it doesn't have to be just fake product reviews: fake peer-reviewed studies, fake white papers. It doesn't even have to be on the surface; it can be buried on a thousand web servers accessible to scrapers but not to typical users.

Then all the other brands will have to do the same to compete, all of it enshittifying the models themselves more and more as they go.

Self-inflicted digital brain tumors.

[–] brucethemoose@lemmy.world 5 points 4 days ago* (last edited 4 days ago)

This isn't so dystopian if open-weight LLMs keep their momentum. If anyone can host a model, hosts become commodities rather than brands that capture users, and each host has less leverage for extortion.

[–] theunknownmuncher@lemmy.world 39 points 4 days ago* (last edited 4 days ago) (2 children)

hey use me i'm better than google

I don't use copilot (or windows), nor do I believe that LLMs are appropriate for use as a search engine replacement, but to be fair, google is really bad now, and I wouldn't be surprised if people are having a better experience using LLMs than google.

[–] thesohoriots@lemmy.world 12 points 4 days ago

I think the other half of this is the confidence with which it’s programmed to give the answer. You don’t have to go through a couple individual answers from different sites and make a decision. It just tells you what to do in a coherent way, eliminating any autonomy you have in the decision process/critical thinking, of course. “Fix my sauce by adding lemon? Ok! Add a bay leaf and kill yourself? Can do!”

[–] Grimy@lemmy.world 6 points 4 days ago* (last edited 4 days ago) (1 children)

I have a feeling Google is slowly reducing its own quality so that using LLMs becomes the norm, since it's even easier to inject ads there. Might be paranoia tho.

[–] deliriousdreams@fedia.io 3 points 4 days ago

Google has already been caught doing this. They reduced the quality of search results and placed ads and paid SEO placements (companies that pay to rank first) ahead of other results. This was happening before they had a generative LLM.

The intent is to keep you on the search page longer, viewing ads, so they can collect more ad revenue.

They're an ad aggregation company first and foremost and search (along with their other suite of products) is how they serve those ads.

[–] SkunkWorkz@lemmy.world 16 points 4 days ago (1 children)

This happens because search engines have gotten worse and worse over the years, unless you've installed adblockers or have a Pi-hole. Not to mention that many of the search results Google returns are just AI-generated websites. And the average person isn't going to pay for Kagi to get a better search engine.

[–] stabby_cicada@slrpnk.net 2 points 3 days ago

Enshittification squared. Create a service that customers come to rely on. Then turn the service into shit to squeeze more profit out of it. Then create a new service that replicates the functionality of the old service customers relied on. Then enshittify that. And so on.

[–] TheImpressiveX@lemmy.today 20 points 4 days ago (5 children)

This is how my younger brother already is. Whenever something goes wrong, his first instinct is to open the ChatGPT app and ask it what to do.

Car engine making weird noises? Ask ChatGPT. Botched dinner recipe? Ask ChatGPT how to fix it.

Evidently, he sees nothing wrong with this, and does not consider the fact that the answers regurgitated by the LLM may not even be relevant/correct/up-to-date. I imagine this is how most people today treat LLMs.

[–] brokenwing@discuss.tchncs.de 13 points 4 days ago (1 children)

I feel a major reason people do this is that they don't want to spend the time and effort visiting one or two websites to actually learn about something.

Honestly, this is sad. Modern media has conditioned their brains for instant gratification, and LLMs provide exactly that: quick, direct answers. Those answers may not be correct, but people are willing to overlook that fact.

[–] bridgeenjoyer@sh.itjust.works 10 points 4 days ago

This is totally it.

I don't care, I'll continue researching actual data made by humans on websites and judging for myself whether they're trustworthy.

[–] prole@lemmy.blahaj.zone 4 points 4 days ago

People are literally outsourcing their critical thinking to LLMs that constantly hallucinate.

What could go wrong?

[–] technocrit@lemmy.dbzer0.com 3 points 3 days ago

Does your brother pay for these services? If not, how much would he be willing to pay?

Eventually these "AI" grifters will run out of "investor" money to hand out.

[–] SGforce@lemmy.ca 8 points 4 days ago

That's what Google was

[–] solomonschuler@lemmy.zip 5 points 3 days ago (1 children)

I'm a student, and I've strictly removed myself from this fucked-up story. Albeit, in my CS class of about 125 students, I'd say 87% of them use ChatGPT, probably even more. It's easy to see that almost everyone in the class cheats: I spend about 5 days on a project or lab, they spend only 1. It's not that they're good at CS; this is, after all, an "introductory" CS class, and at the moment we're learning data structures. Everyone hates data structures, and the people I talked to said "yeah, I spent a day on the linked list lab and got 100." It is so rare for me to get a 100 in this class, and I spent 5 days on that lab.
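(For anyone outside CS wondering what that lab involves: an intro linked-list assignment usually boils down to something like the minimal C++ sketch below. This is a generic illustration, not the actual coursework; all the names are mine.)

```cpp
#include <cassert>

// Minimal singly linked list, the kind of structure an
// introductory data-structures lab typically asks for.
struct Node {
    int value;
    Node* next;
};

class LinkedList {
    Node* head = nullptr;
public:
    // Insert at the front in O(1): the new node points at the old head.
    void push_front(int v) { head = new Node{v, head}; }

    // Walk the chain node by node to count elements, O(n).
    int size() const {
        int n = 0;
        for (Node* cur = head; cur != nullptr; cur = cur->next) ++n;
        return n;
    }

    // Value stored in the first node (undefined on an empty list).
    int front() const { return head->value; }

    // Free every node so the list doesn't leak.
    ~LinkedList() {
        while (head) {
            Node* next = head->next;
            delete head;
            head = next;
        }
    }
};
```

Simple to write with the concept in your head, and equally simple to paste into a chatbot, which is presumably the point being made.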

I've completely removed myself from using ChatGPT and other LLMs. For fuck's sake, I'm using a query-based search engine called Marginalia Search, because even the internet has been flooded with unreliable information.

I love it because now I don't have everything at my fingertips. I do find things with the search engine, but you have to be sparing with the words you use, and not everything will pop up. For example, you can't just search "how to concatenate numbers in c++"; you need to say "string concatenation c++", because the extra keywords dilute the query.
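(The answer that search query leads to, for what it's worth, is roughly this: convert the numbers to `std::string` first, then join with `operator+`. A trivial sketch; the function name is mine.)

```cpp
#include <string>

// "Concatenating numbers" in C++: numbers can't be joined with
// operator+ directly (that would add them), so convert each to
// std::string and concatenate the strings.
std::string concat_numbers(int a, int b) {
    return std::to_string(a) + std::to_string(b);
}
```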

Because of this, I've started checking out library books from my university, and what's funny is that the books I check out aren't due for months because no one borrows books anymore. I got a C/C++ reference guide and it isn't due till January of next year.

[–] bridgeenjoyer@sh.itjust.works 4 points 3 days ago

You are my spirit animal. I'd hire you just based on this comment.

[–] visc@lemmy.world 8 points 4 days ago (3 children)

Yes. That is how the internet will be used, to a significant degree. Even today AI represents an extremely powerful and astonishingly human-adapted user interface.

The internet brought a vast quantity of knowledge together and made it accessible to anyone, in theory. In practice you need arcane knowledge to get what you want. You need to wiggle the mouse just so, need to know the abstract structure of the internet, the peculiarities of search terms, … it’s eminently doable but it’s not natural or intuitive. You must be taught how to use it.

If you put a medieval Tamil farmer in a room with a ChatGPT audio interface they could use it and have access to all of that internet knowledge.

I understand a lot of the backlash against AI but I don’t get hating on it because of how good of an interface it makes.

[–] bridgeenjoyer@sh.itjust.works 6 points 4 days ago (1 children)

Unfortunately you're right. If it weren't completely owned by massive corps pushing techno-fascist ideology and mass surveillance, I could maybe see it as a positive thing. But it will not be used for good in the long run.

[–] visc@lemmy.world 1 points 3 days ago (1 children)

Right. So that is what we need to change.

[–] bridgeenjoyer@sh.itjust.works 1 points 3 days ago

You can't change that, short of sabotaging data centers or getting rid of billionaires.

Better to rally against it and not use it whenever possible.

[–] stabby_cicada@slrpnk.net 1 points 3 days ago* (last edited 3 days ago)

Yeah, and how does that Tamil farmer fact check their black box audio interface when it tells them to spray Roundup on their potatoes, or warns them to buy bottled water because their Hindu-hating Muslim neighbors have poisoned their well, or any other garbage it's been deliberately or accidentally poisoned with?

One of the huge weaknesses of AI as a user interface is that you have to go outside the interface to verify what it tells you. If I search for information about a disease using a search engine, and I find an .edu website discussing the results of double blind scientific studies of treatments for a disease, and a site full of anti-Semitic conspiracy theories and supplement ads telling me about THE SECRET CURE DOCTORS DON'T WANT YOU TO KNOW, I can compare the credibility of those two sources. If I ask ChatGPT for information about a disease, and it recommends a particular treatment protocol, I don't know where it's getting its information or how reliable it is. Even if it gives me some citations, I have to check its citations anyway, because I don't know whether they're reliable sources, unreliable sources, or hallucinations that don't exist at all.

And people who trust their LLM and don't check its sources end up poisoning themselves when it tells them to mix bleach and vinegar to clean their bathrooms.

If LLMs were being implemented as a new interface for gathering information - as a tool to enhance human cognition rather than supplant, monitor, and control it - I would have a lot fewer problems with them.

[–] Jhex@lemmy.world 6 points 4 days ago

I have found no use for it even for trivial stuff, as I don't trust what it outputs. If I have to verify its answer, I might as well just research it on my own; it's literally more work to ask and then verify than to just research.

At work, the only use I have found is generating fake data so I can run tests, since I work on confidential stuff I can't use directly. For example, the other day I asked for a table of 30 superhero first names, last names, genders, and DOBs just to pick from. I could easily have done it by hand, since it doesn't have to be accurate at all, but it was faster to prompt than to type randomly, and I didn't care if it missed the prompt (a very rare scenario, IMO).

[–] JackbyDev@programming.dev 6 points 4 days ago

Sponsored search results suck, but sponsored LLM results are gonna be wild.

[–] tty5@lemmy.world 10 points 4 days ago* (last edited 4 days ago)

I spend significantly more time keeping AI at bay than using it.

  • It is not good enough to write code at the expertise level I'm usually required to work at. I've spent more time fixing generated code than it would have taken me to write it from scratch.
  • It hallucinates too much for me to trust any information it provides.
  • The security of AI companies is about the same as that of IoT companies, with the difference that if IoT leaks my data it's incompetence, not malice. I don't trust AI with any of my local data.
  • AI agents require being given even more access and permissions, and that's just not happening.
  • I contact support only after I've exhausted what I can do myself, so AI chatbots are an annoying obstacle that can't help me and that I have to waste time getting through to reach a person who actually has the power to help.
[–] jimmy90@lemmy.world 5 points 4 days ago* (last edited 1 day ago)

LLMs are good at templates or starting points for standard documents, communications, and coding examples.

BUT ... you have to double-check every single word.

[–] technocrit@lemmy.dbzer0.com 4 points 3 days ago* (last edited 3 days ago) (1 children)

Ok, but how much are people willing to pay an "AI" to find sneakers?

How long can "AI" grifters dump money/resources into these free services?

If "normal people" had to pay the actual costs, I'm sure that many of them would look for their own sneakers.

Personally I don't need/want a machine to pick my shoes for me in the first place.

Anyway I guess my wider point is that people will stop using "AI" so much when they actually have to pay for it.

[–] zqps@sh.itjust.works 1 points 3 days ago

People do not pay for online services they're used to getting for free. That means there will always be free LLMs available, but as a result of cost-cutting they'll be even worse than what we have today.

[–] TootSweet@lemmy.world 10 points 4 days ago

My mother is constantly googling things and reading me the AI overview. And I know LLMs make shit up all the time, and I don't want AI hallucinations to infect my brain and slowly fuck up my worldview. So I always have to drop everything and go confirm the claims from the AI overview. And I've caught plenty of inaccuracies and hallucinations. (One I remember: she googled for when the East Wing of the White House was originally built and the AI overview told her the year of a major renovation, claiming it was the year it was built, but it had been built much earlier.)

[–] hendrik@palaver.p3x.de 9 points 4 days ago (3 children)

I never understood how people order shoes online except if they already have that pair. I go to a real-world shop and try 10 pairs and honestly, 8 of them aren't so great. I wouldn't know how that works with Amazon or an AI.

I just got a pair of Adidas on the Costco website for $16.76. I took a gamble and they are NICE. I can't believe I got that deal. I'll chance it at that cost.

[–] Cevilia@lemmy.blahaj.zone 2 points 4 days ago

I'd never buy any kind of clothing online. I'm not especially vain, I don't even wear makeup, but if I can't tell how it's going to feel and look on my body, why would I trust it?

[–] TubularTittyFrog@lemmy.world 3 points 4 days ago* (last edited 4 days ago) (1 children)

Most shoe sizes are the same. I wear 11.5; I've been wearing it for 20 years and have probably had hundreds of shoes in that size.

The only time I have a different shoe size is for leather dress shoes or boots, which are typically sized down from a standard shoe size.

[–] Khanzarate@lemmy.world 7 points 4 days ago (8 children)

Search engines have cratered.

There's a need to skim through all the crap to find what's valuable, and AI offers to do that.

I have used it to learn things, specifically how to use Angular. Angular has enough versions, all still in use somewhere, that genuinely valid and helpful advice from a few years ago is misleading. The AI didn't so much take the place of a tutorial as review my whole code, telling me what was wrong. Then I fact-checked its answers because I don't trust it, and yeah, it wasn't always correct itself, but it was more than 80% of the time, and even when it was wrong, it got me close enough that I could find the right answer where it failed.

Also, it does a great job with CSS right out of the gate, no mistakes yet on that front.

[–] artyom@piefed.social 6 points 4 days ago (2 children)

What? As a person with wide feet, there are sizes that are specifically made for wide feet. This isn't anything they couldn't have learned from a normal search engine.

[–] bonsai@lemmy.dbzer0.com 5 points 4 days ago

Someone at work is using ChatGPT to guide her retirement accounts. I just... I'm so tired, man. I hope it works out for her, but, like, really?

[–] danekrae@lemmy.world 4 points 4 days ago* (last edited 4 days ago) (1 children)

I told my students to use it as an aid, not for any thinking (teachers have been told to instruct students on using AI). They used it the other day by taking a photo of a hand-drawn table (a database) their math teacher had drawn. The AI then made it into a nice spreadsheet for them to use. They did have to account for it not understanding the letters Æ, Ø, Å.

[–] prole@lemmy.blahaj.zone 2 points 4 days ago* (last edited 4 days ago) (1 children)

And what happens when it does it wrong, but the students have no idea because they never bothered to actually learn to do the thing themselves?

[–] ahornsirup@feddit.org 4 points 4 days ago

Considering just how many of the top results in any search engine are AI generated SEO farming I can understand why people do it. Finding decent results is HARD these days, especially on subjects you're not already familiar with (because you'll not notice the red flags).
