HedyL

joined 2 years ago
[–] HedyL@awful.systems 7 points 3 weeks ago

there’s no use case for LLMs or generative AI that stands up to even mild scrutiny, but the people funneling money into this crap don’t seem to have noticed yet

This is why I dislike the narrative that we should resist "AI" with all our power because, supposedly, if our employers got us to train the chatbots, they would become super smart and replace us in no time. In my view, this is simply not true, as the past few years have shown. Spreading this narrative (no matter how well-intentioned) only empowers the AI grifters and reinforces employers' belief that they could easily lay people off and replace them with slop generators because supposedly the tech can do it all.

There are other very good reasons to fight the slop generators, but this is not one of them, in my view.

[–] HedyL@awful.systems 18 points 3 weeks ago* (last edited 3 weeks ago)

I'm old enough to remember the dotcom bubble. Even at my young age back then, I found it easy to spot many of its "bubbly" aspects. Yet, as a nerd, I was very impressed by the internet itself and showed a bit of youthful obsession with it (while many of my peers the same age were still hesitant to embrace it, to be honest).

Now, with LLMs/generative AI, I simply find myself unable to identify any potential even remotely similar to that of the internet. Of course, it is easy to argue that today I am simply too old to embrace new tech or whatever. What strikes me, however, is that some of the worst LLM hypemongers I know are people my age (or older) who missed out on the early internet boom and never quite seemed to get over that fact.

[–] HedyL@awful.systems 19 points 3 weeks ago (2 children)

I don't understand. Everybody keeps telling me that LLMs are easily capable of replacing pretty much every software developer on this planet. And now they complain that $71 a day (or even $200 a month) is too much for such amazing tech? /s

[–] HedyL@awful.systems 9 points 4 weeks ago

In my experience, copy that "sells" must evoke the impression of being unique in some way, while also conforming to certain established standards. After all, if the copy reads like something you could read anywhere else, how could the product be any different from all the competing products? Why should you pay any attention to it at all?

Pairing conformity with uniqueness and originality requires a balancing act that many people unfamiliar with copywriting might not understand at all. To some extent, I think LLMs can create the impression of conformity that clients expect from copywriters, but they tend to fail at the "uniqueness" part.

[–] HedyL@awful.systems 5 points 4 weeks ago (1 children)

maybe they’ll figure a way to squeeze suckers out of their money in order to keep the charade going

I believe that without access to generative AI, spammers and scammers would no longer be able to compete successfully in their respective markets. So at the very least, the AI companies have that going for them, I guess. It might require their sales reps to mingle in somewhat peculiar circles, but who cares?

[–] HedyL@awful.systems 12 points 4 weeks ago

It's almost as if teachers were grading their students' tests by rolling dice, and then the students tried manipulating the dice (because it was their only shot at getting better grades), and the teachers got mad about that.

[–] HedyL@awful.systems 10 points 4 weeks ago (1 children)

This is, of course, a fairly blatant attempt at cheating. On the other hand: Could authors ever expect a review that's even remotely fair if reviewers outsource their task to a BS bot? In a sense, this is just manipulating a process that would not have been fair either way.

[–] HedyL@awful.systems 7 points 4 weeks ago

To me, the idea of using market power as a key argument here seems quite convincing: if there were meaningful competition in the search engine market, Google would probably have had a much harder time imposing this slop on all of its users.

[–] HedyL@awful.systems 10 points 1 month ago (1 children)

I disagree with the last part of this post, though (the idea that lawyers, doctors, firefighters, etc. are inevitably going to be replaced with AI as well, whether we want it or not). I think this is precisely what the AI grifters want us to believe, because if they could somehow force everyone in every part of society to pay for their slop, that would keep stock prices up. So far, however, AI has mainly been shoved into our lives by a few oligopolistic tech companies (and some VC-funded startups), and I think the main purpose is to create the illusion (!) of inevitability, because that is what investors want.

[–] HedyL@awful.systems 25 points 1 month ago (4 children)

Completely unrelated fact, but isn't the prevalence of cocaine use among U.S. adults also estimated to be above 1%?

(Referring to this, of course - especially the last part: https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/)

[–] HedyL@awful.systems 30 points 1 month ago

Stock markets generally love layoffs, and they appear to love AI at the moment. To be honest, I'm not sure they thought beyond that.

[–] HedyL@awful.systems 4 points 1 month ago

Yes, they will create security problems anyway, but maybe, just maybe, users won’t copy paste sensitive business documents into third party web pages?

I can see that. It becomes kind of a protection racket: Pay our subscription fees, or data breaches are going to befall you, and you will only have yourself (and your chatbot-addicted employees) to blame.
