this post was submitted on 11 Jan 2026
1012 points (99.0% liked)

Fuck AI

5167 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
On familiarity (infosec.pub)
submitted 20 hours ago* (last edited 19 hours ago) by ThefuzzyFurryComrade@pawb.social to c/fuck_ai@lemmy.world
 

Source (Bluesky)

Transcript

recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

It's only ever the jobs we're unfamiliar with that we assume can be replaced with automation. The more attuned we are with certain processes, crafts, and occupations, the more we realize that gen Al will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't.

top 50 comments
[–] AeonFelis@lemmy.world 1 points 8 minutes ago

Hot take: it's reasonable for a comics student to use AI for script-writing and for a screenwriting student to use AI for concept art, not because a machine can generate meaningful artistic work in these fields, but because these are not the fields they are trying to learn.

In a way, this can be used to level the field. The comics professor can use the same LLM to generate scripts for all their students. It'll be a slop script, but the slop will be of uniform quality, so no student will have the advantage of better writing and it'll be easier to judge their work on the drawing alone.

And even if AI could generate true art in some field - why would it be acceptable for a student to use it in the very field they are studying and need to polish their own skills in?

[–] Kolanaki@pawb.social 8 points 1 hour ago (1 children)

AI only seems good when you don't know enough about any given topic to notice that it is wrong 70% of the time.

This is concerning when CEOs and other people in charge seem to think it is good at everything, as this means they don't know a god damn thing about fuck all.

[–] AngryCommieKender@lemmy.world 2 points 1 hour ago

I remember an article back in 2011 that predicted that we would be able to automate all middle and most upper management jobs by 2015. My immediate thought was, "Well, these people must not do much if a glorified script can replace them."

[–] hapablap@lemmy.sdf.org 7 points 1 hour ago (1 children)

The breadth of knowledge demonstrated by AI gives a false impression of its depth.

[–] purplemonkeymad@programming.dev 4 points 1 hour ago (1 children)

Generalists can be really good at getting stuff done. They can quickly identify the experts needed when something is beyond their scope. Unfortunately, overconfident generalists tend not to bring the experts in to help.

[–] wabasso@lemmy.ca 1 points 12 minutes ago

This makes a lot of sense. A good lesson even outside the context of AI.

[–] grepe@lemmy.world 1 points 1 hour ago* (last edited 1 hour ago)

bingo! it also explains why tech bros consider AI good for everything - they are not really good at or familiar with anything themselves.

on a serious note, to put this in a specific example i will use programming. management/PO/business will have a vague idea of what needs to be done. they have to explain it to someone in plain human language and let them figure out the details of how to turn it into an algorithm. that someone has to fill in all the blanks, make many decisions about unspecified details, and implement it. but, in addition to that, they also have to make sure the code is delivered on time and on budget, that it will remain easy to maintain, and that it doesn't break anything important.

those later details are not something the original stakeholders (business/management/POs) normally have to deal with - it's someone else's responsibility. to them it's just annoying that they have to deal with another temperamental human, and that this human can take weeks to make even "simple changes" to their code (because they have to care about all those other things like maintainability)... so when a salesman comes over and offers that they can instead explain the problem to a chatbot and get the code in minutes, the proposition sounds irresistible!

[–] jj4211@lemmy.world 0 points 1 hour ago (1 children)

And managers think it's good for everything except being executives and management. Except that's probably the one thing AI can do just as well as a manager who thinks AI is good for a bunch of stuff.

[–] Jyek@sh.itjust.works 1 points 10 minutes ago

Sounds like you may be doing exactly what is being described in this post: assuming AI can do something you aren't intimately experienced with.

[–] GreenKnight23@lemmy.world 15 points 5 hours ago (1 children)

let's not confuse LLMs, AI, and automation.

AI flies planes when the pilots are unconscious.

automation does menial repetitive tasks.

LLMs support fascism and destroy economies, ecologies, and societies.

[–] ricecake@sh.itjust.works 3 points 2 hours ago

I'd even go a step further and say your last point is about generative LLMs, since text classification and sentiment analysis are also pretty benign.

It's tricky because we're having a social conversation about something that's been mislabeled, and the label has been misused dozens of times as well.

It's like trying to talk about knife safety when you only have the word "pointy".
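For what it's worth, the non-generative uses mentioned above really are that mundane. Here's a toy, lexicon-based sentiment scorer - the word lists are invented for illustration, and real systems use trained classifiers, but it shows classification producing nothing generative at all:

```python
# Toy lexicon-based sentiment scorer: it classifies text, it generates nothing.
# The word lists below are made up for illustration only.
POSITIVE = {"good", "great", "love", "excellent", "useful"}
NEGATIVE = {"bad", "terrible", "hate", "broken", "useless"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    # Count positive hits minus negative hits.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great tool"))      # -> positive
print(sentiment("this is broken and useless"))  # -> negative
```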

[–] db0@lemmy.dbzer0.com 15 points 6 hours ago (1 children)

It's why managers fucking love GenAI.

My personal take is that GenAI is ok for personal entertainment and for things that are ultimately meaningless. Making wallpapers for your phone, maps for your RPG campaign, personal RP, that sort of thing.

[–] pulsewidth@lemmy.world 7 points 4 hours ago (1 children)

'I'll just use it for meaningless stuff that nobody was going to get paid for either way' is, on the surface, a reasonable attitude; personal songs generated for friends as in-jokes, artwork for home labels, birthday parties, and your examples... All fair because nobody was going to pay for it anyway, so no harm to makers.

But I don't personally use them for any of those things myself, for a few reasons: it's investor-subsidized CPU cycles burning power somewhere (environmental); that use-case ultimately won't be a business model that makes any money (propping up the bubble); it dulls and sidesteps my own art-making skills, which I think everyone should work on (personal development atrophy); and it builds reliance on proprietary platforms. So I'd rather just not, and hopefully see the whole AI techbro bubble crash sooner rather than later.

[–] db0@lemmy.dbzer0.com 4 points 4 hours ago (1 children)

I figure it's just investor-subsidized CPU cycles burning power somewhere (environmental)

This can be avoided by using local open-weight models and open source technology, which is what I do.

[–] pulsewidth@lemmy.world 3 points 4 hours ago

Yeah, that certainly addresses that issue. I may do the same in the future; I just haven't found the need to yet. Most who lean on AI for the simple tasks mentioned above use an AI service rather than a local model.

[–] Strider@lemmy.world 3 points 4 hours ago* (last edited 4 hours ago)

It's the Dilbert approach.

[–] Tattorack@lemmy.world 6 points 6 hours ago (1 children)

The only good AI I've come across is the one I use for denoising Cycles renders in Blender, as that's something a human cannot reasonably do.

That's the only scenario something like AI has any use as a "tool"; doing things humans cannot reasonably do.

[–] TomArrr@lemmy.world 2 points 4 hours ago (1 children)

Unexpected blender tip. Thanks for that, hope it improves my render times 😉

[–] Tattorack@lemmy.world 1 points 2 hours ago

Here's another for you:

Play around with sample counts and render tile sizes in the performance settings (the same place where you find the denoising options).

Depending on your setup, you can see a drastic improvement in render times by choosing smaller tile sizes. Samples are also counted per render tile, so you could get away with very low sample counts and still end up with a completed render whose overall combined sample count is higher.
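If it helps, those same settings can be flipped from Blender's Python console rather than the UI. A minimal sketch that only runs inside Blender; property names vary between Blender versions (this assumes a 3.x-era Cycles API), so treat it as illustrative:

```python
import bpy  # only available inside Blender's bundled Python

scene = bpy.context.scene
scene.cycles.use_denoising = True  # denoise the final render
scene.cycles.samples = 64          # low sample count; the denoiser cleans up the noise
scene.cycles.tile_size = 128       # smaller tiles can speed up renders on some setups
```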

[–] 18107@aussie.zone 38 points 9 hours ago

AI has been excellent at teaching me to program in new languages. It knows everything about all languages - except the ones I'm already familiar with. It's terrible at those.

[–] TheEighthDoctor@lemmy.zip 16 points 8 hours ago

So Gen AI is like Dan Brown: the more you know about the subject, the more it sucks.

[–] plyth@feddit.org 5 points 6 hours ago

It's the same with all automation: processed food is not as good as home-cooked food, and tailored suits are better than mass-produced ones.

[–] Aceticon@lemmy.dbzer0.com 2 points 5 hours ago* (last edited 5 hours ago) (1 children)

In my experience everybody (myself included) is prone to the Dunning-Kruger Effect in domains outside their expertise.

It doesn't matter if you're an outstanding expert in one domain: you still look at a different domain and go "yeah, that looks easy."

I'm actually a lot more of a generalist than usual because of my personality, and I still have that same tendency to underestimate the complexity of other domains. But because I'm a generalist, every now and then I go down the route of genuinely practicing one of those domains professionally, and one or two years later I'm invariably thinking, "This shit ain't anywhere near as simple as I thought!"

And, lo and behold, generative AI is just about good enough to handle the entry-level stuff in a domain - the ultimate Junior Professional (and not even a very good one), with just enough "competence" to look capable to domain outsiders or even hobbyists, while being obviously mediocre to domain experts.

As most people don't really think about their own knowledge perception in these terms and thus don't try to compensate for it, the reactions described in this post totally make sense.

[–] sfgifz@lemmy.world 1 points 19 minutes ago

The thing that most people here seem to ignore is that AI doesn't have to be the de facto expert on a subject. There are lots of bad programmers, and lots of designers and writers who do mediocre work at best and still charge for it. AI can clearly do work on par with or better than theirs, and that's sufficient for a lot of clients.

[–] lightnsfw@reddthat.com 19 points 11 hours ago

IDK about that, I'm a professional slop maker and I think it could replace me easily.

[–] pupbiru@aussie.zone 12 points 11 hours ago* (last edited 11 hours ago) (3 children)

they’re both wrong, and they’re both right

an AI can create concept art to help a writer visualise their world and generate ideas in a pinch, but it shouldn’t ever be what you show anyone else: you still need real concept art

an AI can also create writing for an artist so that they can flesh out a back story and make their visual art more detailed, but it’s not going to write anything that you’d want anyone to read as a book or act out for a movie

both things can be used for the described purpose, and both things are inadequate for quality output

we’ve had this juxtaposition for a while: “redneck X”… they’re cobbled-together, barely functional versions of the thing you’re trying to do, on the cheap, with home-made tools. you wouldn’t sell it, but it’s kinda fine for this 1 situation, with many many asterisks

professionals often don’t like when someone can hack together something functional because they know the many many places where that thing falls down when you talk about long-term, and the general case… but sometimes a hack job solves a specific problem in a specific situation for a moment for cheap and that’s all you need

(just don’t try it with electricity or your health: the consequences of not understanding that complexity are death… of course ;p)

[–] Viking_Hippie@lemmy.dbzer0.com 56 points 16 hours ago (3 children)

That's also why the billionaires love it so much:

they very rarely have much if any technical expertise, but imagine that they just have to throw enough money at AI and it'll make them look like the geniuses they already see themselves as.

[–] Tigeroovy@lemmy.ca 8 points 6 hours ago (1 children)

That and it talks to them like every jellyfish yes man that they interact with.

Which subsequently seems to be why so many regular ass people like it, because it talks to them like they’re a billionaire genius who might accidentally drop some money while it’s blowing smoke up their ass.

[–] sp3ctr4l@lemmy.dbzer0.com 3 points 2 hours ago* (last edited 2 hours ago)

I literally have to give my local LLM a custom prompt to get it to stop being so overly praising of me and the things I say.

It's annoying; it reads as patronizing to me.

Sure, every once in a while I feel like I do come up with an actually neat or interesting idea... but by default, most LLMs basically act like they're a teenager in a toxic, codependent relationship with you.

They are insanely sycophantic: they reassure you that all your dumbest ideas and most mundane observations are, like, groundbreaking intellectual achievements, and that all your ridiculous, nonsensical, inconsequential worries and troubles are the most serious and profound experiences that have ever happened in the history of the universe.

Oh, and they're also absurdly suggestible about most things, unless you tell them not to be.

... they're fluffers.

They appeal to anyone's innate narcissism, and amplify it into ego mania.

Ironically, you could maybe say that they're programming people to be NPCs, and the template they are programming to be, is 'Main Character Syndrome'.
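For anyone curious, the kind of custom anti-sycophancy prompt mentioned above can be baked into a local model's system prompt. A hypothetical sketch as an Ollama Modelfile - the base model name and the wording are made up for illustration, not a recommendation:

```
FROM llama3
SYSTEM """
Do not compliment the user or praise their questions.
Skip enthusiastic openers like "Great question!" or "What a fascinating idea!".
State disagreement and uncertainty plainly. Be terse and factual.
"""
```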

[–] Clent@lemmy.dbzer0.com 18 points 13 hours ago

billionaires love it

They think it knows everything because they know nothing.
