Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

Source (Bluesky)

Transcript

recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

It's only ever the jobs we're unfamiliar with that we assume can be replaced with automation. The more attuned we are with certain processes, crafts, and occupations, the more we realize that gen Al will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't.

HetareKing@piefed.social (1 point, 6 hours ago)

I'm not sure the comparison with the weather data works. Tweaking curves to more closely match the test data, and moving around a model's probability space in the hope that it sufficiently raises the probability of outputting tokens that fix the code's problems, seem different enough to me that I don't know whether the former working well says anything about how well the latter works.
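
To make the contrast concrete, here's a toy sketch of the first case (the data and the fitted function are made up); the point is just that curve fitting has an explicit objective, and the held-out score directly measures the thing you care about:

```python
import numpy as np

# Toy stand-in for "tweaking curves to match the test data":
# fit a low-degree polynomial to noisy measurements and score it on a
# held-out set. The objective is explicit, and the held-out error
# measures exactly the property we care about.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 10.0, 50)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, x_train.size)
x_test = np.linspace(0.0, 10.0, 20)
y_test = np.sin(x_test) + rng.normal(0.0, 0.1, x_test.size)

coeffs = np.polyfit(x_train, y_train, deg=5)          # "tweak the curve"
mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"held-out MSE: {mse:.4f}")

# There's no comparably direct objective for "nudge an LLM until it
# emits tokens that happen to fix the code": a passing test is a sparse,
# indirect signal about the property you actually want.
```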

If I understand what you're describing correctly, the two models aren't improving each other, as in adversarial learning; rather, the adversarial model is trying to get the generative model to home in on output that produces the user's desired behaviour on the given test data. But that can only work as well as the adversarial model can be relied upon to actually perform the tasks needed to make it happen. So I think my point still stands: the objectivity of your measurements of the test results is only meaningful if the test results themselves are meaningful, which isn't guaranteed given what's doing the testing.
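
For what it's worth, here's roughly how I picture the loop you're describing, sketched with placeholder function names I made up (they don't come from any real library):

```python
# Hypothetical sketch of the setup as I understand it: a generative model
# proposes a code fix, and a separate "adversarial" model decides whether
# that fix satisfies the given test data. Both model calls are stubs.

def generate_candidate_fix(broken_code: str, feedback: str) -> str:
    """Stand-in for the generative model proposing a patched version."""
    raise NotImplementedError

def adversarial_verdict(candidate: str, test_data: str) -> tuple[bool, str]:
    """Stand-in for the adversarial model judging the candidate against
    the test data. Returns (passed, feedback). The verdict is only as
    trustworthy as the model that produces it."""
    raise NotImplementedError

def repair_loop(broken_code: str, test_data: str, max_rounds: int = 5) -> str | None:
    """Iterate until the adversarial model accepts a candidate or we give up."""
    feedback = ""
    for _ in range(max_rounds):
        candidate = generate_candidate_fix(broken_code, feedback)
        passed, feedback = adversarial_verdict(candidate, test_data)
        if passed:
            # The loop does produce hard numbers (rounds taken, pass/fail),
            # but those numbers inherit whatever unreliability the
            # adversarial model has.
            return candidate
    return None
```

Every result that comes out of repair_loop is filtered through adversarial_verdict, so the measurement is only as meaningful as that model is reliable.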

How complex is the adversarial model? If it's anywhere near as complex as the generative model, I don't think you can get meaningful numbers about its reliability that would let you reason about how meaningful the test results it produces are.