[–] theunknownmuncher@lemmy.world 9 points 20 hours ago (1 children)

Hmm, I'm not an expert on image AI, but I think your idea of how this works is close, though not exactly right. The image is encoded into tokens (vectors) by an encoder model, and then those tokens are decoded into a new image. The intermediary tokens aren't really text descriptions of the image, but maybe this distinction is kind of pointless? The lossiness is the same either way.
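If it helps, here's a very rough PyTorch sketch of what I mean (toy sizes, not any real model): the encoder squeezes the image down to a small grid of vectors, those vectors are the "tokens", and the decoder has to rebuild the picture from them alone, which is where the lossiness comes from.

```python
import torch
import torch.nn as nn

# Toy sketch, not the actual model: an encoder turns an image into a short
# sequence of vectors ("tokens"), and a decoder turns those tokens back into
# pixels. The tokens are just numbers, not text, and the round trip is lossy
# because a handful of vectors can't hold every pixel.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=4, stride=4),   # 64x64 image -> 16x16 grid
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=4),  # 16x16 grid -> 4x4 grid
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=4),  # 4x4 -> 16x16
    nn.ReLU(),
    nn.ConvTranspose2d(16, 3, kernel_size=4, stride=4),   # 16x16 -> 64x64
)

image = torch.rand(1, 3, 64, 64)                    # stand-in for a real photo
tokens = encoder(image).flatten(2).transpose(1, 2)  # (1, 16, 32): 16 token vectors
reconstruction = decoder(tokens.transpose(1, 2).reshape(1, 32, 4, 4))
print(tokens.shape, reconstruction.shape)           # the new image comes only from the tokens
```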

[–] renzhexiangjiao@piefed.blahaj.zone 6 points 20 hours ago (2 children)

So you're saying that if I decoded these intermediate tokens I wouldn't get coherent sentences, but rather something completely random that is just a convenient representation of the image, or perhaps some words that relate to the image (something like "woman", "man", "marriage", "blonde", "dress", etc.)?

[–] Eq0@literature.cafe 3 points 18 hours ago

Somewhat. I am not familiar with this exact type of algorithm, but the general name for this family is the "encoder-decoder" architecture. Broadly speaking, you have an input (the original image) and you want to create an output (obviously). You want the input and the output to be "very similar" according to some definition, and you imagine that the AI algorithm has two parts: an encoder, which extracts as much meaningful information as possible from the input, and a decoder, which takes that information and generates something new out of it. In practice, this information is stored as a list of numbers, and we do not impose any prior meaning on them (we do not say, for example, that the first number is the number of people in the image); the algorithm learns to make the best use of the encoding.

Two different machines that run the same algorithm, trained independently, might end up with completely different middle information. The only thing that matters is that the "encoder" and the "decoder" parts both know what's going on. (Basically, yes, it's random, but the computer knows how to interpret it, where "know" is used very loosely here.)
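To make that last point concrete, here is a toy PyTorch sketch (a bare linear layer standing in for a full encoder, made-up sizes): the same architecture, set up independently on two "machines", assigns completely different numbers to the same image.

```python
import torch
import torch.nn as nn

# Toy illustration: two encoders with the same architecture, created
# independently (different seeds, standing in for independent training),
# produce different "middle information" for the same image. Only the
# matching decoder would know what each list of numbers means.
def make_encoder(seed, image_dim=28 * 28, latent_dim=8):
    torch.manual_seed(seed)
    return nn.Linear(image_dim, latent_dim)  # stand-in for a full encoder

image = torch.rand(1, 28 * 28)      # stand-in for a flattened image
z_a = make_encoder(seed=0)(image)   # machine A's middle information
z_b = make_encoder(seed=1)(image)   # machine B's middle information
print(z_a)
print(z_b)                          # different numbers, same image
```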

Sorry for the rant! I hope you found it interesting.

[–] theunknownmuncher@lemmy.world 4 points 20 hours ago

I believe so, though some tokens may not really translate into text at all, and instead represent some kind of specific or abstract visual feature. There would need to be an entirely separate neural network, or part of a neural network, specifically for decoding the tokens into text.
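As a rough illustration (made-up sizes and a made-up vocabulary, not any specific model), that extra text-decoding part could be as small as one more layer that scores each image token against a list of words:

```python
import torch
import torch.nn as nn

# Rough sketch: the "decode the tokens into text" part is its own trained
# component sitting on top of the image tokens. Here it is just one linear
# layer scoring each token against a tiny made-up vocabulary.
vocab = ["woman", "man", "marriage", "blonde", "dress"]
token_dim = 32
to_text = nn.Linear(token_dim, len(vocab))     # the extra text-decoding part

tokens = torch.rand(16, token_dim)             # 16 image tokens from some encoder
word_probs = to_text(tokens).softmax(dim=-1)   # per-token probabilities over words
print(vocab[word_probs.mean(dim=0).argmax().item()])  # meaningless until trained
```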