FaceDeer

joined 2 years ago
[–] FaceDeer@fedia.io 1 points 1 year ago

You joke, but presumably that's when it recharges.

[–] FaceDeer@fedia.io 2 points 1 year ago (1 children)

It's a common pattern. Something actually bad exists, and a word is invented to describe it. People want to call the things they don't like by that bad word, even if it's not quite right, so the definition starts to widen a bit. Since the word means something very bad, applying it to things you don't like makes everyone else hate them too! The word stretches and stretches until eventually everything vaguely bad is called that word, and it loses its meaning.

A new word is invented to describe some specific actually bad thing. Repeat.

[–] FaceDeer@fedia.io 10 points 1 year ago

Things change. There was a period before this information was easily available; this repository only goes back to 2013. Now there's a period after this information, too. Things start and eventually they end.

Here's hoping that some neat new things start up in its place.

[–] FaceDeer@fedia.io 2 points 1 year ago

They're not both true, though. It's actually perfectly fine for a new dataset to contain AI-generated content, especially when it's mixed in with non-AI-generated content. It can even be better in some circumstances; that's what "synthetic data" is all about.

The various experiments demonstrating model collapse have to go out of their way to make it happen, by deliberately recycling model outputs over and over without using any of the methods that real-world AI trainers use to ensure it doesn't. As I said, real-world AI trainers are actually quite knowledgeable about this stuff; model collapse isn't some surprising new development that they're helpless in the face of. It's just another factor to include in the criteria for curating training data sets. It's already a "solved" problem.
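To make that concrete, here's a toy sketch of my own (not from any particular paper or lab's pipeline) of what "recycling model outputs" means in the simplest possible case: fitting a Gaussian to its own samples over and over, versus anchoring each generation with a slice of real data.

```python
# Toy illustration (my own sketch, not an actual training pipeline): refit a
# Gaussian to its own samples each "generation". With pure recycling the
# fitted spread wanders and tends to shrink; mixing a fraction of real data
# back in each generation keeps it pinned near the truth.
import numpy as np

rng = np.random.default_rng(0)
REAL = rng.normal(loc=0.0, scale=1.0, size=100_000)  # stand-in for human-made data

def train_generations(n_gens: int, real_fraction: float, sample_size: int = 100) -> float:
    """Return the model's std estimate after n_gens rounds of self-training."""
    data = rng.choice(REAL, size=sample_size)
    for _ in range(n_gens):
        mu, sigma = data.mean(), data.std()             # "train" the model
        data = rng.normal(mu, sigma, size=sample_size)  # generate synthetic data
        n_real = int(real_fraction * sample_size)
        if n_real:
            data[:n_real] = rng.choice(REAL, size=n_real)  # curate real data back in
    return data.std()

print("pure recycling :", train_generations(1000, real_fraction=0.0))  # typically collapses well below 1
print("10% real mixed :", train_generations(1000, real_fraction=0.1))  # stays near 1.0
```

The collapse experiments are essentially elaborate versions of the real_fraction=0.0 loop; real curation pipelines are the nonzero case.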

The reason these articles keep coming around is that there are a lot of people who don't want it to be a solved problem, and who love clicking on headlines that say it isn't. I guess if it makes them feel better they can go ahead and keep doing that, but this is supposedly a technology community, and I would expect some interest in the underlying truth of the matter.

[–] FaceDeer@fedia.io 14 points 1 year ago

No, researchers in the field knew about this potential problem ages ago. It's easy enough to work around and prevent.

People who are just on the lookout for the latest "aha, AI bad!" headline, on the other hand, discover this every couple of months.

[–] FaceDeer@fedia.io 8 points 1 year ago (4 children)

AI long ago stopped being trained on whatever random stuff came along off the web. Training data is carefully curated and processed these days. Much of it is synthetic, in fact.

These breathless articles about model collapse dooming AI are like discovering that the sun sets at night and declaring solar power to be doomed. The people working on this stuff know about it already and long ago worked around it.

[–] FaceDeer@fedia.io 13 points 1 year ago

Sometimes headshots develop spontaneously. It's a rare condition, but convenient. Some claim John F. Kennedy suffered from it.

[–] FaceDeer@fedia.io 12 points 1 year ago (2 children)

Last I heard they hadn't found the knife yet.

[–] FaceDeer@fedia.io 13 points 1 year ago (1 children)

I recall seeing a list of the most dangerous jobs in America and "President of the United States" topped it due to the high percentage of people with that job who've been shot.

[–] FaceDeer@fedia.io 7 points 1 year ago

But at least that crappy bug-riddled code has soul!

[–] FaceDeer@fedia.io 50 points 1 year ago

In Tyreek's post-arrest press conference he asked rhetorically "what would have happened if I hadn't been famous?"

Well, now we see. Wrist-slaps with no actual long-term impact.
