this post was submitted on 11 Jun 2024
88 points (98.9% liked)

technology

23218 readers

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

founded 5 years ago

The big AI models are running out of training data (and it turns out most of the training data was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement

all 43 comments
[–] queermunist@lemmy.ml 60 points 1 year ago (3 children)

Oh look, businesses didn't plan for what to do after the low hanging fruit is gone. Shocker.

[–] context@hexbear.net 41 points 1 year ago

the plan was "and then the line goes up forever"

[–] buh@hexbear.net 4 points 1 year ago

They do have a plan for that, it’s to lay everyone off and use the saved money on stock buybacks

[–] umbrella@lemmy.ml 4 points 1 year ago

you mean we can't get exponential growth forever? what the fuck!

[–] JoeByeThen@hexbear.net 40 points 1 year ago (4 children)

No, it's not. Maybe strictly for LLMs, but they were never the endpoint. They're more like a Frontal Lobe emulator; the rest of the "brain" still needs to be built.

Conceptually, Intelligence is largely about interactions between Context and Data. We have plenty of written Data. In order to create Intelligence from that Data, we'll need to expand the Context for that Data into other sensory systems, which we are beginning to see in the combined LLM/video/audio models. Companies like Boston Dynamics are already working with and collecting Audio/Video/Kinesthetic Data in the Spatial Context.

Eventually researchers are going to realize (if they haven't already) that there's massive amounts of untapped Data being unrecorded in virtual experiences. Though I'm sure some of the delivery/remote-driver companies are already contemplating how to record their Telepresence Data to refine their models. If capitalism doesn't implode on itself before we reach that point, the future of gig work will probably be Virtual Turks: via VR, you'll step into the body of a robot when it's faced with a difficult task, complete the task, and then that recorded experience will be used to train future models.

It's sad, because under socialism there's incredible potential for building a society where AI/robots and humanity live in symbiosis, akin to something like The Culture, but instead it's just gonna be another cyber-dystopia panopticon.

[–] context@hexbear.net 43 points 1 year ago (1 children)

Intelligence is largely about interactions between Context and Data

me solidarity data-outdoor-cat

intelligence

[–] QuillcrestFalconer@hexbear.net 22 points 1 year ago (1 children)

Eventually researchers are going to realize (if they haven't already) that there's massive amounts of untapped Data being unrecorded in virtual experiences.

They already have. A lot of robots are already trained in simulated environments, and Nvidia is developing frameworks to help accelerate this. It's also how things like AlphaGo were trained, via self-play, and those reinforcement learning algorithms will probably be extended to LLMs.

Also, like you said, there's a lot of still-untapped data in audio/video, and that's starting to be incorporated into the models.
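
The self-play idea can be sketched in a few lines. This is a toy, nothing like the real AlphaGo pipeline: the game ("pick a number, higher wins"), the weights, and the update rule are all made up for illustration, but it shows the basic shape of a model generating its own training signal by playing against itself.

```python
import random

# Toy self-play loop: two copies of the same policy play a trivial game
# ("pick a number 0-2, higher number wins") and winning moves are reinforced.
weights = [1.0, 1.0, 1.0]  # shared policy: preference for each move

random.seed(0)
for _ in range(2000):
    a = random.choices([0, 1, 2], weights)[0]
    b = random.choices([0, 1, 2], weights)[0]
    if a != b:
        weights[max(a, b)] += 0.1  # reinforce whichever move won the round

best = max(range(3), key=lambda m: weights[m])
print(best)  # the policy converges on the dominant move
```

Run long enough, the policy learns the dominant move without ever seeing human-labeled data, which is exactly why self-play sidesteps the running-out-of-training-data problem.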

[–] JoeByeThen@hexbear.net 15 points 1 year ago

Yeah, I'm familiar with a bunch of autonomous vehicles/drones being trained in simulated environments, but I'm also thinking stuff like VRChat.

[–] reddit@hexbear.net 6 points 1 year ago (1 children)

My one quibble: that's not the future of gig work, it's the present

[–] JoeByeThen@hexbear.net 6 points 1 year ago (1 children)

It's been a few years since I've used mturk, but there were very few VR based jobs when I last used it. Has that changed?

[–] reddit@hexbear.net 3 points 1 year ago (1 children)

Ah sorry, I was just being a smartass, no idea how much VR is on mturk now. To be clear I think you've got an accurately bleak view of the future of this stuff

[–] JoeByeThen@hexbear.net 2 points 1 year ago

Ah, no worries. Yeah, pretty grim, and I've not even gotten into the horror of what they're gonna do with our biometric data. lol.

[–] peppersky@hexbear.net 39 points 1 year ago (1 children)

"our artificial intelligence has read every book in the world and is still dumb as shit"

[–] usa_suxxx@hexbear.net 28 points 1 year ago

Just like me frfr but without the reading

[–] lurkerlady@hexbear.net 32 points 1 year ago* (last edited 1 year ago) (2 children)

This is accurate, though I am actually going to explain why. The big model companies (Google, ClosedAI, etc.) parasitize the open-weights/open-source community that actually makes good LoRAs, fine-tunes, and research papers. Consumer hardware simply hasn't gotten good and cheap enough for very good fine-tune training, and that's why this is all slowly petering out. In a couple of generations of consumer GPUs, when we get cards geared towards AI (i.e. super-high VRAM counts of 70 GB+ at an affordable sub-$700 price), we might see another leap forward in this tech.

Though I will say that this mostly pertains to LLMs; generative models like Stable Diffusion have a lot of tricks up their sleeves that can still be explored. Most recent research and tweaking has been about building a structure for the AI to build on, guiding it rather than letting it take random stabs at things, in order to improve outputs. Some people have been doing things like hard-coding color theory or how to frame a photograph, and interpreting human language to trigger that hard code.
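
The VRAM figure above is easy to sanity-check with back-of-envelope arithmetic. The function below is just that, a rough sketch that only counts the weights themselves and ignores activations and KV cache:

```python
def vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Rough VRAM needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model at different precisions:
print(vram_gb(70, 2.0))   # fp16 (2 bytes/param): 140 GB
print(vram_gb(70, 0.5))   # 4-bit quantized: 35 GB
print(vram_gb(13, 2.0))   # a 13B model in fp16: 26 GB
```

Which is why even aggressive quantization still puts serious fine-tuning out of reach of today's 24 GB consumer cards.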

We've had statistical models like these since the 50s. Consumer hardware has always been the big materialist bottleneck; this is all powered by small research teams and hobbyist nerds. You can throw a ton of money at it and have a giant research team, but the performance you squeeze out of adding 400b more parameters to your 13b model, or of having a gigantic locked-down datacenter, is going to be diminishing.

Also, synthetic data can be useful. People are hating on it in this thread, but it's a great way to reinforce good habits in the model and to interpret garbled code and speech that would otherwise confuse it. I sometimes feel like people just see something about 'AI bad' and upvote it without trying to understand where it is useful, where it is not, and so on.

[–] bazingabrain@hexbear.net 11 points 1 year ago (1 children)

I fail to see how synthetic data is good if it makes the AI that's used to justify job cuts "better".

[–] lurkerlady@hexbear.net 9 points 1 year ago* (last edited 1 year ago)

Synthetic data is basically a fancy way of saying "I'm properly formatting data and reinforcing the AI's good outputs". Rearranging words, fixing or adding tags, that sort of thing. It's generated with various tools that usually have an LLM or VLM plugged in, though some are as simple as a regex script.
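
A minimal sketch of the "regex script" end of that spectrum (the formatting rules and the example caption are made up for illustration): no model involved, just mechanical cleanup of raw captions into tidy training tags.

```python
import re

def clean_caption(raw: str) -> str:
    """Normalize a raw image caption into deduplicated, comma-separated tags."""
    text = raw.lower().strip()
    text = re.sub(r"\s+", " ", text)         # collapse runs of whitespace
    text = re.sub(r"[^a-z0-9, ]", "", text)  # strip stray punctuation
    tags = [t.strip() for t in text.split(",") if t.strip()]
    seen, deduped = set(), []
    for t in tags:                           # drop duplicate tags, keep order
        if t not in seen:
            seen.add(t)
            deduped.append(t)
    return ", ".join(deduped)

print(clean_caption("Sunset,  beach!!, sunset, Palm trees"))
# -> sunset, beach, palm trees
```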

[–] MacNCheezus@lemmy.today 3 points 1 year ago

Better hardware isn't going to change anything except scale if the underlying approach stays the same. LLMs are not intelligent; they're just guessing a bunch of words that are statistically most likely to satisfy the user's request, based on their training data. They don't actually understand what they're saying.
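
That "guessing statistically likely words" mechanic is easy to see in a toy model. The bigram table below is the idea stripped to nothing (the corpus is invented for illustration); real LLMs use billions of learned parameters instead of a lookup table, but the objective is the same next-word prediction.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: these statistics ARE the entire "model".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    """Greedy 'autocomplete': emit the statistically most likely follower."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat' follows 'the' most often in this corpus
```

Nothing here understands cats or mats; it only knows which tokens co-occur.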

[–] Infamousblt@hexbear.net 30 points 1 year ago

Because it was never actually intelligent. Calling it AI was just a buzzword

[–] Flyberius@hexbear.net 30 points 1 year ago

Please crash already. I need an "All my models, ruined" moment from these fools.

[–] xj9@hexbear.net 30 points 1 year ago (1 children)

wait are you telling me that the AI revolution was extremely oversold???

[–] D61@hexbear.net 20 points 1 year ago

AI Revoluti-off kelly

[–] davel@hexbear.net 28 points 1 year ago* (last edited 1 year ago)

Spicy autocomplete can produce much more content much faster than we can, and it is consuming its own content now. What could go wrong?

clown-to-clown-communication clown-to-clown-conversation

[–] DragonBallZinn@hexbear.net 25 points 1 year ago* (last edited 1 year ago)

Based. Fuck AI.

Always suspicious when it's one of the few technologies boomers got super hyped up about and wanted to shove into everything.

[–] kleeon@hexbear.net 25 points 1 year ago

this is exactly what halted machine learning research back in the day - there was just not enough data out there to train these models

[–] Assian_Candor@hexbear.net 20 points 1 year ago

It would be funny if we hadn't incinerated the planet for this shit. The peddlers will get rich too, zero consequences, except of course for the jobs that were snuffed out in infancy.

[–] Owl@hexbear.net 20 points 1 year ago

This entire boom was predicated on being able to throw 10x the compute budget at a problem and get 2x the quality of results, so it was inevitable. It's not like big tech is suddenly funding long-term R&D teams again; they stopped doing that before most of these companies were even founded.
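
That 10x-compute-for-2x-results shape is a power law. A quick sketch (the exponent is purely illustrative, chosen so that 10x compute gives exactly 2x quality) shows how fast the returns diminish:

```python
# quality ~ compute ** alpha; alpha = log10(2) ≈ 0.301 makes 10x -> 2x.
ALPHA = 0.301

def relative_quality(compute_multiple: float) -> float:
    """Quality gain for a given multiple of the compute budget."""
    return compute_multiple ** ALPHA

print(round(relative_quality(10), 2))    # 10x compute ->  ~2x quality
print(round(relative_quality(100), 2))   # 100x compute -> only ~4x
print(round(relative_quality(1000), 2))  # 1000x compute -> only ~8x
```

Each doubling of quality costs another full order of magnitude of compute, which is exactly the treadmill the boom was built on.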

[–] aaro@hexbear.net 13 points 1 year ago

reposting my hot AI take

Just because capital can't possibly imagine more than 5 minutes into the future, and just because capital can only speak profit and can't fathom progress for the sake of progress, doesn't mean that AI isn't real and scary. The technological hurdles are similar to ones that have been overcome for past technologies, the incentive to replace workers with machines is as enticing as it's ever been, and if we've seen this much noise and fervor with this little of the total reward reaped, expect to keep seeing this much noise and fervor until the last drop of blood has been squeezed out.

[–] D61@hexbear.net 12 points 1 year ago

The more social media style posts/comments I read about this "AI" stuff, the more I realize I've been doing the same thing since I was in middle school.

I was reading way above my grade level and would use words (often incorrectly) that I wasn't expected to know with such confidence that adults thought I was smart.

AI is just like me fr

[–] BobDole@hexbear.net 11 points 1 year ago

Looks like we’re right on track for another AI Winter

[–] VILenin@hexbear.net 11 points 1 year ago (1 children)

Time to start blaming wamen and menorites

[–] Evilphd666@hexbear.net 4 points 1 year ago

We could unleash its full potential if we didn't handcuff it. We have to make it safe for all advertisers. Can't have it telling users capitalism is the problem. People might start revolting! porky-scared

[–] Vampire@hexbear.net 10 points 1 year ago

Large Language Models are approaching the limits of their intelligence

AI is not synonymous with LLMs

To get smarter, they'll have to be merged with other AI techniques.

[–] MaxOS@hexbear.net 8 points 1 year ago

[–] Evilphd666@hexbear.net 7 points 1 year ago

If AI can't generate new and improved information, then maybe the "I" part is a bit disingenuous. It's not able to take in new information and make informed decisions. It's a fancy president-parrot-naked

[–] iridaniotter@hexbear.net 5 points 1 year ago

[–] tamagotchicowboy@hexbear.net 4 points 1 year ago

This is why AI is a paper tiger; that, plus climate change and its handling under capitalism.