this post was submitted on 14 Feb 2026
491 points (99.8% liked)

Fuck AI

5755 readers
1064 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Google accused "commercially motivated" actors of trying to clone its Gemini AI after indiscriminately scraping the web for its models.

all 32 comments
[–] jlow@discuss.tchncs.de 18 points 1 hour ago
[–] ieatpwns@lemmy.world 7 points 1 hour ago

Well well well how the turn tables. Get fucked google

[–] DGen@piefed.zip 1 points 20 minutes ago
[–] zr0@lemmy.dbzer0.com 3 points 1 hour ago

Well, to be fair, google had a copy of the whole internet long before AI was a thing.

[–] senorseco@lemmy.today 1 points 39 minutes ago

Google stole that data fair and square.

[–] sp3ctr4l@lemmy.dbzer0.com 15 points 2 hours ago

Wow, they're seriously saying this with a straight face, huh?

Oh hi Google, my name is Epic Games, and I see you recently trained a new 'AI' of yours on Fortnite.

Let me introduce you to my friend Fromsoft, who is pretty sure you uh, copied their notes from Dark Souls as well.

... How are all these people this fucking stupid?

There's no possible resolution to the paradigm of 'I can steal everything but you can't steal anything' other than total chaos.

Total chaos ain't a good standard for a legal system trying to figure out IP law.

This is completely ludicrous.

[–] Ghostie@lemmy.zip 6 points 2 hours ago

poor wittle google

[–] HugeNerd@lemmy.ca 2 points 1 hour ago

But I want to see AI Becky hang-gliding into a forest!

[–] RampantParanoia2365@lemmy.world 11 points 3 hours ago (1 children)

Why in the holy hell would anyone want to copy that useless garbage. Their AI is a moron.

[–] tempest@lemmy.ca 0 points 49 minutes ago

Google was hoarding data before it was cool, and people were constantly giving it data in the form of Google ads.

It's actually the one company I might believe didn't scrape shit immorally.

I'm sure they still did, it's just I think people might believe them if they lied about it.

[–] WatDabney@sopuli.xyz 112 points 6 hours ago (2 children)

Google has become a colonialist project.

First they gained access to the communal property of the internet. Then they stole it from the original inhabitants. And now they're trying to claim a legal right to exclusive control over the property they stole.

[–] errer@lemmy.world 22 points 5 hours ago

The free parts of the internet already feel like tiny reservations trapped within a vast collection of oppressors…

[–] atomicbocks@sh.itjust.works 5 points 5 hours ago

Google has taken over the Web. Lucky for us there are other software platforms on the Internet.

[–] gravitas_deficiency@sh.itjust.works 86 points 6 hours ago (1 children)
[–] Bustedknuckles@lemmy.world 34 points 5 hours ago

"My output is valuable, proprietary, and demands remuneration; my inputs are fair use and of negligible value"

[–] despite_velasquez@lemmy.world 39 points 6 hours ago (2 children)

AI output can never be copyrightable

[–] limpatzk@bookwyr.me 1 points 2 hours ago

And it makes no sense, how can you prove it's generated by YOUR AI?

[–] OwOarchist@pawb.social 14 points 5 hours ago (1 children)

So far... Just wait until the lobbyists get their hands on our laws...

[–] jaybone@lemmy.zip 5 points 3 hours ago

I don’t think you have to wait.

[–] hendrik@palaver.p3x.de 28 points 6 hours ago* (last edited 6 hours ago) (1 children)

It's mental. The terms and conditions of some AI music generators make people pay for a "license" to use the output, for example for commercial purposes. They themselves of course claim "fair use" and steal all the music out there to train their models. I think some companies now don't claim ownership any more, for images and video snippets. And of course AI output isn't copyrightable in the first place.

The companies will occasionally use their trademarks, intellectual property, or copyright against people. Of course those rules don't apply the other way around. It's completely fine that their product draws all the Disney princesses, comic and anime characters, and reproduces half of Harry Potter. But beware if someone names something with "Claude" in the name. Of course Google follows the same logic with this.

And then my homepage gets hammered with their stupid AI crawlers, but I have to abide by the terms and conditions of their services...

[–] Grimy@lemmy.world 2 points 2 hours ago* (last edited 2 hours ago)

Being pro-copyright is giving the keys to record companies though. They would be the only ones with a "legal" model. Udio got bought by Universal not too long ago, but as long as laws aren't rewritten for the benefit of mega corps and copyright juggernauts, open source will ruin all the shenanigans they are trying to pull.

It's the same for all the text models. Open source is destroying OpenAI's business model. They need laws that restrict what you can train on so they can buy themselves a monopoly.

[–] mayabuttreeks@lemmy.ca 6 points 4 hours ago
[–] stoy@lemmy.zip 8 points 5 hours ago (1 children)

Hey Google!

Check this out, I learned to play the violin for you!

Look, this is it:

.

[–] wonderingwanderer@sopuli.xyz 3 points 4 hours ago

Minus ten points for no tardigrade.

[–] jaredwhite@humansare.social 16 points 6 hours ago

boo fucking hoo. 🤧

[–] OwOarchist@pawb.social 7 points 5 hours ago* (last edited 5 hours ago) (1 children)

You see, big tech AI bros? This is why you're dumb. Even if this all pans out and all your AI dystopia dreams come true, it doesn't mean you're going to be rich and powerful and at the top.

If your AI becomes as good as it's supposedly going to get ... I can just ask it to develop a new AI for me. And then I don't have to use yours anymore. Why would anybody pay you to use your AI when it becomes trivial to make a new one, tailored to their specific needs? Why would I need your big tech company for anything, if anything you can provide could be readily replaced by just asking an AI for it? If AI becomes good enough to replace everyone's job, it will replace big tech as well.

The only people who might be benefiting from all this are the ones who manufacture and sell the hardware that runs it. If AI becomes good enough, all software companies will go bankrupt. Yes including Google, Microslop, etc.

[–] wonderingwanderer@sopuli.xyz 7 points 4 hours ago (1 children)

You can already self-host an open source LLM, and fine-tune it on custom datasets. Huggingface has thousands to choose from.

The largest you'll fit on consumer hardware is probably 32 billion parameters or so, and that's with quantization. Basically, at 8-bit quantization, you need 1GB of RAM for every billion parameters. So a 32 billion parameter 8-bit model would need 32GB of RAM, plus overhead. At 16-bit it would need 64GB of RAM, and so on. A 24 billion parameter model at 16-bit would take up 48GB of RAM, etc.
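That rule of thumb is just bytes-per-parameter times parameter count; a quick sketch (the function name is mine, and it deliberately ignores KV cache and runtime overhead):

```python
def estimate_ram_gb(params_billions: float, bits: int) -> float:
    """Rough RAM needed to hold model weights alone:
    (bits / 8) bytes per parameter, times billions of parameters.
    Ignores KV cache, activations, and runtime overhead."""
    bytes_per_param = bits / 8
    return params_billions * bytes_per_param

# The figures from the comment above:
print(estimate_ram_gb(32, 8))   # 32B params at 8-bit  -> 32.0 GB
print(estimate_ram_gb(24, 16))  # 24B params at 16-bit -> 48.0 GB
print(estimate_ram_gb(14, 16))  # 14B params at 16-bit -> 28.0 GB
```

Same arithmetic as the comment: halve the bits, halve the memory, at some cost in accuracy.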

The commercial LLMs that people pay subscriptions to use an API for tend to have like 130-200 billion parameters with no quantization (32-bit). So it wouldn't run on consumer hardware. But you honestly don't need one that big, and I think they actually suffer in quality by trying to overgeneralize.

For most people's purposes, a 14 billion parameter model with 16-bit architecture is probably fine. You just need 28GB of free RAM. Otherwise, on 14GB RAM you can do 14B params at 8-bit, or 7B at 16-bit. You might lose some accuracy, but with specialized fine-tuning and especially retrieval-augmented generation, it won't be severe.

Anything smaller than 7B might be pushing it, and likewise anything at 4-bit quantization would lose accuracy. 7B at 8-bit would also probably suffer on benchmarks. So realistically you'll probably need at least 16GB of RAM accounting for overhead. More if you want to run any concurrent processes.

The thing about making one from scratch though, is that it's resource-intensive. You can try generating a 1 billion parameter model with blank or randomized weights, the algorithm isn't a secret. But pre-training it could take weeks or months depending on your hardware. Maybe days if you have a high-end GPU. And that's with it running non-stop, so you can imagine the electric bill, and the task of keeping your system cool.

TL;DR, You can ask an LLM to vibe-code you a new model from scratch, but pre-training it you're gonna be limited by the resources you have available. You can already download pre-trained open source models for self-hosting though, and fine-tune them yourself if you desire.

[–] OwOarchist@pawb.social 1 points 2 hours ago (1 children)

(I am kind of making the assumption that their perfect, all-powerful AI, once developed, would also be a bit more efficient than current models, allowing it to more easily run on consumer-grade hardware. Also, in the meantime, consumer-grade hardware is only getting better and more powerful.)

You can ask an LLM to vibe-code you a new model from scratch, but pre-training it you’re gonna be limited by the resources you have available

Why would you ask the uber-LLM to code you a new model that hasn't been trained yet? Just ask it to give you one that already has all the training done and the weights figured out. Ask it to give you one that's ready to go, right out of the box.

[–] wonderingwanderer@sopuli.xyz 1 points 2 hours ago

once developed, would also be a bit more efficient than current models

That's not how it works though. They're not optimizing them for efficiency. The business model they're following is "just a few billion more parameters this time, and it'll gain sentience for sure."

Which is ridiculous. AGI, even if it's possible (which is doubtful), isn't going to emerge from some highly advanced LLM.

in the meantime, consumer-grade hardware is only getting better and more powerful

There's currently a shortage of DDR5 RAM because these AI companies are buying up years' worth of industrial output capacity...

Some companies are shifting away from producing consumer-grade GPUs in order to meet demand coming from commercial data centers.

It's likely we're at the peak of conventional computing, at least in terms of consumer hardware.

Why would you ask the uber-LLM to code you a new model that hasn't been trained yet? Just ask it to give you one that already has all the training done and the weights figured out. Ask it to give you one that's ready to go, right out of the box.

That's not something they're capable of. They have a context window, and none of them has one large enough to output billions of generated parameters. It can give you a Python script to generate a Gaussian weight distribution with a given number of parameters, layers, hidden sizes, and attention heads, but it can't make one that's already pre-trained.
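That kind of generation script really is trivial; here's a toy sketch of what "untrained model" means in practice. The layer layout and sizes are arbitrary, and the scaling factor is just a simple variance-preserving choice:

```python
import numpy as np

def init_model(layers: int, hidden: int, seed: int = 0) -> dict:
    """Build a toy stack of Gaussian-initialized weight matrices.
    This is an *untrained* model: the structure exists, but the
    weight values carry no learned knowledge whatsoever."""
    rng = np.random.default_rng(seed)
    scale = hidden ** -0.5  # keep activation variance roughly stable
    return {f"layer_{i}": rng.normal(0.0, scale, size=(hidden, hidden))
            for i in range(layers)}

model = init_model(layers=4, hidden=256)
n_params = sum(w.size for w in model.values())
print(n_params)  # -> 262144 parameters, none of them trained
```

Getting from these random numbers to useful weights is the pre-training step, and that's the part the resources go into.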

Also, their NLP is designed to parse texts, even code, but they already struggle with mathematics. There's no way it could generate a viable weight distribution, even if it had a 12 billion token context window, because they're not designed to predict that.

You'd have to run a script to get an untrained model, and then pre-train it yourself. Or you can download a pre-trained model and fine-tune it yourself, or use it as is.