Die4Ever

joined 1 year ago
[–] Die4Ever@retrolemmy.com 2 points 1 month ago

I can't believe they waited so long to enable Proton by default lol

[–] Die4Ever@retrolemmy.com 3 points 1 month ago* (last edited 1 month ago) (2 children)

> Your concern that companies like Meta and Microsoft are too scared to let users retrain their models is also irrelevant considering both of these companies have already released models so that anyone can retrain or checkpoint merge them i.e. Llama by Meta and Phi by Microsoft.

they release them to developers; they don't automatically retrain them unsupervised in their actual products, where customers would share screenshots of the AI's failures on social media and give it a bad name

[–] Die4Ever@retrolemmy.com 1 points 1 month ago

it probably works pretty well when it's tested and verified instead of unsupervised

and for a small pool of people instead of hundreds of millions of users

[–] Die4Ever@retrolemmy.com 3 points 1 month ago* (last edited 1 month ago) (4 children)

Huggingface isn't customer-facing, it's developer-facing. Letting customers retrain your LLM sounds like a bad idea for a company like Meta or Microsoft; it's too risky and could make them look bad. Retraining an LLM for Lovecraft is a totally different scale than retraining an LLM for hundreds of millions of individual customers.

do you think Microsoft could force their AI on every single Windows computer if it was as challenging as you imply?

It's a cloned image, not unique per computer

[–] Die4Ever@retrolemmy.com 10 points 1 month ago* (last edited 1 month ago) (8 children)

> An LLM could be trained on the way a specific person communicates over time

Are there any companies doing anything similar to this? From what I've seen, companies avoid this stuff like the plague; their LLMs are always frozen with no custom training. Training takes a lot of compute, but it also has huge risks of the LLM going off the rails and saying bad things that could get the company into trouble or get bad publicity. There's also the disk space per customer and the loading time of individual models.

The only hope for your use case is that the LLM has a large enough context window to look at previous examples from your chat and use those for each request, but that isn't the same thing as training.
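To illustrate what I mean by using the context window instead of training: you keep the model frozen and just pack recent chat examples into each prompt so the model can imitate the style in-context. Here's a minimal sketch (all the function and variable names here are hypothetical, and I'm using a plain character budget as a stand-in for real token counting):

```python
# Hypothetical sketch of in-context "personalization" with a frozen LLM:
# instead of retraining, stuff as many recent chat examples as fit into
# the prompt. Newest examples are kept first when trimming.

def build_prompt(history, new_message, max_chars=2000):
    """Pack recent (user_msg, reply) pairs into a prompt under a size budget."""
    header = "Mimic the user's writing style from these examples:\n"
    footer = f"\nNew message: {new_message}\nReply:"
    budget = max_chars - len(header) - len(footer)
    picked = []
    # walk newest-first so the freshest examples survive trimming
    for user_msg, reply in reversed(history):
        example = f"User: {user_msg}\nReply: {reply}\n"
        if len(example) > budget:
            break
        budget -= len(example)
        picked.append(example)
    # restore chronological order for the final prompt
    return header + "".join(reversed(picked)) + footer
```

The key point is that nothing here changes the model's weights; every request just carries its own examples, which sidesteps per-customer training costs and per-customer model storage entirely.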

[–] Die4Ever@retrolemmy.com 12 points 1 month ago* (last edited 1 month ago) (2 children)

> If you prefer the UI of one to another now you have to create a brand new community which is going to take time to fill. I use old.lemmy.world because I like the old reddit UI

I think there's a misunderstanding?

You can use remote communities; for example, !linux@lemmy.ml is, for you, https://old.lemmy.world/c/linux@lemmy.ml

the "Old" UI seems to hide the list of communities, but it's here: https://old.lemmy.world/communities

[–] Die4Ever@retrolemmy.com 1 points 1 month ago

The playoffs start in 12 hours with sOs vs uThermal. The grand final match will start about 22 hours from now.

[–] Die4Ever@retrolemmy.com 1 points 1 month ago

wow sOs looks good again!

[–] Die4Ever@retrolemmy.com 1 points 1 month ago

and then GDQ immediately afterwards, July 6th to 13th https://gamesdonequick.com/schedule/56

[–] Die4Ever@retrolemmy.com 2 points 1 month ago

"Not to be confused", no I think it is "to be confused"

[–] Die4Ever@retrolemmy.com 3 points 1 month ago

the biggest thing is this means PeerTube federation works again, so you can follow !spacequesthistorian@spectra.video as an example
