[–] DarkCloud@lemmy.world 58 points 1 week ago (5 children)

You can get offline versions of LLMs.

[–] criss_cross@lemmy.world 14 points 1 week ago

And gpt-oss is an offline version of ChatGPT

[–] linkinkampf19@lemmy.world 7 points 1 week ago

First thing that came to mind: GPT4All
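For the curious: GPT4All's Python bindings make the "offline" part concrete. A minimal sketch, assuming `pip install gpt4all`; the model name here is one of the small catalog defaults and is illustrative:

```python
# Minimal sketch: a fully offline chat via GPT4All's Python bindings.
# Assumes `pip install gpt4all`; the model file is fetched once, after
# which everything runs locally. The model name is illustrative.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    print(model.generate("Why would anyone ship an LLM on a CD?", max_tokens=64))
```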

[–] sp3ctr4l@lemmy.dbzer0.com 4 points 1 week ago* (last edited 1 week ago) (1 children)

I've been toying with Qwen3.

On my Steam Deck.

The 8B-parameter model runs stably.

It's open source too!

Alpaca is a neat little flatpak that containerizes everything and makes running local models so easy that I can literally do it without a mouse or keyboard.

[–] JustAnotherKay@lemmy.world 2 points 1 week ago (1 children)

Oh my god I feel so stupid. I've been arguing back and forth about whether it was worth de-atomizing my Steam Deck to spin up Alpaca in Docker. I forgot they have a flatpak

[–] sp3ctr4l@lemmy.dbzer0.com 1 points 1 week ago* (last edited 1 week ago)

Bazzite also has podman, though not specifically docker, in the core OS.

So... I have spun up one local LLM in Alpaca, told it what hardware, OS, and environment it is in/on, told it to generate a context prompt to inform itself of all that... and it's now helping me try to figure out how/if it is possible to set up a podman container/environment... for LLMs that either Alpaca does not yet support, or I am too stupid to figure out.

Alpaca even has tools. You can give an LLM the ability to search the web for something, and find some info or whatnot.

ROCm on a Deck seems to kind of sort of work: basically you spoof your GPU ID in the podman environment, and then... you would either have to do the ole allocate-more-RAM-to-the-GPU thing, or attempt to edit the LLM's config and such, to try and run in a much lower than expected VRAM situation.

(WIP)

Presumably you could tell it to do a lot of things, but that seems like a bad idea lol. Anyway, yeah, I was able to just tell it 'go online and look up Bazzite, familiarize yourself with pertinent details, reformulate context prompt.'
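Since Alpaca fronts a local Ollama instance, you can also script against it over the local HTTP API. A minimal sketch, assuming the default port and that the model name below is one you've actually pulled:

```python
# Minimal sketch: querying the local Ollama instance that Alpaca manages.
# Assumes Ollama's default port (11434); the model name is illustrative.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen3:8b",   # whatever you have pulled locally
        "prompt": "What hardware am I probably on if I'm inside a Steam Deck?",
        "stream": False,       # one JSON blob instead of a token stream
    },
    timeout=300,
)
print(resp.json()["response"])
```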

[–] Ghostalmedia@lemmy.world 1 points 1 week ago (1 children)

I mean, most people have a local LLM in their pocket right now.

[–] sp3ctr4l@lemmy.dbzer0.com 4 points 1 week ago (2 children)

Unless I am missing something:

Most people do not have a local LLM in their pocket right now.

Most people have a client app that talks to a remote LLM, which 'lives' in an ecologically and economically dubious mega-datacenter, in their pocket right now.

[–] GamingChairModel@lemmy.world 3 points 1 week ago (2 children)

Plenty of the AI functions on phones are on-device. I know the iPhone can do several text-processing tasks (summarizing, translating) offline, and they have an API for third-party developers to use on-device models. And the Pixels have Gemini Nano on-device for certain offline functions.

[–] sp3ctr4l@lemmy.dbzer0.com 1 points 1 week ago

Oh!

Well, I didn't know that.

I'm too poor to be able to afford such fancy phones.

[–] tetris11@feddit.uk 1 points 1 week ago

My phone does speech-to-text flawlessly offline; it's a crazy useful little LLM tool

[–] Ghostalmedia@lemmy.world 2 points 1 week ago

Gemini Nano, Apple Intelligence on-device, etc.

[–] tomiant@piefed.social 31 points 1 week ago (2 children)

FCKGW-RHQQ2-YXRKT-8TG6W-2B7Q8

[–] eager_eagle@lemmy.world 5 points 1 week ago* (last edited 1 week ago)

make sure to disconnect the internet first

[–] bjoern_tantau@swg-empire.de 26 points 1 week ago (2 children)

It's just audio of farting French cats.

[–] Lemmyoutofhere@lemmy.ca 17 points 1 week ago
[–] Akasazh@feddit.nl 2 points 1 week ago

My bet was on porn.

Or a copy of an old Encarta CD-ROM

[–] SSUPII@sopuli.xyz 20 points 1 week ago* (last edited 1 week ago) (1 children)

If we assume a CD, you can probably fit a 256M-parameter model on it. But it will LOAD.

[–] MacNCheezus@lemmy.today 10 points 1 week ago

DVDs exist. They can fit an approx. 7B-param model at 4-bit quantization, enough to be somewhat productive.
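Back-of-the-envelope, assuming roughly 4 bits per parameter for a Q4 quant and ignoring file-format overhead:

```python
# Rough rule of thumb: Q4 quantization ~= 0.5 bytes per parameter.
# Real GGUF files carry metadata and some higher-precision tensors,
# so treat these numbers as a floor.
def q4_size_gb(params: float) -> float:
    return params * 0.5 / 1e9

print(f"{q4_size_gb(256e6):.2f} GB")  # ~0.13 GB: trivially fits a 700 MB CD
print(f"{q4_size_gb(7e9):.2f} GB")    # ~3.50 GB: squeezes onto a 4.7 GB DVD
```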

[–] khepri@lemmy.world 14 points 1 week ago* (last edited 1 week ago) (2 children)

Could you crunch an LLM into 700 MB that was still functional? 'Cause this looks like a fun thing to actually do as a joke.

Edit: I bet I could get https://huggingface.co/distilbert/distilgpt2 to run off a CD. How many tps am I gonna get guys 🤣
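If anyone wants to try, a minimal sketch with the transformers library, assuming `pip install transformers torch`:

```python
# Minimal sketch: distilgpt2's weights are only a few hundred MB,
# small enough for a CD. The first run downloads the model; point
# `model` at a local path (e.g. the mounted disc) to run fully offline.
from transformers import pipeline

generator = pipeline("text-generation", model="distilbert/distilgpt2")
out = generator("Burning an LLM onto a CD is", max_new_tokens=40)
print(out[0]["generated_text"])
```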

[–] yellowbadbeast@lemmy.blahaj.zone 11 points 1 week ago (2 children)

Qwen3-0.6B is about 400 MB at Q4 and is surprisingly coherent for what it is.
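A minimal sketch of loading a quant like that with llama-cpp-python; the GGUF file name below is illustrative:

```python
# Minimal sketch: running a ~400 MB Q4 GGUF locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a downloaded quant file;
# the exact file name is illustrative.
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-0.6B-Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: Can an LLM fit on a CD? A:", max_tokens=48)
print(out["choices"][0]["text"])
```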

[–] khepri@lemmy.world 7 points 1 week ago (1 children)

That's so crazy that an LLM capable of doing anything at all can be that small! That leaves room for like an entire .avi episode of Family Guy at DVD resolution on there, which is the natural choice for the remaining space of course

[–] tetris11@feddit.uk 2 points 1 week ago

A 4K episode of Family Guy using H.265 (HEVC), and assuming not too many cutaway gags, could produce a file of about 240 MB. You could probably fit a 480i episode of South Park in the remaining 60 MB
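Sanity-checking the bitrate that implies, assuming a typical ~22-minute episode:

```python
# What average bitrate does a 240 MB file mean for a ~22 minute episode?
size_bits = 240e6 * 8              # 240 MB expressed in bits
duration_s = 22 * 60               # typical sitcom runtime in seconds
print(f"{size_bits / duration_s / 1e6:.2f} Mbps")
# ~1.45 Mbps: brutally low for 4K live action, but flat cartoon imagery
# compresses unusually well under HEVC, so it's not completely absurd.
```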

[–] khepri@lemmy.world 2 points 1 week ago

Wow, just popped it onto my very slow desktop and this little model rips haha. I really think tiny LLMs with a good LoRA on top are going to be a huge deal going forward

[–] lime@feddit.nu 4 points 1 week ago* (last edited 1 week ago)

there's also tinyllama, which is somewhere around 600MB. it's hilariously inept. it's like someone jpeg-compressed a robot.

also you're only gonna load off of that cd once so it'll perform fine.

[–] NullPointerException@lemmy.ca 10 points 1 week ago

That’s just Dr Sbaitso.

[–] faizalr@piefed.social 7 points 1 week ago (1 children)

It reminds me of the Encyclopædia Britannica on CD.

[–] MidsizedSedan@lemmy.world 7 points 1 week ago (5 children)

Isn't it possible to download all of Wikipedia, and isn't it a surprisingly small file? Can it fit on a CD?

[–] AmbiguousProps@lemmy.today 9 points 1 week ago (2 children)

It could fit on a BDXL disc.

[–] masterspace@lemmy.ca 4 points 1 week ago

You can fit text-only Wikipedia on a normal Blu-ray, as it's only about 24 GB. You can also easily fit Llama 3.1 or any of the other open, offline-capable AI models, as they're only about 4 GB.

[–] gustofwind@lemmy.world 2 points 1 week ago

Could also store it on a flash drive or microSD card

[–] SSUPII@sopuli.xyz 6 points 1 week ago (2 children)

No

(English) 24.05 GB without media. Adding media adds 428.36 TB.

[–] Axolotl_cpp@feddit.it 3 points 1 week ago* (last edited 1 week ago) (2 children)

Can you give me the text-only version link? I only found a version that is like 43 GB

[–] ZkhqrD5o@lemmy.world 5 points 1 week ago

I suggest the happy medium called Kiwix: directly from the programme, you can download all of Wikipedia with medium-sized pictures for a hundred gigabytes or so.

[–] GregorGizeh@lemmy.zip 1 points 1 week ago* (last edited 1 week ago)

500 TB is still surprisingly reasonable for what is essentially a library of human (surface-level) knowledge.

It would be interesting to know how large the file would be including all text-form references (I'd imagine anything else, such as videos, would completely blow the proportions)

[–] Axolotl_cpp@feddit.it 5 points 1 week ago (2 children)

No, you really can't; the text-only version is like 43 GB

[–] BanMe@lemmy.world 2 points 1 week ago

So gonna need like 2 CDs then

[–] puppycat@lemmy.blahaj.zone 1 points 1 week ago

Yes, you really can; it's like 20-25 GB depending on how recent of a copy you have. I've been seeding Wikipedia for almost a year and it barely takes any space on my computer

[–] ptz@dubvee.org 1 points 1 week ago* (last edited 1 week ago)

The full 2025-04 English-only ZIM dump is about 120 GB. That includes reduced-size images as well as all articles. I think the text-only version is in the 40-60 GB range.

There are smaller ZIM versions in the ~4 GB range that would fit on a DVD, but they're only a subset for specific topics or for a list of the most popular topics.

[–] rain_worl@lemmy.world 0 points 1 week ago

Kiwix? That's compressed (afaik), and when I tried, it took up half of my disk space and needed Ethernet

[–] uriel238@lemmy.blahaj.zone 6 points 1 week ago (1 children)

Offline LLMs exist but tend to have a few terabytes of base data just to get started (e.g. before LoRAs)

[–] nomorebillboards@lemmy.world 10 points 1 week ago

I thought it was more like 10-20 GB to start out with a usable (but somewhat stupid) model.

Are you confusing the size of the dataset with the size of the model?

Maybe they meant GTA?

anyone have the serial?