this post was submitted on 31 Mar 2026
349 points (99.7% liked)

[–] spez@sh.itjust.works 23 points 22 hours ago (3 children)

I mean, it's not that big a deal. However, it would be another thing if the model itself leaked. Now that would be something.

[–] MangoCats@feddit.it 7 points 19 hours ago

As they tell it, Claude Code is over 80% written by the models anyway...

[–] obbeel@lemmy.eco.br 3 points 17 hours ago (1 children)

Tool usage is very important. Qwen3.5 (135b) can already do wonderful things on OpenCode.
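To make "tool usage" concrete: mechanically it just means the model emits a structured call, and the harness executes it and feeds the result back. Here's a minimal sketch of that dispatch step; the tool names and JSON call format are entirely hypothetical, not OpenCode's or Qwen's actual protocol.

```python
# Hypothetical tool-call dispatch: the model emits a call like
# {"tool": "add", "args": {"a": 2, "b": 3}} and the harness runs it.
# Tool names and the call format here are illustrative, not a real API.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def dispatch(call: dict):
    """Look up the named tool and invoke it with the model's arguments."""
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

print(dispatch({"tool": "add", "args": {"a": 2, "b": 3}}))        # 5
print(dispatch({"tool": "upper", "args": {"text": "opencode"}}))  # OPENCODE
```

In a real agent harness this runs in a loop: the result string is appended to the conversation and the model decides the next call.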

[–] cecilkorik@piefed.ca 10 points 15 hours ago* (last edited 15 hours ago) (1 children)

I dabble in local AI and this always blows my mind. How do people just casually throw 135B-parameter models around? Are people renting datacenter hardware or GPU time, building personal AI servers with six 5090s in them, or quantizing them down to 0.025 bits? What's the secret? How does this work? Am I missing something? The Q4 of Qwen3.5 122B is between 60-80GB just for the model alone. That's 3x 5090s minimum, unless I'm doing the math wrong, and then you still need to fit the huge context windows these things have in there too. I don't get it.
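The back-of-envelope math in that comment can be sketched like this; the 4.5 bits/weight figure is an assumption (roughly what Q4-class quants average once scales and mixed-precision layers are counted), not an exact size for any particular file.

```python
# Rough size estimate for a quantized model:
#   size_in_GB = parameters (billions) * bits-per-weight / 8
# bits_per_weight is an assumption here; Q4-class quants typically
# average around 4.5-4.8 bits per weight in practice.
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * bits_per_weight / 8

# A 122B model at ~4.5 bits/weight lands in the 60-80 GB range the
# comment mentions, before any KV cache for long contexts.
print(round(model_size_gb(122, 4.5), 1))  # 68.6
```

So yes: roughly three 32GB cards for the weights alone, with the KV cache for long contexts on top of that.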

Meanwhile I'm over here nearly burning my house down trying to get my poor consumer cards to run glm-4.7-flash.

[–] obbeel@lemmy.eco.br 4 points 15 hours ago

I pay for Ollama Cloud. As for the training of the big models, big companies do it using who-knows-what resources.

[–] lexiw@lemmy.world 7 points 21 hours ago

The harness is as important as the model.