this post was submitted on 30 Jul 2025
20 points (91.7% liked)

LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resources required to train a model are anything close to those needed to maintain a blockchain or mine crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms from <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.


GLM-4.5-Air is the lightweight variant of our latest flagship model family, also purpose-built for agent-centric applications. Like GLM-4.5, it adopts the Mixture-of-Experts (MoE) architecture but with a more compact parameter size. GLM-4.5-Air also supports hybrid inference modes, offering a "thinking mode" for advanced reasoning and tool use, and a "non-thinking mode" for real-time interaction. Users can control the reasoning behaviour with the reasoning enabled boolean. Learn more in our docs.

Blog post: https://z.ai/blog/glm-4.5

Hugging Face:

https://huggingface.co/zai-org/GLM-4.5

https://huggingface.co/zai-org/GLM-4.5-Air
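
As a minimal sketch of the reasoning toggle described above when hosting the model behind an OpenAI-compatible server: the exact field name is an assumption here (vLLM/SGLang-style `chat_template_kwargs` is shown), so check the linked docs for the parameter your server actually expects.

```python
# Hedged sketch: toggling the "reasoning enabled" behaviour via an OpenAI-compatible
# endpoint serving GLM-4.5-Air locally. The enable_thinking key is an assumption
# mirroring the boolean described above -- consult the GLM-4.5 docs for the real name.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="zai-org/GLM-4.5-Air",
    messages=[{"role": "user", "content": "Plan a 3-step refactor of this function."}],
    extra_body={"chat_template_kwargs": {"enable_thinking": True}},  # assumed field name
)
print(resp.choices[0].message.content)
```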

[–] brucethemoose@lemmy.world 2 points 2 days ago* (last edited 2 days ago) (3 children)

If you want CPU offload, ik_llama.cpp is explicitly designed for that and is your go-to. It keeps the "dense" part of the model on the GPUs and offloads the lightweight MoE bits to CPU.
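
Roughly what that looks like in practice, as a hedged sketch: launching an ik_llama.cpp server with the MoE expert tensors pinned to CPU. The flag spellings follow llama.cpp conventions, and the model filename and tensor regex are placeholders, so check `llama-server --help` on your build.

```python
# Hedged sketch: start an ik_llama.cpp llama-server with MoE experts kept on CPU.
# Flags (-ngl, -ot/--override-tensor) follow llama.cpp conventions and may differ
# in your build; the GGUF filename and tensor regex are placeholders.
import subprocess

cmd = [
    "./llama-server",
    "-m", "GLM-4.5-Air-Q4_K_M.gguf",   # placeholder quant file
    "-ngl", "99",                      # offload all layers to GPU by default...
    "-ot", r"\.ffn_.*_exps\.=CPU",     # ...then route the MoE expert tensors to CPU
    "-c", "32768",                     # context window
    "--port", "8080",
]
subprocess.run(cmd, check=True)
```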

vLLM and ExLlama are GPU-only. vLLM's niche is that it's very fast with parallel short-context calls (i.e. serving dozens of users at once with small models), while ExLlama uses SOTA quantization for squeezing large models onto GPUs with minimal loss.
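
To illustrate that vLLM niche, here's a minimal sketch of batching many short prompts through its offline API; the model name is just an example of something small enough to fit entirely in VRAM.

```python
# Hedged sketch: vLLM's strength is throughput on many parallel short-context requests.
# Offline batch API shown; the model ID is only an example and must fit fully in VRAM.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")            # example small model
params = SamplingParams(max_tokens=128, temperature=0.7)

prompts = [f"Summarize ticket #{i} in one sentence." for i in range(64)]  # 64 "users"
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```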

[–] doodlebob@lemmy.world 2 points 2 days ago (2 children)

IK sounds promising! Will check it out to see if it can run in a container.

[–] doodlebob@lemmy.world 1 points 1 day ago (1 children)

I'm just gonna try vLLM; seems like ik_llama.cpp doesn't have a quick Docker method.

[–] brucethemoose@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

It should work in any generic CUDA container, but yeah, it's more of a hobbyist engine. Honestly I just run it raw since it's dependency-free, except for system CUDA.

vLLM absolutely cannot CPU-offload AFAIK, but small models will fit in your VRAM with room to spare.