this post was submitted on 15 Mar 2025
7 points (100.0% liked)

LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming that the resources required to train a model are anything close to those needed to maintain a blockchain or mine crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago

I don't care much about mathematical tasks, and code intelligence is only a minor preference; what I'm most interested in is overall comprehension and intelligence (for RAG and large-context handling). Anyway, I'm searching for a benchmark that covers a wide variety of models and is kept up to date.

top 4 comments
[–] Smokeydope@lemmy.world 3 points 4 months ago* (last edited 4 months ago) (1 children)

The average across all the different benchmarks can be thought of as a kind of 'average intelligence', though in reality it's more of a gradient and a vibe type thing.

Many models are "benchmaxxed": trained to answer the exact kinds of questions the tests ask, which often makes benchmark results unrelated to real-world use-case checks. Use them as general indicators, but don't take them too seriously.

All model families differ in ways you only really understand by spending time with them. Don't forget to set the right chat template and the correct sampler values as needed per model. The Open LLM Leaderboard is a good place to start.
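To show why the chat template matters, here's a minimal sketch of the same one-message conversation serialized two different ways. The template strings are illustrative approximations of the ChatML and Llama-3 formats; always check the model card for the exact tokens.

```python
# Two common chat template styles, approximated. Feeding a model the wrong
# one often degrades output badly even though the text "looks" similar.

def chatml(messages):
    # ChatML-style template (used by Qwen and others)
    out = ""
    for m in messages:
        out += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return out + "<|im_start|>assistant\n"  # prompt the model to reply

def llama3_style(messages):
    # Llama-3-style template
    out = "<|begin_of_text|>"
    for m in messages:
        out += (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                f"{m['content']}<|eot_id|>")
    return out + "<|start_header_id|>assistant<|end_header_id|>\n\n"

msgs = [{"role": "user", "content": "Hi"}]
print(chatml(msgs))
print(llama3_style(msgs))
```

Most local engines apply the template for you if the GGUF metadata includes it, but it's worth knowing what's happening under the hood when a model misbehaves.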

[–] thickertoofan@lemm.ee 3 points 4 months ago (1 children)

I use Page Assist with Ollama

[–] Smokeydope@lemmy.world 2 points 4 months ago* (last edited 4 months ago) (1 children)

Cool, Page Assist looks neat, I'll have to check it out sometime. My LLM engine is kobold.cpp, and I just use OpenWebUI in the browser to connect.
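For anyone curious how that connection works: kobold.cpp serves an OpenAI-compatible API (by default on port 5001), which is what frontends like OpenWebUI talk to. A rough sketch of hitting it directly, assuming the default port; adjust the URL to your setup:

```python
import json
import urllib.request

# kobold.cpp's OpenAI-compatible endpoint; port 5001 is its default
KOBOLD_URL = "http://localhost:5001/v1/chat/completions"

def build_request(prompt, temperature=0.7, max_tokens=256):
    """Build an OpenAI-style chat payload accepted by the compat endpoint."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def ask(prompt):
    """POST the payload and return the model's reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        KOBOLD_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Inspect the payload without needing a running server:
print(json.dumps(build_request("Say hello in five words."), indent=2))
```

Because the endpoint speaks the OpenAI schema, the same snippet works against Ollama or llama.cpp servers by swapping the URL.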

Sorry, I don't really have good suggestions for you beyond trying some of the more popular 1-4B models at a very high quant, if not full 8-bit, and seeing which works best for your use case.

Llama 4B, Mistral 4B, Phi-3-mini, tinyllm 1.5B, Qwen2 1.5B, etc. I assume you want a model with a large context size and good comprehension skills to summarize YouTube transcripts and webpage articles? At least I think that's the purpose the add-on you mentioned suggested.

So look for models with those strengths rather than ones that specialize in a little bit of domain knowledge.
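Even a small-context model can summarize long transcripts if you split the text first and summarize chunk by chunk. A minimal sketch of the splitting step; the sizes here are illustrative, not tuned:

```python
def chunk_text(text, max_chars=8000, overlap=200):
    """Split a long transcript into overlapping chunks that fit a small
    model's context window. Overlap keeps sentences that straddle a
    boundary visible in both neighboring chunks."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back a little so context carries over
    return chunks

# Each chunk would then be sent to the model for a partial summary,
# and the partial summaries summarized once more ("map-reduce" style).
```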

[–] thickertoofan@lemm.ee 2 points 4 months ago

I checked out most of them from the list, but 1B models are generally unusable for RAG.