[–] lexiw@lemmy.world 1 points 2 days ago (1 children)

I agree, it is a very expensive hobby, and it gets decent in the 30-80B range. However, the model you are using should not perform that badly; it sounds like you might be hitting a config issue. Would you mind sharing the CLI command you use to run it?

[–] ch00f@lemmy.world 1 points 2 days ago (1 children)

Thanks for taking the time.

So I'm not using a CLI. I've got the intelanalytics/ipex-llm-inference-cpp-xpu image running and hosting LLMs to be used by a separate open-webui container. I originally set it up with Deepseek-R1:latest per the tutorial to get the results above. This was straight out of the box with no tweaks.

The interface offers some control settings (screenshot below). Is that what you're talking about?

[–] lexiw@lemmy.world 1 points 2 days ago (1 children)

Those values are most of what I was looking for. An LLM just predicts the next token (for simplicity, a word). It does this by assigning a probability to every possible word and then picking one at random, weighted by those probabilities. For the sentence “a cat sat” it might produce “on: 0.6”, “down: 0.2”, and so on, where 0.6 just means 60% and all the values add up to 1 (100%).

The candidate list covers the model's entire vocabulary, so you usually trim it first: pick only from, say, the 10 most likely words (the top_k parameter), or throw away everything whose probability falls below some cutoff (min_p).

Finally, when one token has a big probability and the rest are tiny, you may want to squash the probabilities closer together, lowering the high ones and raising the low ones. That is the temperature parameter: around 0.1 the top word almost always wins, at 1 you sample from the model's probabilities as-is, and above 1 the distribution gets flattened even further. In layman's terms it is the amount of creativity of your model: 0 is none, 1 is a lot, 2 is mentally insane.
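
To make that concrete, here is a rough Python sketch of that sampling step, with made-up words and scores rather than any particular library's implementation:

```python
import math, random

# Toy illustration of the sampling step described above; the candidate words
# and scores are made up, and real implementations differ in detail.
logits = {"on": 2.0, "down": 0.9, "quietly": 0.2, "banana": -1.5}

def sample_next(logits, temperature=0.8, top_k=3, min_p=0.05):
    # Temperature: rescale the raw scores before turning them into probabilities.
    # Low values sharpen the distribution, values above 1 flatten it.
    scaled = {w: s / temperature for w, s in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {w: math.exp(s) / total for w, s in scaled.items()}

    # top_k: keep only the k most likely candidates.
    kept = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # min_p: drop anything whose probability is below a fraction of the best
    # candidate's probability (that is how llama.cpp defines it).
    best = kept[0][1]
    kept = [(w, p) for w, p in kept if p >= min_p * best]

    # Pick one survivor at random, weighted by probability.
    words, weights = zip(*kept)
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next(logits))
```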

Now, without knowing your hardware or why you need Docker, it is hard to suggest a tool for running LLMs. I am not familiar with the image you are using, but it does not seem to be maintained, and it likely lacks features that modern LLMs need to work properly. For consumer-grade hardware and personal use, the best tool these days is llama.cpp, usually through a newbie-friendly wrapper like LM Studio, which supports other backends as well and provides much more than just a UI to download and run models. My advice is to download it and start there (it will fetch the right backend for you, so there is nothing else to install manually).
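
If you ever want to script llama.cpp directly instead of going through a GUI, the llama-cpp-python bindings are one way to do it. This is only a sketch; the model file name and the settings are placeholders:

```python
from llama_cpp import Llama

# Placeholder GGUF file and settings; point this at any model you have downloaded.
llm = Llama(
    model_path="models/your-model.Q4_K_M.gguf",
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload as many layers as possible to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain temperature in one sentence."}],
    temperature=0.7,
    top_k=40,
)
print(out["choices"][0]["message"]["content"])
```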

[–] ch00f@lemmy.world 1 points 1 day ago

I'll give that a shot.

I'm running it in Docker because it's on a headless server with a boatload of other services. Ideally whatever I use will be accessible over the network.
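
Roughly, I'd want to be able to do something like this from another box on the LAN, assuming whatever ends up hosting the model exposes an OpenAI-compatible endpoint (llama.cpp's server and LM Studio both can); the host, port, and model name here are just placeholders:

```python
import requests

# Placeholder LAN address and port for the machine hosting the model.
resp = requests.post(
    "http://192.168.1.50:8080/v1/chat/completions",
    json={
        "model": "whatever-is-loaded",
        "messages": [{"role": "user", "content": "ping"}],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```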

I think at the time I started, not everything supported Intel cards, but it looks like llama-cli has support for Intel GPUs. I'll give it a shot. Thanks!