Ollama is great as a hobby, for running fine-tuned models, and for when you want a model that will actually tell you you're wrong or that something isn't possible, or that will produce output a commercial LLM deems unacceptable. That last advantage only applies to a narrow set of illegal/NSFW scenarios (including both violence/gore and sex), though, and frankly, with a bit of prompt engineering, a commercial model can be coaxed into many of those anyway.
For 99.99% of use cases, GPT-4o is vastly more knowledgeable and far less likely to hallucinate than the average 7–10B-parameter model you could run locally, even on a 16GB GPU.
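
For anyone curious what "running locally" actually looks like, here's a minimal sketch using the official `ollama` Python client. It assumes the Ollama server is running and that you've already pulled a model; the model name is just an example of something in the 7–10B range that fits a 16GB GPU:

```python
# Minimal sketch: querying a locally running model through the
# official `ollama` Python client (pip install ollama).
# Assumes the Ollama server is up and a model has been pulled,
# e.g. `ollama pull llama3` -- the model name is illustrative.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "user", "content": "Explain what quantization does to a 7B model."}
    ],
)

# The reply text lives under message -> content.
print(response["message"]["content"])
```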