this post was submitted on 07 Jul 2025
1 points (60.0% liked)
ObsidianMD
4682 readers
Unofficial Lemmy community for https://obsidian.md/
founded 2 years ago
@mitch @emory @obsidianmd Do you pay for it?
@gabek @obsidianmd @emory I do not. All the features seem to work when you use local inference via Ollama.
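For anyone curious what "local inference via Ollama" looks like under the hood: Ollama serves a small HTTP API on `localhost:11434` by default, and tools talk to its `/api/generate` endpoint. A minimal sketch (the model name `llama3` is just a placeholder for whatever you've pulled):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON reply instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs `ollama serve` running and a model pulled, e.g. `ollama pull llama3`):
# print(ask_ollama("llama3", "Summarize this note in one sentence: ..."))
```

Everything stays on your machine, which is why the plugin's features keep working without a paid account.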
@gabek @mitch @obsidianmd Some of the small models I like using with Obsidian vaults locally are DeepSeek+Llama distills and MoE models for every occasion: fiction and creative writing, classification, and vision. There are a few 8x merged models that are extremely fun for D&D.
I have a speech-operated adventure, like #Zork, that uses a 6x MoE and can be really surreal.
There's a phi2-ee model on Hugging Face that is small and fast at electrical engineering work; I use that for a radio and electronics project vault!
@gabek @mitch @obsidianmd I don't either; I have other ways of doing what the paid version supports. I use cloud foundation models as well as local ones; my backends for embeddings are always Ollama, LM Studio, and/or AnythingLLM.
#AnythingLLM has an easily deployed Docker release and a desktop application. It's not as capable at managing and cross-threading conversations as LM Studio (Msty really does it best), but #aLLM has a nice setup for agents and RAG.
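Since embeddings come up a lot in this thread: the typical vault-search setup is to embed each note through a local backend, then rank notes by cosine similarity against the query. A minimal sketch against Ollama's `/api/embeddings` endpoint (the model name `nomic-embed-text` is an assumption; substitute whichever embedding model you've pulled):

```python
import json
import math
import urllib.request

# Ollama's local embeddings endpoint
EMBED_URL = "http://localhost:11434/api/embeddings"

def embed(model: str, text: str) -> list[float]:
    """Fetch an embedding vector for one piece of text from a local Ollama server."""
    body = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        EMBED_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors; higher means more related notes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Usage (needs a running Ollama with an embedding model pulled):
# v1 = embed("nomic-embed-text", "antenna impedance matching")
# v2 = embed("nomic-embed-text", "building a dipole for 20m")
# print(cosine(v1, v2))
```

Ollama, LM Studio, and AnythingLLM all expose roughly this shape of API, which is why they're interchangeable as embedding backends for RAG over a vault.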