this post was submitted on 07 Jul 2025
ObsidianMD


Unofficial Lemmy community for https://obsidian.md/

founded 2 years ago

i know i've mentioned this before, but maybe i didn't tell @obsidianmd directly - obsidian vaults can easily be used for RAG, letting you explore your notes in new ways. the extract_wisdom pattern from #fabricAI is great too.

actually here's an extension i came across this morning that can use Fabric's patterns:

https://github.com/chasebank87/mesh-ai

#meshAI #AI #PKM
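the vault-as-RAG idea above can be sketched in a few lines. this is a minimal illustration, not how mesh-ai or Fabric actually work: the `embed()` here is a stand-in hashed bag-of-words embedding so the example runs anywhere; in a real setup you'd swap it for a call to a local embedding backend (e.g. Ollama's embeddings endpoint with a model like `nomic-embed-text`).

```python
# minimal sketch of RAG-style retrieval over an Obsidian vault
# (embed() is a toy stand-in; swap in a real embedding backend in practice)
import math
import re
from pathlib import Path

def embed(text, dim=256):
    # toy embedding: hashed bag-of-words, L2-normalized
    vec = [0.0] * dim
    for tok in re.findall(r"[a-z0-9]+", text.lower()):
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # vectors are already unit-length, so the dot product is the cosine
    return sum(x * y for x, y in zip(a, b))

def index_vault(vault_dir):
    # embed every markdown note in the vault, one vector per note
    index = []
    for note in Path(vault_dir).rglob("*.md"):
        text = note.read_text(encoding="utf-8")
        index.append((note.name, embed(text)))
    return index

def search(index, query, k=3):
    # rank notes by similarity to the query and return the top k names
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [name for name, _ in ranked[:k]]
```

retrieved chunks then get stuffed into the prompt of whatever local model you're chatting with; real setups also chunk notes into passages rather than embedding whole files.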

gabek@social.gabekangas.com · 1 month ago
mitch@hoagie.cloud · 1 month ago

@gabek @obsidianmd @emory I do not. It seems to function with all the features when you use local inferencing via Ollama.

emory@soc.kvet.ch · 1 month ago

@gabek @mitch @obsidianmd some of the small models i like using with obsidian vaults locally are deepseek+llama distills, plus MoE models for every occasion: fiction and creative writing, classification, and vision. there are a few 8x merged models that are extremely fun for d&d.

i have a speech-operated adventure like #Zork that uses a 6x MoE; it can be really surreal.

there's a phi2-ee model on hf that's small and fast at electrical engineering work; i use it for a radio and electronics project vault!

emory@soc.kvet.ch · 1 month ago

@gabek @mitch @obsidianmd i don't either; i have other ways of doing what the paid version supports. i use both cloud foundation models and local ones; my backends for embeddings are always ollama, lmstudio, and/or anythingLLM.

#anythingLLM has an easily deployed docker release and a desktop application. it's not as capable at managing and cross-threading conversations as LM (really, Msty does it best), but #aLLM has a nice setup for agents and RAG.
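for anyone curious, that docker release really is close to a one-liner. a minimal sketch, assuming the `mintplexlabs/anythingllm` image name and default port from their public repo (check their docs for your setup):

```shell
# minimal sketch: run AnythingLLM locally via Docker
# image name, port, and storage paths assumed from their repo; verify against their docs
export STORAGE_LOCATION="$HOME/anythingllm"
mkdir -p "$STORAGE_LOCATION"
docker run -d --name anythingllm \
  -p 3001:3001 \
  -v "$STORAGE_LOCATION:/app/server/storage" \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
# UI should then be reachable at http://localhost:3001
```

from there you point its embedding provider at a local backend like ollama and attach a vault folder as a workspace document source.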