this post was submitted on 07 Jul 2025

ObsidianMD

Unofficial Lemmy community for https://obsidian.md/

i know i've mentioned this before, but maybe i didn't tell @obsidianmd directly: obsidian vaults can easily be used for RAG, letting you explore your notes in new ways. the extract_wisdom pattern from #fabricAI is great too.

actually here's an extension i came across this morning that can use Fabric's patterns:

https://github.com/chasebank87/mesh-ai

#meshAI #AI #PKM
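for anyone who hasn't wired this up, "vault as RAG source" mostly boils down to: embed every note, embed the question, retrieve the nearest notes, and hand them to a model as context. here's a minimal sketch, assuming a local ollama server on its default port with an embedding model like nomic-embed-text pulled; the vault path and helper names are illustrative, not from any particular plugin:

```python
# minimal vault RAG sketch: embed notes with a local ollama server,
# then rank them by cosine similarity against a question.
from pathlib import Path

import numpy as np
import requests

OLLAMA = "http://localhost:11434"   # assumed default ollama port
EMBED_MODEL = "nomic-embed-text"    # any pulled embedding model works

def embed(text: str) -> np.ndarray:
    # ollama's /api/embeddings returns {"embedding": [...]}
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": EMBED_MODEL, "prompt": text})
    r.raise_for_status()
    return np.array(r.json()["embedding"])

def index_vault(vault: Path):
    # one chunk per note for brevity; real pipelines split by heading/paragraph
    notes = sorted(vault.rglob("*.md"))
    vectors = np.stack([embed(n.read_text(encoding="utf-8")) for n in notes])
    return notes, vectors

def query(question: str, notes, vectors, k: int = 5):
    qv = embed(question)
    sims = vectors @ qv / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(qv))
    return [notes[i] for i in np.argsort(sims)[::-1][:k]]

notes, vectors = index_vault(Path("~/Vaults/main").expanduser())  # illustrative path
for hit in query("what have i written about antenna tuning?", notes, vectors):
    print(hit)
```

the retrieved notes then get pasted into the prompt of whatever chat model you're using; that last step is where the various plugins differ.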

comments
mitch@hoagie.cloud · 3 weeks ago

@emory @obsidianmd there is a great plugin called Copilot that has RAG built-in as well. I can personally vouch for it with local inferencing.

emory@soc.kvet.ch · 3 weeks ago

@mitch @obsidianmd does that use the AI Providers extension? LocalGPT is an extension that does, and it doesn't have that "premium tier" thing going on.

do you configure the embeddings for local vectors and also local inference? can you compare it to the RAG function in AnythingLLM or open-webui, or something similar?

i use a variety of methods and haven't got a favorite yet.

mitch@hoagie.cloud · 3 weeks ago

@emory @obsidianmd i will be honest, that is a question you might be better equipped to answer than i am, but here are the links if you wanna check it out. i see it has a folder named LLMProviders, but i am not sure if that is what you mean.

obsidian://show-plugin?id=copilot

https://github.com/logancyang/obsidian-copilot

https://github.com/logancyang/obsidian-copilot/tree/master/src/LLMProviders

emory@soc.kvet.ch · 3 weeks ago

@mitch @obsidianmd this extension is great and i wish others used it instead of reimplementing things: https://github.com/pfrankov/obsidian-ai-providers

INeedMana@piefed.zip · 3 weeks ago

Any chance of something like this working with Le Chat/Mistral? I don't see it in the readmes.

emory@soc.kvet.ch · 3 weeks ago

@INeedMana if they offer an OpenAI-ish compatible API (e.g. https://blahblah/v1), you can add it as an OpenAI service with a new endpoint and your own API creds inside AI Providers, but IDK about the various other extensions. there's a sketch of the idea below.

the mesh-AI one uses fabric, though, and you can configure fabric to use mistral's API just fine, i reckon.
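to make the "add it as an OpenAI service" idea concrete, here's a minimal sketch with the standard openai python client pointed at a different base_url. the mistral endpoint and model name are assumptions, so check their docs for current values; the same shape works for any /v1-style compatible service:

```python
# pointing the stock openai client at an OpenAI-compatible endpoint;
# swap base_url/api_key/model for whatever service you're targeting.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.mistral.ai/v1",  # assumed mistral endpoint
    api_key="YOUR_MISTRAL_KEY",            # your own API creds
)

resp = client.chat.completions.create(
    model="mistral-small-latest",          # assumed model name
    messages=[{"role": "user",
               "content": "summarize this note in three bullet points: ..."}],
)
print(resp.choices[0].message.content)
```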

gabek@social.gabekangas.com · 3 weeks ago

emory@soc.kvet.ch · 3 weeks ago

@gabek @mitch @obsidianmd i don't either, i have other ways of doing what the paid version supports. i use cloud foundation models and local ones; my backends for embeddings are always ollama, lmstudio, and/or anythingLLM.

#anythingLLM has an easily deployed docker release and a desktop application. it's not as capable at managing and cross-threading conversations as LM Studio (really, Msty does it best), but #aLLM has a nice setup for agents and RAG.
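a sketch of why those backends end up interchangeable: ollama and LM Studio both expose OpenAI-compatible local servers (the default ports below are assumptions, adjust for your setup), so the embedding call looks the same either way:

```python
# same client code against either local backend; only base_url changes.
from openai import OpenAI

BACKENDS = {
    "ollama":   "http://localhost:11434/v1",  # assumed default port
    "lmstudio": "http://localhost:1234/v1",   # assumed default port
}

client = OpenAI(base_url=BACKENDS["ollama"],
                api_key="unused-but-required")  # local servers ignore the key

# embed a vault chunk; the model name is whatever you have pulled locally
emb = client.embeddings.create(model="nomic-embed-text",
                               input="notes on 40m dipole trimming")
print(len(emb.data[0].embedding))
```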

mitch@hoagie.cloud · 3 weeks ago

@gabek @obsidianmd @emory I do not. It seems to function with all the features when you use local inferencing via Ollama.

emory@soc.kvet.ch · 3 weeks ago

@gabek @mitch @obsidianmd some of the small models i like using with obsidian vaults locally are deepseek+llama distills, plus MoE models for every occasion: fiction and creative writing, classification, and vision. there are a few 8x merged models that are extremely fun for d&d.

i have a speech-operated adventure like #Zork that uses a 6x MoE and can get really surreal.

there's a phi2-ee model on hf that is small and fast at electrical engineering work; i use that for a radio and electronics project vault!