I hoard documents, papers, and non-fiction books. But just owning the files doesn't feel like enough; I want semantic search and some higher-order intelligence built on top of this data.
With LLMs it's now feasible to run open-source models locally and turn these texts into knowledge graphs, with each atomic fact linking back to its source document. I'm interested in going down this path and have a list of tools I would use; I've already experimented with Streamlit, LlamaIndex, and Neo4j.
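To make the idea concrete, here's a minimal sketch of the data model I have in mind: every fact is a subject-predicate-object triple that carries a pointer to the document it came from. The LLM extraction step is stubbed out (in practice a local model, e.g. driven through LlamaIndex, would emit the triples, and the store would be Neo4j rather than an in-memory list); all names and example facts below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Fact:
    """One atomic fact, with provenance back to its source document."""
    subject: str
    predicate: str
    obj: str
    source: str  # path or ID of the document the fact was extracted from

@dataclass
class KnowledgeGraph:
    facts: list = field(default_factory=list)

    def add(self, subject: str, predicate: str, obj: str, source: str) -> None:
        self.facts.append(Fact(subject, predicate, obj, source))

    def about(self, entity: str) -> list:
        """All facts mentioning an entity, as subject or object."""
        return [f for f in self.facts if entity in (f.subject, f.obj)]

    def sources_for(self, entity: str) -> list:
        """Which documents back what the graph 'knows' about an entity."""
        return sorted({f.source for f in self.about(entity)})

# Stubbed extraction output, as a local LLM pass over two documents might return it:
kg = KnowledgeGraph()
kg.add("Ada Lovelace", "collaborated_with", "Charles Babbage", "notes/babbage.pdf")
kg.add("Ada Lovelace", "wrote", "Note G", "papers/menabrea_translation.pdf")

# Every answer can cite the files it came from:
print(kg.sources_for("Ada Lovelace"))
```

The point of the `source` field is exactly the "link back to the source document" part: any query over the graph can surface which files in the hoard support the answer.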
But I'm wondering: is anyone else out there even considering this? Has anything been purpose-built for it yet (local-first rather than enterprise-focused)?