
I recently integrated self-hosted Ollama with Home Assistant and it's pretty cool.

I'd also like to be able to ask Home Assistant's Assist things like "What movies are playing in MyCity?" and have the model in Ollama search the web and summarize the results. I use Open-WebUI as a frontend for Ollama, and it can do that, but I can't see how to configure HA's Assist to do it.

Has anyone done this?

Another example would be to ask "what's the weather today?", though I realize that HA has a built-in weather integration, so I wouldn't need Ollama to search the web for that. Is there a web search integration for HA, so that any question (like "what movies are playing") could be passed to it?

My ultimate goal is to be able to ask questions about the world (i.e., search the web and summarize the results) and control Home Assistant devices. It doesn't have to be HA's Assist, but if I were to use Open-WebUI* for example, I'd have to come up with a way to link my audio-listening devices to it. It also doesn't have to be one "entity", but it should "feel like" one entity to the user (i.e., I don't want to have to speak to a different device or use a different wake word to ask about the world than I do to control Home Assistant).

*Open-WebUI has integrations to control Home Assistant.

top 4 comments
scrubbles@poptalk.scrubbles.tech 6 points 5 days ago

So you're getting into function/tool calling with LLMs. If you're not familiar: tool definitions are structured descriptions that you give to your LLM (at the API/model level, not you directly) that say, in layman's terms, "I have coded a function called get_weather; if the user asks about the weather, let me know by returning a tool-call response and I will run it for you."

This is how HA works too: if you ask the LLM to do something, it is essentially running a tool call to HA for you.
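In case a concrete example helps, here's a minimal sketch of what a tool definition looks like, in Python against Ollama's /api/chat endpoint (the get_weather schema and the model name are just placeholders, not anything HA or Ollama ships):

```python
import requests

# Minimal sketch of an OpenAI-style tool definition sent to Ollama's
# /api/chat endpoint. get_weather is hypothetical: you implement it
# yourself and run it whenever the model asks for it.
OLLAMA_URL = "http://localhost:11434/api/chat"  # adjust to your instance

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

resp = requests.post(OLLAMA_URL, json={
    "model": "llama3.1",  # any tool-capable model you've pulled
    "messages": [{"role": "user", "content": "What's the weather in MyCity?"}],
    "tools": tools,
    "stream": False,
}, timeout=120)

# If the model decides a tool is needed, the reply carries tool_calls
# instead of a plain-text answer; your code runs the function and sends
# the result back for the model to phrase a response.
for call in resp.json()["message"].get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
```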

Now, for your question: I'm betting there are ways to add more tools, though whether HA supports this, idk. Either way, that's what you're looking for. This pattern is also known as agents: you would create a web agent which takes in what the LLM asks for, gets the top 10 results, collates them, and gives them back to the LLM for summarization.
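A bare-bones version of that web-agent loop might look like this (again a sketch: web_search is a hypothetical stub you'd wire up to a real search backend):

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # adjust to your instance

def web_search(query: str) -> str:
    """Hypothetical stub: call whatever search backend you run (SearXNG,
    a search API, etc.), grab the top results, and collate them into one
    blob of text for the model to summarize."""
    return f"Top results for '{query}': ..."

TOOLS = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return the top results as text.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def ask(question: str, model: str = "llama3.1") -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = requests.post(OLLAMA_URL, json={
            "model": model,
            "messages": messages,
            "tools": TOOLS,
            "stream": False,
        }, timeout=120).json()["message"]
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]  # model answered in plain text
        for call in reply["tool_calls"]:
            # run the requested tool and hand the results back to the model
            result = web_search(**call["function"]["arguments"])
            messages.append({"role": "tool", "content": result})

print(ask("What movies are playing in MyCity?"))
```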

I'm guessing others have done this, but now you know what you're looking for!

Thanks for the explanations, @Scrubbles@poptalk.scrubbles.tech.

I introduced myself to tools/functions in Open-WebUI: there's a community library of them, and you can essentially copy/paste them into Open-WebUI to make them available there. If I understand correctly, tools add functionality to models, and functions add functionality to Open-WebUI itself.
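For reference, the tools in that library are Python files along these lines (a trimmed-down sketch; search_web and its body are placeholders):

```python
# Sketch of an Open-WebUI community tool: a Python file with a Tools
# class whose typed, docstring-annotated methods are exposed to the
# model as callable functions.
class Tools:
    def __init__(self):
        pass

    def search_web(self, query: str) -> str:
        """
        Search the web for a query and return the top results as text.
        :param query: The search terms, e.g. "movies playing in MyCity".
        """
        # placeholder: call your real search backend here
        return f"Top results for '{query}': ..."
```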

I'm still confused, though: if I add a web search tool to Open-WebUI (some do exist), I would guess it can only add functionality to the model when I access the model through Open-WebUI, but HA accesses the model directly through Ollama.

just_another_person@lemmy.world 1 points 5 days ago

I'd need more information on what you've done here, but I assume you just used the default add-on, which does not handle asynchronous requests like that out of the box. From the very limited docs, it does mention building 'Sentence Triggers' (a rough sketch follows the list below). You'd probably need to do a few things:

  • Tweak the timeout for responses to allow time for it to get a formed payload and send it back
  • Look at other plugins as examples to form custom sentence triggers
  • Set some sort of boundaries to decide when it expects a response from the remote Ollama instance versus using internal data as a response
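
For reference, custom sentence triggers are just YAML. A rough sketch (the AskWeb intent, the example sentence, and the wildcard list are placeholders, and you'd still need some action, e.g. a rest_command, to actually reach the remote Ollama instance):

```yaml
# config/custom_sentences/en/web_search.yaml -- rough sketch; the intent
# name and the sentence are placeholders
language: "en"
intents:
  AskWeb:
    data:
      - sentences:
          - "what movies are playing in {city}"
lists:
  city:
    wildcard: true
```

```yaml
# configuration.yaml -- handle the intent; the action that actually
# queries Ollama (e.g. a rest_command) is left out here
intent_script:
  AskWeb:
    speech:
      text: "Let me look up movies in {{ city }}"
```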

Just looking at the code here, none of the currently configured interfaces expect any sort of response back from Ollama such as web search results. It's only using Ollama for local inference about entities and pre-defined sentences within HA itself.

Thanks for the reply and the assistance, @just_another_person@lemmy.world.

I'm not using any add-ons in Home Assistant for this integration, just the Ollama integration (but I suspect that's what you meant).

Its docs say that it doesn't integrate with sentence triggers, so I think this suggestion won't work. :(