Home Assistant


Home Assistant is open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY...

1101
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/homeassistant by /u/pr0sty on 2025-04-07 07:02:23+00:00.

1102
 
 

The original was posted on /r/homeassistant by /u/notatimemachine on 2025-04-07 03:13:29+00:00.


I’ve had Home Assistant running for a while but I still feel very new to it. After my wife asked if it was possible to kick Alexa out of the house, I started digging around in the HA voice stuff and decided to give it all a try.

I got the Home Assistant Voice Preview Edition (HAVPE) and a ReSpeaker Lite to test as voice satellites. After a lot of trial and error—and with a ton of help from ChatGPT and various online forums—I now have a system where speech recognition works locally (using Piper and Whisper) and through Home Assistant Cloud. I also have both Google Gemini and ChatGPT running as conversation agents, which are fully integrated into my voice assistant pipeline. From what I’ve seen so far, the speed of TTS, STT, and action/response cycles varies quite a bit depending on the server-side choices.
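For anyone wanting to reproduce the local STT/TTS piece, Whisper and Piper are commonly run as Wyoming-protocol services that Home Assistant's Wyoming integration then connects to. A minimal docker-compose sketch is below; the image tags, models, and ports are the commonly documented defaults, so verify them against the current docs before relying on this:

```yaml
# Sketch: wyoming-whisper (speech-to-text) and wyoming-piper (text-to-speech).
# Point the Home Assistant Wyoming integration at ports 10300 and 10200.
services:
  whisper:
    image: rhasspy/wyoming-whisper
    command: --model tiny-int8 --language en
    ports:
      - "10300:10300"
    volumes:
      - ./whisper-data:/data
  piper:
    image: rhasspy/wyoming-piper
    command: --voice en_US-lessac-medium
    ports:
      - "10200:10200"
    volumes:
      - ./piper-data:/data
```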

I’m not a developer or expert in this stuff, but I had enough familiarity with Home Assistant to stumble through it and the patience to learn and work through tons of little issues—missing integrations, Wi-Fi quirks, YAML formatting, and the usual ESPHome flashing adventures.

Setting up the HAVPE was surprisingly easy, and despite its limitations, I’m impressed with the device. It’s functional and genuinely useful. The ReSpeaker Lite was a bit more of a project to get going, but it’s a very cool little kit—and it might even have better mics than the HAVPE, though I’m still testing that. I’m amazed at how much it’s capable of with a bit of tweaking. Luckily, there’s a very well-maintained YAML template for the device that makes it as usable as the HAVPE after setup.

After a week of using these for lights, switches, timers, reminders, weather, and a few custom routines, I’ve found them reliable enough for everyday use — they can be a bit finicky, but so can Alexa.

The one big limitation for me is media playback. One of the main things I still use Alexa for is playing music and podcasts, and this functionality just isn’t there yet. The devices can technically play media from another device, but there is no voice searching for artists or songs. Hopefully, that part matures soon because, in just about every other way, this voice assistant setup is more flexible and powerful than what I had before.

I’ve seen a lot of people saying Home Assistant Voice isn’t quite ready for prime time—and they’re right—but that hasn’t stopped me from already replacing one of my Echo devices with this setup. If the project keeps heading in this direction, I look forward to replacing all of them — doing this has shown me it’s possible.

1103
 
 

The original was posted on /r/homeassistant by /u/worldsaway2024 on 2025-04-07 02:59:34+00:00.


I've just started out in Home Assistant and honestly it's pretty overwhelming so far.. lol.. and I work in IT! I'm especially hitting bumps adding things to my dashboard via the configuration.yaml file (e.g., a Hue-like light card) and running into all kinds of problems, with errors about the entities list being incorrectly written, and the card doesn't work for me at all, yet.

Which got me thinking.. I know it's a steep learning curve in the beginning. Where did you all start, so that you became experts at this? I wish there were a definitive guide or a much more user-friendly way to do everything.
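For the entities-list errors: a standard Lovelace entities card expects a YAML list under `entities:`, where each item is either a bare entity ID or a mapping with an `entity:` key. A minimal valid sketch (the entity IDs below are placeholders):

```yaml
type: entities
title: Living Room
entities:
  - light.living_room_lamp            # shorthand: bare entity ID
  - entity: light.living_room_strip   # mapping form, allows extra options
    name: LED Strip
```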

1104
 
 

The original was posted on /r/homeassistant by /u/brinkre on 2025-04-06 22:07:18+00:00.

1105
 
 

The original was posted on /r/homeassistant by /u/Khaaaaannnn on 2025-04-06 17:15:26+00:00.


I was just sitting here and she goes "I changed the bedroom light icons FYI". She looked a bit confused when I looked up and just said "I love you lol". Next she asked if I could get the smart litter boxes added to it 😂 Is this the promised land?

1106
 
 

The original was posted on /r/homeassistant by /u/RinShimizu on 2025-04-06 16:48:28+00:00.

1107
 
 

The original was posted on /r/homeassistant by /u/Simple-Ad2087 on 2025-04-06 11:12:18+00:00.


Hey everyone,

I've been experimenting with Home Assistant's new voice assistant features and I'm curious how usable it really is in everyday life. So far I've only tried it in the phone app...

My main question: What hardware are you using to talk to your Home Assistant throughout the house? I'm looking for solutions that are reliable and practical for regular use—not just for testing.

Also, how well does the interaction work for you? Is the voice recognition accurate enough? How natural does the conversation feel?

Personally, I find the current preview hardware a bit underwhelming in terms of design and performance. I can't really imagine placing one in every room yet. But maybe someone has already found a better setup?

Curious to hear your experience.

Oh, and by the way, what next steps are you awaiting? Will ChatGPT's voice model be something you can integrate into HA soon? For even more natural conversations.

1108
 
 

The original was posted on /r/homeassistant by /u/lcopello on 2025-04-06 13:08:10+00:00.

1109
 
 

The original was posted on /r/homeassistant by /u/prashnts on 2025-04-06 09:02:21+00:00.

1110
 
 

The original was posted on /r/homeassistant by /u/Yoel-is-my-ign on 2025-04-06 07:23:49+00:00.


Hi, as the title says, I wanted to show off my dashboard and maybe give some ideas to others :) Feedback appreciated! And many thanks to this awesome community!

1111
 
 

The original was posted on /r/homeassistant by /u/First-Dependent-450 on 2025-04-06 08:06:58+00:00.


Okay, so I did a small experiment at home recently. Mosquitoes have always been an issue, and we usually keep those liquid repellents plugged in 24x7. Realized the bottle was emptying every 5-6 days. Crazy inefficient, right?

So I bought a cheap ₹700 smart plug. Scheduled it to run exactly one hour at sunrise and sunset—basically peak mosquito time. Result?

  • Repellent now lasts almost 20 days instead of 5 days.
  • The house no longer smells like a chemical factory 24/7.
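The schedule described above can be expressed as a single Home Assistant automation (the switch entity ID is a placeholder for whatever the smart plug exposes):

```yaml
# Sketch: run the repellent plug for one hour at sunrise and at sunset.
alias: "Mosquito repellent hour"
trigger:
  - platform: sun
    event: sunrise
  - platform: sun
    event: sunset
action:
  - service: switch.turn_on
    target:
      entity_id: switch.repellent_plug
  - delay: "01:00:00"
  - service: switch.turn_off
    target:
      entity_id: switch.repellent_plug
mode: single
```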

But here’s something interesting that happened: my parents, who usually aren't impressed by any "tech stuff," actually got curious about this setup. Mom asked me yesterday, "Beta, can this kind of thing also automatically switch off the geyser? We always forget and leave it on."

Funny how small tech experiments spark bigger family discussions.

Curious if others here have tried similar "unusual" automations at home? And did it lead to unexpected conversations or solutions?

1112
 
 

The original was posted on /r/homeassistant by /u/sfortis on 2025-04-06 05:23:22+00:00.

1113
 
 

The original was posted on /r/homeassistant by /u/oI-Io on 2025-04-06 01:14:04+00:00.

1114
 
 

The original was posted on /r/homeassistant by /u/TrisolaranPrinceps- on 2025-04-05 23:47:50+00:00.

1115
 
 

The original was posted on /r/homeassistant by /u/sportymcbasketball on 2025-04-05 20:27:48+00:00.

1116
 
 

The original was posted on /r/homeassistant by /u/MisterGoodDeal on 2025-04-05 18:47:59+00:00.

1117
 
 

The original was posted on /r/homeassistant by /u/Chiccocarone on 2025-04-05 18:23:25+00:00.


Currently my music setup consists of some snapclients all around my house, all fed by one source: a virtualized instance of Plexamp. I was looking for something more, so I just tried Music Assistant, using its built-in Snapserver instead of my external one, plus the Plex integration, and I'm blown away by how many features there are and how well they're integrated together. This makes Music Assistant + Snapcast feel and work like a proper multi-room setup. I'm so excited to start using this more, and it's going to change how I listen to music.

Also, on a side note: since I noticed the Plex integration no longer has a maintainer, I started looking at it to possibly fix some issues (like the player not showing in Plex). Plex is my only music source, so I'm excited to improve it so that others can have an even better experience with it too. Seriously, if you haven't tried Music Assistant, try it. It's really good.

1118
 
 

The original was posted on /r/homeassistant by /u/WatchNovis on 2025-04-05 15:21:58+00:00.

1119
 
 

The original was posted on /r/homeassistant by /u/flyize on 2025-04-05 14:51:50+00:00.


I mean, assuming that exists. I really don't want to have a space heater with a 64 pin molex connector attached to it, if I don't have to.

1120
 
 

The original was posted on /r/homeassistant by /u/techantics on 2025-04-05 14:14:00+00:00.

1121
 
 

The original was posted on /r/homeassistant by /u/lbpz on 2025-04-05 11:07:50+00:00.

1122
 
 

The original was posted on /r/homeassistant by /u/Jay_Skye on 2025-04-04 23:14:43+00:00.


[UPDATED CODE]

Below is the full tutorial for setting up a back-and-forth chat with your LLM via actionable notifications, using the Ollama integration. First you'll create a script to start the conversation, then an automation to handle replies, showing how to include a conversation ID to maintain context.

Each time you run the “Start LLM Chat” script, a new conversation ID is created—resetting the context. This conversation ID is stored in the input_text helper and used by both the start script and the reply-handling automation.

Step 0: Create an Input Text Helper for the Conversation ID

  1. Navigate to Helpers: Go to Settings > Devices & Services > Helpers.
  2. Add a New Helper: Click + Add Helper and choose Text.
  3. Configure the Helper:
    • Name: Conversation ID
    • Entity ID: (it will be created as input_text.conversation_id)
  4. Save the Helper.

Step 1: Create the Script for Starting the Chat (with Dynamic Conversation ID)

This script does the following:
  • Dynamically generates a unique conversation ID using the current timestamp.
  • Stores the conversation ID in the input_text helper.
  • Starts the chat by calling the LLM service using that conversation ID.
  • Sends an actionable notification to your iPhone.

Path: Settings > Automations & Scenes > Scripts > + Add Script

Copy and paste the following YAML into the script editor:

```yaml
alias: "Start LLM Chat"
mode: single
sequence:
  # Step 1: Generate and store a unique conversation ID.
  - service: input_text.set_value
    data:
      entity_id: input_text.conversation_id
      value: "conversation_{{ as_timestamp(now()) | int }}"
  # Step 2: Start the conversation using the generated conversation ID.
  - service: conversation.process
    data:
      agent_id: conversation.llama3_2_1b
      text: >
        Let's start a chat! What's on your mind today?
        Use emojis if you'd like!
      conversation_id: "{{ states('input_text.conversation_id') }}"
    response_variable: ai_response
  # Step 3: Send a notification to your iPhone with a reply action.
  - service: notify.mobile_app_iphone_16_pro
    data:
      message: "{{ ai_response.response.speech.plain.speech }}"
      data:
        actions:
          - action: "REPLY_CHAT_1"
            title: "Reply"
            behavior: textInput
            textInputButtonTitle: "Send"
            textInputPlaceholder: "Type your reply here..."
```

Save the script with a name like "Start LLM Chat."

Step 2: Create the Automation for Handling Replies (Using the Stored Conversation ID)

This automation triggers when you reply to the notification. It:
  • Retrieves the conversation ID from the input_text helper.
  • Sends your reply to the LLM using the same conversation ID (maintaining context).
  • Sends a new notification with the LLM's reply.

Path: Settings > Automations & Scenes > Automations > + Add Automation > Start with an empty automation > Edit in YAML

Copy and paste the following YAML into the automation editor:

```yaml
alias: "Handle LLM Chat Reply"
mode: single
trigger:
  - platform: event
    event_type: mobile_app_notification_action
    event_data:
      action: "REPLY_CHAT_1"
action:
  - service: conversation.process
    data:
      agent_id: conversation.llama3_2_1b
      text: "{{ trigger.event.data.reply_text }}"
      conversation_id: "{{ states('input_text.conversation_id') }}"
    response_variable: ai_reply
  - service: notify.mobile_app_iphone_16_pro
    data:
      message: "{{ ai_reply.response.speech.plain.speech }}"
      data:
        actions:
          - action: "REPLY_CHAT_1"
            title: "Reply"
            behavior: textInput
            textInputButtonTitle: "Send"
            textInputPlaceholder: "Type your reply here..."
```

Save the automation with a name like "Handle LLM Chat Reply."

Step 3: Testing the Flow

  1. Trigger the Script:
    • Go to Settings > Automations & Scenes > Scripts.
    • Find "Start LLM Chat" and click Run.
    • A unique conversation ID is generated and stored (e.g., conversation_1678901234).
  2. Reply to the Notification:
    • A notification appears on your iPhone with a reply action.
    • Tap Reply, type your message, and hit Send.
  3. Observe the Conversation:
    • The automation picks up your reply, sends it to the LLM using the stored conversation ID, and sends the response back to your iPhone.
    • As long as you don't run the start script again, the context is maintained via the same conversation ID.
  4. Resetting the Context:
    • Running the "Start LLM Chat" script again generates a new conversation ID, starting a fresh conversation context.

Path Summary
  • Create the Input Text Helper: Settings > Devices & Services > Helpers > + Add Helper (choose Text)
  • Create the Script: Settings > Automations & Scenes > Scripts > + Add Script
  • Create the Automation: Settings > Automations & Scenes > Automations > + Add Automation > Start with an empty automation > Edit in YAML

This dynamic setup ensures that every time you start a new chat, a unique conversation ID is generated and stored, resetting the conversation context. Subsequent replies use this ID to maintain continuity.

1123
 
 

The original was posted on /r/homeassistant by /u/syararsa on 2025-04-05 04:42:35+00:00.


Some days I swear my lights have a better social life than I do - constantly switching on and off like they’re at a rave. I just want to turn the living room lights on, but apparently, they prefer the dark. Honestly, at this point, my lights and I need couples therapy. Anyone else stuck in this endless loop of "nope, not today"? 😅

1124
 
 

The original was posted on /r/homeassistant by /u/swake88 on 2025-04-05 03:12:03+00:00.

1125
 
 

The original was posted on /r/homeassistant by /u/Beginning_Feeling371 on 2025-04-05 02:17:34+00:00.
