I've been using Google's Gemini and it's pretty good at interpreting fucked up or imperfect smart home commands. For example, we have some lights named "Chrimas Lights" and it will turn those on and off when you refer to them as Christmas lights. It can also chain multiple commands without you being overly explicit: you can say "set lights to x%, make them yellow, turn them off in an hour, and set my TV to volume x" and it'll do it, no problem. The old assistant couldn't do anything even close to this.
It's also much faster and processes words as fast as, if not faster than, a human can. From finishing a command to the command being executed seems to be about a tenth of a second, which makes me wonder if it's doing any inferencing on the back end at all. It's one of the best LLM integrations I've seen so far.
Glad to help!
The reason it works is that telecom providers use DNS-based throttling instead of deep packet inspection to selectively limit bandwidth to video sites. They keep a massive list of popular streaming sites (YouTube, AppleTV, Netflix, etc.) and throttle anything on it. When providers say "unlimited 480p video streaming," they actually have no clue what video quality you're watching; they just pick a bandwidth cap low enough that only 480p video plays without buffering.
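For illustration, here's a rough Python sketch of how that kind of DNS-list throttling could work on the provider side. Everything specific here is my assumption, not anything a real telecom has published: the domain list, the ~1.5 Mbps cap, and the `on_dns_response` / `should_forward` hooks are all made up for the example.

```python
import time

# Hypothetical throttle list -- real providers' lists aren't public.
THROTTLED_DOMAINS = {"youtube.com", "netflix.com", "tv.apple.com"}

# My guess at a cap: ~1.5 Mbps is roughly what 480p needs to play
# without buffering.
CAP_BYTES_PER_SEC = 1_500_000 // 8

throttled_ips: set[str] = set()

def matches_list(query_name: str) -> bool:
    """True if the queried name is (or is a subdomain of) a listed domain."""
    name = query_name.rstrip(".").lower()
    return any(name == d or name.endswith("." + d) for d in THROTTLED_DOMAINS)

def on_dns_response(query_name: str, resolved_ip: str) -> None:
    """Run for each answer the ISP's resolver hands out: if the name is
    on the list, remember the IP so later traffic to it gets capped."""
    if matches_list(query_name):
        throttled_ips.add(resolved_ip)

class TokenBucket:
    """Classic token bucket: traffic passes only while tokens remain,
    so sustained throughput settles at `rate` bytes/sec."""
    def __init__(self, rate: float) -> None:
        self.rate = rate
        self.tokens = rate
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # dropped/queued -> the stream sees ~480p bandwidth

bucket = TokenBucket(CAP_BYTES_PER_SEC)

def should_forward(dst_ip: str, nbytes: int) -> bool:
    """Per-packet decision: full speed unless a DNS lookup flagged the IP."""
    return bucket.allow(nbytes) if dst_ip in throttled_ips else True
```

Which is presumably also why switching resolvers or using encrypted DNS can sidestep it: if the ISP's resolver never sees the lookup, the IP never lands in the throttle set.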
They could in theory use network traffic analysis instead, identifying video by its bursty bandwidth pattern (clients fill a playback buffer, go idle, then burst again), but this would be more difficult, more expensive, and extremely prone to false positives.
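To see why that route is so false-positive-prone, here's a toy version of the kind of heuristic I mean. The window size and burst ratio are made-up numbers, not anything a provider actually uses:

```python
from statistics import mean

def looks_like_video(bytes_per_second: list[float],
                     window: int = 10,
                     burst_ratio: float = 4.0) -> bool:
    """Toy heuristic: flag a flow as 'video' if its peak throughput in
    some window is several times that window's average -- the on/off
    pattern a client filling a playback buffer tends to produce."""
    for i in range(0, len(bytes_per_second) - window + 1, window):
        chunk = bytes_per_second[i:i + window]
        avg = mean(chunk)
        if avg > 0 and max(chunk) / avg >= burst_ratio:
            return True
    return False

# A steady file download vs. a bursty video stream... but a game update
# or cloud backup can burst the exact same way, which is where the
# false positives come from.
print(looks_like_video([500.0] * 30))              # False: steady flow
print(looks_like_video([4000.0, 0, 0, 0, 0] * 6))  # True: bursty flow
```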