Good advice but the respirator doesn’t wrap around the ears so in the context of acoustic protection/reflection it doesn’t seem effective. I agree that a shield is conspicuous… I’m just spitballing about total effectiveness in theory. I like the end of the video shared above where the concave parabolic shape of the shield (when reversed) was used to redirect the LRAD sound at the operator. Pretty cool seeing a vulnerability in a system exploited in that way.
Thanks for sharing this video! It’s a great resource. $5 shop headphones and a polycarbonate shield you can make for under $100 beat a $20,000-$100,000 LRAD. I wonder if combining ear plugs, shop headphones, and a shield would be any more effective.
This is in Turkey, not the US. Also not TikTok employees but Telus Digital employees. Shit headline.
Light olive oil on the crust before topping with sauce. Corn meal or a corn meal/salt mixture under the crust helps absorb moisture trapped underneath. Perforated pans also help. I also cook at a higher temperature, 450 degrees Fahrenheit, and make sure to preheat. A brick oven or pizza stone will help with consistent heating.
Predicting Trump and co are going to start saying Wall Street went woke and corporate boards are being paid for their opposition by China and George Soros. All the greatest hits.
I literally just deleted one of these texts before reading this post. Started seeing a lot more of these after creating a LinkedIn account… hmmm.
An alternative to MusicBrainz Picard is Lidarr. No sonic analysis but it can organize and rename your library among other things.
Picard is the better option for music organization though.
Fair points! I’ve been tinkering with Homeassistant for a while now. The community has come very far so I’m hopeful that more advanced features will be added as the user base grows.
Yes, the voice recognition is decent. I mainly wanted a way to control some smart light switches without using a Google device. If you’re looking for something more advanced, I don’t have any experience using this tool for that use-case.
Have you heard of Ollama? It’s an LLM engine that you can run at home. The speed, model size, context length, etc. that you can achieve really depend on your hardware. I’m using a low-mid graphics card and 32GB of RAM and get decent performance. Not lightning quick like ChatGPT but fine for simple tasks.
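If you want to poke at it from a script, Ollama also exposes a local HTTP API. Rough sketch in Python, assuming it’s running on the default port (11434) and you’ve already pulled a model (I’m using "llama3" here as a placeholder, swap in whatever you have):

```python
# Minimal sketch: query a local Ollama instance over its HTTP API.
# Assumes Ollama is running on the default port and that the model named
# below has already been pulled (e.g. `ollama pull llama3`).
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_ollama("Summarize why self-hosting an LLM can be useful."))
```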
Have you heard of Homeassistant? It’s a self-hosted smart home solution that fills a lot of the gaps left by most smart home tech. They’ve recently added and refined support for various voice assistants, some of which run completely on your hardware. The community support for this project is great, and you can also buy their hardware if you don’t feel like tinkering on a Raspberry Pi or VM. The best thing (IMHO) about Homeassistant is that it is FOSS.
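You can also drive it from your own scripts through its REST API. A rough sketch in Python, assuming a long-lived access token created in your profile and a placeholder entity ID ("light.living_room", yours will differ):

```python
# Rough sketch: call a Home Assistant service over its REST API.
# TOKEN and the entity ID below are placeholders -- substitute your own.
import json
import urllib.request

HA_URL = "http://homeassistant.local:8123"  # default hostname/port on most installs
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # placeholder

def call_service(domain: str, service: str, entity_id: str) -> None:
    payload = json.dumps({"entity_id": entity_id}).encode("utf-8")
    req = urllib.request.Request(
        f"{HA_URL}/api/services/{domain}/{service}",
        data=payload,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)

# Turn on a smart light switch without touching a Google device.
call_service("light", "turn_on", "light.living_room")
```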
All good info. A bit outside of this post’s context but helpful nonetheless.