I was trying to get it to pause Pi-hole on request. I’m using Home Assistant Cloud (Nabu Casa) for speech-to-text, and I’ve also got OpenAI plugged in as a fallback for when it doesn’t recognise a command. The screenshot is from the debug logs I eventually found after struggling to work out why it wasn’t running my automation.
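For context, the automation is roughly along these lines — a simplified sketch with a placeholder entity ID and trigger phrase, using the `pi_hole.disable` service from the official Pi-hole integration:

```yaml
# Simplified sketch: pause Pi-hole via an Assist sentence trigger.
# switch.pi_hole is a placeholder; use your own Pi-hole switch entity.
automation:
  - alias: "Pause Pi-hole on request"
    trigger:
      - platform: conversation
        command:
          - "pause the ad blocker"
    action:
      - service: pi_hole.disable
        target:
          entity_id: switch.pi_hole
        data:
          duration: "00:05:00"  # re-enable after five minutes
```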
I’m using the new Home Assistant Voice Preview. Don’t get me wrong, overall I’m very happy with it for the price point, but for some reason the cloud speech recognition (I believe powered by Google) is very good at understanding me until I start trying to talk about ad blocking.
cough antitrust cough
The leading all but one languages help to me a Lord with Google assistant not your relevant
It was fast though
Maybe it should have thought about it a bit more.
This is why, even though very convenient, you should never integrate your time machine with Home Assistant.
Don’t worry, while I integrated my time machine with Home Assistant, it’s not exposed to the voice assistant so in theory it shouldn’t be able to send me back in time to stop Hitler.
Are we concerned about changing the timeline or something? Why is the goal to not stop Hitler?
Because then we would still have Stalin and the Red Alert scenario. Somehow I believe that would be worse.
Oh shit this could well be a bootstrap paradox where I need to stop Hitler in order for me to be here in the future to go back to stop Hitler!
I’m sorry Dave. I’m afraid I can’t do that.
Reminds me of years ago, when I wanted to play Lush, the radio station from SomaFM.
Literally could not get anything but Rush. And I have zero Rush in my library.
Maybe you should give them a try. Signals is a really approachable starting point, IMO, but 2112 is a better hard sell.
Just don’t listen to Power Windows, Hold Your Fire, or Presto until you’ve heard their hard prog stuff. I like those albums too, but things got a little weird when Geddy went hard on the synths.
This is exactly what I would expect a Rush fan to answer. But seriously, they are great. Sadly I’m too young, so I never had the opportunity to experience them live. Three-hour shows of that quality after 40 years of touring is absolutely amazing.
Make an alternative trigger phrase “stop Hitler”?
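The conversation trigger takes a list of phrases, so in theory you could just keep piling on aliases — a hypothetical sketch:

```yaml
# Hypothetical aliases on the same conversation trigger
trigger:
  - platform: conversation
    command:
      - "pause the ad blocker"
      - "pause pie hole"
      - "stop hitler"
```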
The problem is it seems to be different every time. Here’s my list so far:
Oof, that Australian (not Austrian? ;)) accent seems to be heavy ;)
Maybe I misunderstood but it sounded like you just called me Australian 😲
Sorry, New Zealander ;) As someone who lives directly on the opposite side of the globe (Poland), I don’t hear that much difference in accent ;) https://www.youtube.com/watch?v=XT637TV3y5s
Not much difference!? Aussies are all Feesh and Cheeps and we’re more like Fush and Chups. They are all “Mate, can you get the goon bag out of the esky” and we’re more like “Bro, can ya grab us a brew from the chilly bin?”
And if you ask for a tinnie you’ll get something quite different in NZ vs Australia.
One time I was on a plane and the pilot was Australian. It took quite a while to work out the language he was speaking was English.
Hard to get them mixed up 😋
If you say so ;) Please don’t get too offended, they don’t sound that different from each other :) Here’s one example of why that might be: https://m.youtube.com/watch?v=NRdg1MOYxHo
I’m really sorry you’re experiencing this but it’s at least a little funny how botched speech processing can still be.
I don’t know enough to really help you, but good luck <3
Stop at locker 🤣
Use local, open-source voice recognition and the problems with understanding ad-blocking-related phrases should disappear. 😁
It’s running on a Pi 4, and they highly recommend the cloud option in this case because it’s not very powerful.
But I also speak with a New Zealand accent, and could not get it to understand a single word I said until I connected it to their cloud option.
The OpenAI integration coupled with cloud text-to-speech/speech-to-text has actually been fantastic. It works a lot better than I was expecting based on their warnings of it being very early days. The main problem is that the data from Home Assistant is fed in as a prompt, and so to keep it short it only sends the states. Data stored as attributes isn’t accessible, so I’m making helpers, scripts, and automations for the assistant to trigger in order to output more data.
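For example, a template sensor can lift an attribute into a state of its own, which then shows up in the prompt — the entity and attribute names here are just placeholders:

```yaml
# Placeholder example: surface an attribute as its own entity state
# so the assistant's prompt can include it.
template:
  - sensor:
      - name: "Living Room Target Temperature"
        unit_of_measurement: "°C"
        state: "{{ state_attr('climate.living_room', 'temperature') }}"
```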
To be clear, you can get the same from a local Whisper model.
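If you’ve got something beefier than a Pi 4 to run it on, the Wyoming faster-whisper container is one route — a rough docker-compose sketch, with the model size and paths as assumptions to tweak:

```yaml
# Rough sketch: local speech-to-text via Wyoming faster-whisper.
# Afterwards, point Home Assistant's Wyoming integration at <host>:10300.
services:
  whisper:
    image: rhasspy/wyoming-whisper
    command: --model small-int8 --language en  # pick a model your hardware can handle
    ports:
      - "10300:10300"
    volumes:
      - ./whisper-data:/data  # model cache
    restart: unless-stopped
```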