Voice assistants are money-losing products. If they can do something like processing the wake words on the device before choosing to send audio to a server, they will. These companies are far too stingy to continuously stream audio to their servers.
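A minimal sketch of that gating pattern, assuming a stub in place of a real detector (names and the detector here are illustrative, not any vendor's actual API): a cheap always-on check runs locally, and audio is only forwarded after a detection.

```python
# Sketch of on-device wake-word gating: ambient audio is scored locally,
# and only the frames after a wake-word hit would ever leave the device.

def stub_wake_word_detector(frame: bytes) -> bool:
    # Placeholder: a real device scores each audio frame with a tiny
    # on-device model and compares against a threshold.
    return frame == b"WAKE"

def process_audio(frames, detect=stub_wake_word_detector):
    """Yield only the frames that follow a wake-word detection."""
    streaming = False
    for frame in frames:
        if not streaming and detect(frame):
            streaming = True  # wake word heard: start forwarding audio
            continue
        if streaming:
            yield frame  # the only audio that would be sent upstream

frames = [b"noise", b"noise", b"WAKE", b"turn", b"on", b"lights"]
sent = list(process_audio(frames))
# Everything before the wake word stays on the device.
```

The point of the design is exactly the cost argument above: the always-on path is a tiny local model, and server bandwidth/compute is only spent after a trigger.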
Back in the day, when everything had to be processed server-side, sure.
Now we have purpose-built hardware helping work this shit out. The devices are basically capable of handling natural language resolution locally. They no longer need to farm the data out. I still don't think they're doing this; we would see it in the open source operating systems. But if they wanted to, any late-model cell phone would be absolutely fine parsing out your interests from your conversations. Hell, I'm sure the contents of this dictation I'm making now are being reduced and added to my social graph at Google.
Someone can correct me if I'm wrong, but Home Assistant is currently struggling with this and is processing everything on your local box because it can't do wake words on the device.
Yeah, what possible use could this company, whose business model relies on surveillance, have for surveilling you?
Exactly. If it is practical and money can be made doing it, then continuous, ambient sound parsing will be the norm. Currently it seems like it’s not a valuable business. When it is valuable to them, they will add a checkbox somewhere in your account to disable it, and most people will not be bothered enough to look for it.
Are they though?
My experiences are much, MUCH different. The amount of compute waste is through the roof, and we shrug at +$50k/m provisioning. You don't even need approvals for that, and you can leave it idle and you MIGHT get a ping from gloudgov after a few months.