Posted to uk.d-i-y
Rod Speed
ATTN: Rod Speed: Alexa et al listen in on you.

Commander Kinsey wrote
Vir Campestris wrote
Max Demian wrote
Commander Kinsey wrote

When I worked at a university, we did some studies on voice
recognition. The main thing I found was more processing
power makes a lot of difference. One of the systems we
had, you could see it thinking about it, getting a better
match as it went along. A slow processor can't keep up with
a fast talker. I don't know how good a processor is in Alexas.

Surely Alexa's processing is done centrally by the Amazon server.

It is.

The device uses its microphone array for beamforming,
to pick out the direction of the loudest sound.

It then uses local wakeword recognition for the Alexa keyword.

Only then does it start to stream audio to the server -
starting just _before_ the wakeword incidentally.

It's far too expensive to process everyone's audio on the servers all the time.
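The buffer-locally, wake-on-keyword, stream-with-pre-roll flow described above can be sketched as follows. This is an illustrative toy, not Amazon's actual code: the `detector` and `uplink` callables and the pre-roll buffer size are assumptions made up for the example.

```python
from collections import deque

PREROLL_CHUNKS = 10  # roughly half a second of audio (assumed chunk size)

class WakewordStreamer:
    """Toy sketch of the on-device pipeline: keep a short rolling
    buffer locally, run wakeword detection on-device, and only start
    streaming to the server once the wakeword fires, including the
    buffered audio from just *before* the wakeword."""

    def __init__(self, detector, uplink):
        self.detector = detector   # local wakeword recogniser (assumed callable)
        self.uplink = uplink       # callable that "sends" a chunk upstream
        self.preroll = deque(maxlen=PREROLL_CHUNKS)
        self.streaming = False

    def feed(self, chunk):
        if self.streaming:
            self.uplink(chunk)     # already triggered: live audio goes straight up
        elif self.detector(chunk):
            # Wakeword heard: flush the pre-roll first, so the server
            # receives audio starting just before the wakeword itself.
            for buffered in self.preroll:
                self.uplink(buffered)
            self.preroll.clear()
            self.uplink(chunk)
            self.streaming = True
        else:
            # No wakeword yet: nothing leaves the device; old chunks
            # simply fall out of the fixed-size buffer.
            self.preroll.append(chunk)
```

Note that until `detector` fires, audio only ever circulates in the fixed-size local buffer; that is the property that makes constant server-side processing unnecessary.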

And BTW the mute button is hardware linked to the light.
No software hack can make it look deaf but not be deaf.

This is stupid for two reasons.

Nope, it's the only viable way to do it.

Firstly it's a privacy breach.

So is using google, making or receiving phone
calls, driving anywhere, walking most places,
buying books, buying anything, talking to
anyone, using usenet etc etc etc. I don't care.

Secondly why do all that processing in one place?

It isn't all done in one place, it's done in various places.

You don't need much power to do voice recognition
with today's processors, smartphones do it ffs.

But you do need lots of power to answer most questions
and so you might as well use that power to work out
what the person meant in context too even when
doing something simple like setting a timer etc.