“What would it mean for your business if you could target potential clients who are actively discussing their need for your services in their day-to-day conversations? No, it's not a Black Mirror episode—it's Voice Data, and CMG has the capabilities to use it to your business advantage.”
Do you have any evidence for this claim? Voice recognition and processing is very power- and energy-intensive, so things like power consumption and heat dissipation should be readily measurable, especially if an app like Google's or Amazon's is doing it on an effectively constant basis.
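To make that concrete, here's roughly the kind of measurement I have in mind. This is only a sketch, not a claim about any particular app: it assumes an Android phone with USB debugging enabled and adb on your PATH, and the package name below is a placeholder I made up.

```python
# Rough sketch: does an app draw power while the phone just sits idle?
# Assumes an Android device connected over adb with USB debugging enabled.
import subprocess
import time

PACKAGE = "com.example.suspect_app"  # placeholder package name, purely hypothetical

def adb(*args: str) -> str:
    """Run an adb command and return its stdout."""
    return subprocess.run(
        ["adb", *args], capture_output=True, text=True, check=True
    ).stdout

# Clear the accumulated battery statistics, then leave the phone idle
# (screen off, no interaction) for an hour while people talk near it.
adb("shell", "dumpsys", "batterystats", "--reset")
time.sleep(3600)

# Dump the stats and keep the lines that mention the package under test.
report = adb("shell", "dumpsys", "batterystats")
relevant = [line for line in report.splitlines() if PACKAGE in line]
print("\n".join(relevant) or "no battery usage attributed to the package")
```

In practice you'd want Battery Historian or proper per-uid cross-referencing rather than a naive string match, but even this crude version would show whether "constant listening" costs the power it should.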
You say keywords are being sent to Google — have you sniffed this traffic with Wireshark and analyzed the packets being sent?
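For a first pass you don't even need the full Wireshark GUI. Here's a minimal sketch (Python with scapy rather than Wireshark) of logging every DNS lookup a phone makes while it sits idle; it assumes the phone's traffic is routed through a machine you control, e.g. a laptop acting as its Wi-Fi hotspot, and "wlan0" is just a placeholder interface name.

```python
# Sketch of the capture step: log which domains a phone resolves while it
# sits idle and people talk near it. Requires root privileges to sniff.
from datetime import datetime
from scapy.all import sniff, DNSQR

def log_dns_query(pkt) -> None:
    """Print every DNS question seen on the wire, with a timestamp."""
    if pkt.haslayer(DNSQR):
        name = pkt[DNSQR].qname.decode(errors="replace")
        print(f"{datetime.now().isoformat()}  {name}")

# Capture DNS only; the interesting payloads are TLS-encrypted anyway, but
# the pattern and volume of lookups from an "idle" phone is itself evidence.
sniff(iface="wlan0", filter="udp port 53", prn=log_dns_query, store=False)
```

The payloads will be encrypted, but constant uploads still have to show up as traffic volume and connection patterns, which is exactly what a capture like this (or a Wireshark session) would reveal.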
Phones have dedicated hardware for voice processing, true, but that's when you voluntarily enable it through voice dictation or train it with very specific and optimally chosen key phrases (“Okay Google,” “Hey Siri,” …). Apps that allegedly listen to voice audio around the clock would need to be using this hardware continuously. Do you have any evidence that apps like Google's continuously use this hardware, knowing that it is a power-intensive, heat-inducing process?
I’m not trying to argue in bad faith. As an engineer, I’m having trouble architecting such a surveillance system in my head, one that wouldn’t leave blatantly obvious evidence behind on the device for researchers to collect. These are all questions that naturally came up while I was thinking through the ramifications of your statement. I want to keep an open mind and consider the facts here.
I would assume you're right, considering how much garbage you would collect if you listened to everything.
Now imagine recording people who have not given consent, or the device saving full transcripts of movies playing in the background.
Right. The legality of just recording everything in a room, without any consent, is incredibly dubious at best, so companies aren’t going to risk it. At least with voice dictation or wake words, you have to voluntarily say something or push a button, which signifies your consent to the device recording you.
Another problem with the idea of on-device conversion to keywords that get sent to Google or Amazon: with constant recording from millions of devices, even the text form of those keywords is an infeasible amount of data to process. Discord’s ~200 million active users send almost a billion text messages each day, yet Discord can’t algorithmically detect hate speech from Nazis or pedophiles approaching vulnerable children — it is simply far too much data to process in a timely way.
Amazon has sold some 500 million Echo devices, and that’s just Amazon. From an infrastructure standpoint, how is Amazon supposed to process near-24/7 keyword spam from 500 million Echos every single day? Such a solution would also have to be, in theory, infinitely scalable, since the amount of traffic is directly proportional to the number of devices sold and actively in use.
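Just to put rough numbers on it: the 500 million Echo figure and Discord's ~1 billion daily messages are from the comments above, while the "one keyword event per device per minute" rate is purely my own assumption for illustration.

```python
# Back-of-envelope scale comparison. Only the device and message counts come
# from the thread; the per-device event rate is an assumed illustrative number.
echo_devices = 500_000_000
events_per_device_per_day = 24 * 60          # assume one keyword event per minute
events_per_day = echo_devices * events_per_device_per_day
events_per_second = events_per_day / 86_400

discord_messages_per_day = 1_000_000_000
discord_messages_per_second = discord_messages_per_day / 86_400

print(f"Echo keyword events/day:    {events_per_day:,.0f}")              # 720,000,000,000
print(f"Echo keyword events/second: {events_per_second:,.0f}")           # ~8,333,333
print(f"Discord messages/second:    {discord_messages_per_second:,.0f}") # ~11,574
```

Even under that modest assumption you end up around 8 million events per second sustained, roughly 700 times Discord's entire message firehose, and every one of those events would still need to be stored, indexed, and matched against advertiser keyword lists.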
It’s just technologically infeasible.
Anecdotally, the odds are near zero that it’s a coincidence when my wife and I talk exactly once about buying some obscure thing like electric blinds and targeted ads for them suddenly pop up on our devices.
This happens a lot.
I think you’re being naive if you believe they don’t locally distill our discussions into key words and phrases and transmit those.
As I asked someone else, what is your methodology here? Have you controlled for all the other ways Google could have collected this information, and for confirmation bias? Did your wife search for prices of electric blinds on Google after you both talked about buying them (a very natural and logical next step after discussing the need to purchase them)? A random conversation with your wife, or 20 of them, isn’t exactly a controlled experiment.
Again, I’ll mention that Mitchollow attempted to conduct a genuine controlled experiment to determine whether Google was listening to him through his microphone. What he failed to realize was that he was livestreaming this experiment to YouTube through Google’s servers, the very conspirators he claimed were listening to him, so he was already voluntarily giving them all of the audio data they would ever want! When this was pointed out to him he retracted his statements, admitting the experiment was intrinsically worthless by design.
I’m not being naive, I’m skeptical of these claims and challenging the veracity of the anecdotal evidence presented. To blindly accept yet another anecdote that confirms a bias of ours whilst rejecting differing skeptical opinions would be closed-minded.
That in itself is concerning. Everyone is arguing that no company would invest the resources to run voice-to-text on everything, but Google already does exactly that for YouTube’s automatic captions.
What a hilarious oversight with that experiment lol! He must have felt stupid when it was pointed out to him.
Confirmation bias — people unconsciously prefer to remember things that support their beliefs.
You’re assuming he held the belief before the events, when it could be the events that created the belief.