Some random company claiming this capability without any further evidence should probably be treated with some level of scrutiny.
The part of CMG advertising the capability is CMG Local Solutions. CMG itself is owned by Apollo Global Management and Cox Enterprises, which includes the ISP Cox Communications. CMG operates a wide array of local news television and radio stations.
Cox Enterprises isn’t some random company. It’s one of the largest privately owned companies in the US. They are somewhat capable of doing things like this.
Having experience with Cox Enterprises, I can say it's just a massive amalgamation of disparate acquisitions that have never been remotely brought together in a meaningful way, so the claim is slightly dubious. This would require much more coordination across entities than I feel is possible with the CMG I knew of pre-pandemic.
Nah, if they hired a dedicated team, it wouldn't matter how divided the rest of the company was. In fact, that team being frantic is probably how we're hearing about it: they needed to advertise their services without looking at the big picture.
Why?
What about modern capitalism makes you optimistic? I know for a fact this is happening. I bought a pair of Bose earbuds. I was pretty excited about them, but they were defective. The app they tried to get me to download required me to sign away permission to "map" my head movements, intercept any sound I actively play through the headphones… AND "passively record any sound around you."
And when I saw that shit, I got right the fuck out of there—even though seeing that shit required me to click through three sub menus and entirely different legal documents, all of which I would’ve agreed to like every other privacy policy: absentmindedly.
After getting right the fuck out of there, I went on their website to contact customer service about the defect. So I opened an SMS chat with customer service—where I was told “replying to this chat is tacit agreement to our CUSTOMER SERVICE PRIVACY POLICY,” which I opened. And initially I was fine because it seemed like it was a different policy just allowing them to record the conversation “for training purposes.” Until I clicked through one, two, three and now FOUR sub menus to find I WOULD’VE AGREED TO THE SAME FUCKING PRIVACY POLICY.
So I fucking called Bose. I wanted to know if I could use these headphones without ever agreeing to the privacy policy. But of course customer service couldn’t even conceive of my question. I asked to get transferred to the legal dept.
Lol of course not. What the fuck was I thinking.
So fuck them, I returned those fuckers as fast as I could.
How often are you digging into the sub-pages and cited clauses of the privacy policies you're agreeing to on a day-to-day basis? Because I will tell you, they were making me sign away the right to ALL of that information, and their specific info on how they were using it (a different sub-contract) was pretty lax on who they could share it with.
I fully believe this has been happening WAY longer than just recently. Capitalism is trading on our data in the most invasive ways imaginable. The spying and capabilities have reached dystopian levels. How long ago did those CIA leaks come out about smart TVs being used to eavesdrop? That was like 2014. Ten goddamn years ago.
Why?
Hitchens’s razor, for one. Something sounding plausible just because late stage capitalism is an ever-growing cancerous beast doesn’t mean anything for veracity and objective truth.
It’s very much the same as the idea that crystals can heal you and cure you of cancer, psychics exist and exhibit quantum telepathy, and doctors are lying to you to scam you of your hard earned money and you should instead use Vitamin C to cure COVID-19. Does that all sound stupid to you? If it does, just know your same arguments are being used to persuade other less fortunate folks into buying crystals, tarot cards, and Vitamin C pills in the hopes of improving their lives.
All of these things are sold to these people under the pretense that capitalism is lying to you, governments are lying to you, big pharma is lying to you, and they’re all colluding to steal your identity, personal information, and scam you of your money.
The ability to reason using empirical evidence, and not what makes us feel good or bad inside, is what allows our society to even function in the first place.
But isn't that just some other logical fallacy? I don't have anything to cite, but a lot of shit is being sold to people under the pretense of religion. It doesn't discredit the value it brings religious people. Or the people that abuse faith to swindle poor people out of their money—what's it called? The prosperity gospel or some shit? The whole "tithing brings you closer to god" thing, where those incredibly wealthy televangelists see the opportunity of "you just have to have faith / not having faith in me is spitting in god's eye" and abuse it. Do televangelists discredit all religion?
I mean, I'm an atheist myself, but I've read studies from sociologists saying the population's increasing loss of faith does have negative effects on overall contentedness, hopefulness, and community. Saying "well, televangelists exist, so just know your faith in god is being used to swindle poor people" doesn't get you anywhere. You can't discredit everything having to do with a concept by finding the people taking advantage of it. People find a way to take advantage of every single thing.
I can’t discredit the concept of using phones because the concept of calling someone is being abused to steal old people’s personal info.
And, I mean, what lines are we even drawing here? It’s WELL established that data miners, data trading, invasive permissions signed away in privacy policies for the purpose of packaging and reselling, invasive domestic spying programs…these things all exist and have existed for a long time. My point is…I’m against it? I’m not drawing some insane conclusion about some conspiracy—just because there is a nuanced connection between being wary of our data being stolen and the insane conspiracy theories that the unknown aspects of that problem spawn, doesn’t mean that every person concerned with the loss of privacy is responsible for the extreme end of the spectrum.
That’s the problem I have with what you’re saying—you’re acting like there is no nuance. Because there is well-established reason for concern regarding privacy. And jumping to unfounded conclusions is almost a natural response to any new information in the internet age.
COVID denialism, illuminati, etc. is wariness brought to an illogical extreme. The existence of that phenomenon should NOT discredit any reasonable person concerned about privacy.
Remember brexit? Remember trump? Both of those world events came about from a relatively unknown industry that was exposed after the fact. And those invasive data profiling businesses didn’t go under. They changed their names.
The Edward Snowden revelations were over a decade ago. I’d argue that assuming there is no cause for concern is beyond naive.
And you’re likening crystals and telepathy to “doctors have a profit motive?” Sure, there is an illogical extreme to the information that big pharmaceutical companies have a stranglehold on the medical field and corrupt treatment by prioritizing profits—look at the opioid crisis, look at the entire concept of pharmaceutical reps and commercials for prescription drugs.
These things alone are the concern. Just because they can and do breed extreme ideas with no basis in reality doesn’t justify discrediting the concept itself.
I get it, unfounded conclusions are generally disagreeable. But "our privacy is disappearing" isn't an unfounded conclusion. I'm saying I've read the privacy policy that asked me to sign away every scrap of privacy the product could possibly have invaded. Conspiracy theorists don't make that untrue.
“Our privacy is disappearing” is a valid concern.
"Megacorporations are conspiring to harvest advertising data from millions of consumers through the continuous, unadulterated processing of recorded audio, recorded without their consent" is, well, a conspiracy theory.
There is no physical and empirical evidence that suggests this. I’ve asked multiple times in this post for direct empirical evidence of advertising companies hijacking consumer devices to record you without your consent, explaining why it should be easy and trivial to detect if it were the case. All I’ve gotten so far was moving the goalposts, fear mongering about late-stage capitalism, pre-emptive special pleading, “well the government said it was happening with some other tech (even though we’re not supposed to trust the government)” and anecdotes.
I’ve challenged the objectivity of the anecdotes presented to me, because “my wife and I talked about buying electric blinds in the car and suddenly we got ads for electric blinds” is not scientific. Because I’m interested in the core, objective truth of the situation, not someone’s over-aggrandized and biased interpretation of it.
This is the second time someone has called me “naive.” Critical thinking is not naive: it forms the literal cornerstone of our modern society. To imply otherwise is the same type of dismissive thinking used to perpetuate these conspiracies — from companies listening to your every word, to crystals healing you, to doctors scamming you via cancer treatments.
You are right that there is concern for privacy. When it reaches the point of living in abject anxiety and fear of every electronic device you will ever own in the future because of an irrational and frankly schizotypal belief that they’re all listening to you… that’s simply not healthy for the mind. That is wariness brought to an illogical extreme.
I got over that fear so long ago when I sat down and actually thought about the practicality of the whole thing, and I’m glad that I have a healthier state of mind because of it. Meanwhile, this thinking continues to prevail in the privacy “community” and be parroted by major figureheads and “leaders.”
What this community needs is actual accountability to thoroughly scrutinize and dismantle bullshit beliefs, not fostering even more paranoia. That’s the line I draw.
I mean, I get what you're saying. I'm not the person that was claiming this 100% without a doubt exists. I was just talking about my experience of nearly signing away the right for them to record surreptitiously at any time. I don't know exactly what they're looking for. And you're right, unending recording is absolutely not happening. But signing away the right for them to record however they want in the future is just as bad.
I live alone. If they were recording all the time it'd definitely be one huge, mostly silent file of me sometimes making noises at my cat, plus music or TV sounds. It'd be pointless. I'm not claiming they're recording nonstop. It wasn't me that said it. But my signing away their right to do so is 100% problematic. They don't have to be recording all the time FOR it to be problematic. I'm not claiming they're using unique, multi-word phrases to wake up/initiate recording to sell me blinds.
But how many times have we heard that law enforcement has gotten warrantless access to customers' data through companies? It doesn't stop until it's exposed—and even then I do not doubt that it continues after the public outcry has died with the news cycle. I mean, just this week we heard about pharmacies just handing out medical records whenever asked. The cops have been treating private enterprise as a vendor and are customers in the data-trading market. So they're warrantlessly accessing all that really weird, specific private data. They're Clearview AI's customers for facial recognition data.
It's not at all illogical to read the privacy policy, see I'm signing away the right for them to record at any time, look at articles like these, and have cause for concern. I get it, you're saying we would know immediately if we were being recorded based on empirical evidence in our data usage. But what I'm saying is the stars are aligning in troubling ways. I'm not claiming constant surveillance. I'm saying we are signing away all rights to any privacy, data mining and trading is a massive industry that exists and is abused by law enforcement, law enforcement itself operates in super problematic ways, and capitalism has bred vampiric companies that are extracting as much money as they can from our increasingly free-flowing data.
My concern is broad and overarching. I'm not claiming constant recording. You might be confusing my conversation with another you've had ITT. But I'm 100% uncomfortable signing away those rights, and I'm sure we are headed for much worse. I'm inclined to take part in the pearl clutching and fear mongering (yes, I know these two phrases have negative connotations) when articles like this are discussed, because we are UNDER-alarmed about the loss of our privacy. So I say we DO get people riled up over this, because we've let WAY TOO MUCH slide for way too long. We need to be getting our collective ire up over the loss of privacy, and if we need to use unfounded claims of the POSSIBILITY that they could be doing this AT THE SAME TIME that we're signing away all rights to privacy, then fine. Set off the fire alarm for the noxious fart that is the unfounded claim in this article.
Because we desperately need to do something.
That’s a fair stance to have. I agree that the general trend of privacy violations across all industries is concerning, and it’s reasonable to extrapolate that it’s going to get worse. At the same time, it’s important to gauge what is presently possible in order for these extrapolations to be reasonable, so we can appropriately prepare for what these advertising corporations would do next.
For example, I think it's very likely that the government and megacorporations will collude further to harvest as much personal data and metadata as possible in the name of "national security" — see the revelation that the government gag-ordered Google and Apple to keep hush about the harvesting of metadata from push notifications. I don't think, even with the advancements in AI, that smart speaker and phone companies will deploy a dystopian, horrifying solution of mass surveillance at a scale that would make even the CCP blush. Maybe it would be possible within the next 50 years, but not now, given how expensive AI software and hardware currently is, and it especially wasn't possible in the past.
In principle, I do agree that riling up people through outrageous claims of privacy violations is a good thing purely to spread the message, but I think the strongest weapon we have for actual change is legal precedent. We need a court to strictly and firmly tell these companies, and companies in the future, and government agencies looking to infringe upon our rights, that harvesting the private, sensitive information of its users without consent is objectively wrong. A court case where the factual basis of the situation is dubious at best (for example, the context of this whole “marketing company is listening to you” claim is confusing and questionable) isn’t going to help us here, because these companies with handsomely-paid lawyers are just going to say “well, that’s not what the situation factually is, it’s __.”
Why waste the effort? That which can be asserted without evidence, can be dismissed without evidence.
“Nah I’ve already got 4 tin-foil hats on and I’m destroying anything made after the 1950s right now. Kids included, they are microchipped with the vaccines. It’s okay because I’ll plead insanity.” -way too many people
Your optimism about capitalism is tragic.
Do people seriously still think this is a thing?
Literally anyone can run the basic numbers on the bandwidth that would be involved. There are two options:
- They stream the audio out to their own servers, which process it there. The bandwidth involved would be INSTANTLY obvious, as streaming audio out is non-trivial, and anyone can pop open their phone to monitor their network usage. You'd hit your data limit in 1-2 days.
- They have the app always on and listening for "wakewords", which then trigger the recording, and only then does it stream audio out. Wakewords, plural, is doing a LOT of heavy lifting here. Just one single wakeword takes a tremendous amount of training and money, and if they wanted the countless number of them that would be required for what people are claiming? We're talking a LOT of money. But that's not all: running that sort of program is extremely resource intensive and, once again, you can monitor your phone's resource usage; you'd see the app at the top, burning through your battery like no tomorrow. Android and iPhone both have notifications to inform you if a specific app is using a lot of battery power and will show you this sort of indicator. You'd once again instantly notice such an app running.
I think a big part of this misunderstanding comes from the fact that Alexa/Google devices make detecting their wakewords seem so small and trivial.
What people don't know, though, is that Alexa / Google Home devices have an entire dedicated board with its own dedicated processor JUST for detecting their ONE wake word, and not only that, they explicitly chose a phrase that is easy to listen for.
“Okay Google” and “Hey Alexa” have a non-trivial amount of engineering baked into making sure they are distinct and less likely to get mistaken for other words, and even despite that they have false positives constantly.
If that's the amount of resources involved for just one wake word/phrase, you have to understand that targeted marketing would require hundreds of times that. It's not viable for your phone to do it 24/7 without also doubling as a hand warmer in your pocket all day long.
The point of OK Google is to start listening for commands, so it needs to be really good and accurate. Whereas, the point of fluffy blanket is to show you an ad for fluffy blankets, so it can be poorly trained and wildly inaccurate. It wouldn’t take that much money to train a model to listen for some ad keywords and be just accurate enough to get a return on investment.
(I’m not saying they are monitoring you, just that it would probably be a lot less expensive than you think.)
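To put it in toy numbers (every figure below is made up purely to illustrate the point, not real ad data):

```python
# Toy ROI arithmetic for a deliberately sloppy keyword spotter.
# All of these numbers are assumptions for illustration only.
impressions     = 1_000_000   # ads shown to people the spotter flagged
baseline_ctr    = 0.002       # click-through rate of untargeted ads
targeted_ctr    = 0.004       # rate when the (inaccurate) spotter happened to be right
value_per_click = 0.50        # dollars earned per click

baseline_revenue = impressions * baseline_ctr * value_per_click   # $1,000
targeted_revenue = impressions * targeted_ctr * value_per_click   # $2,000
print(f"lift from a sloppy model: ${targeted_revenue - baseline_revenue:,.0f}")
```

Even a model that's wrong most of the time only has to beat random ads to pay for itself.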
the point of fluffy blanket is to show you an ad for fluffy blankets, so it can be poorly trained and wildly inaccurate
Doesn’t that defeat the whole purpose of listening for key ad words?
“We’ll come up with dedicated hardware to detect if someone says “fluffy blankets,” but it’s really poor at that, so if someone is talking about cat food then our processing will detect “cat food” as “fluffy blanket,” and serve them an ad for fluffy blankets. Oh wait… that means there’s hardly any correlation between what a person says and what ads we serve them. Why don’t we just serve them randomized ads? Why bother with advanced technology to listen to them in the first place?”
I was about to write this but you took the words right out of my mouth, so I will just write “this ^”
I think what the person is saying is that if you aren't listening for keywords to fire up your smart speaker, but instead are just 'bugging' a home, you don't need much in the way of hardware in the consumer's home.
Assuming you aren't consuming massive amounts of data to transmit the audio and making a fuss on someone's home network, this can be done relatively unnoticed, or the traffic can be hidden among other traffic. A sketchy device maker (or, more likely, an app developer) can bug someone's home or device with sketchy EULAs and murky device permissions. Then they send the audio to their own servers, where they process it, extract keywords, and sell the metadata for ad targeting.
Advertising companies already misrepresent the efficacy of their ads, while marketers have fully drunk the Kool-Aid, leading to advertisers actually scamming marketers. (There was actually a better article on this, but I couldn't find it.) I'm not sure the accuracy of the speech interpretation would matter to them.
I would not be surprised to learn that advertisers are doing legally questionable things to sell misrepresented advertising services… but I also wouldn't be surprised to learn that an advertising company is misrepresenting its capabilities to commit a little (more) light fraud against marketers. Sigh. Yay capitalism. We're all fucked.
This, along with much else that's been pointed out, makes the whole "devices capturing audio to process keywords for ads" idea seem unlikely. But one thing worth pointing out is that people do sell bad products that barely do, or just plain don't do, what they told their customers they would. Someone could sell a "listen for keywords to target ads" solution to interested advertisers that just really sucks and is super shit at its job. From the device user's standpoint it'd be a small comfort to know the device was listening to your conversations but also really sucked at it and often thought you were saying something totally different from what you said, but I'd still be greatly dismayed that they were attempting, albeit poorly, to listen to my conversations.
If it’s random sampled no one would notice. “Oh my battery ran low today.” Tomorrow it’s fine.
Google used to (probably still does) A/B test Play services that caused battery drain. You never knew if something was wrong or you were the unlucky chosen one out of 1000 that day.
Bandwidth for voice is tiny. The AMR-WB standard is 6.6 kbit/s with voice detection. So it's only sending ~6.6 kbit/s, and only while it detects voice.
Given that a single webpage today averages 2 megabytes, an additional 825 bytes of data each second could easily go unnoticed.
It’s insane people still believe voice takes up heaps of bandwidth.
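If you want to check that arithmetic yourself, here's the back-of-the-envelope version (the hours-of-detected-speech figure is just an assumption for illustration):

```python
# Back-of-the-envelope data usage for AMR-WB voice at 6.6 kbit/s with
# voice activity detection, per the figures above.
bitrate_bps = 6600
bytes_per_second = bitrate_bps / 8        # 825 bytes/s, as quoted above

hours_of_detected_speech_per_day = 2      # assumption, purely for illustration
seconds = hours_of_detected_speech_per_day * 3600

per_day = bytes_per_second * seconds
print(f"{per_day / 1e6:.1f} MB/day")                                   # ~5.9 MB/day
print(f"{per_day * 30 / 1e6:.0f} MB/month")                            # ~178 MB/month
print(f"{bytes_per_second * 86400 / 1e6:.0f} MB/day if it never stopped")  # ~71 MB/day
```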
Even more so, on-device you could just do speech-to-text and send the text back home. That's like… no data. Undetectable.
Even WITH voice, like you said, fuckin tiny amounts of data for today’s tech.
This is why I’ll never have “smart” anything in my house.
This is simply not true. Low-bitrate compressed audio is a small amount of bandwidth you would never notice on home internet. And recognizing wakewords? Tiny, tiny amounts of processing. Google's design is for accuracy and control; a marketing team cares nothing about that. They'll use an algorithm that just grabs everything.
Yes, this would be battery intensive on phones when not plugged in. But triggering on power, via CarPlay, or on smart speakers is trivial.
I’m still skeptical, but not because of this.
Edit: For creds: I'm a developer specializing in algorithm creation and have previously rolled my own hardware and branch for Mycroft.
FYI, the Snapdragon 855 from 2019 could detect two wake words at the same time. With the exponential increase in NPU power since then, it wouldn't be shocking if newer ones can detect hundreds.
But what about a car? Cars are as smart as smartphones now, and you certainly wouldn't notice the small amount of power needed to collect and transfer data compared to driving the car. Some car manufacturer TOS agreements seemingly admit that they collect and use your in-car conversations (including any passengers; they claim it's your duty to inform passengers that they're being recorded). Almost all the manufacturers are equally bad for privacy and data collection.
Mozilla details what data each car collects here.
What you're saying makes sense, but I can't believe nobody has brought up the fact that a lot of our phones are constantly listening for music and displaying the song details on our lock screen. That all happens without the little green microphone-active light, and with minimal battery and bandwidth consumption.
I know next to nothing about the technology involved, but it doesn’t seem like it’s very far from listening for advertising keywords.
That uses a similar approach to the wake word technology, but slightly differently applied.
I am not a computer or ML scientist but this is the gist of how it was explained to me:
Your smartphone has a low-powered chip connected to your microphone when the phone is idle, running a local AI model (this is how it works offline) that asks one thing: is this music or is it not music? Once that model decides it's music, it wakes up the main CPU, which looks up a snippet of that audio against a database of other audio snippets that correspond to popular/likely songs, and then it displays a song match.
To answer your questions about how it’s different:
- The song ID happens with system-level access, so it doesn't go through the normal audio permission system, and thus wouldn't trigger the microphone access notification.
- Because it is using a low-powered detection system rather than always having the microphone on, it can run with much less battery usage.
- As I understand it, it's a lot easier to tell if audio seems like it's music than whether it's a specific intelligible word that you may or may not be looking for, which you then have to process into language that's linked to metadata, etc. etc.
- The initial size of the database is fairly minor, as what is downloaded is a selection of audio patterns that the audio snippet is compared against. This database gets rotated over time, and the song ID apps often also allow you to send your audio snippet to the online megadatabases (Apple's music library / Google's music library) for better matching, but overall the data transfer isn't very noticeable. Searching for arbitrary hot words cannot be nearly as optimized as assistant activations or music detection, especially if it's not built into the system.
And that’s about it…for now.
All of this is built on current knowledge of researchers analysing data traffic, OS functions, ML audio detection, mobile computation capabilities, and traditional mobile assistants. It’s possible that this may change radically in the near future, where arbitrary audio detection/collection somehow becomes much cheaper computationally, or generative AI makes it easy to extrapolate conversations from low quality audio snippets, or something else I don’t know yet.
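If it helps, here's a toy sketch of that two-stage flow. It's purely illustrative: the energy check and hashed "fingerprint" below are crude stand-ins for the real low-power classifier and acoustic fingerprinting, and none of the names are real APIs.

```python
# Toy sketch of the two-stage song-ID pipeline described above.
# The gate and fingerprint are stand-ins, only meant to show the
# control flow: cheap always-on check, expensive lookup rarely.

def low_power_music_gate(samples, threshold=0.1):
    # Stand-in for the low-power "music or not?" model on the DSP.
    return sum(abs(s) for s in samples) / len(samples) > threshold

def fingerprint(samples):
    # Stand-in for an acoustic fingerprint: hash of coarsely quantized audio.
    return hash(tuple(round(s, 1) for s in samples))

def identify_song(samples, local_db):
    if not low_power_music_gate(samples):
        return None                              # main CPU never woken up
    return local_db.get(fingerprint(samples))    # match against the small,
                                                 # periodically rotated local database

local_db = {fingerprint([0.5, -0.4, 0.3, 0.2]): "some popular song"}
print(identify_song([0.5, -0.4, 0.3, 0.2], local_db))  # "some popular song"
print(identify_song([0.0, 0.0, 0.0, 0.0], local_db))   # None: gate never fires
```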
They’ve redirected the page now that it’s getting attention, but here’s the archived version.
I’m very skeptical of their claims, but it’s possible they’ve partnered with some small number of apps so that they can claim that this is technically working.
We already knew this was happening at least a decade ago when people realized why Facebook and Instagram needed unrestricted microphone permissions.
This is why I generally ensure my phone is configured ahead of time to block ads in most cases. I don’t need this garbage on my device.
As for how they could listen? It’s pretty easy.
By waiting until the phone is completely still and potentially on a charger, it can collect a lot of data. Phones typically live on the nightstand by your bed at night, and could be listening intently while charging.
Similarly, it could start listening when it hears extended conversations, simply by sampling the microphone for human speech every x minutes for y minutes. Then it can record snippets, encode them quickly, and upload them for processing. This would be thermally undetectable.
Finally, it could simply start listening in certain situations, like when it detects other devices nearby (via Bluetooth). Then it could simply capture as many small snippets of your conversation as it could.
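Roughly, the logic I'm describing is as simple as this (every function here is a made-up placeholder, not a real platform API; this is a sketch of the idea, not a claim that any specific app does it):

```python
# Hypothetical sketch of the opportunistic approach described above.
# All functions are stand-in stubs; none correspond to real APIs.

def device_is_still():    return True    # stub: would read the accelerometer
def device_is_charging(): return True    # stub: would read battery state
def speech_detected():    return False   # stub: periodic low-power voice check

def should_record_now():
    # Only listen when it's cheap and unlikely to be noticed: the phone
    # is motionless and charging (e.g. on the nightstand), or a periodic
    # check has already heard an ongoing conversation.
    return (device_is_still() and device_is_charging()) or speech_detected()

if should_record_now():
    snippet_seconds = 30                      # short bursts, never 24/7
    approx_bytes = 825 * snippet_seconds      # at ~6.6 kbit/s voice bitrate
    print(f"record ~{approx_bytes / 1024:.0f} kB, encode, upload with the next batch of traffic")
```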
Aren’t they all already listening always? I mean, how else does it hear you say “Ay yo Siri” otherwise?
No.
Both Android and iOS do enforce permissions against applications that have not been granted explicit access to listen constantly.
For example, the Google Assistant is oftentimes a privileged app, and it is allowed to listen. It does so by listening efficiently for one kind of sound: the hotword "OK Google".
Other applications not only have to obtain user permission, but oftentimes that permission is restricted to "While app is in use", meaning the app is on the screen, notifying the user, in the foreground, or recently opened. This permission prevents most abuses of the microphone unless someone is actually using the app.
The phone's processor has the wake word hardcoded, so it's not like an ad company can add a new one on a whim. And it uses passive listening, so it's not recording everything you say; I've seen it compared to sitting in a class and not paying attention until the teacher says your name.
Have you seen this code though? Every time I hear a statement like that, I have to wonder if you’re all just taking their word for it.
I don’t take their word for it, unless they show me that code and prove that it is the code running on all the devices in use.
Do you also personally audit all open source software that you use?
Your rebuttal makes no sense.
The issue with proprietary “smart” assistants is that we can only guess how they work.
Okay so that’s a no, then, thanks!
No but I do review code audits that certified professionals publish for things that I use when they are available, and I also don’t use any voice assistants and only use open source smartphone ROMs such as GrapheneOS.
Basically I use the opsec methods available to me to prevent as much of the rampant spying that I can. The last thing I would do is put an open mic to Amazon’s audio harvesting bots in my home because that’s incredibly careless.
No
That’s what I figured!
Thanks for the reply.
There's no way that an app with mic permissions could basically do the same thing and pick up on certain preprogrammed words like Ford or Coke, which could then be parsed by AI and used by advertisers? It certainly seems like that isn't out of the realm of physical possibility, but I'm definitely no expert. Would they have had to pay the OS maker to hardcode it into the OS? Could that be done in an update at a later time?
There’s no way that an app with mic permissions could basically do the same thing and pick up on certain preprogrammed words like Ford or Coke which could then be parsed by AI and used by advertisers?
only if you want the phone to start burning battery and data while displaying the “microphone in use” indicator all the time.
not to mention that the specific phrases have been picked in order to cause as few false positives as possible (which is why you can’t change them yourself), and you can still fool Google Assistant by saying “hey booboo” or “okay boomer”. good luck with making it reliably recognize “Ford”, lol.
Huh, TIL. I figured “if they can do it with one thing they could do it with more than one thing.”
For that I think they use special hardware; that's the reason you can't modify the wake word, and why they still notify you when the voice assistant is disabled. I don't know if this is actually true, or if the companies just hide behind this, or if I'm remembering it incorrectly.
That same hardware couldn't also have a brand added as a code word for ads, like, say, "Pepsi"?
Of course this is possible. Is it practical? Nope. There is already so much data harvested by the likes of Google and Facebook that they can tell what you like, what videos or articles you read, what you share, and in some cases who you talk to. Importing a shit ton of audio data is pointless; they already know what you like.
The sheer amount of audio data that would have to be processed by Google and Amazon every second for every Google Home/Amazon Echo/Facebook Whatever would be a technical and logistical nightmare. It’s far easier for them to wait until you voluntarily give them that data yourself, whether it’s clicking on their ads, searching for specific things in Google/Amazon, and way more slimy methods that they use to track your everyday likes and needs.
you just need to process the audio on the devices and then send keywords to Google etc. it’s technically trivial since most phones already have dedicated hardware for that. your phone listens to activation words all the time, unless you disable it. there is no reason why they can’t also forward anything else it hears as text
Do you have any evidence for this claim? Voice recognition and processing is very power- and energy-intensive, so things like power consumption and heat dissipation should be readily measurable, especially if an app like Google or Amazon is doing it on an effectively constant basis.
Keywords are being sent to Google — have you sniffed this traffic with Wireshark and analyzed the packets being sent?
Phones have dedicated hardware for voice processing, true, but that's when you voluntarily enable it through voice dictation or train it with very specific and optimally chosen key phrases ("Okay Google," "Hey Siri," …). For apps that otherwise allegedly listen to voice audio constantly, they would need to be utilizing this hardware continuously. Do you have any evidence that apps like Google's continuously utilize this hardware (knowing that it is a power-intensive and heat-inducing process)?
I'm not trying to argue in bad faith. As an engineer, I'm having trouble mentally architecting such a surveillance system that would also not leave blatantly obvious evidence behind on the device for researchers to collect. These are all questions that I naturally came up with while thinking through the ramifications of your statement. I want to keep an open mind and consider the facts here.
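For what it's worth, the measurement I keep asking about isn't hard to attempt. A minimal sketch, assuming scapy is installed and you're positioned to capture the device's traffic (e.g. via a mirrored port on your router):

```python
# Tally captured bytes per destination IP over a ten-minute window.
# Sustained audio exfiltration would show up as a steady, outsized
# byte count toward one advertising or cloud host.
from collections import Counter
from scapy.all import sniff, IP

bytes_per_destination = Counter()

def tally(pkt):
    if IP in pkt:
        bytes_per_destination[pkt[IP].dst] += len(pkt)

sniff(prn=tally, store=False, timeout=600)
for dst, total in bytes_per_destination.most_common(10):
    print(f"{dst}: {total} bytes")
```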
I would assume that you are right, considering how much garbage you'd collect if you listened to everything.
Now imagine recording those who have not given consent, or the device saving full scripts of movies.
Right. The legality of just recording everything in a room, without any consent, is already incredibly dubious at best, so companies aren’t going to risk it. At least with voice dictation or wakewords, you need to voluntarily say something or push a button which signifies your consent to the device recording you.
Also, another problem with the idea of on-device conversion to keywords that are sent to Google or Amazon: with constant recording from millions of devices, even text forms of keywords would still be an infeasible amount of data to process. Discord's ~200 million active users send almost a billion text messages each day, yet Discord can't use algorithmic AI to detect hate speech from Nazis or pedophiles approaching vulnerable children; it is simply far too much data to process in a timely manner.
Amazon has sold 500 million Echo devices, and that's just Amazon. From an infrastructure standpoint, how is Amazon supposed to deal with processing near-24/7 keyword spam from 500 million Echo devices every single day? Such a solution would also have to be, in theory, infinitely scalable, as the amount of traffic is directly proportional to the number of devices sold/being actively used.
It’s just technologically infeasible.
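To put rough numbers on that, using the Discord figure above as a yardstick (the per-device keyword rate is an assumption for illustration):

```python
# Rough scale comparison; the per-device rate is a made-up assumption.
echo_devices = 500_000_000                 # figure quoted above
keyword_hits_per_device_per_day = 200      # assumption, purely for illustration
discord_messages_per_day = 1_000_000_000   # ~1 billion, figure quoted above

keyword_events_per_day = echo_devices * keyword_hits_per_device_per_day
print(f"{keyword_events_per_day:,} keyword events/day")              # 100,000,000,000
print(f"~{keyword_events_per_day / discord_messages_per_day:.0f}x Discord's daily message volume")
```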
Anecdotally, the odds are near zero that my wife and I can talk once about maybe buying some obscure thing like electric blinds and suddenly, by coincidence, targeted ads for them somehow pop up on our devices.
This happens a lot.
I think you’re being naive if you believe they don’t locally distill our discussions into key words and phrases and transmit those.
As I asked someone else, what is your methodology here? Have you limited all variables that can cause Google to collect this information and lead to confirmation bias? Did your wife search for prices of electric blinds on Google after you both talked about buying them (a very natural and logical next step after discussing the need to purchase them)? A random conversation, or 20 of them, with your wife isn’t exactly a controlled experiment.
Again, I’ll mention that Mitchollow attempted to conduct a genuine controlled experiment to determine whether Google was listening to him through his microphone. What he failed to realize was that he was livestreaming this experiment to YouTube through Google’s servers, the very conspirators he claimed were listening to him, so he was already voluntarily giving them all of the audio data they would ever want! When this was pointed out to him he retracted his statements, admitting the experiment was intrinsically worthless by design.
I’m not being naive, I’m skeptical of these claims and challenging the veracity of the anecdotal evidence presented. To blindly accept yet another anecdote that confirms a bias of ours whilst rejecting differing skeptical opinions would be closed-minded.
Confirmation bias — people unconsciously prefer remembering things supporting their beliefs.
Ok but third parties have no access to this in the background. My guess is they are buying marketing data from their listed “partners” and making wide claims about how they obtained it. Still a huge breach of privacy though!
I don't know why, given recent impressive developments, but I've always met the idea that this is really happening with heavy skepticism, and I still do. This is definitely the most concrete thing I've ever heard, and I definitely don't doubt companies would do this, I just… I don't know, it's hard to believe they really are.
One reason is it just seems like they'd be absolutely overwhelmed by useless data. It's not like AI is cheap to run, and it'd be so hard to link a captured conversation to a genuine sentiment, then to an ad reaching that person, and then a purchasing decision to that ad. This is scary for sure, but it feels like this is more marketing hype aimed at marketers than a real thing.
Will be watching closely. I feel like this might actually be that bridge too far that the mainstream of society will demand action be taken against if it gets widely adopted and widely known. Even if it technically works and is provably effective to advertisers I think you’d need Google or Amazon to be the ones pulling it off and to have done so silently so we all just kinda assume they’re doing it but don’t know. If a company “starts” offering this service in a way the public can latch on to it would likely cause a massive backlash that would hopefully scupper such plans.
The biggest criticism of the idea of phones always listening and sending that data somewhere is that they would also be listening to other corporations and their meetings. Even if multi-billion-dollar corporations can just waltz over the rights of normal people, other companies would be very interested in knowing this is happening.
Also, I feel like they already know this stuff, so they gain very fucking little by listening in on us. You saw an interesting website two days ago and spent more time on it than normal. Then you meet up with friends who are known to have similar likes as you; why the hell would ad companies not show ads for the same page / item / event to those people? It doesn't matter at all if you mention it or not. Companies already know what products and brands you like; if your friends search for something, obviously they get ads for products that are interesting to their circle of friends. The items / brands / whatever are being talked about because they're interesting to that circle of people, which companies already know.
All you need is a list of advertising keywords. Have the device treat those like wake words just like Alexa and then target ads to the device based on which words it heard most often.
This is but the most simple version, it’s easy to elaborate from there.
Have the device treat those like wake words just like Alexa
You do understand how incredibly difficult this is right?
And people kept telling me I was wrong when I said this.
You still are.
Why? See the reasons above.
I wouldn’t even know because I don’t see ads on my phone or pc.
Copyright © 2023 Cox Media Group, LLC.
Fucking COX, why am I not surprised a fucking ISP like this garbage is behind it.
TIL we have too many cox in this world
I found that gem as well.
Not the wasted impressions!
Not sure about every company lol, but this article is helpful. I had everything off but the driving one, and can confirm I got ads from stuff I mentioned in the car the other day.
What was your methodology? Are you absolutely sure you eliminated all variables that would signal to Google that you needed whatever you were talking about? Maybe you were talking in the car with your wife about buying something, and she decided to look up prices for it on Google, which then triggered their algorithms to associate that thing-of-want with her identity, and then with your identity, since it likely knows you two are married.
Mitchollow tried to demonstrate exactly what you’re claiming with a controlled experiment where he would prove that Google would listen in to him saying “dog toys” without him clicking on or searching anything related to dog toys beforehand. What he failed to realize was that:
- He livestreamed the whole thing to YouTube, the conspirators he claimed were listening to him in the first place, so they were already processing his speech (including him claiming his need for dog toys repeatedly) and likely correlating all of that data to his identity
- He directly clicked on a (very likely coincidental, or due to data collected by #1) ad corresponding to his phrase of choice (“dog toys”), triggering the algorithm to exclusively show him dog toys on all other pages that used AdSense.
After these flaws were pointed out, he admitted the test was effectively worthless and retracted his claims. The point here is it’s important to eliminate all variables that could lead to confirmation bias.
I’ve had other similar stories of friends allegedly receiving ads after saying specific keywords. Probably one of the best ones to demonstrate that this entire notion is silly was an avid Magic: The Gathering player getting surprised that he received MTG ads after talking about MTG to his MTG playing friends. He was spooked out and claimed that Amazon was listening to his everyday speech.
Bro, it’s literally a setting that’s turned on. It ain’t that deep. Just turn it off.
Why didn’t you turn it off then?
It is now turned off 👍
“Unprecedented understanding of consumer behavior”
Scary stuff
And companies responsible for this are cocks?