@jard - eviltoast

Scala compiler engineer for embedded HDLs by profession.

I also trickjump in Quake III Arena as a hobby.

  • 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • That’s a fair stance to have. I agree that the general trend of privacy violations across all industries is concerning, and it’s reasonable to extrapolate that it’s going to get worse. At the same time, it’s important to gauge what is presently possible so that these extrapolations stay grounded, and so we can appropriately prepare for whatever these advertising corporations will do next.

    For example, I think it’s very likely that the government and megacorporations will collude further to harvest as much personal data and metadata as they can in the name of “national security” — see the revelation that the government gag-ordered Google and Apple to keep quiet about the harvesting of metadata from push notifications. I don’t think, however, even with the advancements in AI, that smart speaker and phone companies will deploy a dystopian, horrifying system of mass surveillance at a scale that would make even the CCP blush. Maybe it will be possible within the next 50 years, but not now, given how expensive AI software and hardware currently are, and it certainly wasn’t possible in the past.

    In principle, I do agree that riling people up with outrageous claims of privacy violations is a good way to spread the message, but I think the strongest weapon we have for actual change is legal precedent. We need a court to tell these companies, and the companies and government agencies that come after them, strictly and firmly, that harvesting the private, sensitive information of their users without consent is objectively wrong. A court case where the factual basis of the situation is dubious at best (for example, the context of this whole “marketing company is listening to you” claim is confusing and questionable) isn’t going to help us here, because these companies, with their handsomely paid lawyers, are just going to say “well, that’s not what the situation factually is, it’s __.”




  • “Our privacy is disappearing” is a valid concern.

    “Megacorporations are conspiring to harvest advertising data from millions of consumers through the continuous, unadulterated processing of audio recorded without their consent” is, well, a conspiracy.

    There is no physical or empirical evidence that suggests this. I’ve asked multiple times in this post for direct empirical evidence of advertising companies hijacking consumer devices to record you without your consent, explaining why it should be trivial to detect if it were happening. All I’ve gotten so far is goalpost-moving, fear-mongering about late-stage capitalism, pre-emptive special pleading, “well, the government said it was happening with some other tech (even though we’re not supposed to trust the government),” and anecdotes.

    I’ve challenged the objectivity of the anecdotes presented to me, because “my wife and I talked about buying electric blinds in the car and suddenly we got ads for electric blinds” is not scientific. I’m interested in the core, objective truth of the situation, not someone’s aggrandized and biased interpretation of it.

    This is the second time someone has called me “naive.” Critical thinking is not naive: it forms the very cornerstone of our modern society. To imply otherwise is the same kind of dismissive thinking used to perpetuate these conspiracies — from companies listening to your every word, to crystals healing you, to doctors scamming you with sham cancer treatments.

    You are right that there is cause for concern about privacy. But when it reaches the point of living in abject anxiety and fear of every electronic device you will ever own, because of an irrational and frankly schizotypal belief that they’re all listening to you… that’s simply not healthy for the mind. That is wariness taken to an illogical extreme.

    I got over that fear long ago, when I sat down and actually thought about the practicality of the whole thing, and I’m glad I have a healthier state of mind because of it. Meanwhile, this thinking continues to prevail in the privacy “community” and to be parroted by its major figureheads and “leaders.”

    What this community needs is real accountability, with bullshit beliefs thoroughly scrutinized and dismantled, not the fostering of even more paranoia. That’s the line I draw.


  • Why?

    Hitchens’s razor, for one. Something sounding plausible just because late-stage capitalism is an ever-growing cancerous beast says nothing about its veracity or objective truth.

    It’s very much the same as the idea that crystals can heal you and cure your cancer, that psychics exist and exhibit quantum telepathy, or that doctors are lying to scam you of your hard-earned money and you should use Vitamin C to cure COVID-19 instead. Does that all sound stupid to you? If it does, know that your same arguments are being used to persuade less fortunate folks into buying crystals, tarot cards, and Vitamin C pills in the hopes of improving their lives.

    All of these things are sold to people under the pretense that capitalism is lying to you, governments are lying to you, big pharma is lying to you, and they’re all colluding to steal your identity and personal information and scam you out of your money.

    The ability to reason using empirical evidence, and not what makes us feel good or bad inside, is what allows our society to even function in the first place.




  • Right. The legality of simply recording everything in a room, without any consent, is incredibly dubious at best, so companies aren’t going to risk it. At least with voice dictation or wake words, you have to voluntarily say something or push a button, which signifies your consent to the device recording you.

    Also, another problem with the idea of on-device conversion to keywords that are then sent to Google or Amazon: with constant recording from millions of devices, even the text forms of keywords would still be an infeasible amount of data to process. Discord’s ~200 million active users send almost a billion text messages each day, yet Discord can’t use algorithmic AI to detect hate speech from Nazis or pedophiles approaching vulnerable children — it is simply far too much data to process in a timely manner.

    Amazon has sold 500 million Echo devices, and that’s just Amazon. From an infrastructure standpoint, how is Amazon supposed to process near-24/7 keyword spam from 500 million Echo devices every single day? Such a solution would also have to be, in theory, infinitely scalable, since the amount of traffic is directly proportional to the number of devices sold and actively used.

    It’s just technologically infeasible.
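    To put the scale argument in concrete terms, here is a rough back-of-the-envelope sketch. Only the 500-million device count comes from the text above; the keyword rate, listening hours, and payload size are illustrative assumptions, not measured figures:

```python
# Back-of-envelope estimate of the keyword traffic that always-on
# listening would generate. Only the 500M device count is cited above;
# every other number is an illustrative assumption.

devices = 500_000_000        # Echo devices sold
keywords_per_hour = 60       # assume one detected keyword per minute
hours_of_speech = 16         # assume speech is heard 16 hours a day
bytes_per_event = 200        # assume a small metadata payload per keyword

events_per_day = devices * keywords_per_hour * hours_of_speech
terabytes_per_day = events_per_day * bytes_per_event / 1e12

print(f"{events_per_day:,} keyword events per day")
print(f"{terabytes_per_day:.0f} TB of keyword metadata per day")
```

    Under even these modest assumptions you get hundreds of billions of events and on the order of a hundred terabytes of raw metadata per day, before any speech-to-text, identity correlation, or ad targeting has been done on it.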


  • Do you have any evidence for this claim? Voice recognition and processing is very power- and energy-intensive, so things like power consumption and heat dissipation should be readily measurable, especially if an app like Google’s or Amazon’s were doing it on an effectively constant basis.

    You claim keywords are being sent to Google — have you sniffed this traffic with Wireshark and analyzed the packets being sent?

    Phones have dedicated hardware for voice processing, true, but it’s engaged when you voluntarily enable it through voice dictation, or when you trigger it with very specific, optimally chosen key phrases (“Okay Google,” “Hey Siri,” …). Apps that allegedly listen to voice audio constantly would need to be utilizing this hardware continuously. Do you have any evidence that apps like Google’s continuously utilize this hardware (knowing that it is a power-intensive and heat-inducing process)?

    I’m not trying to argue in bad faith. As an engineer, I’m having trouble mentally architecting such a surveillance system that would not also leave blatantly obvious evidence behind on the device for researchers to collect. These are all questions I naturally came up with while thinking through the ramifications of your statement. I want to keep an open mind and consider the facts here.
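    One way to make the “readily measurable” point concrete is data usage: even heavily compressed, always-on audio streaming leaves a footprint that would show up on any data-usage report. A minimal sketch, where the 24 kbps speech-codec bitrate is my assumption rather than a measured value:

```python
# Data footprint of hypothetical covert 24/7 audio streaming.
# The 24 kbps figure is an assumed low-bitrate speech codec setting.

bitrate_kbps = 24
seconds_per_day = 24 * 3600

megabytes_per_day = bitrate_kbps * 1000 / 8 * seconds_per_day / 1e6
gigabytes_per_month = megabytes_per_day * 30 / 1000

print(f"{megabytes_per_day:.0f} MB/day")        # ~259 MB/day
print(f"{gigabytes_per_month:.1f} GB/month")    # ~7.8 GB/month
```

    Several gigabytes a month of unexplained upload traffic, per device, is exactly the kind of anomaly that per-app data counters and researchers with packet captures would catch immediately.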



  • What was your methodology? Are you absolutely sure you eliminated every variable that could have signaled to Google that you wanted whatever you were talking about? Maybe you were talking in the car with your wife about buying something, and she decided to look up prices for it on Google, which triggered their algorithms to associate that thing-of-want with her identity, and then with your identity, since Google likely knows you two are married.

    Mitchollow tried to demonstrate exactly what you’re claiming with a controlled experiment, in which he would prove that Google was listening to him by saying “dog toys” without clicking on or searching for anything related to dog toys beforehand. What he failed to realize was that:

    1. He livestreamed the whole thing to YouTube, i.e., the very conspirators he claimed were listening to him in the first place, so they were already processing his speech (including his repeated claims that he needed dog toys) and likely correlating all of that data with his identity.
    2. He directly clicked on a (very likely coincidental, or driven by the data collected in #1) ad corresponding to his phrase of choice (“dog toys”), triggering the algorithm to exclusively show him dog toy ads on all other pages that used AdSense.

    After these flaws were pointed out, he admitted the test was effectively worthless and retracted his claims. The point here is that it’s important to eliminate every variable that could lead to confirmation bias.

    I’ve heard other, similar stories of friends allegedly receiving ads after saying specific keywords. Probably the best one for demonstrating how silly this entire notion is involved an avid Magic: The Gathering player who was surprised that he received MTG ads after talking about MTG with his MTG-playing friends. He was spooked and claimed that Amazon was listening to his everyday speech.


  • the point of fluffy blanket is to show you an ad for fluffy blankets, so it can be poorly trained and wildly inaccurate

    Doesn’t that defeat the whole purpose of listening for key ad words?

    “We’ll come up with dedicated hardware to detect whether someone says ‘fluffy blankets,’ but it’s really poor at it, so if someone is talking about cat food, our processing will detect ‘cat food’ as ‘fluffy blanket’ and serve them an ad for fluffy blankets. Oh wait… that means there’s hardly any correlation between what a person says and what ads we serve them. Why don’t we just serve them randomized ads? Why bother with advanced technology to listen to them in the first place?”
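    The rhetorical point above can be quantified with a toy simulation. The 30% detector accuracy and the four-topic universe are invented numbers purely for illustration:

```python
import random

# Toy model: how correlated are served ads with actual speech when the
# on-device keyword detector is wildly inaccurate? All numbers invented.
random.seed(0)

topics = ["fluffy blankets", "cat food", "dog toys", "coffee"]
accuracy = 0.3      # assume the detector is right only 30% of the time
trials = 100_000

hits = 0
for _ in range(trials):
    said = random.choice(topics)
    # On a miss, the detector outputs a uniformly random topic.
    detected = said if random.random() < accuracy else random.choice(topics)
    if detected == said:
        hits += 1

# Expected match rate: 0.3 + 0.7 * 0.25 = 0.475, versus 0.25 for
# serving completely random ads with no microphone at all.
print(f"ad matches speech {hits / trials:.1%} of the time")
```

    An expensive, legally radioactive eavesdropping pipeline that lifts ad relevance only from 25% to roughly 47% would be a hard sell next to the boring alternative of just mining search and browsing history.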