A longstanding conspiracy theory holds that Facebook is listening in on your conversations, but the way it actually serves you ads is much more unsettling.

  • multiplewolves@lemmy.world · 20 hours ago

    People worried about “digital eavesdropping” aren’t paranoid. There’s an entire class-action lawsuit based on Apple’s Siri getting caught activating without the trigger command, with the captured data being sent to third-party providers.

    • MudMan@fedia.io · 19 hours ago

      Not outright false, but out of context. That suit was settled with Apple denying any wrongdoing, for one thing, but more importantly, from what I can tell the point wasn’t that Apple was turning on Siri without permission (which is unlikely) but that accidental or unintentional activations were being recorded and processed for advertising.

      I presume that’s scarier for Apple, because a) it’s probably very likely to have happened, and b) if a court found they have to be 100% accurate in filtering out unintended activations, the entire voice assistant thing may be completely impossible to implement legally.
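
      That accuracy bind can be sketched with a toy detector (all scores below are made up for illustration, nothing from a real system): a wake-word detector fires when a score crosses a threshold, and if the score distributions for real activations and ordinary speech overlap at all, every threshold trades false rejects for false accepts.

```python
# Toy sketch: why a wake-word detector can't hit 100% accuracy.
# A detector assigns each audio snippet a score; activation fires
# above a threshold. If the two score distributions overlap, every
# threshold trades one kind of error for the other.

def error_rates(threshold, wake_scores, background_scores):
    """Return (false_reject_rate, false_accept_rate) at a given threshold."""
    false_rejects = sum(s < threshold for s in wake_scores) / len(wake_scores)
    false_accepts = sum(s >= threshold for s in background_scores) / len(background_scores)
    return false_rejects, false_accepts

# Made-up detector scores (0..1), purely for illustration.
wake = [0.95, 0.90, 0.85, 0.80, 0.70, 0.60]        # user really said the wake word
background = [0.10, 0.20, 0.30, 0.55, 0.65, 0.75]  # ordinary conversation

for t in (0.5, 0.7, 0.9):
    fr, fa = error_rates(t, wake, background)
    print(f"threshold={t}: false_reject={fr:.2f}, false_accept={fa:.2f}")
```

      Push the threshold high enough to never false-accept and you start missing real commands, which is exactly why a “100% accurate” filtering requirement would be such a hard bar.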

      So we know they paid some money to settle that, but we don’t know what was going on (beyond research like the one in the linked article by the OP that says it’s unlikely anybody is sending secret voice data).

      • multiplewolves@lemmy.world · 14 hours ago

        Nearly every settlement with a major corporation is settled without the company admitting wrongdoing. I don’t doubt that there was an accidental glitch involved. What confuses me is why that makes it ok to you.

        It’s generally a safe bet with cases like this that it would not have made it as far as it did in the courts, or been as hefty in compensation, if the evidence hadn’t been damning.

        Here’s the original article in the Guardian that set the whole thing in motion. Apple formally apologized for it.

        In other words, we kinda do know what happened. There was a whistleblower on the contractor side.

        • MudMan@fedia.io · 14 hours ago

          Yeah, we know what happened and it’s not that Apple was actively triggering Siri without prompting as a way to spy on people.

          The whistleblower you mention (and the article you link) revealed that Apple was using human reviewers specifically to filter out accidental activations, or at least to grade the quality of the outcome.

          The concern was raised because they were hearing a lot of sensitive information and felt the reporting on it wasn’t thorough enough.

          Which is certainly bad. It’s a problem.

          But as the original piece says, it is very much NOT an admission that Apple is actively triggering indiscriminate recordings. If anything, it’s the opposite.

          That’s the thing about these. They don’t need to be used nefariously to capture all of this crap. It’s still a microphone reacting to voice commands, in billions of pockets. Any amount of false positives is going to generate thousands, even millions, of random recordings. I have random recordings of myself from butt-dialing my cam app or a voice memo app, and I have NEVER turned on voice activation for a voice assistant (because it’s bad and intrusive and a privacy nightmare).
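
          The scale point is just arithmetic (both numbers below are assumptions for illustration, not anyone’s real telemetry):

```python
# Back-of-the-envelope: even a tiny accidental-activation rate
# becomes a flood of recordings at phone scale. Both figures are
# assumed for illustration only.

devices = 1_000_000_000            # assumed voice-assistant-capable devices
false_activations_per_day = 0.001  # assume 1 accidental trigger per 1,000 devices per day

accidental_recordings = devices * false_activations_per_day
print(f"{accidental_recordings:,.0f} accidental recordings per day")
```

          With those made-up rates you still get a million stray recordings a day, with no spying involved at all.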

          See, I’m not saying it’s OK with me.

          I’m saying that Siri working as advertised is a privacy nightmare. People somehow feel the need to make up a fictitious exaggeration of the issue to make it feel bad to them, except that’s not what’s happening, and it’s entirely unnecessary: the entirely allowed, EULA’d-up, publicly disclosed data canvassing throughout the entire ecosystem is much, much, MUCH worse in aggregate.

          What confuses me is why that is ok to you.

          • multiplewolves@lemmy.world · 13 hours ago

            My reply was addressing what you’d said here:

            So we know they paid some money to settle that, but we don’t know what was going on (beyond research like the one in the linked article by the OP that says it’s unlikely anybody is sending secret voice data).

            We do know what was going on. It wasn’t user-end research. A contractor whose job was to determine the efficacy of Siri approached the media because they could tell that much of the audio they were hearing hadn’t been captured intentionally.

            To your earlier points, I hope Apple is terrified, and I don’t think that voice activation can be implemented in a way that protects its users from privacy violations.

            I don’t know what about my reply led you to believe I am ok with any of this, but to clarify, I am a proponent of strict privacy laws that protect consumers before businesses.

            I think “accidents” precede intentional action and I only trust Apple (or any other big tech company) as far as I can throw it.

            • MudMan@fedia.io · 13 hours ago

              I didn’t mean you-you, I meant you all in general.

              People are way more willing to be outraged about some always-on spying that doesn’t exist beyond accidental activations, but they aren’t outraged about demonstrable, intrusive data gathering.

              But you-you are also now doing the same thing, with the implication that these recordings are somehow laying the groundwork for later always-on spying. And that’s weird. Why go for the hypothetical future intrusion instead of the current, factual intrusions, you know?

              • multiplewolves@lemmy.world · 13 hours ago

                Why go for the hypothetical future intrusion instead of the current, factual intrusions, you know?

                ¿Por qué no los dos? (Why not both?)

                I am the one who brought up the case in the first place, because it is truly alarming in and of itself. I’m surprised it doesn’t come up more. It seems to me that the pervasiveness of voice-activated assistants, like the cross-site tracking that paved the way for fingerprinting, should be paid more heed, both as a problem now and as a gateway to potentially more egregious violations of privacy later. Don’t doubt that the fears could materialize.

                But fair enough! I think we agree far more than we diverge here.

                • MudMan@fedia.io · 13 hours ago

                  Well, ostensibly because one is a real issue you can do something about now and the other one is not.

                  And focusing on paranoia about imagined future transgressions both implicitly normalizes the current functionality and paints pushback against the current implementation as some hyperbolic, out-of-touch maximalist thing. You could call it the PETA paradox, maybe.

  • infeeeee@lemm.ee · edited · 19 hours ago

    An earlier version of this article was published in 2019.

    While the content of the article is true, and I’ve had to explain it to other people on the internet and IRL multiple times in the last decade, the article doesn’t cover the new TPU chips now appearing in devices. With them, full on-device STT will become more and more feasible, so the tests mentioned in the article won’t detect eavesdropping: devices won’t need to send sound files to datacenters, only the transcript.

    It would have been useful if they had written about this new vector in the article, as TPUs were not that common in 2019.
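
    To put rough numbers on why transcript-only uploads would slip past traffic-based tests (all figures below are ballpark assumptions, not measurements of any real device):

```python
# Ballpark comparison of uploading one minute of raw audio vs.
# its on-device transcript. All constants are rough assumptions.

seconds = 60
audio_bytes = seconds * 16_000 * 2   # assume 16 kHz, 16-bit mono PCM
words = seconds * 150 / 60           # assume ~150 spoken words per minute
transcript_bytes = int(words * 6)    # assume ~6 bytes per word of plain text

print(f"audio upload:      {audio_bytes:,} bytes")
print(f"transcript upload: {transcript_bytes:,} bytes")
print(f"transcript is ~{audio_bytes // transcript_bytes}x smaller")
```

    A payload that small is easy to bury inside ordinary app telemetry, which is why the 2019-era traffic tests wouldn’t catch it.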

    • MudMan@fedia.io · 19 hours ago

      Okay, but if the myth was false then and the behavior of the legit user interaction with the voice assistant versus idle is different… why would they wait until they have on-device processing to implement it that way? Why would they implement expensive server recognition for intended use but sneak in on-device processing JUST for advertising purposes they are already mining you for well within the EULA’s terms? That and you’d definitely see it in battery consumption, if not in data throughput. NPUs/TPUs are hungry bois, so it wouldn’t be a particularly smart workaround for quiet detection.
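
      The battery argument can be put in rough numbers too (the power draw is an assumed figure for illustration, not a measurement of any real NPU):

```python
# Rough sketch: what always-on on-device transcription might cost in
# battery. All three constants are assumptions for illustration.

npu_watts = 0.5       # assumed average draw for continuous on-device STT
hours_listening = 16  # assumed waking hours per day
battery_wh = 15       # a typical phone battery is roughly 15 Wh

energy_used = npu_watts * hours_listening  # watt-hours per day
print(f"~{energy_used:.0f} Wh/day, about {energy_used / battery_wh:.0%} of the battery")
```

      Even with generous assumptions, continuous transcription would eat a very visible chunk of daily battery life, which is the kind of anomaly users and researchers do notice.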

      It’s not that I think they wouldn’t spy on your conversations, it’s that I think it’d be bad business to do it that way.

      This is always shocking to me. I mean, the researchers in this example are out there going “no, seriously, these third party apps are taking screenshots of your phone whenever you give screenshot permissions and sometimes sending video of what you do and they track you to the smallest detail and it’s messed up” and everybody brushes that off and goes BUT SIRI IS LISTENING THO!!! and you just can’t convince them to care about the real bad thing or to stop caring about the probably false less bad thing.

      It’s very confusing to me.

      • infeeeee@lemm.ee · 17 hours ago

        I guess there is some, but not too much, valuable content in spoken personal conversations. Current tech is not there yet, so it would cost more in money and reputation than it’s worth to build datacenters for this. If the cost dropped by orders of magnitude, they would wrap it in some marketing bullshit like “you can search all your memories”, and a lot of people would gladly allow it (IIRC there was a Black Mirror episode about a device like this).

        I don’t think it will happen tomorrow or in the near future; my point was that they just reposted a six-year-old article without writing about a new and relevant development in the topic.

        • MudMan@fedia.io · 17 hours ago

          Well, I do agree that it’s weird to update a five-year-old article by injecting new content without fully rewriting it. They did update it; they report some stuff that Cox Media Group did in 2024, the backlash to it, and the eventual backtracking. But they’re not providing version control, so it’s hard to tell what’s new and what’s old in the piece. They probably should have just made a separate follow-up.

          Still, you can’t be mad at something that isn’t happening because you think maybe it will happen some day, while not being mad at things that are happening now and are worse. That makes no sense. Why make up a false outrageous scenario and not discuss the real, current, more outrageous one?

          Incidentally, this whole line of questioning is why I absolutely loathe Black Mirror in both concept and execution. Yeah, speaking of unpopular opinions.

  • Possibly linux@lemmy.zip · 18 hours ago

    “How did it know what I was talking about”

    “It didn’t. It just planted the conversation topic in your head with ads”

    doesn’t understand