A longstanding conspiracy theory holds that Facebook is listening in on your conversations, but the way it actually serves you ads is far more unsettling.

  • MudMan@fedia.io · 1 day ago

    Not outright false, but out of context. That suit was settled with Apple denying any wrongdoing, for one thing, but more importantly, from what I can tell the point wasn’t that Apple was turning on Siri without permission (which is unlikely), but that accidental or unintentional activations were being recorded and processed for advertising.

    I presume that’s scarier for Apple, because a) it almost certainly happened, and b) if a court found they had to be 100% accurate in filtering out unintended activations, the entire voice assistant concept might be impossible to implement legally.
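    To illustrate why point b) is such a high bar, here is a toy Python sketch (the score distributions are made up for illustration, nothing from the case): a wake-word detector has to pick a confidence threshold, and any threshold loose enough to catch every real activation also lets some background speech through.

        # Toy illustration of the wake-word threshold tradeoff.
        # All scores and distributions here are hypothetical placeholders.
        import random

        random.seed(0)
        # Pretend detector confidences: real wake phrases score high on
        # average, background speech scores low, but the two overlap.
        real_wakes = [random.gauss(0.8, 0.10) for _ in range(10_000)]
        background = [random.gauss(0.3, 0.15) for _ in range(10_000)]

        for threshold in (0.5, 0.6, 0.7):
            missed = sum(s < threshold for s in real_wakes) / len(real_wakes)
            false_hits = sum(s >= threshold for s in background) / len(background)
            print(f"threshold {threshold}: {missed:.1%} missed wakes, "
                  f"{false_hits:.1%} false activations")
        # No threshold drives false activations to zero without also starting
        # to miss real commands: the "100% accurate filtering" problem.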

    So we know they paid some money to settle, but we don’t know what was going on (beyond research like the piece the OP linked, which says it’s unlikely anybody is secretly sending voice data).

    • multiplewolves@lemmy.world · 21 hours ago

      Nearly every settlement with a major corporation ends without the company admitting wrongdoing. I don’t doubt that there was an accidental glitch involved. What confuses me is why that makes it ok to you.

      It’s generally a safe bet with cases like this that it would not have made it as far as it did in the courts, or been as hefty in compensation, if the evidence hadn’t been damning.

      Here’s the original article in the Guardian that set the whole thing in motion. Apple formally apologized for it.

      In other words, we kinda do know what happened. There was a whistleblower on the contractor side.

      • MudMan@fedia.io · 20 hours ago

        Yeah, we know what happened and it’s not that Apple was actively triggering Siri without prompting as a way to spy on people.

        The whistleblower you mention (and the article you link) revealed that Apple was using human reviewers specifically to filter out accidental activations, or at least to grade the quality of the outcome.

        The concern was raised because they were hearing a lot of sensitive information and felt the reporting on it wasn’t thorough enough.

        Which is certainly bad. It’s a problem.

        But as the OP’s linked piece says, it is very much NOT an admission that Apple is actively triggering indiscriminate recordings. If anything, it’s the opposite.

        That’s the thing about these. They don’t need to be used nefariously to capture all of this crap. It’s still a microphone reacting to voice commands, in billions of pockets. Any nonzero rate of false positives is going to generate thousands, even millions, of random recordings. I have random recordings of myself from butt-dialing my camera app or a voice memo app, and I have NEVER turned on voice activation for a voice assistant (because it’s bad and intrusive and a privacy nightmare).
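        To put rough numbers on that, here is a back-of-envelope Python sketch (the device count and error rate are invented for illustration, not Apple’s figures):

            # Back-of-envelope: accidental recordings produced at fleet scale
            # by a tiny false-activation rate. Both inputs are hypothetical.
            devices = 1_000_000_000            # assume ~1 billion listening devices
            false_wakes_per_device_day = 0.01  # assume one accidental wake per 100 days

            accidental_per_day = devices * false_wakes_per_device_day
            print(f"{accidental_per_day:,.0f} accidental recordings per day")
            # prints 10,000,000: even near-perfect devices err constantly at scale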

        See, I’m not saying it’s OK with me.

        I’m saying that Siri working as advertised is a privacy nightmare. People somehow feel the need to make up a fictitious, exaggerated version of the issue for it to feel bad to them, except that’s not what’s happening, and it’s entirely unnecessary, because the fully permitted, EULA’d-up, publicly disclosed data canvassing throughout the entire ecosystem is much, much, MUCH worse in aggregate.

        What confuses me is why that is ok to you.

        • multiplewolves@lemmy.world · 20 hours ago

          My reply was addressing what you’d said here:

          So we know they paid some money to settle, but we don’t know what was going on (beyond research like the piece the OP linked, which says it’s unlikely anybody is secretly sending voice data).

          We do know what was going on. It wasn’t user-end research. A contractor whose job was to assess the efficacy of Siri approached the media because they could tell that much of the audio they were hearing hadn’t been captured intentionally.

          To your earlier points, I hope Apple is terrified, and I don’t think that voice activation can be implemented in a way that protects its users from privacy violations.

          I don’t know what about my reply led you to believe I am ok with any of this, but to clarify, I am a proponent of strict privacy laws that protect consumers before businesses.

          I think “accidents” precede intentional action and I only trust Apple (or any other big tech company) as far as I can throw it.

          • MudMan@fedia.io · 20 hours ago

            I didn’t mean you-you, I meant you all in general.

            People are far more willing to be outraged about some always-on spying that doesn’t exist (beyond accidental activations) than about demonstrable, intrusive data gathering.

            But you-you are also now doing the same thing, with the implication that these recordings are somehow laying the groundwork for later always-on spying. And that’s weird. Why go for the hypothetical future intrusion instead of the current, factual intrusions, you know?

            • multiplewolves@lemmy.world · 19 hours ago

              Why go for the hypothetical future intrusion instead of the current, factual intrusions, you know?

              ¿Por qué no los dos? (Why not both?)

              I am the one who brought up the case in the first place, because it is truly alarming in and of itself. I’m surprised it doesn’t come up more. It seems to me that the pervasiveness of voice-activated assistants, like the cross-site tracking that paved the way for fingerprinting, deserves more heed, both as a problem now and as a gateway to potentially more egregious violations of privacy later. Don’t doubt that those fears could materialize.

              But fair enough! I think we agree far more than we diverge here.

              • MudMan@fedia.io · 19 hours ago

                Well, ostensibly because one is a real issue you can do something about now and the other one is not.

                And focusing on paranoia about imagined future transgressions both implicitly normalizes the current functionality and paints pushback against the current implementation as some hyperbolic, out-of-touch, maximalist thing. Call it the PETA paradox, maybe.

                • multiplewolves@lemmy.world · 19 hours ago

                  I don’t think seeing a logical progression or escalation normalizes the current state. It wasn’t, as you put it earlier, “working as advertised”. But anyone who has observed corporate behavior over the decades can see that today’s accident or unpopular innovation can be tomorrow’s status quo unless it gets enough pushback.

                  We haven’t heard about the transgressions that are being committed by corporations right now because they haven’t been caught yet. What’s considered legal is, and we clearly agree on this point, already well beyond the pale.

                  Everyone should be objecting to violations of privacy, both the ones we can prove and anything hypothetical that could occur. It is not worthless to object preemptively to something that hasn’t happened yet.

                  If there had been significant, detailed information available about TSA scanners before their rollout, for example, the outcry might have halted their use, or at least delayed it. Anyone who described how those scanners work in theoretical terms before the tech was real would have been labeled “hyperbolic” and “out of touch”. They’re truly invasive. Anything that seems technologically out of reach today could well be around the corner.

                  Anyway, we’re going in circles. I’ve been trying to end this conversation implicitly without success, so on to explicitly: thank you for the discourse and have a good night/day.

                  • MudMan@fedia.io · 15 hours ago

                    See, there you go, lost me completely now. “We should be preemptively pissed off about imaginary offenses because you just KNOW these people will eventually get there” is not how we should run our brains, let alone our regulations.

                    And now I’m skeptical not just of your hypothetical objections but of all of them. That’s the type of process I find counterproductive.

                    Anyway, all good with me on the agree-to-disagree front. Have a nice one yourself.