- cross-posted to:
- privacy@lemmy.world
A longstanding conspiracy theory holds that Facebook is listening in on your conversations, but the way it actually serves you ads is much more unsettling.
Nearly every settlement with a major corporation is reached without the company admitting wrongdoing. I don’t doubt that there was an accidental glitch involved. What confuses me is why that makes it ok to you.
It’s generally a safe bet with cases like this that it would not have made it as far as it did in the courts, or been as hefty in compensation, if the evidence hadn’t been damning.
Here’s the original article in the Guardian that set the whole thing in motion. Apple formally apologized for it.
In other words, we kinda do know what happened. There was a whistleblower on the contractor side.
Yeah, we know what happened and it’s not that Apple was actively triggering Siri without prompting as a way to spy on people.
The whistleblower you mention (and the article you link) raised that Apple was using human reviewers specifically to filter out accidental activations, or at least to grade the quality of the outcomes.
The concern was raised because they were hearing a lot of sensitive information and felt the reporting on it wasn’t thorough enough.
Which is certainly bad. It’s a problem.
But as the original piece says, it is very much NOT an admission that Apple is actively triggering indiscriminate recordings. If anything, it’s the opposite.
That’s the thing about these. They don’t need to be used nefariously to capture all of this crap. It’s still a microphone reacting to voice commands, in billions of pockets. Any amount of false positives is going to generate thousands, if not millions, of random recordings. I have random recordings of myself from butt-dialing my camera app or a voice memo app, and I have NEVER turned on voice activation for a voice assistant (because it’s bad and intrusive and a privacy nightmare).
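To put rough numbers on that, here’s a back-of-envelope sketch. Every figure in it is an assumption for illustration, not anything Apple has disclosed:

```python
# Illustrative only: all of these rates are assumptions, not disclosed statistics.
devices = 1_500_000_000        # assumed always-listening devices in the wild
checks_per_day = 10            # assumed wake-word checks per device per day
false_positive_rate = 1e-6     # assumed chance a single check misfires

accidental_recordings_per_day = devices * checks_per_day * false_positive_rate
print(f"{accidental_recordings_per_day:,.0f} accidental recordings per day")
# ~15,000 per day, even at a one-in-a-million error rate
```

Even with an absurdly generous one-in-a-million misfire rate, the sheer scale produces thousands of unintended recordings every single day. No malice required.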
See, I’m not saying it’s OK with me.
I’m saying that Siri working as advertised is a privacy nightmare. People somehow feel the need to invent an exaggerated, fictitious version of the issue before it feels bad to them, except that’s not what’s happening and it’s entirely unnecessary, because the entirely permitted, EULA’d-up, publicly disclosed data canvassing across the entire ecosystem is much, much, MUCH worse in aggregate.
What confuses me is why that is ok to you.
My reply was addressing what you’d said here:
We do know what was going on. It wasn’t user-end research. A contractor whose job was to determine the efficacy of Siri approached the media because they could tell that the audio capture for quite a bit of what they were hearing wasn’t intentional.
To your earlier points, I hope Apple is terrified, and I don’t think that voice activation can be implemented in a way that protects its users from privacy violations.
I don’t know what about my reply led you to believe I am ok with any of this, but to clarify, I am a proponent of strict privacy laws that protect consumers before businesses.
I think “accidents” precede intentional action and I only trust Apple (or any other big tech company) as far as I can throw it.
I didn’t mean you-you, I meant you all in general.
People are way more willing to be outraged about some always-on spying that doesn’t exist beyond accidental activations, but they aren’t outraged about demonstrable, intrusive data gathering.
But you-you are also now doing the same thing, with the implication that these recordings are somehow laying the groundwork for later always-on spying. And that’s weird. Why go for the hypothetical future intrusion instead of the current, factual intrusions, you know?
Why not both?
I am the one who brought up the case in the first place, because it is truly alarming in and of itself. I’m surprised it doesn’t come up more. It seems to me that the pervasiveness of voice-activated assistants, like the cross-site tracking that paved the way for fingerprinting, deserves more heed, both as a problem now and as a gateway to potentially more egregious violations of privacy later. Don’t doubt that the fears could materialize.
But fair enough! I think we agree far more than we diverge here.
Well, ostensibly because one is a real issue you can do something about now and the other one is not.
And focusing on paranoia about imagined future transgressions both implicitly normalizes the current functionality and paints pushback against the current implementation as some hyperbolic, out-of-touch, maximalist thing. You could call it the PETA paradox, maybe.
I don’t think seeing a logical progression or escalation is normalizing current state. It wasn’t, as you put it earlier, “working as intended”. But anyone observing corporate behavior over decades can see that today’s accident or unpopular innovation can be tomorrow’s status quo unless it gets enough pushback.
We haven’t heard about the transgressions that are being committed by corporations right now because they haven’t been caught yet. What’s considered legal is, and we clearly agree on this point, already well beyond the pale.
Everyone should be objecting to violations of privacy, both the ones we can prove and the hypothetical ones that could occur. It is not worthless to object preemptively to something that hasn’t happened yet.
If there had been significant, detailed information available about TSA scanners prior to their rollout, for example, the outcry might have halted their use, or at least delayed it. Anyone who described in theoretical terms how those would work would have been labeled “hyperbolic” and “out of touch” before that tech became reality. They’re truly invasive. Anything that seems technologically out of reach today could well be around the corner.
Anyway, we’re going in circles. I’ve been trying to end this conversation implicitly without success, so on to explicitly: thank you for the discourse and have a good night/day.
See, there you go, lost me completely now. “We should be preemptively pissed off about imaginary offenses because you just KNOW these people will eventually get there” is not how we should run our brains, let alone our regulations.
And now I’m skeptical not just about your hypothetical objections but about all of them. That’s the type of process I find counterproductive.
Anyway, all good with me on the agree-to-disagree front. Have a nice one yourself.
I’m happy to address your reply.
That’s a wildly inaccurate characterization of what I said. I’m trying to get out of this interaction because you misinterpret me and then move the goalposts. You went from “we don’t really know what happened” (which isn’t true) to “my point all along is that what’s really happening should be the focus; these things happened with the system working as intended,” which is still incorrect. Now you’re splitting hairs over inconsequential details based on a broad misunderstanding.
Nice dismissal of my entire perspective without understanding it. My objections aren’t hypothetical. We know that audio clips were accidentally saved, because it happened. We know that Apple knows it happened, because they acknowledged it with a formal apology. The intention isn’t the important point. They apologized because they got caught. If they hadn’t gotten caught, their process of capturing audio would have continued and probably expanded as they sought to streamline their services. That’s a reasonable projection.
Is your case here really that I had a point up until I requested we end this interaction? And then suddenly nothing I had said made sense to you anymore? Please.
Sure.