

Reliance wouldn’t be my primary concern, but rather the privacy implications. It seems like Google has to step up its surveillance game /s. Fun project though


So the amended complaint alleges that Nvidia used/stored/copied/obtained/distributed copyrighted works (including the plaintiffs’), both through datasets available on Hugging Face (‘Books3’, featured in both ‘The Pile’ and ‘SlimPajama’) and by pirating from shadow libraries (like Anna’s Archive), in order to train multiple LLMs (primarily their ‘NeMo Megatron’ series); and that it distributed the copyrighted data, ultimately sourced from those shadow libraries, through the ‘NeMo Megatron Framework’.
It’s quite an interesting read actually, especially the link to this Anna’s Archive blog post, which it grossly pulls out of context: the plaintiffs clearly despise the shadow libraries too, as those have ultimately provided access to their copyrighted material.
Especially the part “Most (but not all!) US-based companies reconsidered once they realized the illegal nature of our work. By contrast, Chinese firms have enthusiastically embraced our collection, apparently untroubled by its legality.” makes me wonder whether that’s the reason models like DeepSeek initially blew Western models out of the water.


So a Mastodon ripoff, but with all instances hosted by a single entity (effectively centralized), ensuring they reside within European jurisdiction (allowing for full control over them). I don’t see how they genuinely believe they can have humans do the photo validation when competing at the scale of X, especially when they run all the instances themselves. Perhaps they could recruit volunteers to socialize the losses, while the platform privatizes the profits. Nothing but a privacy-centric approach, however: said the privacy expert…
Zeiter emphasized that systemic disinformation is eroding public trust and weakening democratic decision-making … W will be legally the subsidiary of “We Don’t Have Time,” a media platform for climate action … A group of 54 members of the European Parliament [primarily Greens/EFA, Renew, The Left] called for European alternatives
If that doesn’t sound like a recipe for swinging the pendulum to the other extreme (once more), I don’t know what does… Because can you imagine a modern social media platform not being a political echo chamber: not promoting extremism through filter bubbles, but instead allowing for de-escalation through counter-argumentation? One would almost start to think it’s all intentional: a deeply divided population will never stand united against their common oppressor.


Great, more hoops to jump thr… I mean… an “advanced flow” for gaining the privilege of installing apps of your choosing


~~innovation~~ COURAGE


With “deletion” you’re simply advancing the moment at which they supposedly “delete” your data; something I refuse to believe they actually do. Instead, I suspect they “anonymize”, or effectively “pseudonymize”, the data (cross-referencing is trivial when a new account shows the same patterns, should the need arise). Letting the account stagnate wouldn’t even require the service to take such steps, and any personal data would remain connected to you, personally.
For the Gmail account, I would recommend not deleting it. Instead: open an account at a privacy-respecting service (Disroot, as an example), connect the Gmail account to an email client (like Thunderbird), copy all its contents (including ‘Sent’ and other specific folders) to a local folder (making sure to back these up periodically), delete all contents from the Gmail server, and simply wait for incoming messages at the now-empty Gmail account.
If a worthy email comes in: copy it over to the local folder, and delete it from the Gmail server. For services you still use, you could change the contact address to the Disroot account; others you could delete, or simply mark as spam (periodically emptying the spam folder). For privacy-sensitive services, you may not want to wait until they finally make an appearance, and change those over to the Disroot address right away.
I’ve been doing this for years now, and my big-tech accounts remain empty most of the time. Do make sure to transfer every folder, and make regular backups!
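For the more technically inclined, that archive-then-wipe step can also be scripted. A minimal sketch using only Python’s standard library, assuming IMAP access is enabled on the account; the address, app password and mbox filename are placeholders, and it only handles INBOX (repeat per folder):

```python
import imaplib
import mailbox

IMAP_HOST = "imap.gmail.com"
USER = "you@gmail.com"          # placeholder address
APP_PASSWORD = "app-password"   # placeholder; Gmail wants an app password for IMAP

with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
    imap.login(USER, APP_PASSWORD)
    imap.select("INBOX")

    # 1. Copy every message into a local mbox file (the backup).
    archive = mailbox.mbox("gmail-archive.mbox")
    _, data = imap.search(None, "ALL")
    msg_nums = data[0].split()
    for num in msg_nums:
        _, msg_data = imap.fetch(num, "(RFC822)")
        archive.add(msg_data[0][1])
    archive.flush()
    archive.close()

    # 2. Only after verifying the local copy: flag everything deleted and expunge.
    for num in msg_nums:
        imap.store(num, "+FLAGS", "\\Deleted")
    imap.expunge()
```

Do note that what Gmail actually does on an IMAP expunge (delete outright, or move to ‘All Mail’/‘Trash’) depends on its IMAP settings, so verify those first.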


Maybe the best ad is to not have AI


THIS is how you do it; looking at you, Brave: requiring me to (re)type my queries in the URL bar (appending ‘&summary=0’ to them), so I’m not required to store a persistent cookie just to keep the damn setting off…
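To spare yourself the retyping (assuming your browser supports custom search engines), you can register Brave Search with the parameter already baked in, where `%s` is the usual query placeholder:

```
https://search.brave.com/search?q=%s&summary=0
```

Every query then carries the opt-out, without any persistent cookie.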


My emails forced me to, locking me out of accounts I needed to access.
Microsoft had me fill out this form to “prove” I was the rightful owner of the account, after some suspicious login attempts from an African country. The form included fields like: name (which I don’t think I supplied at creation, or a false one), other email addresses, previous passwords (which potentially yields completely unrelated passwords), etc.; only for the application to be rejected, locking me out of my primary email for a full month. After that outright violation, I immediately switched to Disroot, and haven’t had any of said problems ever since. I back up all its contents locally using Thunderbird, and delete the originals from the server afterwards.
Many platforms have this messed-up dark pattern of revoking one’s access to real-world dependencies unless one gives in to the service’s demands. Enforcement of 2FA is another one of those “excuses” for this type of misbehavior, and so is bot detection.


Yeah, I think they employ a pretty sophisticated bot detection algorithm. I vaguely remember there being this ‘make 5 friends’ objective, or something along those lines, which I had no intention of fulfilling. If a new account that has triggered the manual review process doesn’t adhere to common usage patterns, simply have it supply additional information. Any collateral damage just means additional data to be appended to Facebook’s self-profiling platform… I mean, what else would one expect when Facebook’s first outside investor was Palantir’s Peter Thiel?


It’s been like that for quite a while. I remember deleting all big-tech accounts in 2019, and shortly after, Facebook started requiring login for full public page access. Therefore I created a burner account using a ‘this person does not exist’ picture, which provided me short-lived access after manual review. For account recovery, I was required to supply additional selfies (or even video-selfies?), but at that point I gave up.


Well, we are using them today for human programmers, so… :-)
True that haha…


I don’t know: it’s not just the outputs posing a risk, but also the tools themselves. Stacking technology can only increase the attack surface, it seems, at least to me. The fact that these models seem to auto-fill API values without user interaction is quite unacceptable to me; it shouldn’t require additional tools to check for such common flaws.
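Those additional tools largely boil down to pattern matching over the generated code. A minimal sketch of such a check (the regexes and CLI shape are purely illustrative, nowhere near exhaustive):

```python
import re
import sys

# Illustrative patterns for hard-coded credentials; real scanners ship hundreds.
SUSPICIOUS = [
    re.compile(r"""(api[_-]?key|secret|token)\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]""", re.I),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # e.g. an OpenAI-style key shape
]

def scan(path: str) -> int:
    """Print a warning per suspicious line; return the number of hits."""
    hits = 0
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if any(pat.search(line) for pat in SUSPICIOUS):
                print(f"{path}:{lineno}: possible hard-coded secret")
                hits += 1
    return hits

if __name__ == "__main__":
    # Usage: python secret_scan.py file1.py file2.py ...
    sys.exit(1 if sum(scan(p) for p in sys.argv[1:]) else 0)
```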
Perhaps AI tools in professional contexts are best seen as template search tools. Describe the desired template, and the tool simply provides the one it believes most closely matches the prompt. The professional can then “simply” refine that template to match the set specifications. Or perhaps rather use it as inspiration and start fresh, so as not to end up spending additional time resolving flaws.


No worries! :)


The main paradox here seems to be: the 70% boilerplate head-start is perceived as faster, but the remaining 30% of fixing the AI-introduced mess negates the marketed time savings; or even leads to outright counterproductivity. At least in more demanding environments, not cherry-picked by the industry shoveling the tools.


I understand you’ve read the comment as a single thing, mainly because it is. However, the BLE part is an additional piece of critique, not directly related to this specific exploit; neither is the tangent on the headphone jack “substitution”. It is indeed this fast pairing feature that is the subject of the discussed exploit; so you understood that correctly (or I misunderstood it too…).
I’m however of the opinion that BLE is a major attack vector by design. These are IoT devices that announce themselves periodically to the surrounding mesh, especially when “find my device” is enabled (which in many cases isn’t even optional: “turned off” iPhones, for example), allowing for the precise location of these devices; and therefore also of the persons carrying them. If bad actors gain access to, for example, Google’s Sensorvault (legally, in the case of state actors), or find ways of building such databases themselves, then I’d argue you’re in serious waters. Is it a convenient feature to help one relocate lost devices? Yes. But this nice-to-have also comes with a serious downside, which I believe doesn’t come near justifying the means. Rob Braxman has a decent video about the subject if you’re interested.
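To illustrate how low the bar is for logging those periodic announcements, here’s a minimal sketch using the third-party `bleak` library (the ten-second window is arbitrary); any laptop in range can passively collect this:

```python
import asyncio
from bleak import BleakScanner

def on_advertisement(device, adv):
    # Each BLE advertisement in range: hardware address (often rotating),
    # signal strength (a rough distance proxy), and any broadcast name.
    print(device.address, adv.rssi, adv.local_name)

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(10)  # just listen passively for ten seconds
    await scanner.stop()

asyncio.run(main())
```

Add timestamps and a known receiver position, and you have the building blocks of exactly the kind of location database described above.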
It’s not even a case of kids not wanting to switch: most devices don’t even come with 3.5mm jack connectors anymore…


If the devices weren’t previously linked to a Google account … then a hacker could … also link it to their Google account.
This already severely limits the pool of potential victims; but still a more practical exploit, indeed. It’s almost as if this BLE tracking is a feature rather than an exploit. And if you want to be notified of a device following you around, you have to perpetually enable BLE on your smartphone. But of course, headphone jacks are a thing of the past, and wireless is clearly the future. :)


But you need to be in close proximity (~15 m max) to stalk a victim? You might as well just follow them around physically then. Perhaps, when the victim is in a private location, eavesdropping on their conversation or pinpointing their position inside might be a possibility. But ear-raping would, of course, constitute the most significant danger of all. Also: WhisperPair, not WhisPair?


AI reviews don’t replace maintainer code review, nor do they relieve maintainers from their due diligence.
I can’t help but always be a bit skeptical when reading something like this. To me it’s akin to having to do calculations manually while there’s a calculator right beside you. For now, the technology might not yet be considered sufficiently trustworthy, but what if the clanker starts spitting out conclusions that equal a maintainer’s, like, 99% of the time? Wouldn’t (partial) automation of the process become extremely tempting, especially when the stack of pull requests starts piling up (because of vibecoding)?
Such a policy would be near-impossible to enforce anyway. In fact, we’d rather have them transparently disclose the use of AI than hide it and submit the code against our terms. According to our policy, any significant use of AI in a pull request must be disclosed and labelled.
And how exactly do you enforce that? It seems like you’re just shifting the problem.
Certain more esoteric concerns about AI code being somehow inherently inferior to “real code” are not based in reality.
I mean, there are hallucination concerns, and there are licensing conflicts. Sure, people can also copy code from other projects with incompatible licenses, but someone without programming experience is less likely to do so than when vibecoding with a tool directly trained on such material.
Malicious and deceptive LLMs are absolutely conceivable, but that would bring us back to the saboteur.
If Microsoft itself were the saboteur, you’d be fucked. They know the maintainers, because GitHub is Microsoft property, and so is the proprietary AI model directly implemented into the toolchain. A malicious version of Copilot could, hypothetically, be supplied to maintainers, specifically targeting this exploit. Microsoft is NOT your friend; it works closely with government organizations, which are increasingly interested in compromising consumer privacy.
For now, I do believe this to be a sane approach to AI usage, and I believe developers should have the freedom to choose their preferred environment. But the active usage of such tools does warrant a (healthy) dose of critique, especially with regard to privacy-oriented pieces of software; a field where AI has generally been rather invasive.


So: multiple nickel-titanium alloy tubes are stretched and released within the refrigerator, causing a temperature change in the alloy; that heat (pulled from the interior) is transferred to the calcium chloride fluid being pumped around through the tubes, to finally be transferred to the outdoor climate by use of an exterior heat exchanger. Something along those lines?
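If that reading is right, the effect can be ball-parked; a back-of-envelope sketch, where the latent heat and specific heat are assumed ballpark literature values for NiTi (not figures from the article):

$$\Delta T_{\text{ad}} \approx \frac{\Delta h_{\text{tr}}}{c_p} \approx \frac{15\ \mathrm{J/g}}{0.5\ \mathrm{J/(g\,K)}} = 30\ \mathrm{K}$$

i.e., stretching or releasing the tubes adiabatically swings the alloy’s temperature by tens of kelvin, which is the gradient the calcium chloride loop then shuttles between the interior and the exterior heat exchanger.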