

It’s weird to me that you can click a link and, without your consent, JS code gets downloaded from wherever and runs on your computer.
NoScript is always on for me (on my personal PC). Sites that don’t load at all are probably not worth visiting.
I am also @lsxskip@mastodon.social
PH-trees can do range and nearest-neighbor queries across N dimensions very quickly. I haven’t used one for a single dimension, but I’d imagine it would work fine.
Can you share sample code I can try, or documentation I can follow, for using an AMD GPU that way (shared, virtualized, using only open source drivers)?
You really piqued my interest. I use docker/podman.
With an AMD graphics card, eglinfo on the host shows an AMD Radeon card and a matching driver.
In the container, without --gpus=all, it shows the card is unknown and the driver is “swrast” (so just CPU fallback).
When I add --gpus=all, it fails with:

    docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]]
I was doing a bad job searching before. It turns out AMD can share the GPU too; it just works a little differently in terms of how you launch the container. https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/amdgpu-install.html#amdgpu-install-dkms
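For anyone else who lands here, the launch looks roughly like this (a minimal sketch based on my reading of those docs; the image name is a placeholder, and the exact group/seccomp flags may vary by setup):

    # AMD path: no special container runtime needed; just pass the kernel
    # compute device (KFD) and the DRM render nodes through to the container.
    docker run -it --rm \
      --device=/dev/kfd \
      --device=/dev/dri \
      --group-add video \
      --security-opt seccomp=unconfined \
      <your-rocm-or-mesa-image> eglinfo

If it works, eglinfo inside the container should report the Radeon driver instead of swrast.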
But sadly my AMD GPU is too old/junk to have current driver support.
Anyways, appreciate the reply! Now I can mod my code to run on cheaper cloud instances.
(Note I’m an OpenGL/3D app developer, but probably OpenCL works about the same architecturally)
AFAIK it’s only NVIDIA that allows containers shared access to a GPU on the host.
With the majority of code being deployed in containers, you end up locked into the NVIDIA ecosystem even if you use OpenCL. So I guess people just use CUDA, since the container requirement limits them to NVIDIA anyways.
That’s from my experience using OpenGL headless. If I’m wrong please correct me; I’d prefer being GPU agnostic.
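To make “shared access” concrete, this is the kind of launch I mean (a minimal sketch assuming Docker plus the NVIDIA Container Toolkit installed on the host; the image name is a placeholder):

    # NVIDIA path: the nvidia-container-toolkit registers a GPU device driver
    # with Docker, which is what --gpus=all asks for; without the toolkit,
    # Docker reports "could not select device driver" instead.
    docker run --rm --gpus=all <your-gl-image> eglinfo

Several containers can be started this way against the same host GPU, which is the sharing I’m talking about.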
Hired!
I bet the people you work with are very happy to have you as a lead.
I’ve been in this scenario and I didn’t wait for layoffs. I left and applied my skills where shit code is not tolerated, and quality is rewarded.
But in this hypothetical, we didn’t get this shit code because management encouraged the right behavior and gave people time to make it right. They’re going to keep the yes men and fire the “unproductive” ones (and I know full well that adding to the pile is not productive in the long run, but what does the management overseeing this mess think?).
To be fair, if you give me a shit code base and expect me to add features with no time to fix the existing ones, I will also just add more shit on the pile. Because obviously that’s how you want your codebase to look.
This should be a Venn diagram with zero overlap, lol
MapLibre (https://maplibre.org/) offers a beautiful open source solution. There are affordable open source options for OSM base maps too (https://github.com/protomaps/basemaps), and you can host the whole thing as a single static file.
No one should be paying Google per API key :)
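If you want to kick the tires, here is a minimal sketch using the pmtiles command-line tool from the Protomaps project (the input archive and bounding box are placeholders, and flags can differ between versions):

    # Cut the area you care about out of a Protomaps build into one
    # static .pmtiles file.
    pmtiles extract <planet-or-region.pmtiles> mycity.pmtiles \
      --bbox=<minlon>,<minlat>,<maxlon>,<maxlat>

    # Serve it locally for testing; in production, any static host that
    # supports HTTP range requests (object storage, a CDN, plain nginx) works.
    pmtiles serve .

MapLibre can then load the file through the PMTiles protocol adapter from the pmtiles JS library, so there’s no tile server to run at all.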
True, but they were still resource constrained, which might be why they ended up with a model with lower resource requirements.
The scary part to me (noted in the article as well) is not so much the technical hack as the amount of data they are collecting.
Subaru has had an ongoing issue where the telematics unit drains the battery while the car is parked, especially if it’s parked out of reach of cell towers. With the amount of data they are sending, it’s not surprising.
There is no need for the car to report its position whatsoever unless I request assistance.
Should be a nice salary boost for developers in a year or two when all these companies desperately need to rehire to fix whatever AI slop mess they have created.
And I hope every developer demands 2x their current salary if they are tasked with re-engineering that crap.
Yup. If source is not available I’m not using it if I have any choice in the matter. Binary distribution is nice, but I’d rather have source.
Plus I’m sure some kind soul has created a build pipeline that autogenerates binaries from the source. I can always either use that or clone and customize it. It’s a natural separation—as a dev I’d like my responsibility to end at “I merged working code to trunk”.
+1 for feeder
Uhh he should know all the Elite hackers call it Tracer-T
Remember this blast from the past?
Those mega-corporations have intentionally misused the term “algorithm,” which implies an unbiased method of ranking or sorting. What they are actually using is more like a human-curated list of items to promote, one that supports their self-serving goals.
SO is rapidly fading into irrelevance, but we’re all still writing code anyways. Seems like the problem will solve itself.
Everything’s computer