“Is your UNIX Linux compatible?”
I saw the thumbnail and thought this was a map of The Netherlands
One of the Top 500 supercountries
Technically accurate
Aw thanks!
Surprised to learn that there were Windows-based supercomputers.
Those were the basic entry level configurations needed to run Windows Vista with Aero effects.
Meh, you just needed a discrete GPU, and not even a good one either. Just a basic, bare-bones card with 128MB of VRAM and pixel shader 2.0 support would have sufficed, but sadly most users didn’t even have that back in 06-08.
It was mostly the consumer’s fault for buying cheap garbage laptops with trash-tier iGPUs in them, and the manufacturer’s for slapping a “compatible with Vista” sticker on them and pushing those shitboxes on consumers. If you had a half-decent $700-800 PC then, Vista ran like a dream.
No, it was mostly the manufacturers' fault for implying that their machine would run its bundled operating system well. Well, that and Microsoft's fault for strong-arming them into pushing Vista on machines that weren't going to run it well.
APUs obviously weren't a thing yet, and it was common knowledge back then that contemporary iGPUs were complete and utter trash. I mean, with most integrated graphics they were so weak that you couldn't even play HD video or enable some of XP's very basic graphical effects.
Everyone knew that you needed a dedicated graphics card back then, so you can and should in fact put some blame on the consumer for being dumb enough to buy a PC without one, regardless of what the sticker said. I mean, I was a teenager back then and even I knew better. The blame goes both ways.
No, if you weren’t “involved in the scene” and only had the word of the person at the store, then you had no idea what an iGPU was, let alone why it wasn’t up to the task of running the very OS it was sold with.
You were a teenager at a time when teenagers’ average tech knowledge was much higher than before. That is not the same as someone who just learnt they now need one of those computer things for work. Not everyone had someone near them who could explain it to them. Blaming them for not knowing the intricacies of the machines is ridiculous. It was pure greed by Microsoft and the manufacturers.
Most computers sold are the lowest end models. At work we never got anything decent so it was always a bit of a struggle. Our office stayed with XP for way longer than we should have so we skipped Vista altogether and adopted Windows 7 a few years late.
No, Vista still sucked, with all the nagging pop-ups.
TweakUAC solved that problem.
Now the real question is what package manager are they using? apt or yum? Lol
Slackpkg or slackpkg+, without a doubt.
they specifically built it to only use snaps
They’re all Ubuntu distros lol
Portage (Gentoo)
They are using pacman obviously :)
nah theyre using nixpkgs
Also, Gnome or KDE?
It’s probably mostly CLI
Yeah, but gterminal or konsole?
edit: apparently people didn’t realise this was a really obvious joke
That’s a terminal emulator
thatsthejoke.jpg
TTY
Ah hahahaha!!!
Windows! Some dumbass put Windows on a supercomputer!
Probably need one, just for the benchmark comparisons.
A supercomputer running Windows HPC Server 2008 actually ranked 23rd in the TOP500 in June 2008.
I always forget that Windows Server even exists, because the name is so stupid. “Windows” should mean “GUI interface to OS.”
edit: fixed redundancy.
But Windows Server has a GUI. Although a server having a GUI (not a web UI, an actual desktop) is kinda stupid.
The GUI is optional these days, and there’s plenty of Windows servers that don’t use it. The recommended administration approach these days is PowerShell remoting, often over SSH now that Windows has a native SSH server bundled (based on OpenSSH).
That gives me the idea of Windows Server installed on bare metal, configured as a lightweight game runner (much like a Linux distro with a minimal WM).
I’ve seen people using slightly modified Windows Server as an unbloated gaming OS, but I’m not sure if running a custom minimal GUI on Windows Server is possible. You seem knowledgeable on the subject; with enough effort, is it possible?
I don’t think I know enough to answer that question, sorry!
I’d say having a GUI is not inherently stupid. The stupid part is, if I understand it correctly, the GUI being a required component and the primary access method.
Yeah. Thankfully, Windows Server cleaned up that stupidity, starting around 2006 and finishing around 2018.
Which all sounds fine until we meditate on the history that basically all other server operating systems have had efficient remote administration solutions since before 1995 (reasonable solutions existed before SSH, even).
Windows was over 20 years late to adopt non-graphical, low-latency (aka sane) options for remote administration.
I think it’s a big part of the reason Windows doesn’t appear much on this chart.
And Mac! Whatever that means 🤣
Prob Microsoft themselves
Ironically, even Microsoft uses Linux in its Azure datacenters, iirc
They use a mixture of Windows and Linux. They do use Linux quite a bit, but they also have a lot of Hyper-V servers.
True. Never meant to say they use Linux exclusively; thanks for clarification anyway!
Apple uses both Linux and Windows (not for datacenters) too.
Good point.
But still, the 30% efficient supercomputer.
Heh. I don’t think that number was ever official, but I heard it as well.
I don’t think either of the chart’s axes lists efficiency?
So basically, everybody switched from expensive UNIX™ to cheap “unix”-in-all-but-trademark-certification once it became feasible, and otherwise nothing has changed in 30 years.
Except this time the Unix-like took 100% of the market
It was too clear that this thing is just better.
BSD is mostly Unix too, so even if Unix didn’t have 100% because of Mac and Windows, it was like 99%.
BSD is more UNIX than Linux is, to be fair.
Mac is BSD
BSD is BSD-like
BSD is based on Unix, and Linux isn’t, so it is way more Unix than Linux is.
Maybe Windows is not used in supercomputers often because Unix and Linux are more flexible for the CPUs they use (POWER9, SPARC, etc.).
That’s certainly a big part of it. When one needs to buy a metric crap load of CPUs, one tends to shop outside the popular defaults.
Another big reason, historically, is that supercomputers didn’t typically have any kind of non-command-line way to interact with them, and Windows needed one.
Until PowerShell and Windows 8, there were still substantial configuration options in Windows that were 100% managed by graphical packages. They could be changed by direct file edits and registry editing, but that added a lot of risk. All of the “did I make a mistake” tools were graphical and so unavailable from the command line.
So any version of Windows stripped down enough to run on a supercomputer cluster was going to be missing a lot of features, until around 2006.
Since Linux and Unix started as command-line operating systems, both already had plenty of fully featured options for supercomputing.
More importantly, they can’t adapt Windows to their needs.
Yep the other reason.
Plus Linux doesn’t limit you in the number of drives, whereas Windows limits you to drive letters A through Z. I read it here.
You can mount drives against folders in Windows. So while D: is one drive, D:\Logs or D:\Cake can each be a different disk.
What in the world? I don’t think I’ve ever seen that in the wild
It’s common in the server world. KB article on it is here.
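If you’re curious how that works under the hood, the Win32 call behind folder mount points is SetVolumeMountPoint. Here’s a minimal C sketch (not from the KB article); the volume GUID is a made-up placeholder, list real ones with `mountvol`, and the target folder has to exist and be empty:

```c
/* Minimal sketch: mount a volume at an NTFS folder (e.g. D:\Logs) instead of
 * giving it its own drive letter. Run as Administrator. The volume GUID below
 * is a placeholder - list the real ones with `mountvol`. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Both paths must end with a trailing backslash, and the folder must
     * already exist and be empty. */
    LPCWSTR mountPoint = L"D:\\Logs\\";
    LPCWSTR volumeName = L"\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\";

    if (SetVolumeMountPointW(mountPoint, volumeName)) {
        wprintf(L"Mounted volume at %s\n", mountPoint);
    } else {
        wprintf(L"SetVolumeMountPointW failed, error %lu\n", GetLastError());
    }
    return 0;
}
```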
For people who haven’t installed Windows before, the default boot drive is G, and the default file system is C
So you only have 25 to work with (everything but G)
Almost: the default boot drive is C:, and everything gets mapped after that. So if you have a second HDD at D: and an optical drive at E:, any USBs you plug in would go to F:.
Why do you copy the boot files from C and put them in G during install then?
G can be mapped after boot (usually to removable drives)
Ok that would make sense tbh
So you’re telling me that there was a Mac supercomputer in '05?
https://en.wikipedia.org/wiki/System_X_(supercomputer)
G5
Oof, in only a couple years it was worthless.
Also known as Big Mac
haha
Funnily enough it was known for being quite cheap.
If I recall correctly they linked a bunch of Power Macs together with FireWire.
It was apparently later transitioned to Xserves.
Can we get a source for this image?
Sure. Added it to the post.
Wait what Mac?
The Big Mac. 3rd fastest when it was built and also the cheapest, costing only $5.2 million.
Interesting. It’s like those data centers that ran on thousands of Xboxes
Wha?
(searches interwebs)
Wow, that completely passed me by…
I think it was the PS3 that shipped with “Other OS” functionality, and they were sold a little cheaper than production costs would indicate, to make it up on games.
Only thing is, a bunch of institutions discovered you could order a pallet of PS3s, set up Linux, and have a pretty skookum cluster for cheap.
I’m pretty sure Sony dropped “Other OS” not because of vague concerns of piracy, but because they were effectively subsidizing supercomputers.
Don’t know if any of those PS3 clusters made it onto Top500.
It was 33rd in 2010:
In November 2010, the Air Force Research Laboratory created a powerful supercomputer, nicknamed the “Condor Cluster”, by connecting together 1,760 consoles with 168 GPUs and 84 coordinating servers in a parallel array capable of 500 trillion floating-point operations per second (500 TFLOPS). As built, the Condor Cluster was the 33rd largest supercomputer in the world and was used to analyze high definition satellite imagery at a cost of only one tenth that of a traditional supercomputer.
https://en.wikipedia.org/wiki/PlayStation_3_cluster
https://phys.org/news/2010-12-air-playstation-3s-supercomputer.html
OMG I can feel the heat emanating from that photo
Makes me think of how the PS2 had export restrictions because “its graphics chip is sufficiently powerful to control missiles equipped with terrain reading navigation systems”.
Some of those restrictions get stupid. We had a client ship us their hardware and they included “Laser Mouse” on the manifest. US border control would not allow delivery due to a laser being included LOL. Had they just entered it as a “Mouse”, the package would have been delivered.
That’s so friggin cool to think about!
I remember when 128-bit SSL encryption was export restricted in the mid-90s. When I first opened an online banking account, the bank sent a CD with a customized version of Netscape Navigator with 128-bit SSL, and the bank logo in place of the Netscape N.
3rd fastest
And 1st tastiest
That’s highly debatable.
Oh Xserve, we hardly knew ye 😢
Mac is a flavor of Unix, not that surprising really.
Mac is also derived from BSD, since it is built on Darwin.
Apple had its current desktop environment for its proprietary ecosystem built on BSD with their own twist, while supercomputers are typically multiuser parallel computing beasts, so I’d say it is really fucking surprising. Pretty and responsive desktop environments and breathtaking number crunchers are polar opposite kinds of product. Fuck me, you’ll find UNIX roots in Windows NT, but my flabbers would be ghasted if Deep Blue had dropped a Blue Screen.
I’m confused on why they separate BSD from Unix. BSD is a Unix variant.
Unix is basically a brand name.
BSD had to be completely re-written to remove all Unix code, so it could be published under a free license.
It isn’t Unix certified. So it is Unix-derived, but not currently a Unix system (which is a completely meaningless term anyway).
But OS X, macOS, and at least one Linux distro are/were UNIX certified.
Yup. It is all about paying the price; Microsoft could technically get Windows certified as UNIX. IBM did just that with its mainframe OS. Here’s a list of certified UNIX systems: https://www.opengroup.org/openbrand/register/
Microsoft could technically get Windows certified as UNIX.
I don’t think they could now that the POSIX subsystem and Windows Services for UNIX are both gone. Don’t you need at least some level of POSIX compliance (at least the parts where POSIX and Unix standards overlap) to get Unix certified?
It means nothing, it’s just a check you sign and then you get to say “I certify my OS is Unix”. The slightly more technical part is POSIX compliance, but modern OSs are such massive and complex beasts today that those compliances are tiny parts and are very slowly but very surely becoming irrelevant over time.
Apple made OSX Unix certified because it was cheap and it got them off the hook from a lawsuit. That’s it.
To make it more specific, I guess. What’s the problem with that? It’s like having a “people living on boats” category and a “people with no long-term address” category. You could include the former in the latter, but then you are just conveying less information.
Others have answered, but it is interesting to know the history of UNIX and why this came to be. BSD is technically UNIX-derived, but being more specific isn’t the reason why it has distinct branding. As with many evils the root is money, and there’s a lot that went into how it all happened, including AT&T being a phone monopoly.
https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc._v._Berkeley_Software_Design,_Inc.
And I recommend watching this informative and funny video about the history and drama behind it all: https://www.youtube.com/watch?v=g7tvI6JCXD0
So is Linux. So I guess the light blue is all other UNIX variants?
no, it’s not
EulerOS, a Linux distro, was certified UNIX.
I think this is a Ship of Theseus thing here that we’re going to argue about because at what point is it just UNIX-like and not UNIX?
UNIX-like is definitely a descriptor currently used for Linux.
Even the Wikipedia entry starts that way.
Yes, but it’s not Unix. That’s literally part of GNU/Linux’s name.
Mac OS is more Unix than Linux.
It’s Unix if you pay to have it certified (assuming it’s compatible to begin with). That’s basically it.
Some Linux distros have
Some commercial ones did at some point. I’m not sure if they still do.
The question is whether their users care or not I suppose.
As someone who worked on designing racks in the supercomputer space about 10 q5vyrs ago, I had no clue Windows and Mac even tried to enter the space.
about 10 q5vyrs ago
Have you been distracted and typed a password/PSK in the wrong field 8)
Lol typing on phone plus bevy. Can’t defend it beyond that
There was a time when a bunch of organisations made their own supercomputers by just clustering a lot of regular computers:
https://en.wikipedia.org/wiki/System_X_(supercomputer)
For Windows I couldn’t find anything. If you google “Windows supercomputer”, you just get lots of results about Microsoft supercomputers, which of course all run on Linux.
No, there was an HPC SKU of Windows Server 2003 and 2008: https://en.m.wikipedia.org/wiki/Windows_Server_2003#Windows_Compute_Cluster_Server
Microsoft earnestly tried to enter the space with a deployment system, a job scheduler and an MPI implementation. Licenses were quite cheap and they were pushing hard with free consulting and support, but it did not stick.
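For anyone who never touched that side of it: an MPI job on those clusters looked like MPI anywhere else, since MS-MPI implemented the standard MPI API. A minimal hello-world sketch in C (the compile/run line at the bottom is generic mpicc/mpiexec usage, not the exact MS tooling):

```c
/* Minimal MPI "hello world": every process reports its rank.
 * Uses only the standard MPI API (MPI_Init, MPI_Comm_rank, ...). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total process count */
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut the runtime down */
    return 0;
}

/* Roughly: mpicc hello.c -o hello && mpiexec -n 4 ./hello */
```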
but it did not stick.
Yeah. It was bad. The job of a Supercomputer is to be really fast and really parallel. Windows for Supercomputing was… not.
I honestly thought it might make it, considering the engineering talent that Microsoft had.
But I think time proves that Unix and Linux just had an insurmountable head start. Windows, to the best of my knowledge, never came close to closing the gap.
At this point I think it’s most telling that even Azure runs on Linux. Microsoft’s twin flagship products somehow still only work well when Linux does the heavy lifting and works as the glue between them.
Where did you find that Azure runs on Linux? I have been curious for a while, but Google refuses to tell me anything but the old “a variant of Hyper-V” or “Linux is 60% of the Azure workload” (not what I asked about!)
Where did you find that Azure runs on Linux?
I don’t know of anywhere that Microsoft confirms, officially, that Azure itself is largely running on Linux. They share stats about what workloads others are running on it, but not, to my knowledge, about what it is composed of.
I suppose that would be an oversimplification, anyway.
But that Azure itself is running mostly on Linux is an open secret among folks who spend time chatting with engineers who have worked on the framework of the Azure cloud.
When I have chatted with them, Azure cloud engineers have displayed huge amounts of Linux experience, while they sometimes needed to “phone a friend” to answer Windows Server edition questions.
For a variety of reasons related to how much longer people have been scaling Linux clusters than Windows servers, this isn’t particularly shocking.
Edit: To confirm what others have mentioned, what I can infer from chatting with MS staff suggests, more specifically, that Azure itself is mostly Linux running on a Hyper-V virtualization layer.
Thank you, I was hoping there was something official that I had missed.
Good question! I can’t remember.
I think I read a Microsoft blog or something like a decade ago that said they shifted from a Hyper-V-based solution to Linux to improve stability, but honestly it’s been so long I wouldn’t be shocked if I just saw it in a Reddit comment on a related article that I didn’t yet have the technical know-how to fully comprehend, and took it as gospel.
But, surely Windows is the wrong OS?
Windows is a per-user GUI… supercomputing is all about crunching numbers, isn’t it?
I can understand M$ trying to get into this market and I know Windows Server can be used to run stuff, but again, you don’t need a GUI on each node of a supercomputer; they’d be better off with DOS…?
But, surely Windows is the wrong OS?
Oh yes! To be clear, trying to put any version of Windows on a supercomputer is every bit as insane as you might imagine. From what I heard in the rumor mill, it went every bit as badly as anyone might have guessed.
But I like to root for an underdog, and it was neat to hear about Microsoft engineers trying to take the Windows kernel somewhere it had no rational excuse to run, perhaps by sheer force of will and hard work.
I could see the NT kernel being okay in isolation, but the rest of Windows coming along for the ride puts the kibosh on that idea.
Yeah, it was System X I worked on; our default was Red Hat. I forget the other options, but Windows and Mac sure as shit weren’t on the list.
Would the one made out of playstations be in this statistic?
I think you can actually see it in the graph.
The Condor Cluster with its 500 Teraflops would have been in the Top 500 supercomputers from 2009 till ~2014.
The PS3 operating system is a BSD, and you can see a thin yellow line in that exact time frame.
Yes, in the Linux stat. The OtherOS option on the early PS3 allowed you to boot Linux, which is what most, if not all, of the clusters used.
What would the other be?
TempleOS
When you really have to look deep into god’s mind you just have to put templeOS on a supercomputer.
If you install TempleOS on the fastest supercomputer Frontier, you get Event Horizon.
WARNING: Gory, disturbing picture
Do NOT network-enable TempleOS.
God will get angry if you do.
What movie/tv show is this image from?
Event Horizon
Praise be upon him
a glowie’s worst nightmare
How can there be N/A though? How can any functional computer not have an operating system? Or does just reading the really big MHz number off the CPU count as it being a supercomputer?
Early computers didn’t have operating systems.
You just plugged in a punch card or tape with the program you want to run and the computer executed those exact instructions and nothing else.
Those programs were specifically written for that exact hardware (not even for that model, but for that machine).
To boot up the computer, you had to put a number of switches into the correct position (0 or 1) to bring its registers into the correct state to accept programs.
So you were the BIOS and bootloader, and there was no need for an OS because the userspace programs told the CPU directly what bits to flip.
They of course had one, probably Linux or Unix. But that information about the cluster is not available.
Thanks for the links!