Seconded. OP, if you can write Markdown, Hugo will turn it into a website.
🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
I second Mint.
Linux is a kernel; a distribution is a kernel plus user space tools. Most distributions are mainly configurations tuned for specific use cases - work, gaming, servers, etc. For example, the GUI part of a base OS can easily account for over half of the disk space and memory use; if you’re running a server to serve web pages, you don’t need all that crap.
Unlike Windows or OSX, Linux gives you literally dozens of GUIs to choose from, and most distros focus on setting up one really well as the default.
Note that you can add and, for the most part, remove any software on any distro, so you could start with a server distro or a gaming distro and, by adding or removing packages, end up with essentially the same system.
The most significant difference between most distros is the package manager, the thing you use to install software and manage dependencies. Honestly, that’s not important at this point, but it will be the biggest distinction after you’ve been using Linux for a while.
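To make that concrete, here’s a rough sketch of the same “install a package” step across a few distro families; the distro IDs and commands are a small illustrative sample, not an exhaustive or authoritative mapping:

```python
# Rough sketch only: map a distro (from /etc/os-release) to its package manager's
# install command. The IDs and commands here are an illustrative sample.
import subprocess

INSTALL_COMMANDS = {
    "debian": ["apt", "install", "-y"],
    "ubuntu": ["apt", "install", "-y"],
    "linuxmint": ["apt", "install", "-y"],
    "fedora": ["dnf", "install", "-y"],
    "arch": ["pacman", "-S", "--noconfirm"],
    "opensuse-tumbleweed": ["zypper", "install", "-y"],
}

def detect_distro() -> str:
    """Read the ID= field from /etc/os-release, present on most modern distros."""
    with open("/etc/os-release") as f:
        for line in f:
            if line.startswith("ID="):
                return line.strip().split("=", 1)[1].strip('"')
    return "unknown"

def install(package: str) -> None:
    """Same request, different package manager underneath."""
    cmd = INSTALL_COMMANDS.get(detect_distro())
    if cmd is None:
        raise RuntimeError("don't know this distro's package manager")
    subprocess.run(["sudo", *cmd, package], check=True)

# install("vlc")
```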
So: Mint. It’s a desktop/laptop distro, it’s designed to be easy to install and use, and you can mostly use it without ever having to drop to the command line. When my dad, who’s approaching 80, bought a laptop last year and didn’t want to register with Microsoft or give them his credit card, I walked him through - over the phone - downloading Mint, burning it to a USB stick, and installing it. Most of his questions were things like finding an image burner, which keyboard/layout to choose (during install), and which type of install to choose (HD partitioning); nothing he couldn’t have figured out by making guesses and mostly choosing the defaults. Since then, I’ve received one call about setting up the printer, which turned out to be a printer issue because his son-in-law had changed the WiFi password and not updated the printer (he obviously doesn’t use his printer much).
Mint is an excellent first distro. It may not be your last distro, but it’s an easy conversion option. You don’t have to update the software on it often, it’s easy to use - familiar, for Windows folks - and really just an all-around great first choice.
Three things I do recommend:
Suggestion 3 gets you two things: first, it makes changing distributions in the future far easier; all you’ll do is replace root and you’ll keep your home partition - all your personal, user files: music, docs, pictures, etc. Second, btrfs will let you use snapper, which is a tool that takes snapshots of your filesystem. Snapper is similar to Time Machine on OSX; there’s even a Time Machine-like GUI tool for browsing and accessing snapshots.
Start with Mint. You can always change later, and if you partition your drive like I suggest, it is pretty easy to switch.
Go ahead… you can whisper it to me
Who do I write?
Opening an office is a completely different thing; there is an enormous difference between offshore contractors and offshore employees. That much, I’ll agree with.
In the US, though, it’s usually cost-driven. When offshore mandates come down, it’s always in terms of getting more people for less cost. However, in most cases, you don’t get more quality code faster by throwing more people at it. It’s very much a case of “9 women making a baby in one month.” Rarely are software problems solved with larger teams; usually, a single, highly skilled programmer will do more for a software project than 5 junior developers.
Not all projects are the same. Sometimes what you do need is a bunch of people. But that’s far more the exception than the rule, and yet Management (especially in companies where software isn’t the core competency) almost always assumes the opposite.
If you performed a survey in the US, I would bet good money that in the majority of cases the decision to offshore was not made by line managers, but by someone higher in the chain who did not have a software engineering degree.
Thing is, outsourcing never stopped. It’s still going strong, sending jobs to whichever country is cheapest.
India is losing out to Indonesia, to Mexico, and to S American countries.
It’s a really stupid race to the bottom, and you always get what you pay for. Want a good development team in Bengaluru? It might be cheaper than in the US, but not that much cheaper. Want good developers in Mexico? You can get them, but they’re not the cheapest. And when a company outsources like this, they’ve already admitted they’re willing to sacrifice quality for cost savings, and you - as a manager - won’t be getting those good, more expensive developers. You’ll be getting whoever is cheapest.
It is among the most stupid business practices I’ve had to fight with in my long career, and one of the things I hate the most.
Developers are not cogs. You can’t swap them out like that, and any executive who thinks you can is a fool and an incompetent idiot.
This.
I use single partitions, because now that everything is SSD, partition failures are almost nonexistent. I don’t fully understand the mechanics of why spinning disks were more prone to partition-level failures, but when SSDs start to fail, it rarely seems to be something that can be isolated by position.
But I do isolate by subvolume, and for the reason you give: snapshots. I snapshot root only when something changes, but do hourly snapshots of home. It keeps data use more manageable. Nightly backups, and I never have more than 24 home snapshots at a time.
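Snapper’s built-in timeline timer and cleanup can do all of this for you, but roughly, the hourly job amounts to something like this sketch (it assumes a snapper config named “home” already exists and NUMBER_LIMIT is set to 24 in its config):

```python
# Sketch of an hourly /home snapshot job, run from cron or a systemd timer.
# Assumes `snapper -c home create-config /home` was already done, and that
# NUMBER_LIMIT="24" is set in /etc/snapper/configs/home so snapper itself
# prunes anything beyond the last 24.
import subprocess

subprocess.run(
    [
        "snapper", "-c", "home", "create",
        "--description", "hourly",
        "--cleanup-algorithm", "number",  # let snapper's cleanup enforce the 24-snapshot limit
    ],
    check=True,
)
```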
I think Android updates intentionally made the Pixel C slower. It was noticeable with every update, right up to the point they stopped supporting it. I’d downgrade to an earlier version, but there’s such poor support in Lineage that I’m barely able to run the version that’s on there now.
Such a shame, because it’s still an amazingly beautiful device.
I’m 100% with you. I want a Light Phone with a changeable battery and the ability to run 4 non-standard phone apps that I need to have mobile: OSMAnd, Home Assistant, Gadget Bridge, and Jami. Assuming it has a phone, calculator, calendar, notes, and address book - the bare-bones phone functions - everything else I use on my phone is literally something I can do probably more easily on my laptop, and is nothing I need to be able to do while out and about. If it did that, I would probably never upgrade; my upgrade cycle is on the order of every 4 years or so as is, but if you took off all of the other crap, I’d use my phone less and upgrade less often.
The main issue with phones like the Light Phone is that there are those apps that need to be mobile, and they often aren’t available there.
ntfy is in the app store, so you don’t have to side-load it. I don’t know how many iOS apps use ntfy, but many Android OSS apps will ask you which notification system you want them to use.
I was just clarifying that this isn’t one of the XKCD proliferation cases. Apple and Google’s push notifications are proprietary and give them full access to your notifications. Unified Push is the OSS alternative, and this KDE enhancement doesn’t create another one: it uses the de facto standard OSS push notification specification.
The fact that ntfy is in the Apple app store makes me suspect there must be some number of iOS apps that can be configured to use Unified Push.
since all apps are designed to run well on budget phones from 5 years ago, there’s no reason to upgrade.
5 years, maybe, but any more is stretching it. And not getting system upgrades anymore is problematic. Unless you own a particular model of phone, de-Googled Android can be hard to come by.
For example, I have a 7-year-old Pixel C. By the time Google stopped shipping system updates for it, I didn’t want them anymore, as every release made the device slower and more unstable. After some effort, I was finally able to install a version of Lineage, which itself has problems, including no updates in years. There’s a lot of software that is incompatible with my device, both from Aurora and F-Droid.
Android isn’t Linux; Google doesn’t care about maintaining backward compatibility on old devices, much less performance, and there’s no army of engineers making sure things keep working just because there’s some server running in a walled-up closet that no one can find.
Google deprecates features and ABIs in Android, apps update and suddenly aren’t backwards compatible.
5 years, maybe. The entire industry is addicted to users upgrading their phones, and everyone gets a piece of that pie. There are no actors, except perhaps app developers, who have any interest in keeping old phones running. Telecoms upgrade their wireless networks - the internet connection in my 8 y/o car, and half its navigation features, died the day AT&T decided to stop supporting 3G; phone makers make no money if you don’t buy new phones; and maintaining backwards compatibility costs Google money, which they’d rather siphon off to shareholders.
I meant the server can deliver notifications via ntfy:
https://docs.ntfy.sh/examples/#home-assistant
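For anyone wondering what that looks like on the wire, publishing to ntfy is just an HTTP request; something like this rough sketch (the topic name and the public ntfy.sh server are placeholders, and the headers are optional):

```python
# Rough sketch: publishing a notification to an ntfy topic is a plain HTTP POST.
# "mytopic" and ntfy.sh are placeholders - point it at your own server/topic.
import requests

requests.post(
    "https://ntfy.sh/mytopic",
    data="Garage door has been open for 10 minutes".encode("utf-8"),
    headers={"Title": "Home Assistant", "Priority": "high", "Tags": "warning"},
    timeout=10,
)
```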
Yeah, I don’t think the mobile app communicates with the server over anything but the web API.
The developers get less, but it ends up costing more to employ people in the EU. In the US, the rule of thumb - for white collar, non-executive jobs, at least - is 1.4x the salary for TCE (total cost of employment; it’s often reasonable to round up to 1.5). For EU employees, it’s between 1.5 and 1.8. Norway is 1.7; I don’t know what Sweden is, but I’d assume it’s around the same.
The social welfare benefits are far better in those countries, and it’s companies paying for those in that overhead. The better the social welfare net, the higher the costs. There may be exceptions, but they’re the minority. If you want really cheap labor, go to countries with nearly no social welfare.
Offshoring to reduce costs isn’t the point; for the most part, you get what you pay for. Even offshoring to countries with notoriously cheap labor, if you want good programmers, you end up paying much closer to domestic costs. Highly educated, experienced programmers command higher prices, regardless of the country, but when companies offshore labor for cost control, the cost of the labor is usually the most important decision factor and line managers are left with the consequences.
Regardless, the difference in TCE isn’t going to make a huge difference in how far a million dollars goes. People are expensive.
Yup.
Say you’re paying just under market rates for mid-range non-web developers: about 90k ea. That costs you half again as much or more once you factor in benefits and overhead, so call it 150k ea. A million pays for 6 devs for a year, and leaves you some change.
Then you have operating costs: at least one person in each of HR, finance, legal, and IT ops - at a bare minimum. You have equipment and utility costs. And we haven’t even gotten to management; even if everyone reported to a single person, 10 direct reports is stretching it, and they’re not doing other jobs like networking and seeking other funding, so you need people for that.
In a bare-bones organization paying people less than market rates, a million dollars probably buys the foundation between 3 and 6 months of operating runway.
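Back-of-the-envelope, using the rough numbers above:

```python
# Back-of-the-envelope runway math, numbers from the comment above.
FULLY_LOADED = 150_000   # ~90k salary plus benefits and overhead, per dev per year
BUDGET = 1_000_000

devs_for_a_year = BUDGET // FULLY_LOADED                   # -> 6 devs, ~100k change
dev_burn_per_month = devs_for_a_year * FULLY_LOADED / 12   # -> 75k/month on devs alone

print(devs_for_a_year, dev_burn_per_month)
# Layer HR, finance, legal, IT ops, equipment, and management on top of that burn
# and the runway shrinks toward the 3-6 month range.
```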
It turns out, in this case it isn’t. This is about a KDE library (service?) that uses Unified Push, which is a standard implemented by servers like ntfy, Nextpush, and Gotify. If you use any f-droid apps, you’re probably already using Unified Push. Home Assistant uses it for mobile notifications, too.
It is, probably, the third biggest notification protocol after Google’s and Apple’s, only it doesn’t route through their servers or provide them with more of your data to harvest and sell.
Unified Push is a good thing. It looks like KDE just makes it accessible to KDE application developers through the KDE libraries.
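If you’re curious what the plumbing looks like, here’s a rough sketch of the subscribing side using ntfy’s streaming JSON endpoint; a UnifiedPush distributor app does essentially this on your behalf and hands each message to whichever app registered for the topic. The topic and server here are placeholders:

```python
# Rough sketch: subscribe to an ntfy topic via its streaming JSON endpoint.
# "mytopic" and ntfy.sh are placeholders.
import json
import requests

with requests.get("https://ntfy.sh/mytopic/json", stream=True, timeout=(10, None)) as resp:
    for line in resp.iter_lines():
        if not line:
            continue                      # skip empty lines in the stream
        event = json.loads(line)
        if event.get("event") == "message":   # ignore "open"/"keepalive" events
            print(event.get("title"), event.get("message"))
```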
Huh. It’s a little off in some details, but does a pretty good neckbeard.
I’ve been using Arch for several years, including off-beat offshoots like Artix, and have never once compiled a kernel on it. The last time I compiled a kernel was a couple of decades ago when I was running Gentoo.
Arch is mainly a precompiled binary distribution; the AUR is where you get the compile-from-source packages, and that’s the community repo.
Arch users may have a reputation for acting superior, and I’ll admit that the Arch wiki justifies a lot of the “look it up yourself” mentality, just because it’s so comprehensively good. But I think GPT misses the mark on a couple of points, like extra fingers on a Stable Diffusion person.
This is what I recommend as well. The various KeePasses all do pretty good jobs of merging databases in case of sync conflicts, and you can utterly ignore whether you’re online or not. Plus, there’s a really fantastic tool, written by a veritable genius of a developer, that lets you use a KeePass DB as a secret service on your desktop.
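The nice part is that this all goes through the standard freedesktop Secret Service D-Bus API, so any desktop app can talk to whatever provider backs it. A rough sketch using the third-party secretstorage Python package (the attribute names are made up for illustration):

```python
# Rough sketch: talk to whatever Secret Service provider is running (a KeePass-backed
# tool, KeePassXC, GNOME Keyring, ...) over D-Bus via the "secretstorage" package.
# Attribute names below are illustrative.
import secretstorage

conn = secretstorage.dbus_init()
collection = secretstorage.get_default_collection(conn)
if collection.is_locked():
    collection.unlock()

# Store a secret...
collection.create_item(
    "example.com login",
    {"service": "example.com", "user": "me"},
    b"hunter2",
)

# ...and find it again by its attributes.
for item in collection.search_items({"service": "example.com"}):
    print(item.get_label(), item.get_secret())
```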
Your use case is obviously different, but I’ve gone years between system upgrades. I mostly do OSS coding, or work stuff; not gaming. The only case I can imagine needing to upgrade my little Ryzen with 16 cores - a laptop CPU - is if it becomes absolutely imperative that I run AI models on my desktop. Or if Rust really does become pervasive; compiling Rust programs is almost as bad as compiling Haskell, and will take over my computer for minutes at a time.
When I got this little micro, the first thing I did was upgrade it to 64GB of RAM, because that’s the one thing I think you can never have too much of, especially with the modern web and all the shit that brings with it; Electron apps and so on absolutely chew up memory. The one good thing about the Rust trend is better memory use, so the crappy compile times are somewhat forgivable.