StickerMule claims it was an innocent mistake, but not sure how credible that is https://twitter.com/stickermule/status/1656058962247053312
Agreed, but for many services 2 or 3 nines is acceptable.
For the cloud storage system I worked on it wasn’t, and that had different setups for different customers: from a simple 3-node system (the smallest setup, mostly for customers trialling the solution) to a 3-geo setup with at least 9 nodes spread across 3 different datacenters.
For the financial system, we run a live/live/live setup, where we’re running a cluster at 3 different cloud operators, and the client is expected to know about all of them and do failover. That obviously requires a little more complexity on the client side, but in many cases developers or organisations control both ends anyway.
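The client side of a live/live/live setup like that can be sketched roughly as follows. This is a hypothetical illustration, not the actual system: the endpoint names are made up, and the transport is injected so the failover logic stays testable.

```python
# Hypothetical client-side failover across three live clusters.
# Assumes each cluster exposes the same request API; names are illustrative.
import random

ENDPOINTS = [
    "https://cluster-a.example.com",
    "https://cluster-b.example.com",
    "https://cluster-c.example.com",
]


def request_with_failover(payload, transport, endpoints=ENDPOINTS):
    """Try each cluster in shuffled order; return the first success.

    transport(endpoint, payload) performs the actual call and raises
    ConnectionError on failure; it is a parameter so tests can fake it.
    """
    order = list(endpoints)
    random.shuffle(order)  # spread load across the live/live/live clusters
    last_error = None
    for endpoint in order:
        try:
            return transport(endpoint, payload)
        except ConnectionError as exc:
            last_error = exc  # remember the failure, fall through to the next cluster
    raise RuntimeError("all clusters failed") from last_error
```

The shuffle doubles as crude load balancing; a real client would also track per-cluster health and back off from endpoints that keep failing.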
Netflix is obviously at another scale, I can’t comment on what their needs are, or how their solution looks, but I think it’s fair to say they are an exceptional case.
I don’t remember the details but a lot centered around the build system IIRC.
Sorry, yes, that was durability. I got it mixed up in my head. Availability had lower targets.
But I stand by the gist of my argument: you can achieve a lot with a live/live system, or a 3-node system with a master election, or…
High availability doesn’t have to equate to high cost or complexity, if you take it into account when designing the system.
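To make the 3-node-with-master-election idea concrete, here is a minimal bully-style sketch: the highest-id reachable node wins, but only if a majority is up. This is an illustrative toy, not any particular system’s election protocol; the ids and the reachability predicate are assumptions.

```python
# Minimal bully-style master election for a small (e.g. 3-node) cluster.
# Illustrative only: real systems (Raft, ZooKeeper, etc.) add terms, leases,
# and message exchanges to make this safe under partitions.


def elect_master(node_ids, reachable):
    """Return the highest-id reachable node, or None if no quorum.

    node_ids: iterable of ints identifying cluster members.
    reachable: predicate reporting whether a node answers health checks.
    """
    alive = [n for n in node_ids if reachable(n)]
    # Require a strict majority, otherwise a partitioned minority
    # could elect its own master (split-brain).
    if len(alive) <= len(node_ids) // 2:
        return None
    return max(alive)
```

With 3 nodes this tolerates one failure: two survivors still form a majority and agree on the same master.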
Linksys was part of Cisco. They had very deep pockets, but the FSF & SFC prevailed regardless.
I doubt the FSF or SFC will go after Nvidia. This has been a long-standing issue, and I haven’t heard of any lawsuits being brought over it, even before Nvidia had more money than God.
Free Software Foundation, Inc. v. Cisco Systems, Inc. disagrees. The FSF sued Cisco, Linksys’s parent company, for violating the licenses of GCC, glibc, etc.
And they were forced in court to release all their WRT stuff under GPL, which is how OpenWRT got its start.
If you really need the scale of 2000 physical machines, you’re at a scale and complexity level where it’s going to be expensive no matter what.
And I think if you need that kind of resources, you’ll still be cheaper off doing it yourself.
I used to work on an on-premises object storage system where we required double-digit “nines” of availability. High availability is not rocket science. Most scenarios are covered by having 2 or 3 machines.
I’d also wager that using the cloud properly is a different skillset than properly managing or upgrading a Linux system, not necessarily a cheaper or better one from a company point of view.
Got to agree with @Zushii@feddit.de here, although it depends on the scope of your service or project.
Cloud services are good at getting you up and running quickly, but they are very, very expensive to scale up.
I work for a financial services company, and we are paying 7-digit monthly AWS bills for an amount of work that could realistically be done on one really big dedicated server. And now that some of our customers require us to support multiple cloud providers, we’ve spent a TON of effort untangling ourselves from SQS/SNS and other AWS-specific technologies.
Clouds like to tell you:
- it’s cheaper
- it’s simpler to operate
- it gets you up and running faster
The last item is true, but the first two are only true if you are running a small service. Scaling up in a cloud is not cost effective, and maintaining a complicated cloud architecture can be FAR more complicated than managing a similar centralized architecture.
I’ve heard gaming on Debian isn’t as “out of the box” as it is on Ubuntu.
Depends on what your hardware is. Debian typically ships somewhat older versions of pretty much everything. If you have newish hardware, you might need to run a newer kernel than Debian ships by default for full support. When that happens to me, I usually run the Liquorix kernel packages, which have been around for more than a decade and have never caused me problems on Debian.
For some graphics drivers, you might need a newer Mesa, which is typically available from Debian’s own backports.
Don’t do either unless you know you need to, because both lead to a somewhat higher risk for an unstable system.
You can just install Steam using Flatpak, and it works just fine.
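For reference, the pieces above might look something like this on Debian 12 (“bookworm”). The suite name and sources path are assumptions for that release; adjust them to yours, and as said, only pull from backports if your hardware actually needs it. (Liquorix also documents its own installer on liquorix.net.)

```shell
# Enable bookworm-backports (assumes Debian 12; adjust the suite to your release)
echo 'deb http://deb.debian.org/debian bookworm-backports main contrib non-free-firmware' \
  | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update

# Newer kernel and Mesa from backports, only if your hardware needs them
sudo apt install -t bookworm-backports linux-image-amd64 mesa-vulkan-drivers

# Steam via Flatpak (assumes the Flathub remote is already configured)
flatpak install flathub com.valvesoftware.Steam
```

Backported packages track the current stable release’s libraries, so this carries less breakage risk than mixing in testing/unstable, but it is still a step away from plain stable.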
No he can’t, but Red Bull can. Perez already won a few races at the start of the season.
I’m not talking about buying a Steam Deck, I’m talking about the effect it has had on Linux gaming in general.
I mostly play on a laptop with a Radeon GPU and it’s been absolutely issue free gaming wise.
I used to have this problem but not since the Steam Deck is out.
Before, I was always frustrated fiddling with Lutris, winetricks, etc. But now it’s only been plug and play for me, just let Steam take care of it. Zero compatibility issues. In fact, recently I’ve had more issues with native games than Proton.
Same. Gentoo taught me so much. Wouldn’t run it today, though. Ain’t got time for that.
I don’t know. I like Debian. My home server also doubles as a desktop sometimes and it does a good job.
I’m mostly not super interested in cutting-edge versions. I run a newer kernel and Mesa than default Debian, but the rest is just fine. I’m fine with Firefox ESR, and with lagging a little bit behind the state of the art.
For proprietary, non-free software I’d much prefer them to be sandboxed in Flatpak, thank you very much. So yeah, let Flatpak integrate payments!
For open source keystone applications, like my browser or my text editor, please let me have an unsandboxed native package.
Yeah. I switched away from Ubuntu for all this crap.
I moved to Fedora for my laptop & desktop, and Debian for my home server. I’m considering switching everything to Debian eventually, but there are a couple of dedicated repos that make using Fedora on my laptop much easier for now.
Surely Elon would prefer the old Lucid fork, https://www.xemacs.org/
I couldn’t find one locally either. Ended up ordering a returned product from Amazon abroad, a friend of mine then shipped it over. The stuff I do to avoid Nvidia…
Been using this with Sway since the start of the year, and it’s been wonderful.