Most Arch users are casuals who finally figured out how to read a manual. Then you have the 1% of Arch users who are writing the manual…
It’s the Gentoo and BSD users we should fear and respect, walking quietly with a big stick of competence.
Yeah, that’s the thing.
The gaming market only barely exists at this point. That’s why Nvidia can ignore it for as long as they want to.
~~Peasants~~ gamers buy ~~cheap inference cards~~ gaming cards.
The absolute majority of Nvidia’s sales globally are top-of-the-line AI SKUs. Gaming cards are just a way of letting data scientists and developers have cheap CUDA hardware at home (while allowing some Cyberpunk), so they keep buying NVL clusters at work.
Nvidia’s networking division is probably a greater revenue stream than gaming GPUs.
I have fucked around enough with R’s package management. Makes Python look like a god damn dream. Wrapping containers around it is just polishing a turd. I still have nightmares from building containers with R in automated pipelines, ending up at like 8 GB per container.
Also, good luck getting reproducible container builds.
Regarding locales - yes, I mentioned that. That’s a shitty design decision if I ever saw one. But within a locale, most Excel documents from last century onwards should work reasonably well. (Well, normal Excel files. Macros and VB really shouldn’t work…) And it works on normal office machines, and you can email the files, and you can give them to your boss. And your boss can actually do something with them.
I also think Excel should be replaced by something. But not R.
R, the language where dependency resolution is built upon thoughts and prayers.
Say what you want about Excel, but compatibility is kinda decent (ignoring locales and DNA sequences). Meanwhile, good luck replicating your R installation on another machine.
The H200 has a very impressive bandwidth of 4.89 TB/s, but for the same price you can get 37 TB/s spread across 58 RX 9070s. Whether this actually works in practice, I don’t know.
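Back-of-the-envelope (the per-card bandwidth and both street prices here are assumptions, not quotes):

```python
# Rough aggregate-bandwidth-per-dollar check; prices and specs are assumed.
h200_bw = 4.89            # TB/s, H200 memory bandwidth
h200_price = 30_000       # USD, assumed street price
rx9070_bw = 0.64          # TB/s (~640 GB/s) per RX 9070, assumed
rx9070_price = 515        # USD, assumed street price

n_cards = h200_price // rx9070_price      # -> 58 cards for the same money
agg_bw = n_cards * rx9070_bw              # -> ~37 TB/s aggregate

print(f"{n_cards} cards, {agg_bw:.1f} TB/s aggregate vs {h200_bw} TB/s on one H200")
# Aggregate bandwidth only counts if the workload shards cleanly across
# 58 separate pools of 16 GB VRAM with little cross-card traffic.
```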
Your math checks out, but only for some workloads. Other workloads scale out like shit, and then you want all your bandwidth concentrated. At some point you’ll also want to consider power draw:
Now include power and cooling over a few years and do the same calculations.
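Something like this, with the wattages, electricity price and PUE all being assumed numbers:

```python
# Three-year power + cooling sketch; every figure below is an assumption.
H200_W = 700              # assumed board power, one H200
RX9070_W = 220            # assumed board power, one RX 9070
N_CARDS = 58
YEARS = 3
USD_PER_KWH = 0.15        # assumed electricity price
PUE = 1.4                 # cooling/overhead multiplier, assumed

def power_cost(watts):
    kwh = watts / 1000 * 24 * 365 * YEARS * PUE
    return kwh * USD_PER_KWH

print(f"1x H200:     ${power_cost(H200_W):,.0f}")
print(f"58x RX 9070: ${power_cost(N_CARDS * RX9070_W):,.0f}")
# Roughly $4k vs $70k at these assumptions: the consumer stack pulls ~18x
# the power, before you even count the extra servers needed to host 58 cards.
```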
As for apples and oranges, this is why you can’t look at the marketing numbers, you need to benchmark your workload yourself.
Well, a few issues:
For fun, home use, research or small time hacking? Sure, buy all the gaming cards you can. If you actually need support and have a commercial use case? Pony up. Either way, benchmark your workload, don’t look at marketing numbers.
Is it a scam? Of course, but you can’t avoid it.
Your numbers are old. If you are building today with anyone so much as mentioning AI, you might as well consider 100 kW/rack as "normal". An off-the-shelf CPU today runs at 500 W, and you usually have two of them per server, along with memory, storage and networking. With old-school 1U pizza boxes, that’s basically 100 kW/rack. If you start adding GPUs, just double or quadruple power density right off the bat. Of course, assume everything is direct liquid cooled.
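Napkin math, where the non-CPU draw per box is an assumption:

```python
# Rack density sketch; the "other" bucket per server is an assumed figure.
CPU_W = 500               # modern server CPU at full load
CPUS_PER_SERVER = 2
OTHER_W = 800             # DRAM, NVMe, NICs, fans, PSU losses (assumed)
SERVERS_PER_RACK = 42     # 1U pizza boxes, full rack

server_w = CPUS_PER_SERVER * CPU_W + OTHER_W
rack_kw = SERVERS_PER_RACK * server_w / 1000
print(f"{server_w} W/server -> {rack_kw:.0f} kW/rack")   # ~1.8 kW -> ~76 kW
# Load the "other" bucket harder or add a couple of GPUs per box and you
# blow straight past 100 kW, which is why everything ends up liquid cooled.
```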
I kinda get why organisations don’t migrate.
IPv6 just hands you a bag of footguns. Yes, I want all my machines to have random unpredictable IPs. Having some additional link-local garbage can’t hurt either, can it? Oh, and you can’t run exhaustive scans over your IP ranges to map out your infra.
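For scale, here’s why the old "sweep the range" inventory trick dies, assuming just one /64 and an optimistic probe rate:

```python
# Exhaustively scanning even a single IPv6 /64 is hopeless.
hosts_in_slash64 = 2 ** 64
probes_per_second = 1_000_000          # optimistic scan rate, assumed

years = hosts_in_slash64 / probes_per_second / (3600 * 24 * 365)
print(f"{years:,.0f} years to sweep a single /64")   # ~584,942 years
# A legacy IPv4 /16 is 65,536 addresses and scans in under a second,
# so "nmap the whole campus" stops being a usable inventory tool.
```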
I’m not saying people shouldn’t migrate, but large orgs like universities have challenges to solve, without any obvious upside to the cost. All of the above can be solved, but at a cost.
A few years ago my old university finally went with NAT instead of handing out public IPs to all servers, workstations and random wifi clients. (Yes, you got a public IP on the wifi. Behind a firewall, but still public.) I think they have a /16 and a few extra /24s in total.
Exactly. The malware can do whatever, but as long as the TPM measurements don’t add up the drive will remain encrypted. Given stringent enough TPM measurements and config you can probably boot signed malware without yielding access to the encrypted data.
In my view, SecureBoot is just icing on the cake that is measured boot via TPM. Nice icing though.
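As a toy sketch of why that holds (the component names are made up and the hash chaining is simplified, but it’s the same idea as the TPM’s PCR extend):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style PCR extend: new = SHA-256(old PCR || SHA-256(measured blob))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measured_boot(components):
    pcr = bytes(32)                     # PCRs start zeroed at reset
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

good = measured_boot([b"firmware", b"bootloader", b"kernel", b"cmdline"])
evil = measured_boot([b"firmware", b"bootloader", b"kernel", b"cmdline init=/bin/sh"])

# The disk key is sealed against the "good" value; tampering anywhere in the
# chain yields a different final PCR, so the TPM refuses to unseal the key.
print(good == evil)   # False
```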
And yet, in the real world we actually use distribution centers and loading docks, we don’t go sending delivery boys point to point. At the receiving company’s loading docks, we can have staff specialise in internal delivery, and also maybe figure out if the package should go to someone’s office or a temporary warehouse or something. The receiver might be on vacation, and internal logistics will know how to figure out that issue.
Meanwhile, the point-to-point delivery boy will fail to enter the building, then fail to find the correct office, then get rerouted to a private residence of someone on vacation (they need to sign personally of course), and finally we need another delivery boy to move the package to the loading dock where it should have gone in the first place.
I get the "let’s slaughter NAT" arguments, but this is an argument in favour of NAT. And in reality, we still need to have routing and firewalls. The exact same distribution network is still in use, but with fewer allowances for the recipient to manage internal delivery.
Personal opinion: IPv6 should have been almost exactly the same as IPv4, just with more numbers and a clear path for transparent IPv6-to-IPv4 traffic without running dual stack (maybe a NAT?). IPv6 is too complex, error-prone and unsupported to deploy without shooting yourself in the foot, even now, a few decades after its introduction.