Let's be generous and say $100. That's 6 months with the most expensive tier.
That should be plenty to get a gaming PC capable of playing the latest AAA titles at 4K, 60 fps, on high settings.
It's one computer, Michael. What could it cost, 10 dollars?
I paid $200 for Ultimate for a year and got Borderlands 4 included with it.
You'd be hard-pressed to build a decent gaming PC for three times that.
You're pushing code to prod without PRs and code reviews? What kind of jank-ass cowboy shop are you running?
It doesn't matter whether an LLM or a human wrote it; it needs peer review and unit tests, and it has to go through QA before it gets anywhere near production.
If you've got good internet, you could look into GeForce Now as a stopgap / head start.
It's using an x86 compatibility layer, Pex I think it was called. So apparently you'll be running Windows x86 games on it.
Edit: FEX! https://github.com/FEX-Emu/FEX
Edit 2, from a Tom's Hardware article:
"The company also showed off the x86 version of Hades 2 running standalone (as in, not streaming from a PC) on the Steam Frame. And the game ran just fine and looked good at what Valve reps told me was 1400p in a window inside the headset."
Enlightened dumbness 🧘
It regurgitates old code; it cannot come up with new stuff.
The trick is, most of what you write is basically old code in new wrapping. In most projects, I'd say the genuinely new and novel part is maybe 10% of the code. The rest is things like setting up DB models, connecting them to the base logic, setting up views and API endpoints, decoding the message on the UI side, displaying it to the user, handling input back, threading things so the UI doesn't hang, error handling, input validation, basic unit tests, setting up settings and supporting reading them from a file or env vars, making the UI look not horrible, adding translatable text, and so on and on and on. All of that has been written in some variation a million times before, and all of it can be written (and verified) by a half-asleep competent coder.
The actually new, interesting part is gonna be a small, small percentage of the total code.
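To make the "settings from a file or env vars" point concrete, here's a minimal sketch in Python of that kind of boilerplate. Everything in it (the settings.json path, the APP_ env-var prefix, the keys) is hypothetical, just the shape of code that's been written a million times before:

```python
# Hypothetical boilerplate sketch: load settings with defaults,
# optionally override from a JSON file, then from env vars.
import json
import os
from pathlib import Path

DEFAULTS = {"host": "127.0.0.1", "port": 8080, "debug": False}
SETTINGS_FILE = Path("settings.json")  # illustrative path, not from the thread

def load_settings() -> dict:
    settings = dict(DEFAULTS)
    # Layer 1: values from the settings file, if it exists.
    if SETTINGS_FILE.exists():
        settings.update(json.loads(SETTINGS_FILE.read_text()))
    # Layer 2: env vars like APP_HOST, APP_PORT, APP_DEBUG override the file.
    for key, default in DEFAULTS.items():
        raw = os.environ.get(f"APP_{key.upper()}")
        if raw is not None:
            # Coerce the env string to the type of the default value.
            # Check bool before int, since bool is a subclass of int.
            if isinstance(default, bool):
                settings[key] = raw.lower() in ("1", "true", "yes")
            elif isinstance(default, int):
                settings[key] = int(raw)
            else:
                settings[key] = raw
    return settings

if __name__ == "__main__":
    print(load_settings())
```

Nothing in there is novel; it's exactly the kind of thing that can be cranked out half-asleep and verified at a glance.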
I guess I'm one of the idiots then, but what do I know? I've only been coding since the '90s.
That's kinda wrong though. I've seen LLMs write pretty good code, in some cases even doing something clever I hadn't thought of.
You should treat it like any junior though: read the code changes and give feedback if needed.
I've used Claude Code to fix some bugs and add some new features to some of my old, small programs and websites. Not things I can't do myself, but things I can't be arsed to sit down and actually do.
It's actually gone really well, with clean and solid code: easily readable, correct, with error handling and even comments explaining things. It even took a GUI stream-processing program I had and wrote a server / web app with the same functionality, and it was able to extend it with a few new features I'd been thinking of adding.
These are not complex things, but a few of them were 20+ files big, and it managed not only to navigate the code, but to understand it well enough to add features with changes touching multiple files (model, logic, and view layers, for example), or to refactor an oversized class and update all references to use the new classes.
So it's absolutely useful and capable of writing good code.
I'd think the main users of GeForce Now are people who don't play games that much and won't spend much money on gaming hardware.