xthexder

joined 2 years ago
[–] xthexder@l.sw0.com 2 points 2 months ago (4 children)

Where are you getting February from? As far as I know, this happened last week. I don't blame the person for not wanting to reveal their face.

[–] xthexder@l.sw0.com 6 points 2 months ago (6 children)

Did you watch the video? There's no way it was just "user error"; nobody randomly swerves into a tree when nothing's there. Maybe you're implying it was insurance fraud?

Tesla gives out beta access to users, so I wouldn't put too much weight on the claimed version they were using.

[–] xthexder@l.sw0.com 11 points 2 months ago (8 children)

FSD wouldn't have done any better; it can't even figure out shadows on the road properly, as seen in this crash 3 days ago:

https://fuelarc.com/tech/tesla-full-self-driving-veers-off-road-hits-tree-and-flips-car-for-no-obvious-reason-no-serious-injuries-but-scary/

[–] xthexder@l.sw0.com 0 points 2 months ago (1 children)

Anything that's per-commit is part of the "build" in my opinion.

But if you're running a language server and have stuff like format-on-save enabled, it's going to use a lot more power as you're coding.

But like you said, text editing is a small part of the workflow. Looking up docs and browsing code should barely require any CPU; a phone can do it with a fraction of a watt, and a PC should be underclocking when the CPU is underused.

[–] xthexder@l.sw0.com 5 points 2 months ago* (last edited 2 months ago) (1 children)

It sounds like it does save you a lot of time then. I haven't had the same experience, but I did all my learning to program before LLMs.

Personally I think the amount of power saved here is negligible, but it would actually be an interesting study to see just how much it is. It may or may not offset the power usage of the LLM, depending on how many questions you end up asking and such.

[–] xthexder@l.sw0.com 4 points 2 months ago* (last edited 2 months ago) (1 children)

I didn't even say which direction it was misleading; it's just not a valid comparison to put a single invocation of an LLM next to an unrelated continuous task.

You're comparing Volume of Water with Flow Rate. Or if this were power, you'd be comparing Energy (joules or kWh) with Power (watts).
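To spell out the distinction (standard physics definitions, with made-up illustrative numbers, not figures from the graph being discussed):

```latex
E = P \cdot t
\qquad \text{e.g. } 60\,\mathrm{W} \times 1\,\mathrm{h} = 60\,\mathrm{Wh} = 0.06\,\mathrm{kWh}
```

A one-off query is a fixed amount of energy (Wh); a continuous task is a power draw (W) that only becomes comparable once you multiply by how long it runs.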

Maybe comparing asking ChatGPT a question to doing a Google search (before their AI results) would actually make sense. I'd also dispute the "downloading a file" and other bandwidth-related numbers; network transfers are insanely optimized at this point.

[–] xthexder@l.sw0.com 2 points 2 months ago* (last edited 2 months ago) (6 children)

Just writing code uses almost no energy. Your PC should be clocking down when you're not doing anything. 1GHz is plenty for text editing.

Does ChatGPT (or whatever LLM you use) reduce the number of times you hit build? Because that's where all the electricity goes.

[–] xthexder@l.sw0.com 6 points 2 months ago (3 children)

Asking ChatGPT a question doesn't take 1 hour like most of these do... this is a very misleading graph.

[–] xthexder@l.sw0.com 8 points 2 months ago

To be fair, it probably lost its value whether they drove it or not; 6,000 miles is basically brand new from a mileage perspective.

[–] xthexder@l.sw0.com 41 points 2 months ago* (last edited 2 months ago) (1 children)

Or just std::bitset<8> for C++. Bit fields are neat though; they can store weird stuff like a 3-bit integer packed next to booleans.

[–] xthexder@l.sw0.com 1 points 2 months ago (1 children)

I don't think it would have to be any different from people getting a bigger tax refund at the end of the year. Or like the HST rebates Ontario has been doing, where they pay it out quarterly, I think?

As it is right now, I've seen the occasional "tax refund sale" because businesses know people just got paid a chunk of money and might be impulsive with it. I don't think that's necessarily a bad thing; demand for everyday items won't change, and people will try to save money regardless of income level.

[–] xthexder@l.sw0.com 3 points 2 months ago

> You’re just saying, human-written software can have bugs.

That's pretty much exactly the point they're making. Humans create the training data. Humans aren't perfect, and therefore the AI training data cannot be perfect. The AI will always make mistakes and have biases as long as it's being trained on human data.
