this post was submitted on 25 Feb 2026
220 points (96.2% liked)

Technology

[–] TropicalDingdong@lemmy.world 117 points 1 month ago (7 children)

I just..

Am I wrong here? Like, look, shame me. I work in machine learning and have since 2012. I don't do any of the LLM shit. I do things like predicting wildfire risk from satellite imagery, or biomass in the Amazon, soil carbon, shit like that.

I've tried all the code assistants. They're fucking crap. There's no building an economy around these things. You'll just get dogshit. There's no building institutions around these things.

[–] WanderingThoughts@europe.pub 41 points 1 month ago

Heh, that's the joke going around now.

AI works, it replaces workers, we lose our jobs.

AI doesn't work, bubble pops, we lose our jobs.

[–] Zwuzelmaus@feddit.org 13 points 1 month ago

They're fucking crap. There's no building an economy around these things.

You are right in every serious part of the world.

But add "venture capital" to the equation and it works out stronger than anything else so far.

[–] Buddahriffic@lemmy.world 6 points 1 month ago

If you want a demo of how bad these AI coding agents are, build a medium-sized script with one, something with a parse -> process -> output flow that isn't trivial. Let it do the debugging, too (tell it the error message or describe the unwanted behaviour).

You'll probably get the desired output if you're using one of the good models.

Now ask it to review the code or optimize it.

If it was a good coding AI, this step shouldn't involve much, as it would have been applying the same reasoning during the code writing process.

But in my experience, this isn't what happens. For a review, it has a lot of notes. It can also find and implement optimizations. The weights are the same; the only difference is that the context of the prompt has changed from "write code" to "optimize code", which changes the correlations involved. There is no "write optimal code" mode, because it's trained on everything and the kitchen sink, so you'll get correlations from good code, newbie code, and lesson examples of bad practice (especially ones presented in a "discovery" format, where a professor intended to explain why the slide is bad but didn't include that on the slide itself).

[–] partofthevoice@lemmy.zip 6 points 1 month ago (1 children)

I think it’s supposed to work like, “well, even if you are right about the massive utility of AI, is that still what we should be aiming for?”

It gets around the combative "you're wrong, AI is garbage" argument. The people boosting AI because they believe that even if it sucks now, it'll get better: those people can probably understand this argument much more easily.

[–] ageedizzle@piefed.ca 6 points 1 month ago

It sucks, and it's at the point now where we're hitting diminishing returns, so I'm not sure if it will get better.

[–] GamingChairModel@lemmy.world 2 points 1 month ago

It's funny. I see the phrase "AI doomsday scenario" and I immediately picture devastating cascading consequences caused by someone mistakenly putting too much trust in some kind of agentic AI that does things poorly and breaks a lot of big important things.

I'm just not seeing a scenario where AI causes devastating disruption based on its own ultra competence. I'm much more scared of AI incompetence.

[–] LincolnsDogFido@lemmy.zip 2 points 1 month ago (1 children)

Your job sounds really cool! How likely is Alberta to be on fire again this year?

[–] TropicalDingdong@lemmy.world 14 points 1 month ago (1 children)

Well, for one, that area already burned pretty recently, so it's pretty unlikely to burn again any time soon.

But as part of a larger picture:

The area does experience fire-weather conditions for some portion of the year:

Here we're looking at HDWI (hot-dry-windy index), where a "loose" definition of fire weather is HDWI above 200. HDWI is based on a few factors: how hot it is, how dry it is, and how fast the air is moving. Hot, dry air moving quickly = fire weather.
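(Not the commenter's actual pipeline, but one common formulation of HDWI is vapor pressure deficit multiplied by wind speed. A toy sketch of counting fire-weather days from hourly data, using the 200 threshold mentioned above and synthetic numbers:)

```python
import numpy as np

def hdwi(vpd_hpa, wind_mps):
    """Hot-Dry-Windy Index, sketched as the product of vapor
    pressure deficit (hPa) and wind speed (m/s). Values above
    ~200 are treated here as fire weather."""
    return vpd_hpa * wind_mps

# one year of hourly toy data (365 * 24 = 8760 samples)
rng = np.random.default_rng(0)
vpd = rng.uniform(0, 40, 8760)    # hPa, hotter/drier -> larger
wind = rng.uniform(0, 15, 8760)   # m/s

hourly = hdwi(vpd, wind)
daily_max = hourly.reshape(365, 24).max(axis=1)
fire_weather_days = int((daily_max > 200).sum())
```

In practice the VPD and wind terms are taken as maxima over the lower atmosphere from weather-model output, not drawn at random.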

The number of fire-weather days per year has been increasing, and in very recent years (the past decade) the rate of change has increased and become statistically significant:
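(A trend-plus-significance claim like that can be checked with an ordinary least-squares fit on the annual counts. A minimal sketch with made-up numbers, not the commenter's data:)

```python
import numpy as np
from scipy.stats import linregress

# hypothetical annual counts of fire-weather days
years = np.arange(1990, 2025)
days = 10 + 0.4 * (years - 1990) \
       + np.random.default_rng(1).normal(0, 3, years.size)

# slope = change in fire-weather days per year;
# pvalue tests slope != 0
fit = linregress(years, days)
print(f"trend: {fit.slope:.2f} days/year, p = {fit.pvalue:.4f}")
```

A small p-value here is what "statistically significant" means in the comment: the upward trend is unlikely to be noise.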

So it's not a particularly fire-prone area, but it's getting worse, and it's getting worse at a faster rate.

That would be the first part of the analysis I would run. After that, we'd look for historically "anomalous" periods. It's not enough to look at averages; that washes over important features in the data. We need to look for the specific periods where fire weather manifests.

This is another way of thinking about fire risk. Here we're going to count the amount of time, after 12 hours, that an area stays in sustained fire-weather conditions. Basically, a bit of time in bad conditions isn't the end of the world, but the longer you stay in fire-weather conditions, the more fire risk increases exponentially (as plants/fuels continue to dry out).
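(Counting sustained exceedances is essentially run-length bookkeeping over the hourly index. A sketch, assuming the HDWI > 200 definition from above and a 12-hour minimum; the input series is made up:)

```python
import numpy as np

def sustained_events(hdwi_hourly, threshold=200, min_hours=12):
    """Return durations (hours) of runs where HDWI stays above
    threshold for at least min_hours consecutive hours."""
    above = np.asarray(hdwi_hourly) > threshold
    # pad with False so every run has a detectable start and end
    padded = np.concatenate([[False], above, [False]])
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    starts, ends = edges[::2], edges[1::2]
    durations = ends - starts
    return durations[durations >= min_hours]

# toy series: runs of 15 h, 6 h, and 20 h above threshold
series = [250]*15 + [100]*5 + [300]*6 + [50]*4 + [220]*20
events = sustained_events(series)
# the 6 h run is below the 12 h minimum and is dropped
```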

If I were writing an insurance product for you, I would count the number of events in each magnitude bucket and give you a risk rating. Here, licking my thumb and sticking it in the air, I would say... "not that bad".
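(The bucketing step could be as simple as a histogram over event durations. The bucket edges below are made up for illustration, not an actual insurance standard:)

```python
import numpy as np

# hypothetical durations (hours) of sustained fire-weather events
durations = np.array([15, 20, 13, 30, 48, 12])

# magnitude buckets: [12, 24) moderate, [24, 48) severe, 48+ extreme
edges = [12, 24, 48, np.inf]
counts, _ = np.histogram(durations, bins=edges)
# per-bucket event counts feed a crude risk rating
```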

Much of my work is around modeling in the wildland-urban interface. You picked an almost entirely wilderness area. Since there are no structures, I can't do the next analysis, but it would look something like this:

Most of my work is about figuring out what the impacts of wildfire on the built environment are going to be. Also, the free structure dataset I have access to doesn't cover Canada and I'm not going to spend money buying the structures for you (unless you REALLY want me to).

Those first figures are all specific to the coordinates you provided. The final figure is just an example.

[–] msage@programming.dev 1 points 1 month ago

Can I subscribe to your AI posts?