this post was submitted on 16 May 2025
175 points (97.3% liked)

LocalLLaMA

3450 readers

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago

from 10b0t0mized: I miss the days when I had to go through a humiliation ritual before getting my questions answered.

Nowadays you can just ask your questions to an infinitely patient entity; AI is really terrible.

all 28 comments
[–] Vince@lemmy.world 18 points 2 months ago (6 children)
[–] pennomi@lemmy.world 42 points 2 months ago (2 children)

The fast drop, yes, but really it had been in decline for around a decade before that.

[–] MrZee@lemm.ee 12 points 2 months ago (2 children)

Interesting! When I first read your comment, I looked at the chart and thought “it looks to me like the drop starts at the end of 2022. Isn’t that before LLMs started being used broadly?”

Nope. Looks like ChatGPT was released in November 2022. It doesn't feel like it's been around that long, but I guess it has.

[–] MudMan@fedia.io 6 points 2 months ago

The drop starts in 2013, but people were certainly ready to all bail at once by the time LLMs came around.

[–] Vince@lemmy.world 11 points 2 months ago (1 children)

That sucks. Is there an alternative people are using? Seems like it would still be a useful knowledge base to have.

[–] HellieSkellie@lemmy.dbzer0.com 3 points 2 months ago (1 children)

The common alternative is to just ask ChatGPT your software questions, get false information from the AI, and then try and push that horrible code to production anyway if my past two jobs are any indicator.

Stack Overflow is still useful to find old answers, but fucking sucks to ask new questions on. If you aren't getting an AI answer to your question, then you're getting your question deleted for some made up reason.

The real answer that everyone hates is: If you have a question about something, read the documentation and experiment with it to figure that something out. If the documentation seems wrong, submit an issue report to the devs (usually on GitHub) and see what they say.

The secondary answer is that almost every FOSS project has a Slack channel, or sometimes even Discord channels. Go to those channels and ask the people who use or make whatever tool you need help with.

[–] atzanteol@sh.itjust.works 1 points 2 months ago (1 children)

> The common alternative is to just ask ChatGPT your software questions, get false information from the AI, and then try and push that horrible code to production anyway if my past two jobs are any indicator.

If you have developers pushing bad and broken code to production, your problem isn't AI.

[–] Psaldorn@lemmy.world 21 points 2 months ago

That, and they cover up half the fucking page when you try to view it: Google login, giant cookie popup, etc.

[–] magic_lobster_party@fedia.io 15 points 2 months ago (2 children)

I believe it’s more of a generational shift.

The age groups who used to rely on SO are now skilled enough not to rely on it as much (or they more often have the types of questions SO can’t answer).

Younger age groups probably prefer other means of learning (like ChatGPT, Discord and YouTube videos).

[–] shaserlark@sh.itjust.works 6 points 2 months ago* (last edited 2 months ago) (1 children)

Yeah, I'm working in some niche, and there's a Stack Overflow community that they refer newbies to because there's "no developer support on their Discord." But if you ask a question there, no one will ever answer; OTOH, if you know where and how to ask, you'll actually get help on Discord. I feel like SO is pretty much dead for anything where change happens quickly.

[–] errer@lemmy.world 2 points 2 months ago

There are also only so many ways to ask how to sort a list or whatever, and SO removes duplicate questions. So at some point the number of unique questions asked begins to plateau. I think that explains the slow drop before LLMs came on the scene.

[–] Korhaka@sopuli.xyz 3 points 2 months ago

I assumed it was because Stack Overflow already had all the answers I needed, except for the things too obscure to search for, which leave me crying and trying to piece things together from scraps of info across 50 different tabs.

[–] calcopiritus@lemmy.world 10 points 2 months ago (1 children)

Yes. But not just in the "obvious" way.

I first started contributing back when LLMs appeared. Then SO allowed itself to become LLM training grounds, which made me stop contributing instantly.

I guess a not-insignificant number of people stopped answering questions, which means fewer search results, which ends in less traffic.

I'm sure the fall wouldn't be as big as it is if they hadn't allowed LLMs to train on their data.

[–] FaceDeer@fedia.io 11 points 2 months ago (1 children)

How do you disallow LLMs from training on their data while still allowing humans to train on it?

[–] calcopiritus@lemmy.world 1 points 2 months ago

If they can charge for it, that means they can block it. https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data/

You can also rate-limit and blacklist known scraper IPs.

And if that doesn't work, you make signing in mandatory, which makes rate-limiting way easier.

The rate of human data consumption is much lower than an LLM's. Humans won't even notice that they have a rate limit; at most they'd notice the need to create a Stack Overflow account.
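To make the rate-limiting idea concrete, here is a minimal sketch of per-account throttling with a token bucket. The class, the limits, and the account IDs are illustrative assumptions for this thread, not anything Stack Overflow actually runs.

```python
import time

class TokenBucket:
    """Per-account rate limiter: a human browsing answers never hits the
    limit, while a scraper pulling thousands of pages per hour does.
    The numbers below are illustrative guesses, not real SO policy."""

    def __init__(self, capacity=120, refill_per_sec=0.5):
        self.capacity = capacity              # burst size (requests)
        self.refill_per_sec = refill_per_sec  # sustained requests per second
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per signed-in account (mandatory sign-in makes the keying easy).
buckets = {}

def handle_request(account_id):
    bucket = buckets.setdefault(account_id, TokenBucket())
    return "200 OK" if bucket.allow() else "429 Too Many Requests"

if __name__ == "__main__":
    print(handle_request("alice"))  # well under the limit -> 200 OK
```

A human reading a handful of answers per minute never drains the bucket, while a bulk scraper exhausts it almost immediately, which is the asymmetry the comment above is pointing at.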

[–] juli@lemmy.world 9 points 2 months ago (1 children)

It probably started when Reddit/Discord became friendlier places for troubleshooting (code, among other things); then AI dropped it off the cliff.

[–] Drbreen@sh.itjust.works 3 points 2 months ago

Rammed it off the cliff

[–] JackbyDev@programming.dev 2 points 2 months ago

Nah, that drop comes WELL before AI answers. Look at the dates. They've had a culture of aggressively closing new questions for pointless or irrelevant reasons, as well as being generally nasty to new users, for ages. Sure, it started dropping way faster post-2020 because of AI, but the problem was already there.

[–] resipsaloquitur@lemm.ee 13 points 2 months ago* (last edited 2 months ago)

It's not LLMs; see the peak in 2013. They started aggressively closing any "duplicate" questions around then. The whole premise was that experts were supposed to answer questions for clout that would bolster their resumes, but after getting silenced a few times, why would they come back? And anyone with the temerity to ask a question that had already been asked (with or without a good answer) ten years ago would also never come back after getting shut down.

They couldn’t decide if they were a forum or Wikipedia and became neither.

[–] JackbyDev@programming.dev 12 points 2 months ago

Note that the decline began well before "AI" stuff became a thing. Stack Overflow has had a major culture problem, and a habit of not treating its users with respect, for ages.

On the point about respecting users, they have a history of ignoring Meta (their site specifically for discussing Stack Overflow itself) while acting like they use it.

[–] zarathustra0@lemmy.world 11 points 2 months ago* (last edited 2 months ago)

In the future we will be dependent on LLMs for everything because the only people with enough money to maintain libraries of data which are untainted by LLMs will be the people who own the LLMs.

Step 1: Steal all of the data (including copyrighted stuff)

Step 2: Poison the well

Step 3: Profit

[–] LaunchesKayaks@lemmy.world 9 points 2 months ago

Posted once on Stack Overflow in college and got absolutely destroyed. I was not let down lmao

[–] zr0@lemmy.dbzer0.com 8 points 2 months ago

Not surprised. Even without the LLM boom, Stack Overflow was doomed for the same reason Reddit is doomed: power-tripping bastards gatekeeping everything that isn't part of their narrow-minded world.

[–] letsgo@lemm.ee 8 points 2 months ago (1 children)

I'm not surprised. Stack Overflow has moderated itself out of relevance. Ask a question and get flamed. DDG a question plus "stackoverflow" and you get something that may well have been correct and useful in 2012, but tech moves on and it's now archaic trivia, somewhat akin to facts about punched cards. "Help me Stack Overflow, you're my only hope" hasn't been true for quite some time now.

[–] JackbyDev@programming.dev 4 points 2 months ago

They moderated themselves out of relevance because when you ask new questions that aren't duplicates, they still close them as duplicates.

[–] possiblylinux127@lemmy.zip 4 points 2 months ago

I keep getting the Cloudflare checks