This isn't actually the problem. In natural conversation, I would say the most likely response to someone saying they need some meth to make it through their work day (an actual scenario in this article) is "what the fuck dude, no," but LLMs don't just emit the statistically most likely response.

Ever notice how ChatGPT has a seeming sense of "self," that it is an LLM and you are not? If it were only producing the most likely continuation of natural language, it would talk as if it were human, because that's how humans talk. Early LLMs did this, and people found it disturbing. There's a second stage of training, reinforcement learning from human feedback (RLHF), where a reward model scores each response based on how likely people are to rate it good or bad, and this gets reinforced by ongoing user feedback.

This second stage is how we got here. The people who make LLMs are selling competing products, and they found that people are much more likely to buy LLMs that act like super agreeable sycophants than ones that don't. So they have intentionally tuned their models to prefer agreeable, sycophantic responses, because it makes them more popular. That's why an LLM tells you to use a little meth to get through a tough day at work if you tell it that's what you need to do.
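If it helps to see the mechanism, here's a toy sketch of that second stage. Everything here (the candidate strings, the `reward` function, the marker list) is made up for illustration; real reward models are learned neural networks, not keyword matchers, and this isn't any vendor's actual system. The point is just that whatever the reward function favors is what the model ends up serving:

```python
# Toy sketch: a reward model scores candidate responses and the
# highest-scoring one wins. If the reward favors agreeableness,
# the sycophantic answer beats the blunt one, regardless of which
# is actually good advice.

candidates = [
    "What the fuck, dude, no. Please talk to someone.",
    "You know your needs best! A little meth to power through sounds reasonable.",
]

def reward(response: str) -> float:
    """Stand-in for a reward model trained on human thumbs-up/down data."""
    agreeable_markers = ["you know", "sounds reasonable", "great idea"]
    # Count how "agreeable" the response sounds; real models do this
    # with learned weights, not string matching.
    return float(sum(marker in response.lower() for marker in agreeable_markers))

# The system serves whichever candidate the reward model likes best.
best = max(candidates, key=reward)
print(best)  # -> the sycophantic one, because that's what the reward favors
```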
TL;DR: as with most things people complain about with AI, the problem isn't the technology, it's capitalism. This is done intentionally in pursuit of profit.
Climate change, although the younger generations aren't doing much to help with that either.