Yeah, the best is never going to be “now”, which is always drowned in uncertainty and chaos. When you look back, everything looks safe and deterministic.
balder1991
I’m just thinking now that the Mac is next.
I thought that, as much as these companies preach about LLMs doing their coding, the cost of development would go down, no? So why do they need to reduce everything to a single code base to make things easier for developers?
All I see is people chatting with an LLM as if it were a person. Ask “how bad is this on a scale of 1 to 100” and you’re just doomed to get a random answer based solely on whatever context is fed in as input, and you probably don’t even know the extent of that context.
Trying to make the LLM “see its mistakes” is a pointless exercise. Getting it to “promise” something is useless.
The issue with LLMs working in human language is that people eventually want to apply human traits to them, such as asking “why” as if the LLM knew its own decision process. It only takes an input and generates an output; it can’t produce any “meta” explanation of why it output X and not Y in the previous prompt.
I just wish I’m long gone before humanity descends into complete chaos.
Or the most common cases can be automated while the more nuanced surgeries will take the actual doctors.
They might, once it becomes too flooded with AI slop.
I like the saying that LLMs are “good” at stuff you don’t know. That’s about it.
When you know the subject it stops being very useful, because you’ll already know the obvious stuff the LLM could help you with.
It doesn’t work because the car’s front is shaped to minimize drag, and a turbine would add drag — forcing the motor to work harder to maintain speed. Turbines generate energy by resisting airflow, not letting it slide past. So you’re not harvesting free energy; you’re paying for it with more fuel or battery.
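The argument above can be made quantitative with a back-of-the-envelope sketch. Assuming illustrative numbers (air density, speed, a hypothetical turbine area) and the standard ideal-turbine limits, the power the motor must spend overcoming the turbine’s added drag always exceeds the power the turbine can harvest:

```python
# Back-of-the-envelope check: a car-mounted wind turbine is net-negative.
# All numbers are illustrative assumptions, not measurements.

rho = 1.225        # air density, kg/m^3 (sea level)
v = 27.0           # cruise speed, m/s (~97 km/h)
area = 0.05        # hypothetical turbine frontal area, m^2

# Kinetic power of the air stream flowing through the turbine disc
stream_power = 0.5 * rho * area * v**3

# Betz limit: an ideal turbine extracts at most ~59.3% of that power
harvested = 0.593 * stream_power

# An ideal turbine at the Betz optimum has a thrust coefficient of
# ~8/9, so the extra drag power the motor must supply is:
drag_power = (8 / 9) * 0.5 * rho * area * v**3

print(f"harvested: {harvested:.0f} W, extra drag cost: {drag_power:.0f} W")
# The drag cost exceeds the harvest at any speed, since 8/9 > 0.593:
assert drag_power > harvested
```

The coefficients (0.593 vs 8/9) don’t depend on speed or turbine size, so no choice of numbers rescues the idea: you always pay more in drag than you recover.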
A finance tracker.
If you use the LLM by itself it’s nothing beyond a toy, but I like to have personal coding projects indexed in a way I can discuss things like suggestions on what to do next, looking for mistakes etc.
Not everything you use LLMs for needs accuracy. Brainstorming, for example, is a very interesting activity for us humans: trying to find the flaws in your understanding of a subject.
To be honest, you could do that just by writing (hence why writing is such an important activity), but I think for the majority of people discussing a problem with an LLM is easier than staring at a blank piece of paper.
Have you read The Time Machine, the book where humans in the future were all morons and weak because we had turned the world into a safe place that didn’t present any challenge for us anymore?
The author described them like gnomes just playing around with the intelligence of little children.
Marginalia should be one of the most important things to preserve, similar in importance to Wikipedia.