HedyL

joined 2 years ago
[–] HedyL@awful.systems 10 points 1 month ago

At the very least, many of them were probably unable to differentiate between "coding problems that have been solved a million times and are therefore in the training data" and "coding problems that are specific to a particular situation". I'm not a software developer myself, but that's my best guess.

[–] HedyL@awful.systems 8 points 1 month ago (2 children)

Even the idea of having to use credits to (maybe?) fix some of these errors seems insulting to me. If something like this had been created by a human, the customer would be eligible for a refund.

Yet, under Aron Peterson's LinkedIn posts about these video clips, you can find the usual comments calling him "a Luddite", "in denial", etc.

[–] HedyL@awful.systems 15 points 1 month ago (19 children)

It is funny how, when generating the code, the model suddenly appears to have "understood" what the instruction "The dog can not be left unattended" means, while that was clearly not the case for its natural-language output.
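To make that difference concrete, here is a minimal sketch in Python; the state representation and names are illustrative assumptions, since the original puzzle isn't quoted in this thread. The point is that in code, "the dog can not be left unattended" has to become an explicit check, leaving no room to gloss over it:

```python
# Hypothetical river-crossing-style state: which bank each actor is on.
# Names and state layout are illustrative, not from the original puzzle.
def dog_unattended(state: dict) -> bool:
    """True if the dog is on a bank without the human."""
    return state["dog"] != state["human"]

def is_valid(state: dict) -> bool:
    # The instruction "The dog can not be left unattended" as a hard constraint:
    return not dog_unattended(state)

print(is_valid({"dog": "left", "human": "left"}))   # True: dog is attended
print(is_valid({"dog": "left", "human": "right"}))  # False: dog left alone
```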

[–] HedyL@awful.systems 9 points 1 month ago (2 children)

FWIW, due to recent developments, I've found myself increasingly turning to non-search-engine sources for reliable web links, such as Wikipedia reference lists, blog posts, podcast show notes or even Reddit. This almost feels like a return to the early days of the internet, just in reverse and - sadly - with little hope for improvement in the future.

[–] HedyL@awful.systems 10 points 1 month ago

Google has a market cap of about 2.1 trillion dollars. Therefore, the stock price only has to go up by about 0.00007 percent following the iNaturalist announcement for this "investment" to pay off (0.00007 percent of 2.1 trillion is roughly 1.5 million dollars, which matches the reported size of the grant). Of course, this is just a back-of-the-envelope calculation, but maybe popular charities should keep this in mind before accepting money in a context like this.
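For anyone who wants to check the arithmetic, a minimal sketch in Python (the 1.5-million-dollar grant size is the reported figure, assumed here rather than stated in the comment above):

```python
# Back-of-the-envelope: what stock-price move would offset the donation?
market_cap = 2.1e12  # Google's market cap in dollars (from the comment)
grant = 1.5e6        # assumed grant size in dollars (reported figure)

required_move_pct = grant / market_cap * 100
print(f"{required_move_pct:.5f} percent")  # -> 0.00007 percent
```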

[–] HedyL@awful.systems 6 points 1 month ago (1 children)

Also, if the LLM had reasoning capabilities that even remotely resembled those of an actual human, let alone of someone able to replace office workers, wouldn't it use the best tool available for every task (especially in a case as clear-cut as this one)? After all, almost all humans (even children) would automatically reach for a pocket calculator here, I assume.

[–] HedyL@awful.systems 11 points 1 month ago

Also, these bots have been deliberately fine-tuned to sound human. As a consequence, I sometimes find it difficult to describe their answering style without resorting to vocabulary normally used for human behavior. I also strongly suspect that this deliberately "human-like" style is a key reason for the current AI hype, and why many people appear to excuse the bots' huge shortcomings. It is funny, then, to be accused of being "emotional" for pointing out these patterns as problematic.

[–] HedyL@awful.systems 12 points 1 month ago (7 children)

Also, a lawnmower is unlikely to say: "Sure, I am happy to take you to work" and "I am satisfied with my performance" afterwards. That's why I sometimes find these bots' pretentious demeanor worse than their functional shortcomings.

[–] HedyL@awful.systems 48 points 1 month ago (5 children)

As usual with chatbots, I'm not sure whether it is the wrongness of the answer itself that bothers me most or the self-confidence with which said answer is presented. I think it is the latter, because I suspect that is why so many people don't question wrong answers (especially when they're harder to check than a simple calculation).

[–] HedyL@awful.systems 14 points 1 month ago

LOL - you might not want to believe that, but there is nothing to cut down. I actively steer clear of LLMs because I find them repulsive (being so confidently wrong almost all the time).

Nevertheless, there will probably be some people who claim that, thanks to LLMs, we no longer need skills such as language processing, working memory, or creative writing, because LLMs can do all of this much better than humans (just as calculators can compute a square root faster). I think that's bullshit, because LLMs just aren't capable of doing any of these things in a meaningful way.

[–] HedyL@awful.systems 18 points 1 month ago (6 children)

No, but it does mean that little girls no longer learn to write greeting cards to their grandmothers in beautiful feminine handwriting. For context: I'm part of Generation X and, due to innate clumsiness (and being left-handed), I didn't have pretty handwriting even before computers became the norm. I was berated a lot for that, and computers supposedly made everything worse. It was a bit of a moral panic.

But I admit that this is not comparable to chatbots.

[–] HedyL@awful.systems 35 points 1 month ago (12 children)

Similar criticisms have probably been leveled at many other technologies in the past: computers in general, typewriters, pocket calculators etc. It is true that the use of these tools has probably contributed to a decline in skills such as memorization, handwriting or mental arithmetic. However, I believe there is an important difference with chatbots. Typewriters (or computers) usually produce very readable text (much better than most people's handwriting), pocket calculators perform calculations just fine, and information retrieved online from a reputable source isn't any less correct than a memorized fact (probably more so). The same can't be said about chatbots and LLMs: they aren't known to produce accurate or useful output reliably, so many of the skills lost by relying on them might not be replaced with something better.
