wischi

joined 2 years ago
[–] wischi@programming.dev 8 points 1 month ago (1 children)

Any backups of the repository itself (and not the GitHub rendering)?

[–] wischi@programming.dev 1 points 1 month ago

Exactly what I did, and there's also an ICE number in my contact list (the number stored a second time with the real name).

[–] wischi@programming.dev 44 points 1 month ago (8 children)

All contacts on my phone are stored with their real full names, including my wife, children and parents. Maybe it's my ADHD, but I would hate trying to find someone and not being able to remember how I saved their details.

[–] wischi@programming.dev 6 points 1 month ago* (last edited 1 month ago) (1 children)

How about driving at most at the speed that would allow you to brake to a full stop within the range you can see? Or within half that range if the street is too narrow for an oncoming car to pass.
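As a rough sketch of that rule: stopping distance is reaction distance plus braking distance, so the maximum safe speed for a given sight range falls out of a quadratic. The reaction time and deceleration figures below are assumptions for illustration, not from the comment or any standard:

```python
import math

def max_safe_speed(sight_m, reaction_s=1.0, decel=7.0):
    """Largest speed v (m/s) such that
    v * reaction_s + v**2 / (2 * decel) <= sight_m.
    Assumed figures: 1 s reaction time, 7 m/s^2 braking
    deceleration (roughly a dry road)."""
    # Solve v**2 / (2 * decel) + v * reaction_s - sight_m = 0
    # for the positive root of the quadratic.
    a, b, c = 1.0 / (2.0 * decel), reaction_s, -float(sight_m)
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(max_safe_speed(50) * 3.6)   # ~73 km/h for 50 m of visible road
print(max_safe_speed(25) * 3.6)   # halve the range on a narrow street
```

Halving the sight range (the narrow-street case) cuts the safe speed by well more than half, since braking distance grows with the square of speed.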

[–] wischi@programming.dev 1 points 2 months ago

I didn't say they have no knowledge, quite the opposite. Here's a quote from the comment you replied to:

LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.

There is a subtle difference between intelligent and knowledgeable. LLMs know a lot in the sense that they can recall many things, but they are dumb in the sense that they are unable to draw conclusions and put that knowledge into action by any means other than spitting out what they once learned.

That's why LLMs can tell you a lot about all kinds of game theory around tic-tac-toe but can't draw/win that game consistently.

So knowing a lot and still being dumb is not a contradiction.

[–] wischi@programming.dev 3 points 2 months ago* (last edited 2 months ago)

The "may" carries a lot of weight, so it probably depends. The way US law works is pretty weird IMHO and is the reason for many such disclaimers/waivers: "Objects in mirror are closer than they appear", "Contents may be hot", etc.

[–] wischi@programming.dev 1 points 2 months ago (1 children)

But wouldn't your point still hold today, that the best AI video models are the ones not available to consumers?

[–] wischi@programming.dev 2 points 2 months ago (1 children)

That's right, and I don't blame anyone who bought a Tesla or anything, really (except that you have to be out of your mind to buy a car without knobs 🤣 at that price). People still buy from Amazon and Nestlé and I don't blame them either. That's actually something politics would have to solve, but that's an entirely different story.

[–] wischi@programming.dev 3 points 2 months ago (3 children)

Owning a Tesla doesn't make you a bad person, that's right, but Elon has been a complete idiot since at least his PayPal days.

[–] wischi@programming.dev 3 points 2 months ago

Wave was actually onto something, because it had the capability to be federated like email, but more modern.

[–] wischi@programming.dev 4 points 2 months ago* (last edited 2 months ago)

Coding isn't special, you're right, but it's a thinking task, and LLMs (including reasoning models) don't know how to think. LLMs are knowledgeable because they memorized a lot of the data and patterns in their training data, but they didn't learn to think from that. That's why LLMs can't replace humans.

That certainly doesn't mean software can't be smarter than humans. It will be, and it's just a matter of time, but to get there we'll likely need AGI first.

To see that LLMs can't think, try playing ASCII tic-tac-toe (XXO) against any of those models. They are completely dumb, even though they "saw" the entire Wikipedia article during training on how XXO works, that it's a solved game, the different strategies, and how to consistently force a draw. Still, they can't do it. They lose most games against my four-year-old niece, and she doesn't even play good/perfect XXO.
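As an aside, the "solved game" part is easy to verify yourself with a short minimax search; here's a standalone sketch (function and board representation are my own, just for illustration):

```python
# Minimax over the full tic-tac-toe game tree. Board is a list of
# nine cells, each 'X', 'O', or None; X maximizes, O minimizes.

def winner(b):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for i, j, k in lines:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    w = winner(b)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(b):          # board full, no winner: draw
        return 0
    scores = []
    for i in range(9):
        if b[i] is None:
            b[i] = player
            scores.append(minimax(b, 'O' if player == 'X' else 'X'))
            b[i] = None
    return max(scores) if player == 'X' else min(scores)

print(minimax([None] * 9, 'X'))   # → 0: a draw under perfect play
```

The empty board evaluates to 0, i.e. neither side can force a win, which is exactly the knowledge the models have "read" but can't act on.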

I wouldn't trust anything that claims to do thinking tasks, but can't even beat my niece at XXO, with writing firmware for cars or airplanes.

LLMs are great if used like search engines or interactive versions of Wikipedia/Stack Overflow. But they certainly can't think, at least for now, and real thinking models will likely need different architectures than LLMs.

[–] wischi@programming.dev 0 points 2 months ago* (last edited 2 months ago) (2 children)

I don't see how that follows, because I pointed out in another comment that they are very useful if used like search engines or an interactive Stack Overflow or Wikipedia.

LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.

If you want to anthropomorphize it, a current LLM is like a person who read the entire internet and remembered a lot of it, but is still too stupid to win/draw at tic-tac-toe.

So there is value in LLMs, if you use them for their knowledge.
