wischi

joined 2 years ago
[–] wischi@programming.dev 4 points 2 months ago (1 children)

Thank you. But are jellyfish really not that far off? It looks like a pretty huge step to me. Jellyfish seem complex enough that they wouldn't just magically reassemble if we ground them through a sieve.

I personally (but I'm not a biologist 🤣) would definitely consider a jellyfish an animal, because different cells (at least it very much looks that way) have different functions, so if you threw one in the meat grinder (even if the individual cells were not damaged) I can't imagine how it could reassemble itself.

But a sponge seems so homogeneous that (I guess) it almost doesn't matter which cell goes where, and that's why it can reassemble. That's why I personally wouldn't think of it as an animal.

Are there other things that are technically animals that are that homogeneous?

[–] wischi@programming.dev 2 points 2 months ago

Totally agree with that, and I don't think anybody would see it as controversial. LLMs are actually good at a lot of things, but thinking isn't one of them, and they typically aren't much help once you're an expert yourself. That's why LLMs know more about human anatomy than I do, but probably not more than most people with a medical degree.

[–] wischi@programming.dev 3 points 2 months ago (4 children)

I can't speak for Lemmy, but I'm personally not against LLMs and use them on a regular basis. As Pennomi said (and I totally agree), LLMs are a tool, and we should use that tool for the things it's good at. But "thinking" is not one of those things, and software engineering requires a ton of thinking. Of course there are tasks (boilerplate, etc.) where no real thinking is required, but non-AI tools like code completion/IntelliSense, macros, and code snippets/templates already cover those, and I was never bottlenecked by my typing speed when writing software.

The bottleneck was always the time I needed to plan the structure of the software and to design good, correct abstractions and the overall architecture. Exactly the things LLMs can't do.

Copilot even fails to stick to the coding style of the very file it's editing, just because it saw a different style more often during training.

[–] wischi@programming.dev -1 points 2 months ago* (last edited 2 months ago)

There isn't really any doubt that AI (especially AGI) will eventually surpass humans on all thinking tasks, unless we have a mass extinction event first. But current LLMs are nowhere close to actual human intelligence.

[–] wischi@programming.dev 12 points 2 months ago* (last edited 2 months ago) (2 children)

A drill press (or its inventor) doesn't claim it can do that, but with LLMs the claim is that they can replace humans on a lot of thinking tasks. The vendors even brag about benchmark results, claim Bachelor's, Master's, and PhD-level intelligence, and call them "reasoning" models, yet the models still fail to beat my niece at tic tac toe, who, by the way, doesn't have a PhD in anything 🤣

LLMs are typically good at things that appeared a lot in their training data. If you are writing software, there are certainly patterns the LLM saw a lot of during training. But that is actually the biggest problem: it will happily generate code that looks OK, even during PR review, but blows up in your face a few weeks later.

If they can't handle things they did see during training (just sparsely, like tic tac toe), they won't be able to produce code you should use in production. I wouldn't trust any junior dev who doesn't place their O right next to the opponent's two Xs.
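For anyone wondering what "place the O next to the two Xs" means concretely, it's the one rule every human player applies without thinking: block an immediate win. A toy sketch of my own (nothing to do with any model's internals):

```python
# Toy sketch: the "block the win" rule in tic tac toe.
# Board is a list of 9 cells, row-major: "X", "O", or " ".
LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def blocking_move(board, opponent="X"):
    """Return the index that blocks an immediate win, or None."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(opponent) == 2 and cells.count(" ") == 1:
            return (a, b, c)[cells.index(" ")]
    return None

# Two Xs in the top row: the only sane O goes on index 2.
print(blocking_move(["X", "X", " ", " ", "O", " ", " ", " ", " "]))  # 2
```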

[–] wischi@programming.dev 4 points 2 months ago (10 children)

I don't think it's cherry-picking. Why would I trust a tool with far more complex logic when it can't even prevent three crosses in a row? Writing pretty much any software that does more than render a few buttons requires a lot of planning and thinking, and those models clearly don't have the capability to plan and think when they lose tic tac toe games.

[–] wischi@programming.dev 5 points 2 months ago* (last edited 2 months ago) (5 children)

Honest question: how is that sponge an animal, and how is "animal" defined? If we grind something through a sieve and it reassembles, surely the lifeform can't be too complicated.

[–] wischi@programming.dev 6 points 2 months ago

Play ASCII tic tac toe against 4o a few times. A model that can't even draw a tic tac toe game consistently shouldn't write production code.
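If you want to try it yourself, here's roughly how. A minimal sketch assuming the official openai Python package and an OPENAI_API_KEY in the environment; the prompt wording is just one reasonable choice, not a known-good recipe:

```python
# Minimal sketch: playing ASCII tic tac toe against gpt-4o.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{
    "role": "system",
    "content": "We are playing tic tac toe on a 3x3 ASCII board. "
               "You are O, I am X. Always reply with the full updated board.",
}]

board = "X| | \n-+-+-\n | | \n-+-+-\n | | "
while True:
    messages.append({"role": "user", "content": board})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = resp.choices[0].message.content
    print(reply)  # check by hand whether the board is still consistent
    messages.append({"role": "assistant", "content": reply})
    board = input("Paste the board with your next X (Ctrl+C to quit):\n")
```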

[–] wischi@programming.dev 12 points 2 months ago* (last edited 2 months ago) (13 children)

Practically no LLM is good at any kind of logic. Try playing ASCII tic tac toe against one. Every GPT model lost against my four-year-old niece, and I wouldn't trust her to write production code 🤣

Once a single model (it doesn't have to be an LLM) can beat Stockfish at chess, AlphaGo at Go, and my niece at tic tac toe, and can one-shot (on the spot, scratch pad allowed) a Rust program that compiles and works, then we can start thinking about replacing engineers.

Just take a look at the dotnet runtime source code, where Microsoft employees are currently trying to work with Copilot. It writes PRs with errors like forgetting to add files to projects, produces code that doesn't compile, fixes symptoms instead of underlying problems, etc. (just take a look yourself).

I'm not saying that AI (especially AGI) can't replace humans. It definitely can and will; it's just a matter of time. But state-of-the-art LLMs are basically just extremely good "search engines" or interactive versions of Stack Overflow, not good enough for real "thinking tasks".

[–] wischi@programming.dev 8 points 2 months ago (3 children)

Take your phone number. Now add/subtract 1. Those are your number neighbors.
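In code terms (a toy sketch of my own, treating the number as a plain integer; the example number is made up):

```python
# Toy sketch: the "number neighbors" of a phone number,
# treated as a plain integer (leading zeros preserved).
def number_neighbors(phone: str) -> tuple[str, str]:
    digits = "".join(ch for ch in phone if ch.isdigit())
    n = int(digits)
    return (str(n - 1).zfill(len(digits)), str(n + 1).zfill(len(digits)))

print(number_neighbors("555-0123"))  # ('5550122', '5550124')
```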

[–] wischi@programming.dev 4 points 2 months ago (1 children)

Can't really be a bit of both, because they can't confirm shit if they don't know what you look like in the first place. It could be to confirm that you're human (and maybe that you don't already have an account), but they can't confirm your "identity".
