this post was submitted on 03 May 2025
1407 points (99.2% liked)

Alt Text: an image of Agent Smith from The Matrix with the following text superimposed, "1999 was described as being the peak of human civilization in 'The Matrix' and I laughed because that obviously wouldn't age well and then the next 25 years happened and I realized that yeah maybe the machines had a point."

[–] Sludgehammer@lemmy.world 182 points 3 months ago* (last edited 3 months ago) (57 children)

When I heard that line I was like "Yeah, sure. We'll never have AI in my lifespan" and you know what? I was right.

What I wasn't expecting was for a bunch of tech bros to create an advanced chatbot and announce "Behold! We have created AI, let's have it do all of our thinking for us!" while the chatbot spits out buggy code and suggests mixing glue into your pizza sauce.

[–] masterspace@lemmy.ca -3 points 3 months ago* (last edited 3 months ago) (11 children)

When I heard that line I was like "Yeah, sure. We'll never have AI in my lifespan" and you know what? I was right.

Unless you just died or are about to, you can't really confidently make that statement.

There's no technical reason to think we won't within the next ~20-50 years. We may not, and there may turn out to be a reason why we can't. But the previous big technical hurdles were the amount of compute needed and the fact that computers couldn't handle fuzzy pattern matching. Modern AI has effectively solved the pattern-matching problem, and current large models like ChatGPT already have more parameters than there are neurons in the human brain (though far fewer than its synapses), let alone the compute that will be available to them in 30 years.
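
For a rough sense of the scale involved, here's a back-of-envelope sketch (figures are ballpark: ~86 billion neurons and ~100 trillion synapses in a human brain, ~175 billion parameters in a GPT-3-class model, and whether you compare parameters to neurons or to synapses changes the picture entirely):

```python
# Rough back-of-envelope comparison (illustrative figures only).
brain_neurons = 86e9      # ~86 billion neurons in a human brain
brain_synapses = 100e12   # ~100 trillion synapses
gpt3_parameters = 175e9   # GPT-3's published parameter count

print(f"Parameters vs. neurons:  {gpt3_parameters / brain_neurons:.1f}x")
print(f"Parameters vs. synapses: {gpt3_parameters / brain_synapses:.3f}x")
# Parameters vs. neurons:  2.0x
# Parameters vs. synapses: 0.002x
```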

[–] lowleveldata@programming.dev 12 points 3 months ago (1 children)

the previous big technical hurdles were the amount of compute needed and that computers couldn’t handle fuzzy pattern matching

Was it? I thought it was always that we haven't quite figured out what thinking really is

[–] masterspace@lemmy.ca 2 points 3 months ago* (last edited 3 months ago) (1 children)

I mean, no, not really. We know what thinking is. It's neurons firing in your brain in varying patterns.

What we don't know is the exact wiring of those neurons in our brain. So that's the current challenge.

But previously, we couldn't even effectively simulate neurons firing in a brain. Neural networks are called that because they can effectively simulate the way that neurons fire (just in silicon), and that makes them really good at all the fuzzy pattern-matching problems that computers used to be really bad at.

So now the challenge is figuring out the wiring of our brains, and/or figuring out a way of creating intelligence that doesn't use the wiring of our brains. Both are entirely possible now that we can experiment with, build, and combine simulated neurons at roughly the same scale as the human brain.
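
To make "simulated neurons" concrete, here's a minimal sketch of a single artificial neuron: a weighted sum of inputs pushed through a nonlinearity. The weights here are hand-picked for illustration (to recognize a 3x3 "X" pattern); real networks learn them from data. The point is the fuzziness: flip a pixel and it still fires.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, then a sigmoid "firing" nonlinearity.
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-activation))

# Weights favor the diagonal pixels of an X, penalize the rest.
weights = [ 2, -2,  2,
           -2,  2, -2,
            2, -2,  2]
bias = -6

clean_x = [1, 0, 1,
           0, 1, 0,
           1, 0, 1]
noisy_x = [1, 0, 1,
           0, 1, 0,
           0, 0, 1]   # one corner pixel flipped

print(neuron(clean_x, weights, bias))  # ~0.98: strong "fire"
print(neuron(noisy_x, weights, bias))  # ~0.88: still fires despite the noise
```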

[–] lowleveldata@programming.dev 2 points 3 months ago (1 children)

Aren't you just saying the same thing? We know it has something to do with neurons, but we haven't figured out exactly how

[–] masterspace@lemmy.ca 2 points 3 months ago* (last edited 3 months ago)

The distinction is that it's not 'something to do with neurons', it's 'neurons firing and signalling each other'.

Like, we know the exact mechanism by which thinking happens, we just don't know the precise wiring pattern necessary to recreate the way that we think in particular.

And previously, we couldn't effectively simulate that mechanism with computer chips, now we can.
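
As a toy illustration of that mechanism, here's a minimal leaky integrate-and-fire neuron: charge builds up from input, leaks away over time, and the neuron fires and resets when it crosses a threshold. The constants are illustrative, not biologically calibrated.

```python
def simulate(input_current, steps=50, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for t in range(steps):
        potential = potential * leak + input_current  # integrate + leak
        if potential >= threshold:                    # fire and reset
            spikes.append(t)
            potential = 0.0
    return spikes

# Stronger input -> faster spiking, which is (roughly) how real
# neurons encode signal strength in their firing rate.
print(simulate(0.15))  # sparse spikes:   [10, 21, 32, 43]
print(simulate(0.40))  # frequent spikes: [2, 5, 8, ..., 47]
```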
