Deebster

joined 4 years ago
[–] Deebster@lemmy.ml 32 points 1 year ago (13 children)

Americans visited the UK during WW2's rationing and never updated their stereotypes.

[–] Deebster@lemmy.ml 6 points 1 year ago (1 children)

Huh, so it is! Growing up in the UK, the US version seemed to be on more, and I'd assumed that that was the original.

[–] Deebster@lemmy.ml 10 points 1 year ago (1 children)

You've missed off the ! so Voyager thinks it's an email address.

!mullets@lemmy.ca

[–] Deebster@lemmy.ml 6 points 1 year ago* (last edited 1 year ago) (2 children)

I doubt that you can get your skin hot enough to denature those proteins without damaging yourself. I've given myself a blister before trying.

[–] Deebster@lemmy.ml 3 points 1 year ago (1 children)

Over many years, I've settled on hydrocortisone cream followed by an ice cube. Those little buggers love me.

[–] Deebster@lemmy.ml 3 points 1 year ago (1 children)

Hmm, I think they're close enough to be able to say a neural network is modelled on how a brain works - it's not the same, but then you reach the other side of the semantics coin (like the "can a submarine swim" question).

The plasticity part is an interesting point, and I'd need to research that to respond properly. I don't know, for example, if they freeze the model because otherwise input would ruin it (internet teaching them to be sweaty racists, for example), or because it's so expensive/slow to train, or high error rates, or it's impossible, etc.

When talking to laymen I've explained LLMs as glorified text autocomplete, but there's some discussion on the boundary of science and philosophy asking whether intelligence is a side effect of being able to predict better.
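To make the "glorified autocomplete" framing concrete, here's a toy sketch (my own illustration, not how any real LLM is implemented): count which word follows each word in a tiny corpus, then always predict the most common successor. An LLM does conceptually the same next-token prediction, just with a neural network, a huge context window, and a vastly larger corpus.

```python
from collections import Counter, defaultdict

# Build a bigram "autocomplete" table from a toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" - it follows "the" most often
```

The philosophical question above is whether doing this prediction well enough, at scale, starts to constitute something we'd call intelligence.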

[–] Deebster@lemmy.ml 1 points 1 year ago

Humans invent stuff (without realising it) too, so I don't think that's enough to disqualify something from being intelligent.

The interesting question is how much of this is due to the training goal basically being "a sufficiently convincing response to satisfy a person" (pretty much the same as on social media) and how much of it is a fundamental flaw in the whole idea.

[–] Deebster@lemmy.ml 4 points 1 year ago* (last edited 1 year ago) (3 children)

I agree with your broad point, but absolutely not in this case. Large Language Models are 100% AI, they're fairly cutting edge in the field, they're based on how human brains work, and even a few of the computer scientists working on them have wondered if this is genuine intelligence.

On the spectrum of scripted behaviour in Doom up to sci-fi depictions of sentient silicon-based minds, I think we're past the halfway point.

[–] Deebster@lemmy.ml 191 points 1 year ago (1 children)

I had to check, but the real thing is the Dairy Council and this is a parody account. Obviously it's way more interesting than the real @dairyuk account.

[–] Deebster@lemmy.ml 48 points 1 year ago (21 children)

You're claiming that Generative AI isn't AI? Weird claim. It's not AGI, but it's definitely under the umbrella of the term "AI", and at the more advanced end (compared to e.g. video game AI).

[–] Deebster@lemmy.ml 41 points 1 year ago (23 children)

This one's obviously fake because of the capitalisation errors and the "..", but the fact that it's otherwise (kinda) plausible shows how useless AI is turning out to be.

[–] Deebster@lemmy.ml 1 points 1 year ago

Road distances are in miles if you're driving, but if you're running (maybe also cycling?) then it's in km.
