this post was submitted on 18 Apr 2024
36 points (97.4% liked)

chapotraphouse


one of the main people invested in making it smarter is saying that soon, it might even become self-sustaining and self-replicating

In a podcast interview with the New York Times' Ezra Klein, Anthropic CEO Dario Amodei discussed "responsible scaling" of the technology — and how without governance, it may start to, well, breed.

didnt-kill-himself

As Amodei explained to Klein, Anthropic uses virology lab biosafety levels as an analogy for AI. Currently, he says, the world is at ASL 2; ASL 4, which would include "autonomy" and "persuasion," may be just around the corner.

"ASL 4 is going to be more about, on the misuse side, enabling state-level actors to greatly increase their capability, which is much harder than enabling random people," Amodei said.

say-the-line-bart-1

"So where we would worry that North Korea or China or Russia could greatly enhance their offensive capabilities in various military areas with AI in a way that would give them a substantial advantage at the geopolitical level

say-the-line-bart-2

"Various measures of these models," he continued, "are pretty close to being able to replicate and survive in the wild."

When Klein asked how long it would take to get to these various threat levels, Amodei — who said he's wont to think "in exponentials" — said he thinks the "replicate and survive in the wild" level could be reached "anywhere from 2025 to 2028."

sentient-ai

[–] Flyberius@hexbear.net 6 points 1 year ago

I want to play with it