Okay, complete shot in the dark here - the "humanoid robot" part is an attempt to convince investors they're making AI more humanlike or some shit like that
BlueMonday1984
Saatchi says you can type in a few words and the AI will generate scenes — or even a whole show. There are two test shows. One is Exit Valley, which is a copy of South Park set in Silicon Valley. Here’s an excerpt. [Vimeo]
For anyone who decides not to click, you're not missing out - the "episode" was equivalent to one of those millions of shitty GoAnimate "grounded" animations that you can find on YouTube. (in retrospect, GoAnimate/Vyond was basically AI slop before AI slop was a thing)
The closest that has to a use case is the guys who will do obnoxious parodies because the rights holders won’t like them. Let’s get Mickey Mouse doing something edgy!
Considering Tay AI was deliberately derailed into becoming a Hitler-loving sex robot, and the first wave of AI slop featured deliberately offensive Pixar-styled "posters", I can absolutely see this happening. (At least until The Mouse starts threatening to sue Showrunner into the ground.)
i am not surprised that they are all this dumb: it takes an especially stupid person to decide “yes, i am fine allowing this machine to speak for me”. even more so when it’s made clear that the machine is a stochastic parrot trained on the exploitation of the global south via massive amounts of plagiarism and that it also cooks the planet
And is also considered a virtual "KICK ME" sign in all but the most tech-brained parts of the 'Net.
are all promptfondlers this fucking dumb?
Short answer: Yes.
Long answer: Abso-fucking-lutely yes. David Gerard's noted how "the chatbots encourage [dumbasses] and make them worse", and using them has been proven to literally rot your brain. Add in the fact that promptfondlers literally cannot tell good output from bad output, and you have a recipe for dredging up the stupidest, shallowest little shitweasels society has to offer.
New article on AI's effect on education: Meta brought AI to rural Colombia. Now students are failing exams
(Shocking: the machine made to ruin humanity is ruining humanity)
This reads like something that'd be considered too offensive for South Park (mildly ironic, considering an entire episode infamously milked hard-R's for all they're worth)
Found an attempt to poison LLMs in the wild recently, aimed at trying to trick people into destroying their Cybertrucks:
This is very much in the same vein as the iOS Wave hoax which aimed to trick people into microwaving their phones, and Eduardo Valdés-Hevia's highly successful attempt at engorging Google's AI summary systems.
Poisoning AI systems in this way isn't anything new (Baldur Bjarnason warned about it back in 2023), but seeing it used this way does make me suspect we're gonna see more ChatGPT-powered hoaxes like this in the future.
Fear: There’s a million ways people could die, but featuring ones that require the fewest jumps in practicality seem the most fitting. Perhaps microdrones equipped with bioweapons that spray urban areas. Or malicious actors sending drone swarms to destroy crops or other vital infrastructure.
I can think of some more realistic ideas. Like AI-generated foraging books leading to people being poisoned, or chatbot-induced psychosis leading to suicide, or AI falsely accusing someone and sending a lynch mob after them, or people becoming utterly reliant on AI to function, leaving them vulnerable to being controlled by whoever owns whatever chatbot they're using.
All of these require zero jumps in practicality, and as a bonus, they don't need the "exponential growth" setup LW's AI Doomsday Scenarios™ require.
EDIT: Come to think of it, if you really wanted to make an AI Doomsday™ kinda movie, you could probably do an Idiocracy-style dystopia where the general masses are utterly reliant on AI, the villains control said masses through said AI, and the heroes have to defeat them by breaking the masses' reliance on AI.
To slightly expand on that, there's also a rather well-known(?) quote by English mathematician G.H. Hardy, written in A Mathematician's Apology in 1940:
A science is said to be useful if its development tends to accentuate the existing inequalities in the distribution of wealth, or more directly promotes the destruction of human life.
(Ironically, two of the fields he claimed had no wartime use - number theory and relativity - later found military applications in cryptography and the development of nuclear weapons, respectively.)
Expanding further, Pavel has noted on Bluesky that Russia's mathematical prowess was a consequence of the artillery corps requiring it for trajectory calculations.
HUMANCENTiPAD II: LLM Boogaloo
The only way in which it may succeed as a deterrent is that it actually costs money (film and processing aren't cheap) and requires actual work to do those extra steps.
I expect the "requires actual work" part will work well in deterring AI bros - they're lazy fucks by nature, anything more difficult than "press button for instant gratification" is gonna be a turn-off for them.
With Trump's administration overdosing on crypto and purging competence at all levels, chances are we'll see someone pull this kinda shit on the US gov itself.