My only hope for this is that the GPUs in these CDO spiritual successors become dirt cheap afterwards.
They hopefully will, since the end of the AI bubble will kill AI for good and crash GPU demand.
Bonus: He also appears to think LLM conversations should be exempt from evidence retention requirements due to ‘AI privilege’ (tweet).
Hot take of the day: Clankers have no rights, and that is a good thing
Sidenote: The rats should count themselves extremely fucking lucky they've avoided getting skewered by South Park, because Parker and Stone would likely have a fucking field day with their beliefs
Apparently LinkedIn’s cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.
This sounds like it's going to be horrible.
Zach of SMBC has thoughts on it:
Ah, good, I'll just take his word for it, the thought of reading it gives me psychic da-
the authors at one point note that in 1984, Big Brother's listening device means there is two-way communication, and so the people have a voice. They wonder why Orwell didn't think of this.
The closest thing I have to a coherent response is that Boondocks clip of Uncle Ruckus going "Read, nigga, read!" (from Stinkmeaner Strikes Back, if you're wondering), because how breathtakingly stupid do you have to be to miss the point that fucking hard?
“biological civilization is about to create artificial superintelligence” is it though?
I'm gonna give my quick-and-dirty opinion on this, don't expect a lengthy defence.
Short answer: no. Long answer: no, intelligence cannot be created by blindly imitating it with mere silicon.
Nitpicking, but at what point do we start calling it race pseudoscience?
"Hating Black People" would be a more fitting name.
“Music is just like meth, cocaine or weed. All pleasure no value. Don’t listen to music.”
(Considering how many rationalists are also methheads, this joke wrote itself)
Ed Zitron's planning a follow-up to "The Subprime AI Crisis":
(It's gonna be a premium column, BTW)
EDIT: Swapped the image for one that's easier to read
This is pure speculation, but I get the feeling Microsoft's gonna significantly downsize, if not collapse, by the decade's end.
This recent move's gonna kneecap Microsoft's ability to function as a company, and their heavy investment in AI means they'll likely take the brunt of the impact when the bubble bursts.
New blogpost from Iris Meredith: Vulgar, horny and threatening, a how-to guide on opposing the tech industry
Me, two months ago
Well, it appears I've fucking called it - I recently stumbled across some particularly bizarre discourse on Tumblr, reportedly over a highly unsubtle allegory for transmisogynistic violence:
If you want my opinion on this small-scale debacle, I've got two thoughts:
First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI art's uniquely AI-like sloppiness, and chatbots' uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AI's detriment. In particular, creativity has come to be increasingly viewed as exclusively a human trait, with machines capable only of copying what came before.
Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As I've already noted, the LLM bubble's undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash that's built up against AI, and you've got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory will fail to land without some serious effort on your part.