Yeah, in some hypothetical bizarro universe where they get their 8 (cutting edge) GPU limit actually passed, I bet the monitoring scheme would be loose enough that Microsoft just has a box to tick, yet strict enough to impose untenable costs on private individuals.
scruiser
So if I understood NVIDIA's "strategy" right, their use of companies like Coreweave is drawing in money from other investors and private equity? Does this mean that, unlike many of the other companies in the current bubble, they aren't going to lose money on net, because they are actually luring in investment from other sources into companies like Coreweave (which is used to buy GPUs and thus flows back to them), while leaving the debt/obligations in the hands of companies like Coreweave? If I'm following right, this is still a long-term losing strategy (assuming some form of AI bubble pop or deflation, which we are all at least reasonably sure of), but the expected result for NVIDIA is more of a massive drop in revenue as opposed to a total collapse of the company under a mountain of debt?
Side note: The way I've seen clanker used has been for the AIs themselves, not their users. I've mostly seen the term in the context of Star Wars memers eager to put their anti-droid memes and jokes to IRL use.
It was nice to see the callout that doomers and accelerationists are two sides of the same coin, both serving the interests of the LLM makers.
Side note (maybe I should make a top-level post): even within the lesswronger/doomer community there are a lot of people who are unimpressed with, or outright critical of, Eliezer's book. Apparently it has a lot of rambling, somewhat off-topic, and/or condescending parables, and key pieces of the AI-doom argument are barely explained, let alone elaborated on.
It's like a cargo cult version of bootstrapping or Monte Carlo methods.
That thread gives me hope. A decade ago, a random internet discussion in which rationalists came up would probably mention "quirky Harry Potter fanfiction" with mixed reviews, whereas all the top comments on that thread are calling out the alt-right pipeline and the racism.
I hadn't heard of Black Lotus. Also, the article fails to mention rationalist/lesswrong ties to that AI-doom-focused Zen Buddhist cult that was discussed on LessWrong recently (looking it up, the name is Maple), so you can add that to the cult count.
I'm at least enjoying the many comments calling her out, but damn, she just doubles down even after being given many, many examples of him being a far-right nationalist monster who engaged in attempts to outright subvert democracy.
The Oracle deal seemed absurd, but I didn't realize how absurd until I saw Ed's compilation of the numbers. Notably, even if OpenAI meets its projected revenue numbers (which are absurdly optimistic, like bigger than Netflix and Spotify and several other services combined), paying Oracle (along with everyone else it has promised to buy compute from) will leave it spending more than it takes in until 2030, meaning it has to raise even more money.
I've been assuming Sam Altman has absolutely no real belief that LLMs would lead to AGI and has instead been cynically cashing in on the sci-fi hype, but OpenAI's choices don't make any long-term sense if AGI isn't coming. The obvious explanation is that at this point he simply plans to grift and hype (while staying technically within the bounds of legality) to buy a few years of personal enrichment. And to even ask what his "real beliefs" are gives him too much credit.
Just to remind everyone: the market can stay irrational longer than you can stay solvent!
This feels like a symptom of liberals having a diluted, incomplete understanding of what made past movements that utilized protest succeed or fail.
It is pretty good as a source of science fiction ideas. I mean, lots of their ideas originate from science fiction, but their original ideas would make fun fantasy sci-fi concepts. Like, looking at their current front page... https://www.lesswrong.com/posts/WLFRkm3PhJ3Ty27QH/the-cats-are-on-to-something ...cats deliberately latching on to humans as the laziest way of advancing their own values across the future seems like a solid piece of fantasy worldbuilding...
There's also Eliezer's nihilistic outlook, which is deftly woven into his parables-- his personal philosophy draws heavily from Gödel, Escher, Bach, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of his parables, to realize that they're not just entertaining- they say something deep about the nature of Intelligence. As a consequence people who dislike IABIED truly ARE idiots- of course they wouldn't appreciate, for instance, the motivation in Eliezer's existential catchphrase "Tsuyoku Naritai!," which itself is a cryptic reference to Japanese culture. I'm smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Nate Soares' genius unfolds itself on their copy of IABIED. What fools... how I pity them. And yes, by the way, I DO have a rationalist tattoo. And no, you cannot see it. It's for the math pets' eyes only- And even they have to demonstrate that they're within 5 IQ points of my own (preferably lower) beforehand.