TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Bay Area rationalist Sam Kirchner, cofounder of the Berkeley "Stop AI" group, claims "nonviolence isn't working anymore" and goes off the grid. Hasn't been heard from in weeks.
Article has some quotes from Emile Torres.
Jesus, it could be like the Zizians all over again. These guys are all such fucking clowns right up until they very much are not.
Yud’s whole project is a pipeline intended to create zizians, if you believe that Yud is serious about his alignment beliefs. If he isn’t serious then it’s just an unfortunate consequence that he is not trying to address in any meaningful way.
fortunately, yud clarified everything in his recent post concerning the zizians, which indicated that... uh, hmm, that we should use a prediction market to determine whether it's moral to sell LSD to children. maybe he got off track a little
A belief system that inculcates the believer into thinking that the work is the most important duty a human can perform, while also isolating them behind impenetrable pseudo-intellectual esoterica, while also funneling them into economic precarity... sounds like a recipe for ~~delicious brownies~~ trouble.
the real xrisk was the terror clowns we made along the way.
just one rationalist got lost in the wilderness? that's nothing, tell me when all of them are gone
The concern is that they're not "lost in the wilderness" but rather are going to turn up in the vicinity of some newly dead people.
Another day, another instance of rationalists struggling to comprehend how they've been played by the LLM companies: https://www.lesswrong.com/posts/5aKRshJzhojqfbRyo/unless-its-governance-changes-anthropic-is-untrustworthy
A very long, detailed post, elaborating extensively on the many ways Anthropic has played the AI doomers: promising AI safety but behaving like all the other frontier LLM companies, including blocking any and all regulation. The top responses are all tone policing and half-assed denials that don't really engage with the fact that Anthropic has lied and broken "AI safety commitments" to rationalists/lesswrongers/EAs shamelessly and repeatedly:
I feel confused about how to engage with this post. I agree that there's a bunch of evidence here that Anthropic has done various shady things, which I do think should be collected in one place. On the other hand, I keep seeing aggressive critiques from Mikhail that I think are low-quality (more context below), and I expect that a bunch of this post is "spun" in uncharitable ways.
I think it's sort of a type error to refer to Anthropic as something that one could trust or not. Anthropic is a company which has a bunch of executives, employees, board members, LTBT members, external contractors, investors, etc, all of whom have influence over different things the company does.
I would find this all hilarious, except a lot of the regulation and some of the "AI safety commitments" would also address real ethical concerns.
If rationalists could benefit from just one piece of advice, it would be: actions speak louder than words. Right now, I don't think they understand that, given their penchant for 10k word blog posts.
One non-AI example of this is the most expensive fireworks show in history, I mean, the SpaceX Starship program. So far they have had 11 or 12 test flights (I don't care to count the exact number at this point), and not a single one of them has delivered anything into orbit. Fans tend to cling to a few parlor tricks like the "chopstick" stuff. They seem to have forgotten that the goal was to land people on the moon, which was already accomplished over 50 years ago by Apollo 11.
I saw this coming from their very first Starship test flight. They destroyed the launchpad as soon as the rocket lifted off, with massive chunks of concrete flying hundreds of feet into the air. The rocket itself lost control and exploded 4 minutes later. But by far the most damning part was when the camera cut to the SpaceX employees wildly cheering. Later on there were countless spin articles about how this test flight was successful because they collected so much data.
I chose to believe the evidence in front of my eyes over the talking points about how SpaceX was decades ahead of everyone else, SpaceX is a leader in cheap reusable spacecraft, iterative development is great, etc. Now, I choose to look at the actions of the AI companies, and I can easily see that they do not have any ethics. Meanwhile, the rationalists are hypnotized by the Anthropic critihype blog posts about how their AI is dangerous.
(e, cw: genocide and culturally-targeted hate by the felon bot)
world's most divorced man continues outperforming black holes at sucking
404 also recently did a piece on his ego-maintenance society-destroying vainglory projects
imagine what it's like in his head. era-defining levels of vacuity.
hi please hate this article headline with me
If the TCP/IP stack had feelings, it would have good reason to feel insulted.
hi hi I am budweiser jabrony please join my new famous and good website 'tapering incorrectness dot com' where we speculate about which OSI layers have the most consciousness (zero is not a valid amount of consciousness) also give money and prima nocta. thanks
Hey Google, did I give you permission to delete my entire D drive?
It's almost as if letting an automated plagiarism machine execute arbitrary commands on your computer is a bad idea.
The documentation for "Turbo mode" for Google Antigravity:
Turbo: Always auto-execute terminal commands (except those in a configurable Deny list)
No warning. No paragraph telling the user why it might be a bad idea. No discussion of the long history of malformed scripts leading to data loss. No discussion of the risk of injection attacks. It's not even named like the dangerous modes in other software ("force" or "yolo" or "danger").
Just a cool marketing name that makes users want to turn it on. Heck if I'm using some software and I see any button called "turbo" I'm pressing that.
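For anyone wondering why a "configurable Deny list" is cold comfort: a deny list that matches on command names is trivially sidestepped by shell indirection. A minimal sketch of the failure mode (all names hypothetical, this is not Antigravity's actual implementation):

```python
# hypothetical name-based deny list of the kind "Turbo mode" describes;
# not Antigravity's real code, just the general shape of the idea
import shlex

DENY_LIST = {"rm", "rmdir", "dd", "mkfs", "format"}

def is_allowed(command: str) -> bool:
    """Block a command only if its first token is on the deny list."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] not in DENY_LIST

print(is_allowed("rm -rf ~"))          # False: the obvious case is caught
print(is_allowed('sh -c "rm -rf ~"'))  # True: indirection sails right past
```

An allow list of known-safe commands, with confirmation for everything else, is the conventional answer here; a deny list only blocks the attacks someone has already thought of.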
It's hard not to give the user a hard time when they write:
Bro, I didn’t know I needed a seatbelt for AI.
But really, they're up against a big corporation that wants to make LLMs seem amazing and safe and autonomous. One hand feeds the user the message that LLMs will do all their work for them, while the other hand tells the user "well, in our small print somewhere we used the phrase 'Gemini can make mistakes', so why did you enable turbo mode??"
yeah as I posted on mastodong.soc, it continues to make me boggle that people think these fucking ridiculous autoplag liarsynth machines are any good
but it is very fucking funny to watch them FAFO
After the bubble collapses, I believe there is going to be a rule of thumb for whatever tiny niche use cases LLMs might have: "Never let an LLM have any decision-making power." At most, LLMs will serve as a heuristic function for an algorithm that actually works.
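To make that last idea concrete, a toy sketch of "LLM as heuristic, not decision-maker" (the llm_propose stub is hypothetical, standing in for any model call):

```python
# toy sketch: the LLM proposes, a deterministic verifier disposes

def llm_propose(task: str) -> list[str]:
    """Hypothetical stand-in for an LLM returning candidate answers."""
    return ["candidate_a", "candidate_b", "candidate_c"]

def verify(task: str, candidate: str) -> bool:
    """Placeholder for a real check: compiles, passes tests, satisfies constraints."""
    return candidate == "candidate_b"

def solve(task: str) -> str | None:
    # the model only narrows the search space; acceptance is never its call
    for candidate in llm_propose(task):
        if verify(task, candidate):
            return candidate
    return None  # fall back to a non-LLM path rather than trust the model

print(solve("example task"))  # candidate_b
```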
Unlike the railroads of the First Gilded Age, I don't think GenAI will have many long term viable use cases. The problem is that it has two characteristics that do not go well together: unreliability and expense. Generally, it's not worth spending lots of money on a task where you don't need reliability.
The sheer expense of GenAI has been subsidized by the massive amounts of money thrown at it by tech CEOs and venture capital. People do not realize how much hundreds of billions of dollars is. On a more concrete scale, people only see the fun little chat box when they open ChatGPT, and they do not see the millions of dollars worth of hardware needed to even run a single instance of ChatGPT. The unreliability of GenAI is much harder to hide completely, but it has been masked by some of the most aggressive marketing in history towards an audience that has already drunk the tech hype Kool-Aid. Who else would look at a tool that deletes their entire hard drive and still ever consider using it again?
The unreliability is not really solvable (after hundreds of billions of dollars of trying), but the expense can be reduced at the cost of making the model even less reliable. I expect the true "use cases" to be mainly spam, and perhaps students cheating on homework.
Kovid Goyal, the primary dev of ebook management tool Calibre, has spat in the face of its users by forcing AI "features" into it.
https://awful.systems/post/5776862/8966942 😭
also this guy is a bit of a doofus, e.g. https://bugs.launchpad.net/calibre/+bug/853934, where he is a dick to someone reporting a bug, and https://bugs.launchpad.net/calibre/+bug/885027, where someone points out that you can execute anything as root because of a security issue, and he argues like a total shithead
You mean that a program designed to let an unprivileged user mount/unmount/eject anything he wants has a security flaw because it allows him to mount/unmount/eject anything he wants? I'm shocked.

Implement a system that allows an application to mount/unmount/eject USB devices connected to the system securely, then make sure that system is universally adopted on every linux install in the universe. Once you've done that, feel free to re-open this ticket.
i would not invite this person to my birthday
Reposted from Sunday, for those of you who might find it interesting but didn't see it: here's an article about the ghastly state of IT project management around the world, with a brief reference to ai which grabbed my attention and made me read the rest, even though it isn't about ai at all.
Few IT projects are displays of rational decision-making from which AI can or should learn.
Which, haha, is a great quote but highlights an interesting issue that I hadn’t really thought about before: if your training data doesn’t have any examples of what “good” actually is, then even if your llm could tell the difference between good and bad, which it can’t, you’re still going to get mediocrity out (at best). Whole new vistas of inflexible managerial fashion are opening up ahead of us.
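A purely illustrative toy of the point (nothing like how an actual llm works, but the distributional problem is the same):

```python
# a model that imitates its training distribution can't emit
# outcomes the training data never contains
import random

training_outcomes = ["late", "over budget", "descoped", "cancelled"]
samples = [random.choice(training_outcomes) for _ in range(10_000)]
print(samples.count("on time and on budget"))  # 0, every single time
```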
The article continues to talk about how we can’t do IT, and wraps up with
It may be a forlorn request, but surely it is time the IT community stops repeatedly making the same ridiculous mistakes it has made since at least 1968, when the term “software crisis” was coined
It is probably healthy to be reminded that the software industry was in a sorry state before the llms joined in.
This looks like it's relevant to our interests
Hayek's Bastards: Race, Gold, IQ, and the Capitalism of the Far Right by Quinn Slobodian
https://press.princeton.edu/books/hardcover/9781890951917/hayeks-bastards
anyone else spent their saturday looking for gas turbine datasheets? no?
anyway, the bad, no good, haphazard power engineering of crusoe
neoclouds on top of silicon need a lot of power that they can't get, because they can't get a substation big enough, or maybe the provider denied it, so they decided that homemade is just as fine. in order to turn some kind of fuel (could be methane, or maybe not, who knows) into electricity they need gas turbines, and a couple of weeks back there was a story that crusoe got their first aeroderivative gas turbines from GE https://www.tomshardware.com/tech-industry/data-centers-turn-to-ex-airliner-engines-as-ai-power-crunch-bites which means these are old, refurbished, modified jet engines put in a chassis with a generator, with the turbofan removed. in total they booked 29 turbines from GE, LM2500 series, plus some others: PE6000 from another company called proenergy,* and probably more (?) for an alleged 4.5GW total. for neoclouds, generators of this type have major advantages: 1. they exist and the backlog isn't horrific (the first ones delivered were contracted in december 2024, so about 10 months) and onsite construction is limited (sometimes less than a month) 2. these things are compact and reasonably powerful, and can be loaded on a trailer in parts and just delivered wherever 3. at the same time they are small enough that piecewise installation is reasonable (34.4MW each, so just under 1GW total from GE, spread across 29)
and that's about it for advantages. these choices are really fucking weird. the state of the art in turning gas into electricity is to first take as big a gas turbine as practical, which might be 100MW, 350MW, there are even bigger ones. this is because the efficiency of gas turbines increases with size: a big part of the losses comes from gas slipping through the gap between the blades and the stator/rotor. the bigger the turbine, the bigger the cross-sectional area occupied by blades (~ r^2), and so the gap (~ r) matters less. this effect alone is responsible for differences in efficiency of a couple of percent just for the gas turbine: for example, GE's 35MW-ish aeroderivative turbine (LM2500) runs at 39.8% efficiency, while another GE aeroderivative turbine (LMS100) at 115MW gets 43.9%. our neocloud disruptors stop there, with their just-under-40% efficient turbines (and probably lower*), while the exhaust is well over 500C and can be used to boil water, which is what any serious powerplant does in combined cycle. that additional steam turbine gives about a third of the total generated energy, bringing total efficiency to some 60-63%.
so right off the bat, crusoe throws away about a third of the usable energy, or alternatively, for the same amount of power they burn 50-70% more gas, if they even use gas and not, for example, diesel. they specifically didn't order turbines with this extra heat recovery mechanism: based on the datasheet https://www.gevernova.com/content/dam/gepower-new/global/en_US/downloads/gas-new-site/products/gas-turbines/gev-aero-fact-sheets/GEA35746-GEV-LM2500XPRESS-Product-Factsheet.pdf they would get over 1.37GW with it, while the GE press announcement talked about "just under 1GW", which matches only the oldest type of turbine there (guess: cheapest), or maybe some mix with even older ones than what is shown. this is not what a serious power-generating business would do, because for them every fraction of a percent matters. while it might be possible to add heat recovery steam boilers and steam turbine units later, that means extra installation time (capex per MW turns out to be similar) and more backlog, and it requires more planning and real estate and foresight, and if they had those they wouldn't be here in the first place, would they. even then, efficiencies only get to maybe 55%, because it turns out the heat exchangers required for the professional stuff are huge and can't be loaded on a trailer, so they have to go with less
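sanity check on that 50-70% figure, using only the efficiency numbers quoted above (and the 37.5% proenergy guess from the footnote):

```python
# back-of-envelope: extra gas burned for the same energy by a simple-cycle
# turbine, compared to a 60-63% efficient combined-cycle plant
simple_cycle = {"LM2500": 0.398, "PE6000 (guess)": 0.375}
combined_cycle = (0.60, 0.63)

for name, eta in simple_cycle.items():
    extra = [cc / eta - 1 for cc in combined_cycle]
    print(f"{name}: {extra[0]:.0%} to {extra[1]:.0%} more gas for the same energy")
# LM2500: 51% to 58% more; PE6000 guess: 60% to 68% more
```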
so it sorta gets them power short term, and financially it doesn't look good long term, but maybe they know that and don't care, because they know they won't be around to pay the bills for gas. but also, if these glorified gensets are only used during outages, or otherwise not at full capacity, then it doesn't matter that much. also, gas turbines need to run hot in order to run efficiently, but the hottest temperature you can get with normal fuels would melt any material we can make blades from, so the solution is to take in double or triple the amount of air needed and dilute the hot gases that way, which also happens to create perfect conditions for nitric oxide synthesis, which means smog downwind. now there are SCRs which are supposed to deal with that, but that didn't stop musk from poisoning the people of memphis when he did a very similar thing
* proenergy takes the same jet engine that GE does and turns it into the PE6000, which is probably mostly the same thing as the LM6000, except that the GE version is 51MW and the proenergy one 48MW. i don't know whether it's derated or just less efficient, but for the same gas consumption it would be 37.5%
e: proenergy was contracted for 1GW, 21x48MW turbines https://spectrum.ieee.org/ai-data-centers and GE for another 1GW, 29x34.4MW https://www.gevernova.com/news/articles/going-big-support-data-center-growth-rising-renewables-crusoe-ordering-flexible-gas which leaves 2.5GW unaccounted for. another big one is siemens, but they haven't said anything. then 1.5GW of nuclear??? from blue energy, and only from 2031 on (lol)
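the capacity accounting, for anyone following along (using only the numbers from the announcements above):

```python
# tallying the announced orders against the alleged 4.5GW total
ge_mw = 29 * 34.4        # GE LM2500-series units
proenergy_mw = 21 * 48   # proenergy PE6000 units
announced_gw = (ge_mw + proenergy_mw) / 1000
print(f"announced: {announced_gw:.2f}GW")              # ~2.01GW
print(f"unaccounted for: {4.5 - announced_gw:.2f}GW")  # ~2.49GW
```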
something i was thinking about yesterday: so many people i ~~respect~~ used to respect have admitted to using llms as a search engine. even after i explain the seven problems with using a chatbot this way:
- wrong tool for the job
- bad tool
- are you fucking serious?
- environmental impact
- ethics of how the data was gathered/curated to generate^[they call this "training" but i try to avoid anthropomorphising chatbots] the model
- privacy policy of these companies is a nightmare
- seriously what is wrong with you
they continue to do it. the ease of use, together with the valid syntax output by the llm, seems to short-circuit something in the end-user's brain.
anyway, in the same way that some vibe-coded bullshit will end up exploding down the line, i wonder whether the use of llms as a search engine is going to have some similar unintended consequences
"oh, yeah, sorry boss, the ai told me that mr. robot was pretty accurate, idk why all of our secrets got leaked. i watched the entire series."
additionally, i wonder about the timing. will we see sporadic incidents of shit exploding, or will there be a cascade of chickens coming home to roost?
Yes i know the kid in the omelas hole gets tortured each time i use the woe engine to generate an email. Is that bad?
Is there any search engine that isn't pushing an "AI mode" of sorts? Some are sneakier or give an option to "opt out", like duckduckgo, but this all feels temporary until it becomes the only option.
I have found it strange how many people will say "I asked chatgpt" with the same normalcy that "googling" once had.
They’re doing eugenics ads on the nyc subway https://hexa.club/@phooky/115640559205329476
that being a hung banner (rather than a wall mount or the like) borders on a tacit acknowledgement that they know their shit is unpopular and would get vandalised in a fucking second if it were easy (or easier!) to get to
even then, I suspect that banner will not stay unscathed for long
Dexerto has reported on an unnamed Japanese game studio weeding out promptfondlers (by having applicants draw something in-person during interviews).
Unsurprisingly, the replies have become a promptfondler shooting gallery. Personal "favourite" goes to the guy who casually admits he can't tell art from slop.
/r/SneerClub discusses MIRI financials and how Yud ended up getting paid $600K per year from their cache.
Malo Bourgon, MIRI CEO, makes a cameo in the comments to discuss Ziz's claims about SA payoffs and how he thinks Yud's salary (the equivalent of like 150,000 malaria vaccines) is defensible for reasons that definitely exist, but they live in Canada, you can't see them.
Guy does a terrible job explaining literally anything. Why, when trying to explain all the SA-based drama, does he choose to create an analogy where the former employee is heavily implied to have murdered his wife?
S/o to cinnaverses for mixing it up in there.
Workers organizing against genai policies in the workplace: http://workersdecide.tech/
Sounds like exactly the thing unions and labor organizing are good for. Glad to see it.
A lobster wonders why the news that a centi-millionaire amateur jet pilot has decided to offload the cost of developing his pet terminal software onto peons begging for contributions has almost 100 upvotes, and is absolutely savaged for being rude to their betters
https://lobste.rs/s/dxqyh4/ghostty_is_now_non_profit#c_b0yttk
bring back rich people rolling their own submarines and getting crushed to death in the bathyal zone
Apparently we are part of the rising trend of AI denialism
Author Louis Rosenberg is "an engineer, researcher, inventor, and entrepreneur" according to his PR-stinking Wikipage: https://en.wikipedia.org/wiki/Louis_B._Rosenberg. I am sure he is utterly impartial and fair with regards to AI.
i hereby propose a new metric for a popular publication, the epstein number (Ē), denoting the number of authors who took flights to epstein's rape island. generally, credible publications should have Ē=0. this one, after a very quick look, has Ē=2, and also hosts sabine hossenfelder.
More grok shit: https://futurism.com/artificial-intelligence/grok-doxxing. In contrast to most other models, it is very good at doxxing people.
Amazing how everything Musk makes is the worst in class (and somehow the Rationalists think he will be their saviour (that is because he is a eugenicist)).
(thinks) groxxing
the base use for LLMs is gonna be hypertargeted advertising, malware, political propaganda etc
well the base case for LLMs is that, right now
the privacy nerds won't know what hit them
Major RAM/SSD manufacturer Micron just shut down its Crucial brand to sell shovels in the AI gold rush, worsening an already-serious RAM shortage for consumer parts.
Just another way people are paying more for less, thanks to AI.
2 links from my feeds with crossover here
Lawyers, Guns and Money: The Data Center Backlash
Techdirt: Radicalized Anti-AI Activist Should Be A Wake Up Call For Doomer Rhetoric
Unfortunately, Techdirt's Mike Masnick is a signatory of some bullshit GenAI-collaborationist manifesto called The Resonant Computing Manifesto, along with other suspects like Anil Dash. Like so many other technolibertarian manifestos, it naturally declines to say how their wonderful vision would be economically feasible in a world without meaningful brakes on the very tech giants they profess to oppose.
From Lila Byock:
A 4th grader was assigned to design a book cover for Pippi Longstocking using Adobe for Education.
The result is, in technical terms, four pictures of a schoolgirl waifu in fetishwear.