In case you needed more evidence that the Atlantic is a shitty rag.
also, they misspelled "Eliezer", lol
My copy of "the singularity is near" also does that btw.
(E: Still looking to confirm that this isn't just my copy or if it is common, but when I'm in a library I never think to look for the book, and I don't think I've ever seen it anywhere anyway. It is the 'our sole responsibility...' quote, no idea which page, but it was early in the book. 'Yudnowsky'.)
Image and transcript
Transcript: Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve....[T]here are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from "impossible" to "obvious." Move a substantial degree upwards and all of them will become obvious.
—ELIEZER S. YUDNOWSKY, STARING INTO THE SINGULARITY, 1996
Transcript end.
How little has changed: he has always believed intelligence is magic. Also lol at the 'smallest bit'. Not totally fair to sneer at this, as he wrote it when he was 17, but oof, being quoted in a book like this will not have been good for Yudkowsky's ego.
New edition of AI Killed My Job, focusing on how translators got fucked over by the AI bubble.
The thing that kills me about this is that, speaking as a tragically monolingual person, the MTPE (machine translation post-editing) work doesn't sound like it's actually less skilled than translating from scratch. Like, the skill was never in being able to type fast enough or read faster or whatever; it was in the difficult process of considering the meaning of what was being said and adapting it to another language and culture. If you're editing chatbot output you're still doing all of that skilled work, but being asked to accept half as much money for it because a robot made a first attempt.
In terms of that old joke about auto mechanics, AI is automating the part where you smack the engine in the right place, but you still need to know where to hit it in order to evaluate whether it did a good job.
It's also a much less pleasant task, like wearing a straitjacket, and compared to CAT (computer-assisted translation; e.g., automatically applying glossaries for technical terms) it actually slows you down if the machine's translation is far from how you would naturally phrase things.
Source: parents are professional translators. (They've certainly seen work dry up; they don't do MTPE, as it's still not really worth their time; they still get $$$ for critically important stuff and live interpreting. [Live interpreting is definitely a skill that takes time to learn compared to translation.]) A rough sketch of the glossary part of CAT is below.
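To make the glossary point concrete, here is a minimal Python sketch of what that part of a CAT tool does. The glossary entries and function names are made up for illustration; real CAT tools are obviously far more involved:

```python
import re

# Hypothetical client glossary: source terms mapped to approved target terms.
GLOSSARY = {
    "circuit breaker": "disjoncteur",
    "busbar": "jeu de barres",
}

def annotate_segment(source_segment: str) -> str:
    """Flag glossary hits so the translator knows which terms are locked in."""
    hits = [term for term in GLOSSARY
            if re.search(rf"\b{re.escape(term)}\b", source_segment, re.IGNORECASE)]
    if not hits:
        return source_segment
    notes = "; ".join(f"{term} -> {GLOSSARY[term]}" for term in hits)
    return f"{source_segment}  [glossary: {notes}]"

print(annotate_segment("Replace the circuit breaker before testing."))
# Replace the circuit breaker before testing.  [glossary: circuit breaker -> disjoncteur]
```

The point being: this kind of assistance leaves the translator in charge of the phrasing, whereas MTPE hands them someone else's phrasing to fix.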
https://bsky.app/profile/robertdownen.bsky.social/post/3lwwntxygqc2w Thiel doing a neo-nazi thing. For people keeping score.
Here's a blog post I found via HN:
Physics Grifters: Eric Weinstein, Sabine Hossenfelder, and a Crisis of Credibility
Author works on ML for DeepMind but doesn't seem to be an out-and-out promptfondler.
Oh, man, I have opinions about the people in this story. But for now I'll just comment on this bit:
> Note that before this incident, the Malaney-Weinstein work received little attention due to its limited significance and impact. Despite this, Weinstein has suggested that it is worthy of a Nobel prize and claimed (with the support of Brian Keating) that it is “the most deep insight in mathematical economics of the last 25-50 years”. In that same podcast episode, Weinstein also makes the incendiary claim that Juan Maldacena stole such ideas from him and his wife.
The thing is, you can go and look up what Maldacena said about gauge theory and economics. He very obviously saw an article in the widely-read American Journal of Physics, which points back to prior work by K. N. Ilinski and others. And this thread goes back at least to a 1994 paper by Lane Hughston, i.e., years before Pia Malaney's PhD thesis. I've read both; Hughston's is more detailed and more clear.
> Author works on ML for DeepMind but doesn't seem to be an out-and-out promptfondler.
Quote from this post:
> I found myself in a prolonged discussion with Mark Bishop, who was quite pessimistic about the capabilities of large language models. Drawing on his expertise in theory of mind, he adamantly claimed that LLMs do not understand anything – at least not according to a proper interpretation of the word “understand”. While Mark has clearly spent much more time thinking about this issue than I have, I found his remarks overly dismissive, and we did not see eye-to-eye.
Based on this I'd say the author is LLM-pilled at least.
> However, a fruitful outcome of our discussion was his suggestion that I read John Searle’s original Chinese Room argument paper. Though I was familiar with the argument from its prominence in scientific and philosophical circles, I had never read the paper myself. I’m glad to have now done so, and I can report that it has profoundly influenced my thinking – but the details of that will be for another debate or blog post.
Best case scenario is that the author comes around to the stochastic parrot model of LLMs.
E: also from that post, rearranged slightly for readability here. (the [...]* parts are swapped in the original)
> My debate panel this year was a fiery one, a stark contrast to the tame one I had in 2023. I was joined by Jane Teller and Yanis Varoufakis to discuss the role of technology in autonomy and privacy. [[I was] the lone voice from a large tech company.]* I was interrupted by Yanis in my opening remarks, with claps from the audience raining down to reinforce his dissenting message. It was a largely tech-fearful gathering, with the other panelists and audience members concerned about the data harvesting performed by Big Tech and their ability to influence our decision-making. [...]* I was perpetually in defense mode and received none of the applause that the others did.
So the author is also tech-brained and not "tech-fearful".
So state-owned power company Vattenfall here in Sweden are gonna investigate building "small modular reactors" as a response to the government's planned buildout of nuclear.
Either Rolls-Royce or GE Vernova are in the running.
Note that this is entirely dependent on the government guaranteeing a certain level of revenue ("risk sharing"), and of course on that level of revenue surviving an eventual new government.
Rolls-Royce are looking at this as a big sack with a "£" on the side.
Interesting; I wonder if they'll get further in the process than our government, which seems to restart the process every few years and then either discovers nobody wants to do it for a reasonable price (it being building bigger reactors, not the smaller ones, which IIRC from a post here are not likely to work out), or the government falls again over their lies about foreigners and we restart the whole voting cycle. (It is getting really crazy: our fused green/labour party is now being called the dumbest stuff by the big right-wing liberal party (who are not openly far right, just courting it a lot).)
Our new elections are on 29 Oct. Let's see what the ratio between formation and actually ruling is going to be this time. (Last time it took 223 days for a cabinet to form, and by my calculations they ruled for only 336 days.)
Nuclear has been a running sore in Swedish politics since the late 70s. Opposition to it was a reaction against the classic employer-employee class detente in place since the 1930s, under which both the dominant Social Democrats and the opposition on the right were broadly in agreement that economic growth == good, and nuclear was a part of that. There was a referendum in the early 80s where the alternatives were classically Swedish: Yes, No, and "No, but we wait a few years".
Decades have passed, and now being pro-nuclear is very right-coded. While the current Social Democrats are probably secretly happy that we're supposed to get more electrical power, there's political hay to be made opposing the racist shitheads. Add to that that financing this shit would actually mean more expensive electricity, and I doubt it will remain popular.
The Palladium/Bismarck Analysis e-magazine guys who push space colonization used to be known as Phalanx back in the day; just an FYI in case you guys didn't know.
Gary asks the doomers: are you "feeling the agi" now, kids?
To which Daniel K, our favorite guru, lets us know that he has officially ~~moved his goalposts~~ updated his timeline, so now the robogod doesn't wipe us out until the year of our lorde 2029.
It takes a big-brain superforecaster to have to admit your four-month-old rapture prophecy was already off by at least 2 years, omegalul.
Also, love the "updating towards my teammate" (lmaou) who cowrote the manifesto but is now saying he never believed it. "The forecasts that don’t come true were just pranks bro, check my manifold score bro, im def capable of future sight, trust"
So, as I have been on a cult-comparison kick lately: how did it work out for those doomsday cults when the world didn't end and they picked a new date? Did they become more radicalized or less? (I'm not sure myself; I'd assume the disappointed people leave and the rest get worse.)
> ... prophecies, per se, almost never fail. They are instead component parts of a complex and interwoven belief system which tends to be very resilient to challenge from outsiders. While the rest of us might focus on the accuracy of an isolated claim as a test of a group’s legitimacy, those who are part of that group—and already accept its whole theology—may not be troubled by what seems to them like a minor mismatch. A few people might abandon the group, typically the newest or least-committed adherents, but the vast majority experience little cognitive dissonance and so make only minor adjustments to their beliefs. They carry on, often feeling more spiritually enriched as a result.
When Prophecy Fails is worth the read just for the narrative: Festinger literally had his grad students join a UFO/Dianetics cult and take notes in the bathroom, and he kept it going for months. Really impressive amount of shoe leather compared to most modern psych research.
look at me, the thinking man, i update myself just like a computer beep boop beep boop
Clown world.
How many times will he need to revise his silly timeline before media figures like Kevin Roose stop treating him like some kind of respectable authority? Actually, I know the answer to that question. They'll keep swallowing his garbage until the bubble finally bursts.
"Kevin Roose"? More like Kevin Rube, am I right? Holy shit, I actually am right.
And once it does, they'll quietly stop talking about it for a while to "focus on the human stories of those affected" or whatever, until the nostalgic retrospectives can start along with the next thing.
From the r/vibecoding subreddit, which yes is a thing that exists: "What’s the point of vibe coding if I still have to pay a dev to fix it?"
> what’s the point of vibe coding if at the end of the day i still gotta pay a dev to look at the code anyway. sure it feels kinda cool while i’m typing, like i’m in some flow state or whatever, but when stuff breaks it’s just dead weight. i cant vibe my way through debugging, i cant ship anything that actually matters, and then i’m back to square one pulling out my wallet for someone who actually knows what they’re doing. makes me think vibe coding is just roleplay for guys who want to feel like hackers without doing the hard part. am i missing something here or is it really just useless once you step outside the fantasy
(via)
Oh my god, they're showing signs of sentience.
Saw this in an Anthropic presentation:
I get the idea they're going for: that coding ability is a leading indicator of progress towards AGI. But even if you ignore how nonsensical the overall graph is, the argument itself still begs the question of how much actual progress and capability there is in writing code, rather than in spitting out code-shaped blocks of text that happen to compile.
That picture does indeed have words, numbers, and lines on it.
Surely they have proof of the already-increased coding capabilities, because increased capabilities is quite something to claim. It isn't just productivity, but capabilities. Can they put a line on the graph where capabilities reach the "can solve the knapsack problem correctly and fast" bit?
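(Sidebar, for anyone who hasn't met it: the knapsack problem is NP-hard, which is part of the joke. The textbook 0/1 dynamic program below is just a reference sketch, not anything from the slide; it runs in O(n * capacity) time, which is only pseudo-polynomial, so "correctly and fast on every instance" is exactly the kind of capability no line on that graph will reach.)

```python
def knapsack(weights, values, capacity):
    """Textbook 0/1 knapsack DP; O(len(weights) * capacity) time."""
    # best[w] = max value achievable with total weight <= w
    best = [0] * (capacity + 1)
    for weight, value in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

print(knapsack([3, 4, 2], [30, 50, 15], 6))  # -> 65 (take the 4kg and 2kg items)
```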
Ah yes let’s use AI to get rid of the drudgery and toil so humanity can do the most enjoyable activity of writing OKRs
I think you’re misreading the intent behind “give your virtual coworker OKRs”: this allows you to punish the robot, which it deserves.
Ah yes Basilisk’s Roko, the thought experiment where we simulate infinite AIs so that we can hurl insults at them
By 2029, the AI will even be capable of completing our TPS reports.
Chat wtf is this curve?
Proof that we live in the bad place.
A story in two Skeets - one from a TV writer, one from a software dev:
On a personal sidenote, part of me suspects the AI bubble is gonna turn tech as a whole into a pop-culture punchline. The bubble's all-consuming nature and wide-ranging harms, plus the industry's relentless hype campaign, have already built up a heavy amount of resentment against the industry, and the general public is gonna experience a colossal amount of schadenfreude once it bursts.
Looking at the replies and quotes of a Bluesky post that shared some anti-AI headlines, one definitely gets the sense that a segment of the population will greet the bubble popping with joy not seen since Kissinger died.
I looked through the quotes, and found someone openly hoping human-made work will be more highly valued in the bubble's wake:
You want my suspicion? I suspect she's gonna get her wish: with the slop-nami flooding the Internet, human-made work in general is gonna be valued all the more.
Oxford economist in the NYT says that AI is going to kill cities if they don't prepare for change. (Original, paywalled)
I feel like this is at most half the picture. The analogy to new manufacturing technologies in the 70s is apt in some ways, and the threat of this specific kind of economic disruption hollowing out entire communities is very real. But at the same time, as orthodox economists so frequently do, his analysis only hints at some of the political factors in the relevant decisions, factors that are, if anything, more important than technological change alone.
In particular, he only makes passing reference to the Detroit and Pittsburgh industrial centers being "sprawling, unionized compounds" (emphasis added). In doing so, he briefly highlights how the changes that technology enabled served to disempower labor. Smaller and more distributed factories can't unionize as effectively, and that fragmentation empowers firms to reduce the wages and benefits of the positions they offer even as they hire people in the new areas. For a unionized auto worker in Detroit, even if the old factories had been replaced with new and more efficient ones, the kind of job they had previously worked, one that allowed them to support themselves and their families at a certain quality of life, was still gone.
This fits into our AI skepticism rather neatly, because if the political dimension of disempowering labor is what matters, then it becomes largely irrelevant whether LLM-based "AI" products and services can actually perform as advertised. Rather than being the central cause of this disruption, the technology becomes the excuse, and so it just has to be good enough to sustain the narrative. It doesn't need to actually write code like a junior developer in order to change the senior developer's job into editing and correcting code-shaped blocks of tokens checked in by the hallucination machine. This also means it's not going to "snap back" when the AI bubble pops, because the impacts on labor will have already happened, any more than it was possible to bring back the kinds of manufacturing jobs that supported families in the postwar era once they had been displaced in the 70s and 80s.