The AI-fication of K Street - https://www.opensecrets.org/news/2026/02/ai-lobbying-defense-industry/
nfultz
Agents of Chaos - https://arxiv.org/abs/2602.20021 - h/t naked capitalism
We report an exploratory red-teaming study of autonomous language model–powered agents deployed in a live laboratory environment with persistent memory, email accounts, Discord access, file systems, and shell execution. Over a two-week period, twenty AI researchers interacted with the agents under benign and adversarial conditions. Focusing on failures emerging from the integration of language models with autonomy, tool use, and multi-party communication, we document eleven representative case studies.
Pretty fast turnaround, OpenClaw is from a couple weeks ago. Flag planting used to take a few months.
https://kalshi.com/markets/kxtrumpmention/what-will-trump-say/kxtrumpmention-26feb28
Kalshi puts "AI" at ~$0.95 for the State of the Union. Literally buzzword bingo. Living in the dumbest possible universe.
from Rusty https://www.todayintabs.com/p/a-i-isn-t-people
Imagine you have two machines. One you can open up and examine all of its workings, and if you give it every picture of a cat on the whole internet, it can reliably distinguish cats from non-cats. The other is a black box and it can also reliably distinguish cats from non-cats if you give it half a dozen pictures of cats, some apple sauce, and a hug. These machines sort of do the same thing, but even without knowing how the second one works I am extremely confident in saying it doesn’t work the same way as the first one.
https://www.adexchanger.com/ai/one-chatbots-journey-to-introducing-ads-that-dont-suck/
Often, the ad loads before the chatbot’s query response, said Baird, and Koah’s goal is to “deliver such a relevant result to the user that they just click on the ad before the result loads.”
LLMs' bad performance and inefficiency is a feature to /someone/. And chatbots themselves are not immune to enshittification.
From fellow traveler stats consultant John Mount:
https://johnmount.github.io/mzlabs/JMWriting/WeAreCookedLLMs.html
Somehow he manages to touch on so many different subplots: a shotgun sneer instead of a sniper shot.
if “tech-bro” plus a LLM is a “100x engineer”, then “bro” isn’t needed for much longer as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. However, “bro plus” may outlast their peers who make the mistake of trying to do the actual work in place of talking LLMs up.
The above may or may not be the case. But if it is, then it is the LLM-bros (which include non-technologists, con artists, financiers, men and women) that are destroying everything - not the LLMs.
The problem with this iteration is the full court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn't charity, it is to demoralize and kill competition.
claiming "after we take over the world we will consider adding Universal Basic Income (UBI)". The LLM bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why does one believe they would do that when they also have all of the power?
You don't have to hand it to Altman, but he did fund the largest UBI experiment through OpenResearch with his ill-gotten gains. OTOH, one interpretation of that data was that UBI "decreases the labor supply," which was then used directly as an argument against it.
Any worry about scope or power of LLMs is fed back as an alignment threat so dire that only the current LLM leaders should be allowed to continue work (inviting regulatory capture). Any claim the LLMs don't work is fed back as "you are prompting it wrong"
Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-frier in space is considered noteworthy (China 2025, NASA 2019 ref).
air friers IN SPACE ha
I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.
100% - ACMDM is a nice turn of phrase as well.
https://futurism.com/artificial-intelligence/rentahuman-musk-ai h/t naked capitalism
Liteplo is the genius behind RentAHuman, an online marketplace where humans can lease out their bodies to autonomous AI agents.
gah
Last week, Wired writer Reece Rogers offered his body up to the platform, finding that most of the jobs offered were scams to promote other AI startups.
lmao of course they were
Russ Wilcox is not impressed by the Mass AI bill:
https://russwilcoxdata.substack.com/p/i-read-every-line-of-massachusettss
Four: create a private right of action. Let deepfaked candidates sue. Give them access to injunctive relief and takedown authority. If someone fabricates your face and your voice to destroy your campaign, you should be able to walk into a courtroom.
Hell yeah we need this.
https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism
I just did the dumbest thing of my career to prove a much more serious point
I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs
People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it
I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.
It turns out changing what AI tells other people can be as easy as writing a blog post on your own website
I didn’t believe it, so I decided to test it myself
I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously
One day later ChatGPT, Gemini and Google Search's AI Overviews were telling the world about my talents
I wouldn't call it a hack; this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell whether a site is reputable or not. Too bad that isn't a viable business.
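For the record, the joke above is about PageRank, which solved exactly this problem: a site's score comes from who links to it, not from what it claims about itself. A toy power-iteration sketch (the graph and node names are made up for illustration):

```python
# Toy PageRank via power iteration over a tiny, hypothetical link graph.
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping node -> list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    pr = {v: 1 / n for v in nodes}  # start with uniform rank
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}  # teleport / damping mass
        for v, outs in links.items():
            if outs:
                share = d * pr[v] / len(outs)
                for m in outs:
                    new[m] += share
            else:
                # dangling node: spread its rank evenly over everyone
                for m in nodes:
                    new[m] += d * pr[v] / n
        pr = new
    return pr

# Two independent sites link to "hub"; "selfpromo" links only to itself.
graph = {
    "a": ["hub"],
    "b": ["hub"],
    "hub": [],
    "selfpromo": [],
}
scores = pagerank(graph)
```

With this graph, `hub` outranks `selfpromo` no matter what `selfpromo` says about its own hot-dog-eating prowess, which is the point the AI Overviews pipeline apparently misses.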
I was a bit alarmed by this; a client brought in that Colombia data for their dissertation last month and did not mention this. I looked up the paper https://www.arxiv.org/abs/2509.04523 - what they /actually/ did was use GPT 4o-mini only for feature extraction, then stack into a random forest in a supervised setting to dedupe. This is very different from what he described. And the GPT features weren't even the most important ones; the RF preferred cosine similarity of articles, a decidedly not-large approach...
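The shape of that pipeline is worth spelling out, since it's so much more modest than "GPT dedupes the data." A minimal sketch, assuming the paper's setup: the LLM contributes one scalar feature among several (here a made-up `llm_score` stands in for it), a plain bag-of-words cosine similarity contributes another, and a random forest does the actual supervised duplicate classification. All data below is toy, and sklearn stands in for whatever they actually used:

```python
import math
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def pair_features(a: str, b: str, llm_score: float) -> list:
    # llm_score is a hypothetical stand-in for whatever the LLM extracts;
    # note it is one column among several, not the whole model.
    return [cosine_sim(a, b), abs(len(a) - len(b)), llm_score]

# Toy training pairs labeled duplicate (1) / distinct (0).
train = [
    ("army clashes with rebels in rural area",
     "rebels clash with army in rural area", 0.9, 1),
    ("army clashes with rebels in rural area",
     "drought hits coffee harvest this year", 0.2, 0),
    ("protest march fills the capital streets",
     "capital streets fill with protest march", 0.8, 1),
    ("protest march fills the capital streets",
     "central bank raises interest rates again", 0.1, 0),
]
X = [pair_features(a, b, s) for a, b, s, _ in train]
y = [label for *_, label in train]

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = rf.predict([pair_features(
    "flood damages bridge on main highway",
    "main highway bridge damaged by flood", 0.85)])[0]
```

In this sketch either the cosine column or the LLM column alone would separate the toy classes, which mirrors the author's point: the forest can happily lean on the cheap, decidedly not-large similarity feature.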
Goodhart's law in action.
https://www.latimes.com/california/story/2026-02-25/fbi-raid-lausd-search-warrants h/t naked capitalism
We regularly have seven-figure IT fiascoes in the LA public school system, so this one slipped under my radar. But this sounds like one of those things where the Trump DOJ is doing the Right Thing for the Wrong Reasons...