corbin

joined 2 years ago
[–] corbin@awful.systems 13 points 1 week ago (3 children)

I only sampled some of the docs and interesting-sounding modules. I did not carefully read anything.

First, the user-facing structure. The compiler is far too configurable; it has lots of options that surely haven't been tested in combination. The idea of a pipeline is enticing but it's not actually user-programmable. File types are guessed using a combination of magic numbers and file extensions. The dog is wagged in the design decisions, which might be fair; anybody writing a new C compiler has to contend with old C code.

Next, I cannot state enough how generated the internals are. Every hunk of code tastes bland; even when it does things correctly and in a way which resembles a healthy style, the intent seems to be lacking. At best, I might say that the intent is cargo-culted from existing code without a deeper theory; more on that in a moment. Consider these two hunks. The first is generated code from my fork of META II:

while i < len(self.s) and self.clsWhitespace(ord(self.s[i])): i += 1

And the second is generated code from their C compiler:

while self.pos < self.input.len() && self.input[self.pos].is_ascii_whitespace() {
    self.pos += 1;
}

In general, the lexer looks generated, but in all seriousness, lexers might be too simple to fuck up relative to our collective understanding of what they do. There's also a lot of code which is block-copied from one place to another within a single file, in lists of options or lists of identifiers or lists of operators, and Transformers are known to be good at that sort of copying.

The backend's layering is really bad. There's too much optimization during lowering and assembly. Additionally, there's not enough optimization in the high-level IR. The result is enormous amounts of spaghetti. There's a standard algorithm for new backends, NOLTIS, which is based on building mosaics from a collection of low-level tiles; there's no indication that the assembler uses it.
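NOLTIS proper does near-optimal tiling over DAGs; for a flavor of what "building mosaics from low-level tiles" means, here is a much simpler greedy maximal-munch tiler over expression trees. The tile shapes, instruction names, and tree encoding are all invented for illustration; a real backend would weigh tile costs rather than munch greedily.

```python
from itertools import count

# Expression trees as nested tuples: ("const", n) or (op, left, right).
_regs = count()

def fresh():
    """Mint a fresh virtual register name."""
    return f"v{next(_regs)}"

def tile(node, out):
    """Greedily cover `node` with tiles, appending instructions to `out`.
    Returns the virtual register holding the result."""
    op = node[0]
    if op == "const":
        r = fresh()
        out.append(f"li {r}, {node[1]}")  # load-immediate tile
        return r
    if op == "add" and node[2][0] == "const":
        # Bigger tile: fold the constant operand into an immediate add,
        # covering two tree nodes with one instruction.
        l = tile(node[1], out)
        r = fresh()
        out.append(f"addi {r}, {l}, {node[2][1]}")
        return r
    # Generic two-register tile for any remaining binary operator.
    l, rr = tile(node[1], out), tile(node[2], out)
    r = fresh()
    out.append(f"{op} {r}, {l}, {rr}")
    return r
```

Tiling `(2 * 3) + 4` emits four instructions instead of five, because the `addi` tile swallows the `("const", 4)` leaf.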

The biggest issue is that the codebase is big. The second-biggest issue is that it doesn't have a Naur-style theory underlying it. A Naur theory is how humans conceptualize a codebase: we care not only about what it does but about why it does it. The docs are reasonably-accurate descriptions of what's in each Rust module, as if they were documents to summarize, but struggle to show why certain algorithms were chosen.

Choice sneer, credit to the late Jessica Walter for the intended reading: It's one topological sort, implemented here. What could it cost? Ten lines?

I do not believe that this demonstrates anything other than they kept making the AI brute force random shit until it happened to pass all the test cases.

That's the secret: any generative tool which adapts to feedback can do that. Previously, on Lobsters, I linked to a 2006/2007 paper which I've used for generating code; it directly uses a random number generator to make programs and also disassembles programs into gene-like snippets which can be recombined with a genetic algorithm. The LLM is a distraction and people only prefer it for the ELIZA Effect; they want that explanation and Naur-style theorizing.
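As a toy demonstration that no LLM is required for "brute force random shit until it passes all the test cases": enumerate every program in a four-token RPN language and keep the first one satisfying the tests. The token set and target function here are invented for illustration; the paper's approach additionally recombines snippets genetically rather than enumerating.

```python
from itertools import product

OPS = ["x", "1", "+", "*"]  # a tiny RPN language: two operands, two operators

def run(prog, x):
    """Evaluate an RPN token sequence at x; None on malformed programs."""
    stack = []
    for tok in prog:
        if tok == "x":
            stack.append(x)
        elif tok == "1":
            stack.append(1)
        else:
            if len(stack) < 2:
                return None
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a * b)
    return stack[-1] if len(stack) == 1 else None

def score(prog, cases):
    """Count how many test cases the program passes."""
    return sum(run(prog, x) == y for x, y in cases)

def search(cases, size=7):
    """Brute force: try all 4**size programs until one passes every test."""
    for prog in product(OPS, repeat=size):
        if score(prog, cases) == len(cases):
            return list(prog)
    return None
```

With five test cases for f(x) = 2x + 1, the search finds a correct program in well under a second; since every program in this language is a low-degree polynomial, passing five points forces it to generalize.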

[–] corbin@awful.systems 8 points 1 week ago (1 children)

Yesterday I pointed out that nVidia, unlike OpenAI, has a genuine fiduciary responsibility to its owners. As a result, nVidia isn't likely to enter binding deals without proof of either cash or profitability.

[–] corbin@awful.systems 8 points 1 week ago (1 children)

I haven't listened yet. Enron, quite interestingly, wasn't audited. Enron participated in the dot-com bubble; they had an energy-exchange Web app. Enron's owners, who were members of the stock-holding public, started doing Zitron-style napkin math after Enron posted too-big-to-believe numbers, causing Enron's stock price to slide. By early 2001, a group of stockholders filed a lawsuit to investigate what had happened to the stock price, prompting the SEC to open its own investigation. It turns out that Enron's auditor, Arthur Andersen, was complicit! The scandal annihilated them internationally.

From that perspective, the issue isn't regulatory capture of SEC as much as a complete lack of stock-holding public who could partially own OpenAI and hold them responsible. But nVidia is publicly traded…

I've now listened to the section about Enron. The point about CoreWeave is exactly what I'm thinking with nVidia; private equity can say yes, but stocks and bonds will say no. It's worth noting that private equity is limited in scale, and the biggest players, SoftBank and Saudi/UAE sovereign wealth, are already fully engaged; private equity is like musical chairs, and people must sit somewhere when the music stops.

[–] corbin@awful.systems 12 points 1 week ago (1 children)

Nakamoto didn't invent blockchains; Merkle did, in 1979. Nakamoto's paper presented a cryptographic scheme which could be used with a choice of blockchain. There are several non-cryptocurrency systems built around synchronizing blockchains, like git. However, Nakamoto was clearly an anarcho-libertarian trying to escape government currency controls, as the first line of the paper makes clear:

A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.

Not knowing those two things about the Bitcoin paper is why you're getting downvoted. Nakamoto wasn't some random innocent researcher.
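The synchronization structure that git and Bitcoin share is just a hash chain: each block commits to its predecessor's hash, so tampering with any block breaks every link after it. A minimal sketch in Python (all names invented for illustration):

```python
import hashlib

def block_hash(payload: bytes, parent: str) -> str:
    """A block's identity commits to its payload and its parent's hash,
    the same trick git uses for commit objects."""
    return hashlib.sha256(parent.encode() + payload).hexdigest()

def build_chain(payloads):
    """Return a list of (payload, parent_hash, own_hash) blocks."""
    chain, parent = [], ""
    for p in payloads:
        h = block_hash(p, parent)
        chain.append((p, parent, h))
        parent = h
    return chain

def verify(chain):
    """Re-derive every hash; any edited payload or broken link fails."""
    parent = ""
    for payload, recorded_parent, h in chain:
        if recorded_parent != parent or block_hash(payload, parent) != h:
            return False
        parent = h
    return True
```

None of this requires proof-of-work or currency; Nakamoto's contribution was the incentive scheme layered on top.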

[–] corbin@awful.systems 9 points 1 week ago (1 children)

Larry Garfield was ejected from Drupal nearly a decade ago without concrete accusations; at the time, I thought Dries was overreacting, likely because I was in technical disagreement with him, but now I'm more inclined to see Garfield as a misogynist who the community was correct to eject.

I did have a longpost on Lobsters responding to this rant, but here I just want to focus on one thing: Garfield has no solutions. His conclusion is that we should resent people who push or accept AI, and also that we might as well use coding agents:

As I learn how to work with AI coding agents, know that I will be thinking ill of [people who have already shrugged and said "it is what it is"] the entire time.

[–] corbin@awful.systems 10 points 1 week ago (8 children)

PHP is even older and even more successful. The test of time says nothing about quality.

[–] corbin@awful.systems 10 points 1 week ago (1 children)

I wonder whether his holdings could be nationalized as a matter of national security.

[–] corbin@awful.systems 12 points 1 week ago (2 children)

Ammon Bundy has his own little hillbilly elegy in The Atlantic this week. See, while he's all about armed insurrection against the government, he's not in favor of ICE. He wants the Good Old Leppards to be running things, not these Goose-Stepping Nazi-Leopards. He just wanted to run his cattle on federal lands and was willing to be violent about it, y'know? Choice sneer, my notes added:

Bundy had always thought that he and his supporters stood for a coherent set of Christian-libertarian principles that had united them against federal power. "We agreed that there’s certain rights that a person has that they’re born with. Everybody has them equally, not just in the United States," he said. "But on this topic [i.e. whether to commit illegal street violence against minorities] they are willing to completely abandon that principle."

All cattle, no cap. I cannot give this man a large-enough Fell For It Again Award. The Atlantic closes:

And so Ammon Bundy is politically adrift. He certainly sees no home for himself on the "communist-anarchist" left. Nor does he identify anymore with the "nationalist" right and its authoritarian tendencies.

Oh, the left doesn't have a home for Bundy or other Christofascists. Apology not accepted and all that.

[–] corbin@awful.systems 8 points 2 weeks ago (1 children)

From this post, it looks like we have reached the section of the Gibson novel where the public cloud machines respond to attacks with self-repair. Utterly hilarious to read the same sysadmin snark-reply five times, though.

[–] corbin@awful.systems 9 points 2 weeks ago (1 children)

Yes and yes. I want to stress that Yud's got more of what we call an incubator of cults; in addition to the Zizians, they also are responsible for incubating the principals of (the principals of) the now-defunct FTX/Alameda Research group, who devolved into a financial-fraud cult. Previously, on Awful, we started digging into the finances of those intermediate groups as well, just for funsies.

[–] corbin@awful.systems 6 points 2 weeks ago

I've started grading and his grade is ready to read. I didn't define an F tier for this task, so he did not place on the tier list. The most dramatic part of this is overfitting to the task at agent runtime (that is, "meta in-context learning"); it was able to do quite well at the given benchmark but at the cost of spectacular failure on anything complex outside of the context.

[–] corbin@awful.systems 8 points 2 weeks ago (2 children)

I know what it says and it's commonly misused. Aumann's Agreement says that if two people disagree on a conclusion then either they disagree on the reasoning or the premises. It's trivial in formal logic, but hard to prove in Bayesian game theory, so of course the Bayesians treat it as some grand insight rather than a basic fact. That said, I don't know what that LW post is talking about and I don't want to think about it, which means that I might disagree with people about the conclusion of that post~
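For the record, Aumann (1976) states it roughly like this; my paraphrase, in his notation:

```latex
% Aumann (1976), "Agreeing to Disagree" -- informal statement.
% Agents 1 and 2 share a common prior $P$ on $(\Omega, \mathcal{F})$ and have
% information partitions $\mathcal{P}_1, \mathcal{P}_2$. For an event $A$, let
%   $q_i = P(A \mid \mathcal{P}_i(\omega))$
% be agent $i$'s posterior at the true state $\omega$. Then:
%   if $q_1$ and $q_2$ are common knowledge at $\omega$, then $q_1 = q_2$.
```

The "disagree on premises" escape hatch is the common-prior assumption; drop it and the theorem says nothing.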

 

I’m tired of hearing about vibecoding on Lobsters, so I’ve written up three of my side tasks for coding agents. Talk is cheap; show us the code.

 

Happy Holiday and merry winter solstice! I'm sharing a Nix flake that I've been slowly growing in my homelab for the past few months. It incorporates this systemd feature, switches from CppNix to Lix, and disables a handful of packages. That PR inspired me, and I'm releasing this in turn to inspire you. Paying it forward and all that.

Should you use this? As-is, probably not. It will rebuild systemd at a minimum and you probably don't have enough RAM for that; building from this flake crashed my development laptop and I had to build it on a workstation instead. Also, if you have good taste in packages then this will be a no-op aside from systemd and Lix, and you can do both of those on your own.

Isn't this merely virtue-signalling? I think that the original systemd PR was definitely signalling, since it's unlikely to ever get deployed on the systems of our friends. However, I really do sleep better at night knowing that it's unlikely that jart or suckless have any code running on my machines.

Why not make a proper repository and organization? Mostly the possibility that GitHub might actually take down a repository named nixpkgs-antifa. If there's any interest then I could set up a Codeberg repo. However, up to this point, I've only used it internally and my homelab has its own internal git service.

Mods: You've indicated that you don't like it when people write code to approach our social problems. That's fine; I'm not publishing an application or service and certainly not starting a social movement, just sharing some of my internal code.

9
submitted 1 month ago* (last edited 1 month ago) by corbin@awful.systems to c/techtakes@awful.systems
 

Did catgirl Riley cheat at a videogame, or is she just that good? Detective Karl Jobst is on the case. Are the critics from platform One True King (OTK), like Asmongold and Tectone, correct in their analysis of Riley's gameplay? Or are they just haters who can't stand how good she is? Bonus appearance from Tommy Tallarico.

Content warning: Quite a bit of transmisogyny. Asmongold and Tectone are both transphobes who say multiple slurs and constantly misgender Riley, and their Twitch chats also are filled with slurs. Jobst does not endorse anything that they say, but he also quotes their videos and screenshots directly.

too long, didn't watch

This video is a takedown of an AI slop channel, "Call of Shame". As hinted, this is something of a ROBLOX_OOF.mp3 essay, where it's not just about the cryptofascists pushing the culture war by attacking a trans person, but about one specific rabbit hole surrounding one person who has made many misleading claims. Just like how ROBLOX_OOF.mp3 permanently hobbled Tallarico's career, this video's release seems to have forced Call of Shame to pivot twice, most recently to evangelizing Christianity.

 

A straightforward dismantling of AI fearmongering videos uploaded by Kyle "Science Thor" Hill, Sci "The Fault in our Research" Show, and Kurz "We're Sorry for Summarizing a Pop-Sci Book" Gesagt over the past few months. The author is a computer professional but their take is fully in line with what we normally post here.

I don't have any choice sneers. The author is too busy hunting for whoever is paying SciShow and Kurzgesagt for these videos. I do appreciate that they repeatedly point out that there is allegedly a lot of evidence of people harming themselves or others because of chatbots. Allegedly.

 

A straightforward product review of two AI therapists. Things start bad and quickly get worse. Choice quip:

Oh, so now I'm being gaslit by a frakking Tamagotchi.

 

The answer is no. Seth explains why not, using neuroscience and medical knowledge as a starting point. My heart was warmed when Seth asked whether anybody present believed that current generative systems are conscious and nobody in the room clapped.

Perhaps the most interesting takeaway for me was learning that — at least in terms of what we know about neuroscience — the classic thought experiment of the neuron-replacing parasite, which incrementally replaces a brain with some non-brain substrate without interrupting any computations, is biologically infeasible. This doesn't surprise me but I hadn't heard it explained so directly before.

Seth has been quoted previously on Awful for his critique of the current AI hype. This talk is largely in line with his other public statements.

Note that the final 10 minutes of the video are an investigation of Seth's position by somebody else. This is merely part of presenting before a group of philosophers; they want to critique and ask questions.

 

A complete dissection of the history of the David Woodard editing scandal as told by an Oregonian Wikipedian. The video is sectioned into multiple miniature documentaries about various bastards and can be watched piece-by-piece. Too long to watch? Read the link above.

too long, didn't watch, didn't read, summarize anyway

David Woodard is an ethnonationalist white supremacist whose artistic career has led to an intersection with a remarkable slice of cult leaders and serial killers throughout the past half-century. Each featured bastard has some sort of relationship to Woodard, revealing an entire facet of American Nazism which runs in parallel to Christian TREACLES, passed down through psychedelia, occult mysticism, and non-Christian cults of capitalism.

 

Cross-posting a good overview of how propaganda and public relations intersect with social media. Thanks @Soatok@pawb.social for writing this up!

 

Tired of going to Scott "Other" Aaronson's blog to find out what's currently known about the busy beaver game? I maintain a community website that has summaries for the known numbers in Busy Beaver research, the Busy Beaver Gauge.

I started this site last year because I was worried that Other Scott was excluding some research and not doing a great job of sharing links and history. For example, when it comes to Turing machines implementing the Goldbach conjecture, Other Scott gives O'Rear's 2016 result but not the other two confirmed improvements in the same year, nor the recent 2024 work by Leng.

Concretely, here's what I offer that Other Scott doesn't:

  • A clear definition of which problems are useful to study
  • Other languages besides Turing machines: binary lambda calculus and brainfuck
  • A plan for how to expand the Gauge as a living book: more problems, more languages and machines
  • The content itself is available on GitHub for contributions and reuse under CC-BY-NC-SA
  • All tables are machine-computed when possible to reduce the risk of handwritten typos in (large) numbers
  • Fearless interlinking with community wikis and exporting of knowledge rather than a complexity-zoo-style silo
  • Acknowledgement that e.g. Firoozbakht is part of the mathematical community

I accept PRs, although most folks ping me on IRC (korvo on Libera Chat, try #esolangs) and I'm fairly decent at keeping up on the news once it escapes Discord. Also, you (yes, you!) can probably learn how to write programs that attempt to solve these problems, and I'll credit you if your attempt is short or novel.
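If you want to play along at home, the whole game fits in a few lines. A minimal sketch, simulating the classic 2-state, 2-symbol champion machine, for which Σ(2) = 4 ones and S(2) = 6 steps:

```python
# The 2-state, 2-symbol busy-beaver champion.
# Transitions: (state, read) -> (write, move, next_state); "H" halts.
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),
}

def run(rules, start="A", halt="H", limit=10**6):
    """Simulate on an unbounded blank tape; return (steps, ones written).
    The step limit guards against non-halting tables."""
    tape, pos, state, steps = {}, 0, start, 0
    while state != halt and steps < limit:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())
```

Swap in your own transition table and see how long you can delay halting; that, plus the other languages listed above, is the entire Gauge.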

 

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

 

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't know what SCP is, and I think he might be having a psychotic episode: he's entertaining, as a serious possibility, that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?
