Architeuthis

joined 2 years ago
[–] Architeuthis@awful.systems 9 points 1 month ago

I live in the Balkans, I have br-word privilege.

[–] Architeuthis@awful.systems 16 points 1 month ago* (last edited 1 month ago) (11 children)

I like how even by ACX standards scoot's posts on AI are pure brain damage

One level lower down, your brain was shaped by next-sense-datum prediction - partly you learned how to do addition because only the mechanism of addition correctly predicted the next word out of your teacher’s mouth when she said “three plus three is . . . “ (it’s more complicated than this, sorry, but this oversimplification is basically true). But you don’t feel like you’re predicting anything when you’re doing a math problem. You’re just doing good, normal mathematical steps, like reciting “P.E.M.D.A.S.” to yourself and carrying the one.

The most compelling analogy: this is like expecting humans to be “just survival-and-reproduction machines” because survival and reproduction were the optimization criteria in our evolutionary history. [...] This simple analogy is slightly off, because it’s confusing two optimization levels: the outer optimization level (in humans, evolution optimizing for reproduction; in AIs, companies optimizing for profit) with the inner optimization level (in humans, next-sense-datum prediction; in AIs, next-token prediction). But the stochastic parrot people probably haven’t gotten to the point where they learn that humans are next sense-datum predictors, so the evolution/reproduction one above might make a better didactic tool.

He also threatens an Anti-Stochastic-Parrot FAQ.

Here's hoping that if this happens, Bender et al. enthusiastically point out that it's coming from a guy whose long-term master plan is to fight evil AI with eugenics. Or, if they're feeling less charitable, a guy who uses the threat of evil AI to make eugenics great again.

[–] Architeuthis@awful.systems 19 points 1 month ago* (last edited 1 month ago)

Also being in a strategic partnership with fucking Palantir does tend to make one's stand against mass surveillance seem less than genuine.

[–] Architeuthis@awful.systems 11 points 1 month ago* (last edited 1 month ago)

I mean, sure, but it's still the CEO of Xbox, on her second day on the job, throwing her hat into the legendarily sus declining-birthrates discourse in service of AI solutionism; it's not nothing.

[–] Architeuthis@awful.systems 14 points 1 month ago* (last edited 1 month ago)

Using talking points meant for C-suites on a general audience and outing yourself as a complete psychopath: the San Fran CEO story.

[–] Architeuthis@awful.systems 8 points 1 month ago

In the post he keeps referring to Ollama as an LLM (it's actually a desktop app that runs a local server, letting you download and interface with a local LLM via CLI or HTTP API), so it's possible he's far enough behind in his technical understanding of LLMs that he's fallen into taking the wrong people's word for it.

The post certainly reads like he doesn't even know which local LLM he's using, let alone what it takes to make one.
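For reference, the distinction is simple: Ollama is the runner, not the model. You pull a model with the CLI and then talk to it over a local HTTP endpoint. A minimal sketch (hedged: `llama3` is just a stand-in for whatever model he actually downloaded, and endpoints can shift between Ollama versions):

```python
import json

# Ollama is the desktop app / local server; the LLM itself (here "llama3",
# a placeholder for whatever model you pulled with `ollama pull llama3`)
# is a separate artifact that Ollama downloads and serves.
#
# CLI route:   ollama run llama3 "Why is the sky blue?"
# HTTP route:  POST to the local server (default port 11434):
payload = {
    "model": "llama3",                 # the actual LLM
    "prompt": "Why is the sky blue?",  # your input
    "stream": False,                   # one JSON response instead of chunks
}
body = json.dumps(payload)

# With the server running, the real call would look like:
#   requests.post("http://localhost:11434/api/generate", json=payload)
```

Confusing the two is roughly like calling your web browser "the internet".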

[–] Architeuthis@awful.systems 10 points 1 month ago

That he went from that all the way to "it's mostly ok when Sam Altman steals all your data, misrepresents it, and then steals all your traffic" is... bad.

At any rate, it's definitely good to know that that war crime forensics data project isn't quite the unintentional shambles Cory makes it out to be.

[–] Architeuthis@awful.systems 16 points 1 month ago* (last edited 1 month ago) (5 children)

That was a good read.

Cory Doctorow wrote:

It's not "unethical" to scrape the web in order to create and analyze data-sets. That's just "a search engine"

Conflating what LLMs do, and what goes into LLM web scraping, with "a search engine" is messed up. The article he links about scraping is mostly about how badly copyright works and how analysing trade-secret-walled data can be beneficial to both consumers and science (if occasionally bad for citizen privacy), which you'll recognize as mostly irrelevant to the concerns people tend to have about LLM training data providers DDoSing the fuck out of everything, and all the rest of the stuff tante does a good job of explaining.

Cory also provides this anecdote:

As a group of human-rights defending forensic statisticians, HRDAG has always relied on cutting edge mathematics in its analysis. With its Colombia project, HRDAG used a large language model to assign probabilities for responsibility for each killing documented in the databases it analyzed.

That is, HRDAG was able to rigorously and legibly say, “This killing has an X% probability of having been carried out by a right-wing militia, a Y% probability of having been carried out by the FARC, and a Z% probability of being unrelated to the civil war.”

The use of large language models — produced from vast corpuses of scraped data — to produce accurate, thorough and comprehensible accounts of the hidden crimes that accompany war and conflict is still in its infancy. But already, these techniques are changing the way we hold criminals to account and bring justice to their victims.

Scraping to make large language models is good, actually.

what the actual shit

edit: I mean, he tried transformer-powered voice-to-text and liked it, and now he's all in on the "LLMs are actually a rigorous and accurate tool" bandwagon?

Also, the web scraping article is from 2023, but CD linked it in the recent Pluralistic post, so I assume his views haven't changed.

[–] Architeuthis@awful.systems 13 points 1 month ago* (last edited 1 month ago) (2 children)

Timnit briefly weighs in about being included in the doc, apparently she regrets it and says the filmmakers "sprinkle some [AI skeptics] in like chocolate chips to perform ethics".

She also calls Yud a eugenicist cult leader with nothing to show for it.

[–] Architeuthis@awful.systems 7 points 1 month ago* (last edited 1 month ago)

Also, code-helper tools don't even work like that; there's an absurd amount of MCP- and RAG-based hand-holding for the chatbot to even get a grip on what it's supposed to be doing at any given time.

Prompting an LLM with your entire code base isn't really a thing, even though the hype makes it feel like it would be.
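To make the point concrete, here's a toy sketch of the RAG-style step those tools actually do (keyword overlap standing in for a real embedding model, hypothetical repo snippets; no actual assistant works exactly like this): score chunks of the repo against the task and pack only the top few under a context budget, rather than shipping the whole code base.

```python
def score(chunk: str, query: str) -> int:
    """Toy relevance score: how many query words appear in the chunk.
    A real tool would use vector embeddings; this stands in for that."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in chunk.lower())

def build_context(chunks: list[str], query: str, budget: int) -> list[str]:
    """Pick the most relevant chunks whose total length fits the budget,
    instead of prompting with the entire code base."""
    ranked = sorted(chunks, key=lambda c: score(c, query), reverse=True)
    picked, used = [], 0
    for c in ranked:
        if used + len(c) <= budget:
            picked.append(c)
            used += len(c)
    return picked

# Hypothetical repo snippets for illustration:
repo_chunks = [
    "def parse_config(path): ...",
    "def render_template(tpl, ctx): ...",
    "def parse_args(argv): ...",
]
context = build_context(repo_chunks, "fix the config parse bug", budget=80)
```

The budget is the whole point: context windows are finite and priced per token, so the retrieval step decides what the model ever gets to see.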

[–] Architeuthis@awful.systems 6 points 1 month ago

I mean, they mostly don't have a problem with AI instances inheriting the earth as long as they're sufficiently rationalist.
