blakestacey

joined 2 years ago
[–] blakestacey@awful.systems 12 points 1 month ago* (last edited 1 month ago) (1 children)

Highlights from the comments: @wjpmitchell3 writes,

Actual psychology researcher: the problem with IQ is A) We don't really know what it's measuring, B) We don't really know how it's useful, C) We don't really know how context-specific it is, D) When people make arguments about IQ, it's often couched around prejudiced ulterior motives. No one actually cares about IQ; they care about what it's a proxy measure of and we don't have good evidence yet to say "This is a reliable and broadly-encompassing representation of intelligence." or whatever else, so if you are trying to use IQ differences to say that there are race differences in intelligence, you have no grounds. The best you can say is there are race differences in this proxy measure that we're still trying to understand. It's dangerous to use an unreliable and possibly inaccurate representation of a phenomenon to make policy changes or inform decisions around race. The evidence threshold has to be extremely high because we're entering sensitive ethical spaces, which is something that rationalists don't do well in because their utilitarian calculus has difficulty capturing the intangibles.

@arnoldkotlyarevsky383 says,

Nothing wrong with being self-educated, but she comes across as being not as far along as you would want someone to be in their self-education before being given a platform.

@User123456767 observes,

You can kind of tell she grew up as a Calvinist because she still seems to think she's part of the elect; she's just replaced an actual big-G God with some sort of AI God.

@jaredsarnie3712 begins,

I feel like so much of what she says boils down to finding bizarre hypothetical situations where child sexual abuse is morally acceptable.

And from @Fruuuuuuuuuck:

Doomscroll gooner arc

[–] blakestacey@awful.systems 8 points 1 month ago* (last edited 1 month ago) (1 children)

"DS" in the Retraction Watch comments makes a good observation:

What scientific book only has 46 references?

A question for future work: This book is part of a "Transactions on Computer Systems and Networks" series. How many of the others in that series are also slop?

[–] blakestacey@awful.systems 13 points 1 month ago (4 children)

Oh, and looking back at the comments on titotal’s post… his detailed elaboration of some pretty egregious errors in AI 2027 didn’t really change anyone’s mind, at most moving them back a year to 2028.

Huh, what's this I have open in another browser tab:

The Great Disappointment in the Millerite movement was the reaction that followed Baptist preacher William Miller's proclamation that Jesus Christ would return to the Earth by 1844, which he called the Second Advent. His study of the Daniel 8 prophecy during the Second Great Awakening led him to conclude that Daniel's "cleansing of the sanctuary" was cleansing the world from sin when Christ would come, and he and many others prepared. When Jesus did not appear by October 22, 1844, Miller and his followers were disappointed.

[–] blakestacey@awful.systems 12 points 1 month ago

For what it's worth I know one of the founders of e/acc and they told me they were radicalized by a date they had with you where they felt you bullied them about this subject.

A-and yep, that's my dose of cursed for the day

[–] blakestacey@awful.systems 14 points 1 month ago (2 children)

It's a bird! It's a plane! It's... Evangelion Unit 1 with a Superman logo and a Diabolik mask.

[–] blakestacey@awful.systems 12 points 1 month ago (1 children)

"A case for courage, when speaking of made-up sci-fi bullshit"

[–] blakestacey@awful.systems 10 points 1 month ago

Thomas Claburn writes in The Register:

IT consultancy Gartner predicts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027 due to rising costs, unclear business value, or insufficient risk controls.

That implies something like 60 percent of agentic AI projects would be retained, which is actually remarkable given that the rate of successful task completion for AI agents, as measured by researchers at Carnegie Mellon University (CMU) and at Salesforce, is only about 30 to 35 percent for multi-step tasks.

[–] blakestacey@awful.systems 7 points 1 month ago (1 children)

It's like when Scott Aaronson got me to sympathize with a cop. A sneersmas miracle.

[–] blakestacey@awful.systems 9 points 1 month ago

I poked around the search results being pointed to, saw a Ray Kurzweil book and realized that none of these people are worth taking seriously. My condolences to anyone who tries to explain the problems with the "improved" sources on offer.

[–] blakestacey@awful.systems 2 points 1 month ago

Adding https://en.wikipedia.org/wiki/Inner_alignment to the compendium for completeness' sake.

[–] blakestacey@awful.systems 9 points 1 month ago (2 children)

Rather than trying to participate in the "article for deletion" dispute with the most pedantic nerds on Earth (complimentary) and the most pedantic nerds on Earth (derogatory), I will content myself with pointing and laughing at the citation to Scientific Reports, aka "we have Nature at home"

[–] blakestacey@awful.systems 11 points 1 month ago* (last edited 1 month ago) (3 children)

Wow, this is shit: https://en.wikipedia.org/wiki/Inner_alignment

Edit: I have been informed that the correct statement in line with Wikipedia's policies is WP:WOWTHISISSHIT
