TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


[this is probably off-topic for this forum, but I found it on HN so...]

Edit "enjoy" the discussion: https://news.ycombinator.com/item?id=38233810


nitter archive

just in case you haven't done your daily eye stretches yet, here's a workout challenge! remember to count your reps, and to take a break between paragraphs! duet your score!

oh and, uh.. you may want to hide any loose keyboards before you read this. because you may find yourself wanting to throw something.


Replaced with an essay of lament by its creator.

My only hot take: a thing being x amount of good for y amount of people is not justification enough for it to exist despite it being z amount of bad for var amount of people.


well, this sure is gonna go well :sarcmark:

it almost feels like when Google+ got shoved into every Google product because someone had a bee in their bonnet

flipside, I guess, is that we'll soon (at scale!) get to start seeing just how far those ideas can and can't scale


Title is ... editorialized.


Don't mind me I'm just here to silently scream into the void

Edit: I'm no good at linking to HN apparently, made link more stable.


Title quote stolen from jwz: https://www.jwz.org/blog/2023/10/the-best-way-to-profit-from-ai/

Yet again, the best way to profit from a gold rush is to sell shovels.


Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as a co-founder of Vox, home of the EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.


One reason that, three and a half years later, Andreessen is reiterating that “it’s time to build” instead of writing posts called “Here’s What I Built During the Building Time I Previously Announced Was Commencing” is that Marc Andreessen has not really built much of anything.


I don’t really have much to say… it kind of speaks for itself. I do appreciate the table of contents so you don’t get lost in the short paragraphs though


archive.org | and .is

this is almost NSFW? some choice snippets:

more than 1.5 million people have used it and it is helping build nearly half of Copilot users’ code

Individuals pay $10 a month for the AI assistant. In the first few months of this year, the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.

good thing it's so good that everyone will use it amirite

starting around $13 for the basic Microsoft 365 office-software suite for business customers—the company will charge an additional $30 a month for the AI-infused version.

Google, ..., will also be charging $30 a month on top of the regular subscription fee, which starts at $6 a month

I wonder how long they'll try that, until they try forcing it on everyone (and raise all prices by some n%)
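
For the math-inclined, here's a rough back-of-envelope sketch of what those numbers imply. The per-user figures come from the quoted article; treating all 1.5 million "people who have used it" as average paying subscribers is purely my own simplifying assumption, just to show the scale.

```python
# Back-of-envelope math on the reported Copilot figures (a sketch, not sourced
# beyond the quoted article; the "everyone is an average paying user"
# assumption below is mine, purely to illustrate scale).
price_per_user = 10        # $/month, what individuals pay
avg_loss_per_user = 20     # $/month, reported average loss per user
worst_case_cost = 80       # $/month, what the heaviest users reportedly cost
users = 1_500_000          # "more than 1.5 million people have used it"

avg_cost_per_user = price_per_user + avg_loss_per_user   # ~$30/month to serve an average user
worst_case_margin = price_per_user - worst_case_cost     # -$70/month on the heaviest users

print(f"implied average cost to serve one user: ${avg_cost_per_user}/month")
print(f"worst-case margin per user: ${worst_case_margin}/month")
print(f"rough monthly burn if all were average paying users: ${users * avg_loss_per_user:,}")
```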


Carole Piovesan (formerly of McCarthy Tétrault, now at INQ Law) describes this as a "step in the process to introducing some more sort of enforceable measures".

In this case the code of conduct has some fairly innocuous things. Managing risk, curating to avoid biases, safeguarding against malicious use. It's your basic industrial safety government boilerplate as applied to AI. Here, read it for yourself:

https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems

Now of course our country's captains of industry have certain reservations. One CEO of a prominent Canadian firm writes that "We don’t need more referees in Canada. We need more builders."

https://twitter.com/tobi/status/1707017494844547161

Another who you will recognize from my prior post (https://awful.systems/post/298283) is noted in the CBC article as concerned about "the ability to put a stifling growth in the industry". I am of course puzzled about this concern. Surely companies building these products are trivially capable of complying with such a basic code of conduct?

For my part I have difficulty seeing exactly how "testing methods and measures to assess and mitigate risk of biased output" and "creating safeguards against malicious use" would stifle industry and reduce building. My lack of foresight in this regard could be why I am a scrub behind a desk instead of a CEO.

Oh, and for bonus Canadian content, the name Desmarais from the photo (next to the Minister of Industry) tweaked my memory. Oh right, those Desmarais. Canada will keep on Canada'ing to the end.

https://dailynews.mcmaster.ca/articles/helene-and-paul-desmarais-change-agents-and-business-titans/

https://en.wikipedia.org/wiki/Power_Corporation_of_Canada#Politics


Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".


Source: nitter, twitter

Transcribed:

Max Tegmark (@tegmark):
No, LLM's aren't mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally! We even discover a "longitude neuron"

Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2! [image with colorful dots on a map]


With this dastardly deliberate simplification of what it means to have a world model, we've been struck a mortal blow in our skepticism towards LLMs; we have no choice but to convert surely!

(*) Asterisk:
Not an actual literal map; what they really mean is that they've trained "linear probes" (its own mini-model) on the activation layers, for a bunch of inputs, minimizing loss for latitude and longitude (and/or time, blah blah).

And yes, from the activations you can get a fuzzy distribution of lat,long on a map, and yes, they've been able to isolate individual "neurons" that seem to correlate in activation with latitude and longitude. (Frankly, not being able to find one would have been surprising to me; this doesn't mean LLMs aren't just big statistical machines, in this case trained on data containing literal lat,long tuples for cities in particular.)

It's a neat visualization and result but it is sort of comically missing the point
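
In case "linear probe" sounds fancier than it is, here's a minimal sketch of that computation; random arrays stand in for the actual Llama-2 activations, and plain least squares stands in for whatever regularized training setup the paper actually uses:

```python
# Sketch of a "linear probe": an ordinary linear regression from hidden
# activations to (latitude, longitude). The activations below are synthetic
# placeholders, not real Llama-2 states; only the shape of the computation matters.
import numpy as np

rng = np.random.default_rng(0)

n_places, hidden_dim = 2000, 512                        # real models are much wider (e.g. 4096)
activations = rng.normal(size=(n_places, hidden_dim))   # one activation vector per place name
coords = np.column_stack([
    rng.uniform(-90, 90, n_places),                     # latitude
    rng.uniform(-180, 180, n_places),                   # longitude
])

# Append a bias column and solve least squares: W = argmin ||X W - Y||^2
X = np.hstack([activations, np.ones((n_places, 1))])
W, *_ = np.linalg.lstsq(X, coords, rcond=None)

predicted = X @ W
print("probe training MSE:", float(np.mean((predicted - coords) ** 2)))
```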


Bonus sneers from @emilymbender:

  • You know what's most striking about this graphic? It's not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It's just how sparse the data from the Global South are. -- Also, no, that's not what "world model" means if you're talking about the relevance of world models to language understanding. (source)
  • "We can overlay it on a map" != "world model" (source)

Direct link to the video

B-b-but he didn't cite his sources!!


After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.
