zogwarg

joined 2 years ago
[–] zogwarg@awful.systems 4 points 2 years ago* (last edited 2 years ago)

Ah, but each additional sentence strikes home the point of absurd over-abundance!

Quite poetically, the sin of verbosity is committed to create the illusion of considered thought and intelligence; in the case of HPMOR, literally by stacking books.

Amusingly, his describing his attempt as "striking words out" rather than "rewording" or "distilling" illustrates, I think, his lack of editing ability.

[–] zogwarg@awful.systems 6 points 2 years ago

Fair enough. I will note he fails to specify the actual car-to-Remote-Assistance-operator ratio. Here's hoping the staff kept ready for bursts isn't paid pennies while on "stand-by".

[–] zogwarg@awful.systems 15 points 2 years ago* (last edited 2 years ago) (9 children)

It makes you wonder about the specifics:

  • Did the 1.5 workers assigned for each car mostly handle issues with the same cars?
  • Was it a big random pool?
  • Or did each worker have their own geographic area, with known issues?

Maybe they could have solved the context issues and possible latency issues by seating the workers in the cars, and for extra-quick intervention speed, putting them in the driver's seat. Revolutionary. (Shamelessly stealing Adam Something's joke format about trains.)

[–] zogwarg@awful.systems 6 points 2 years ago* (last edited 2 years ago) (1 children)

Possible countermeasure: Insist on “crediting” the LLM as the commit author, to regain sanity when doing git blame.
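A minimal sketch of what that countermeasure could look like in practice (the repo, file name, and author string are all made up for illustration): git lets the author and committer differ, so the LLM can be "credited" while the human stays on record as committer.

```shell
# Hypothetical sketch: commit LLM-generated changes with the model
# "credited" as author, so git blame / git log expose the provenance.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "// generated boilerplate" > gen.c
git add gen.c
# Human is the committer; the (made-up) LLM identity is the author.
git -c user.name="human" -c user.email="human@example.invalid" \
    commit -q -m "add generated boilerplate" \
    --author="LLM Assistant <llm@example.invalid>"
git log -1 --format='author: %an, committer: %cn'
# prints: author: LLM Assistant, committer: human
```

`git blame gen.c` would then attribute the generated lines to "LLM Assistant", which is exactly the sanity-restoring signal wanted here.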

I agree that worse docs are a bad enough future, though I remain optimistic that including an LLM in the compile step is never going to be mainstream enough (or anything approaching stable enough, beyond some dumb useless smoke and mirrors) for me to have to deal with THAT.

[–] zogwarg@awful.systems 5 points 2 years ago (11 children)

In such a (unlikely) future of build tooling corruption, actual plausible terminology:

  • Intent Annotation Prompt (though sensibly, this should be for doc and validation analysis purposes, not compilation)
  • Intent Pragma Prompt (though sensibly, the actual meaning of the code should not change, and it should purely be optimization hints)
[–] zogwarg@awful.systems 16 points 2 years ago (1 children)

Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we're comfortable hearing. I wish I could ask it what the hell people were thinking back then.

I think this part conveys the root insanity of Yud: failing to understand that language is a cooperative game between humans, who have to trust in common shared lived experiences to believe a message was conveyed successfully.

But noooooooo, magic AI can extract all the possible meanings and internal states of all possible speakers in all possible situations from textual descriptions alone, because: ✨bayes✨

The fact that such an (LLM-based) system would almost certainly not be optimal for any conceivable loss function / training-set pair seems to completely elude him.

[–] zogwarg@awful.systems 5 points 2 years ago (1 children)

The fact that “artificial intelligence” suggests any form of quality is already a paradox in itself ^^. Would you want to eat an artificial potato? The smoke and mirrors should be baked in.

[–] zogwarg@awful.systems 4 points 2 years ago

I need eye and mind bleach; it's all quite ironic, really.

[–] zogwarg@awful.systems 8 points 2 years ago (2 children)

Unhinged is another suitable adjective.

It's noteworthy how the operations plan seems to boil down to "follow your gut" and "trust the vibes", placed above "Communicating Well" or even "fact-based" and "discussion-based problem solving". It's all very don't-think-about-it, let's all be friends and serve the company like obedient drones.

This reliance on instincts, or on the aesthetics of relying on instincts, is a disturbing aspect of Rats in general.

[–] zogwarg@awful.systems 11 points 2 years ago

^^ Quietly progressing from "humans are not the only ones able to do true learning" to "machines are the only ones capable of true learning".

Poetic.

PS: Eek at the *cough* extrapolation rules lawyering 😬.

[–] zogwarg@awful.systems 11 points 2 years ago* (last edited 2 years ago) (1 children)

Not even that! It looks like a blurry jpeg of those sources if you squint a little!

Also I’ve sort of realized that the visualization is misleading in three ways:

  1. They provide an animation from shallow to deep layers to show the dots coming together, making the final result look more impressive than it is (look at how many dots are in the ocean)
  2. You see blobby clouds over sub-continents, with nothing to gauge error within the cloud blobs.
  3. Sorta relevant, but obviously the borders, helpfully drawn for the viewer to conform to "our" world knowledge, aren't even there at all; it's still holding up a mirror (dare I say a parrot?) to our cognition.