Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
This one's been making the rounds, so people have probably already seen it. But just in case...
Meta did a live "demo" of their ~~recording~~ new AI.
A second trashfire has hit the Ruby language: RubyGems has been subjected to a hostile takeover by Ruby Central, seemingly to squash an attempt at putting together an official governance policy.
Between that and DHH's fascist screed, it's not been a good week for Ruby.
Wonder if, especially considering the DHH situation, this is sort of a nazi-bar-style takeover: where the people who don't want to make a fuss let in the nice-but-iffy people, who then go mask off and let the rest in. (The thing the far right accuses the left of doing, in a bit of projection.) But I know nothing about the politics of anybody involved, could also just be a regular hostile takeover.
(Doesn't feel like one just from looking at the rubycentral bsky account for a second though. They do have an amazing spin on it: it was to protect against supply chain attacks (also a link to an email article of theirs, which just feels weird)).
Enjoy this Rat pitch for a "pastor" who shall spread the gospel of Bayes to the unwashed masses:
OK I think this has gone a bit too far
Hmm, it’s still on the funny side of the graph for me. I think it could go on for at least another week.
I almost wanna use some reverse psychology to try and make him stop.
'hey im from sneerclub and we are loving this please dont stop this strike'
(I mean he clearly mentally prepped against arguments and even force (and billionaires), but not against someone just making fun of him. Of course he prob doesn't know about any of these places and hasn't built us up to boogeyman status, but imagine if it worked)
There's an ACX guest post rehashing the history of Project Xanadu, an important example of historical vaporware that influenced computing primarily through opinions and memes. This particular take is focused on Great Men and isn't really up to the task of humanizing the participants, but they do put a good spotlight on the cults that affected some of those Great Men. They link to a 1995 article in Wired that tells the same story in a better way, including the "six months" joke. The orange site points out a key weakness that neither narrative quite gets around to admitting: Xanadu's micropayment-oriented transclusion-and-royalty system is impossible to correctly implement, due to a mismatch between information theory and copyright; given the ability to copy text, copyright is provably absurd. My choice sneer is to examine a comment from one of the ACX regulars:
The details lie in the devil, for sure...you'd want the price [of making a change to a document] low enough (zero?) not to incur Trivial Inconvenience penalties for prosocial things like building wikis, yet high enough to make the David Gerards of the world think twice.
Disclaimer: I know Miller and Tribble from the capability-theory community. My language Monte is literally a Python-flavored version of Miller's E (WP, esolangs), which is itself a Java-flavored version of Tribble's Joule. I'm in the minority of a community split over the concept of agoric programming, where a program can expand to use additional resources on demand. To me, an agoric program is flexible about the resources allocated to it and designed to dynamically reconfigure itself; to Miller and others, an agoric program is run on a blockchain and uses micropayments to expand. Maybe more pointedly, to me a smart contract is what a vending machine proffers (see How to interpret a vending machine: smart contracts and contract law for more words); to them, a smart contract is how a social network or augmented/virtual reality allows its inhabitants to construct non-primitive objects.
The 17 rules also seem to have abuse built in. Documents need to be stored redundantly (without any mention of how many copies that means), and there's a system where people are billed for the data they store. Combine these and storing your data anywhere runs the risk of a malicious actor emptying your accounts: 'it costs ten bucks to store a file here' followed by 'sorry, we had to securely store ten copies of your file, 100 bucks please'. Weird sort of rules. Feels a lot like it never figured out what it wants to be: a centralized or a distributed system, a system where writers can make money or one they need to pay to use. And a lot of technical solutions for social problems.
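To put toy numbers on that (purely illustrative; the per-copy price and the replication factors are things I made up, the actual rules don't pin down either):

```typescript
// Toy model of the redundancy/billing interaction: the rules require
// redundant storage but don't bound the copy count, and the user is
// billed for stored data, so whoever picks the replication factor
// effectively picks your bill.
// PRICE_PER_COPY and the replication factors are invented for illustration.
const PRICE_PER_COPY = 10;

function storageBill(replicationFactor: number): number {
  return PRICE_PER_COPY * replicationFactor;
}

console.log(storageBill(1));  // 10  -- what you thought you agreed to
console.log(storageBill(10)); // 100 -- "we had to securely store ten copies, sorry"
```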
hot on the heels of months of “agentic! it can do things for you!” llm hype, they have to make special APIs for the chatbots, I guess because otherwise they make too many whoopsies?
Quick PSA for anyone who's still on LinkedIn: the site's stealing your data to train the slop machines
That reminds me I still need to wipe my reddit and twitter archives. Wonder if wiping it all in one go would cause more trouble for them, or if deleting it slowly (or overwriting with random words, in the case of reddit) causes more churn in the datasets and messes with them more that way.
from when I last looked into this: twitter 100% has[0] (unstated) web API ratelimits for various subservices[1], but getting direct API creds became a "give us your actual phone number" thing even before felon took it over...
so I just decided to tombstone my account by making it private, updating bio, and never logging in again
not willing to give them what they want for API access. might at some point go write some web automation to recurringly click a delete button? idunno
[0] - ....well, 4 years ago, "had". probably maybe still does, on whatever parts of the haproxy or whatever config didn't get absolutely fucking destroyed in felon's mania to rebrand it to "x" overnight (a process which failed hilariously badly for weeks and I still think fondly of to laugh at)
[1] - when going through the "your interests" list (hidden deep in settings), if you unticked too many boxes too quickly you'd hit a webserver-enforced ratelimit and then half the webapp would get a bit fucky for an hour. the ratelimit was something like a burst of ~30 with a 1/min-type token-bucket refresh. quite the shitshow
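for the curious, roughly what a limiter like that looks like (a sketch only; the capacity and refill numbers are my guesses from the observed behaviour, not anything from a real config):

```typescript
// Token bucket roughly matching the behaviour described in [1]:
// a burst of ~30 requests goes through, then tokens trickle back at
// ~1 per minute. Capacity and refill rate are guesses, not documented limits.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity = 30,            // burst size
    private refillPerMs = 1 / 60_000, // ~1 token per minute
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  tryConsume(): boolean {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.lastRefill) * this.refillPerMs,
    );
    this.lastRefill = now;
    if (this.tokens < 1) return false; // request gets rate-limited
    this.tokens -= 1;
    return true;
  }
}
```

which would explain why a quick burst of unticking sails through and then everything stays broken for the better part of an hour while the bucket refills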
Yeah, I figured I would need some web automation script for that. I have looked into them in the past, but never got far with it before something else became more important. Still silly that it's needed and that it will hit the servers harder than an API would. Just strange priorities.
When I looked at 'your interests' in the past it was so incredibly wrong I resisted the urge to update it, because I thought 'sure, if that is what you think is important to me, fine'. Gotta make sure the basilisk can't simulate you ;).
Ratelimits would be the big worry, heard people reached those by just deleting tweets by hand. And the whole likes system is broken anyway: if you remove enough of them by hand you get into a situation where tweets show up in your likes list but don't look like they were liked by that account. (I always had the suspicion that the likes system, which people got mad about a lot, is badly implemented, and that this explains the weirdness people saw; a thing this story seems to confirm.)
I also heard blocklists put a high strain on the twitter so not going to look into removing that. (Not sure I can even find the list anymore anyway, or at least a complete one; mine always stopped after 100 accounts or so, while I block a few more than that.)
heard people reached those by just deleting tweets by hand.
yeah, the various backend interactions tied to web controls are extremely low-count limited
you could probably do it by smacking together a userscript (or whatever the fuck the these-days version of greasemonkey/tampermonkey/??? is) with a moderately simple algorithm: open a window, click execute, leave it going by itself for however long it takes to get through everything. it doesn't have to do everything in minutes
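something in this direction, say (totally untested sketch: the selector is a placeholder I invented, the real markup differs and changes constantly, and the delay is just pitched below the ~1/min refill from the footnote upthread):

```typescript
// Untested userscript-style sketch: find a delete-ish control on the page,
// click it, wait, repeat. The selector below is a placeholder, not twitter's
// real markup, and the delay is a guess meant to stay under the ratelimits
// mentioned upthread.
const DELAY_MS = 90_000; // slower than a ~1 token/minute refill

function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function run(): Promise<void> {
  while (true) {
    const button = document.querySelector<HTMLElement>('[data-testid="delete"]');
    if (button) {
      button.click();
    } else {
      // nothing visible to delete; scroll so the feed loads more items
      window.scrollBy(0, window.innerHeight);
    }
    await sleep(DELAY_MS);
  }
}

run();
```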
I also heard blocklists put a high strain on the twitter so not going to look into removing that
probably the feed compute stuff only incurs this computational expense for displayed feeds (pruning off calculating stuff for long-enough-inactive users is one of the cheapest easy gains in that type of content feed), so this might not matter much. don't have enough insight into real ops there to know one way or the other tho