this post was submitted on 16 Feb 2026
23 points (89.7% liked)

TechTakes

2523 readers
70 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)

(page 3) 50 comments
[–] CinnasVerses@awful.systems 8 points 1 month ago

Do we have any idea why some of the Zizians ended up in Vermont? The only thing in their network that comes to mind is the Monastic Academy for the Preservation of Life on Earth (MAPLE, a Buddhist-flavoured CFAR offshoot with the usual Medium post accusing leaders of sexual and psychological abuse)

Vermont and New Hampshire have clusters of generic Libertarians.

[–] mirrorwitch@awful.systems 8 points 1 month ago

Starting the week with this fairly extensive compendium of intractable issues with LLMs: GenAI has an Alignment Problem… But it’s not the machine’s hostility we need to worry about.

[–] BlueMonday1984@awful.systems 8 points 1 month ago (1 children)

New and nicely made sneer caught my attention: Rely On AI And Get Left Behind

[–] e8d79@discuss.tchncs.de 8 points 1 month ago* (last edited 1 month ago) (5 children)

Well bcachefs is kill. Enjoy AI support and all userspace code generated by a slot machine. Kent also does that weird anthropomorphising of his LLM by giving it a blog.

[–] fiat_lux@lemmy.world 8 points 1 month ago* (last edited 1 month ago)

An article I would write if I were confident I wouldn't dox myself and lose my ability to eat: "AI as a postmodern Malthusian trap. Tech has forgotten the laws of entropy."

[–] BlueMonday1984@awful.systems 8 points 1 month ago (1 children)

WD and Seagate confirm: Hard drives for 2026 sold out (because the AI datacentres have stolen them all)

Related thread on Bluesky:

idk if the bubble will pop or slowly deflate, but im certain that in 10 years we'll look back at 2020s as the decade where tech stopped progressing in the way we know it - since we're diverting all our resources to ai, there's no longer any room left for anything else to grow

the 2010s crypto gpu shortage was the warning siren for this. it really hampered the growth of gpus because they permanently became so much more expensive - now the same is happening to memory, storage, and...well, gpus again! we've reached the point of reverse progress

[–] macroplastic@sh.itjust.works 8 points 1 month ago (1 children)

2020s as the decade where tech stopped progressing in the way we know it

I mean, sure, but I think the underlying cause here is the end of Moore's law and exponential growth of potential userbases as the world becomes fully connected. The Enshittocene can be viewed as a consequence of capital's attempts to continue exponential growth while the fundamentals are no longer capable of sustaining it.

[–] antifuchs@awful.systems 7 points 1 month ago (7 children)

Good news, everyone’s favorite emacs is using AI now: https://www.vim.org/vim-9.2-released.php

[–] cstross@wandering.shop 8 points 1 month ago (2 children)

@antifuchs @techtakes Oh goodie they enshittified vim IS NOTHING SACRED?!? HAVE WE LIVED AND FOUGHT IN VAIN?!?!?

[–] hrrrngh@awful.systems 7 points 1 month ago (2 children)

context: I wanted to know if the open source projects currently being spammed with PRs would be safe from people running slop models on their computer if they weren't able to use claude or whatever. Answer: yes, these things are still terrible

but while I was searching I found this comment and the fact that people hated it is so funny to me. It's literally the person who posted the thread. less thinking and words, more hype links please.

conversation: https://www.reddit.com/r/LocalLLaMA/comments/1qvjonm/first_qwen3codernext_reap_is_out/o3jn5db/

32k context? is that usable for coding?

(OP's response, sitting at a steady -7 points)

LLMs are useless anyway so, okay-ish, depends on your task obviously

If LLMs were actually capable of solving actual hard tasks, you'd want as much context as possible

A good way to think about it is that tokens compress text roughly 1:4. If you have a 4MB codebase, it would need 1M tokens theoretically.

That's one way to start, then we get into the more debatable stuff...

Obviously text repeats a lot and doesn't always encode new information each token. In fact, it's worse than that, as adding tokens can _reduce_ information contained in text, think inserting random stuff into a string representing dna. So to estimate how much ctx you need, think how much compressed information is in your codebase. That includes stuff like decisions (which LLMs are incapable of making), domain knowledge, or even stuff like why does double click have 33ms debounce and not 3ms or 100ms in your codebase which nobody ever wrote down. So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation

*emphasis added by me
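For what it's worth, the quoted back-of-envelope math is easy enough to run yourself. A minimal sketch of it (the function name is mine, the 1:4 bytes-per-token ratio and "zip at normal compression level" proxy are the commenter's assumptions, not measured values):

```python
import zlib

def estimate_context_tokens(codebase_bytes: bytes, chars_per_token: int = 4) -> dict:
    """Apply the quoted rule of thumb: raw tokens ~ bytes / 4,
    with zlib-compressed size as a crude proxy for the 'compressed
    information' the commenter hand-waves about."""
    raw_tokens = len(codebase_bytes) // chars_per_token
    # level 6 is zlib's default, i.e. "normal compression level"
    compressed_bytes = len(zlib.compress(codebase_bytes, level=6))
    return {"raw_tokens": raw_tokens, "compressed_bytes": compressed_bytes}

# A 4 MB codebase does indeed come out to ~1M tokens under this estimate:
est = estimate_context_tokens(b"x" * 4_000_000)
```

Which of course says nothing about the "shrink the output problem space quadratically" step, since that part of the estimate was never defined.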

[–] froztbyte@awful.systems 7 points 1 month ago* (last edited 1 month ago) (1 children)

in today's news about magical prompts that super totes give you superpowers:

We introduced SKILLSBENCH, the first benchmark to systematically evaluate Agent Skills as first-class artifacts. Across 84 tasks, 7 agent-model configurations, and 7,308 trajectories under three conditions (no Skills, curated Skills, self-generated Skills), our evaluation yields four key findings: (1) curated Skills provide substantial but variable benefit (+16.2 percentage points average, with high variance across domains and configurations); (2) self-generated Skills provide negligible or negative benefit (–1.3pp average), demonstrating that effective Skills require human-curated domain expertise

I am jack's surprised face

...and given I have other yaks, I shall not step on my "software and tools don't have to suck" soapbox right now

[–] istewart@awful.systems 8 points 1 month ago* (last edited 1 month ago)

This reminds me of when Steve Jobs would introduce every new Mac release by talking about how fast it could render in Photoshop. I wonder how he would do in our brave new era of completely ass-pulling your own bespoke benchmark frameworks.

[–] gerikson@awful.systems 7 points 1 month ago (1 children)

slop "fact checking" is coming to LW:

https://www.lesswrong.com/posts/hhbibJGt2aQqKJLb7/shortform-1?commentId=fE5cg6pmWrChW8Rtu

wonder what model/prompt they will use. Prolly Grok

[–] Soyweiser@awful.systems 8 points 1 month ago

Time to say 'Sneerclub was correct' a lot, so we can frontload the 'factcheckers'.
