this post was submitted on 06 Apr 2026
866 points (99.4% liked)

A Boring Dystopia

16306 readers
1320 users here now

Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.

Rules (Subject to Change)

--Be a Decent Human Being

--Posting news articles: include the source name and the exact title from the article in your post title

--If a picture is just a screenshot of an article, link the article

--If a video's content isn't clear from the title, write a short summary so people know what it's about.

--Posts must have something to do with the topic

--Zero tolerance for Racism/Sexism/Ableism/etc.

--No NSFW content

--Abide by the rules of lemmy.world

founded 2 years ago
MODERATORS
(page 2) 47 comments
[–] greyscale@lemmy.grey.ooo 15 points 1 day ago* (last edited 1 day ago)

Ever notice that they're just doing what Clavicular or whatever his name is does? They're inventing lingo to make it sound like it's not bullshit.

It's not hitting yourself with a hammer, it's looksmaxxing. It's not standing around being a dork, it's mogging. It's not a context window, it's the chat scrollback. It's not asking ChatGPT, it's "silicon sampling".

They're making it seem legit by giving it its own terminology and in-group lingo.

Bullshit artists.

[–] Naich@piefed.world 16 points 1 day ago

Making shit up, but with extra steps.

[–] Aceticon@lemmy.dbzer0.com 8 points 22 hours ago (1 children)

The thing is, the distribution of opinions (or of the individual situations/beliefs that lead to those opinions) was baked into the model when its training data was captured. So at best, and assuming the whole principle even works (which isn't mathematically proven in any way, shape, or form), they're still only getting poll results for the past, and those results won't change beyond some random noise until the next time data is captured and the model is retrained.

It's like repeatedly using an old picture of a street to make realtime claims about the traffic there.

[–] ceenote@lemmy.world 13 points 1 day ago (1 children)

The ideal would be that clients who actually want useful information will stop paying the pollsters for their useless crap.

The reality will be that the slack gets more than picked up by people who want sham poll results to back up their agenda.

[–] UnderpantsWeevil@lemmy.world 8 points 1 day ago

Polls have always been leveraged as a form of propaganda.

We had Push Polling from back during the early Bush Era, where the ostensible polling cold call was just a marketing tool. We had "Unskewed Polls" during the Obama/Romney election, wherein Republicans tried to insist they were far more popular in order to influence everyone else through bandwagon appeal. Polling about Transgender Athletes was used as an excuse to dismantle civil protections for the LGBTQ community. Polls online are used to gather information on the public through responses and attendant metadata. Call-in shows are a form of engagement bait.

You can talk about the useful information gleaned from a public survey. But by and large, we only take polls when we want to change people's opinions. It's the first step in market research that ends with a blizzard of advertisements.

[–] TropicalDingdong@lemmy.world 10 points 1 day ago (1 children)
[–] M0oP0o@mander.xyz 2 points 19 hours ago (1 children)

We all wish this was as fulfilling as that.

[–] TropicalDingdong@lemmy.world 1 points 18 hours ago (1 children)

Not that I would know, but I've heard it feels a lot more like sucking dick than it does having your dick sucked.

[–] M0oP0o@mander.xyz 3 points 18 hours ago

Both scenarios seem a lot more desirable than an AI fake poll.

[–] GreatBlueHeron@lemmy.ca 6 points 22 hours ago (1 children)

I've read a lot of fucked up shit in the last few years - that's the first time I've thrown my phone in response!

(My phone's fine - I just threw it onto the sofa beside me, but still..)

[–] prex@aussie.zone 5 points 21 hours ago

Here, have one of these:

Sorry, wrong pic:

[–] gothic_lemons@lemmy.world 4 points 22 hours ago

Tempted to start an AI survey company and only take big corporate customers. Then always give them the opposite of whatever survey results I assume they want.

Really, if it's fucking "AI" doing the survey, why even go through the steps? Just make some graphs, demographic breakdowns, and whatnot, then let a random number generator do the rest. You don't even need gen AI.
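Tongue in cheek, the "just use an RNG" approach really is only a few lines of Python. Everything here, including the `fake_poll` name, is made up purely to illustrate the joke:

```python
import random

def fake_poll(question, options, n_respondents=1000, seed=None):
    """Fabricate poll 'results' with a plain RNG -- no gen AI required.

    Splits n_respondents across the options at random: the numbers look
    like data but measure nothing.
    """
    rng = random.Random(seed)
    # Draw a random weight for each option, then convert weights to counts.
    weights = [rng.random() for _ in options]
    total = sum(weights)
    counts = [round(n_respondents * w / total) for w in weights]
    # Fix rounding drift so the counts sum exactly to n_respondents.
    counts[0] += n_respondents - sum(counts)
    return dict(zip(options, counts))

results = fake_poll("Do you trust AI polls?", ["Yes", "No", "Unsure"], seed=42)
print(results)
```

The output always adds up to a plausible-looking respondent total, which is exactly the problem: formatting, not measurement, is what makes it pass as a poll.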

[–] zd9@lemmy.world 6 points 1 day ago

So, use AI to just make shit up, then report that as information? I wouldn't expect anything less in our post-truth era. But come on, Axios, I thought you were better than that.

[–] N0t_5ure@lemmy.world 5 points 1 day ago

What could possibly go wrong?

[–] AeonFelis@lemmy.world 2 points 20 hours ago

Surprisingly unrelated to the recently compromised npm package.

[–] _fryerDan@sh.itjust.works 4 points 1 day ago (1 children)

Their enshittification was inevitable once they were acquired by Cox Media.

[–] paraphrand@lemmy.world 4 points 1 day ago

Ah that explains it.

I was wondering why it seemed everyone soured on Axios.

[–] paraphrand@lemmy.world 3 points 1 day ago
[–] merc@sh.itjust.works -2 points 1 day ago (2 children)

This is dumb. But there's a hint of an interesting idea in there. If LLMs sample all human text and produce statistical averages from it, there's a sense in which they contain a statistically average opinion.

It's basically like how, if you use Google to search for "how many calories are there in a", it will suggest the next word. The word it suggests is the statistically average way to finish that sentence. That also means it's the food item people most want to know the calories for, or at least the item they most often type into a Google search box. It's just matching text patterns, but it reveals something about people that, say, a fast-food company might find useful.
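The autocomplete intuition can be sketched as a toy frequency count over a made-up corpus (this illustrates next-word frequency in general, not how Google actually ranks suggestions):

```python
from collections import Counter

# Toy corpus standing in for query logs / training text.
corpus = [
    "how many calories are there in a banana",
    "how many calories are there in a banana",
    "how many calories are there in a big mac",
    "how many calories are there in an egg",
]

prefix = "how many calories are there in a"

# Count which word follows the prefix across the corpus; the
# "suggestion" is simply the most frequent continuation.
continuations = Counter(
    line[len(prefix):].split()[0]
    for line in corpus
    if line.startswith(prefix + " ")
)
print(continuations.most_common(1))  # -> [('banana', 2)]
```

The "statistically average opinion" reading of an LLM is this same counting trick, just smeared across billions of contexts instead of one prefix.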

If you scale up the population of humanity to 8 trillion people and have thousands of years of data in these LLMs, maybe you actually do get useful insights about what people care about. And maybe that's how you get the psychohistory from Asimov's Foundation series.

[–] jacksilver@lemmy.world 4 points 20 hours ago

Even if it did spit out the average value, it would be the average of the training data. I don't think people/opinions are evenly distributed in LLM training data.

Just look into how racist computer vision models can be.

[–] DisgruntledGorillaGang@reddthat.com 1 points 22 hours ago (1 children)

It's one thing if you present your findings like that, but pretending like it's a poll is complete bullshit.

[–] merc@sh.itjust.works 1 points 22 hours ago

Yeah, definitely.
