Seminar2250

joined 5 months ago
[–] Seminar2250@awful.systems 2 points 2 months ago (1 children)

this is gonna live in my head forever without paying any rent and i am upset

[–] Seminar2250@awful.systems 20 points 2 months ago* (last edited 2 months ago) (5 children)

is anyone else fucking sick and tired of discord? it's one thing if it's gaming-related^[i guess. not really, fuck discord.], but when i'm looking at the repo for some non-gaming project and the readme says "ask for help in our discord server", i feel like i'm in a fever dream and i'm going to wake up and discover that the simulation i was in was managed by chatgpt

[–] Seminar2250@awful.systems 9 points 3 months ago (2 children)

ugh, cybersecurity is already a fucking nightmare. i should have braced myself

[–] Seminar2250@awful.systems 6 points 3 months ago* (last edited 3 months ago) (1 children)

an unintended side effect of this is that people who can't, or don't want to, verify their age will turn to less reputable sources. so even though it can be done in a "privacy-respecting fashion" (see, for example, soatok's post on this^[https://soatok.blog/2025/07/31/age-verification-doesnt-need-to-be-a-privacy-footgun/]), it's still a bad idea.

additionally, in my opinion, no one who wants to enact such a thing is doing it in good faith. it's a pretense for an ulterior goal^[e.g. "steam porn games" → "this person's existence is inherently sexual" → "ban lgbtq content"]

[–] Seminar2250@awful.systems 9 points 3 months ago* (last edited 3 months ago)

i made the stupid mistake of doing math, then cs

surrounded by

my narrow technical specialty + lack of experience or knowledge of other fields = i'm the smartest, here is a trivial solution to your problem

constantly

[–] Seminar2250@awful.systems 11 points 3 months ago* (last edited 3 months ago)

you can save even more time by not doing the work at all

the output is more consistent than what an LLM shits out, too

Edit: serious note, even though you probably aren't worth anyone's time: you may be conflating the technology's actual use cases (as an accountability sink and a way to spread misinformation) with the intentions of its creators. and the real reason higher-ups are pushing this is that they're pliant dipshits who would eat dogfood if the bowl was labelled "FOMO". also, they hate paying employees

[–] Seminar2250@awful.systems 4 points 3 months ago

it won’t eat oats

not sure why, but this really ruins it for me

[–] Seminar2250@awful.systems 9 points 3 months ago* (last edited 3 months ago) (4 children)

the question was rhetorical, but also thank you for the links! <3

i am not surprised that they are all this dumb: it takes an especially stupid person to decide "yes, i am fine allowing this machine to speak for me". even more so when it's made clear that the machine is a stochastic parrot trained through exploitation of the global south and massive plagiarism, and that it also cooks the planet

[–] Seminar2250@awful.systems 15 points 3 months ago* (last edited 3 months ago) (6 children)

i bought some bullshit from amazon and left a ~~somewhat~~ pretty mean review because debugging it was super frustrating

the seller reached out and offered a refund, so i told them basically "no, it's ok, just address the concerns in my review. let me update my review to be less mean-spirited; i was pretty frustrated setting it up, but it mostly works fine"

then they sent a message that had the "llm vibe", and the rest of the conversation went:

Seller: You're right — we occasionally use LLM assistance for responses, but every message is reviewed to ensure accuracy and relevance to your concerns. We sincerely apologize if our previous replies dissatisfied you; this was our oversight.

Me: I am not simply dissatisfied. I will no longer communicate with your company and will update my review to note that you sent me synthetic text without my consent. Please do not reply to this message.

Seller: All our replies are genuine human-to-human communication with you, without using any synthetic text. It's possible our communication style gave you a different impression. We aim to better communicate with you and absolutely did not intend any offense. With every customer, we maintain a conscientious and responsible attitude in our communications.

Me: "we occasionally use LLM assistance for responses"
"without using any synthetic text"
pick one

are all promptfondlers this fucking dumb?

[–] Seminar2250@awful.systems 2 points 3 months ago* (last edited 3 months ago)

is it cynical that i find the idea of a "latent space" a bit goofy? it may just be because i think data science is kind of a joke and that much of machine learning is "we threw shit at our model but don't know what a category error is"

like it's just dimensionality reduction with new vocab?
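
to be less hand-wavy about it, here's a minimal numpy sketch of what i mean (toy random data, purely illustrative): PCA already gives you a "latent space", i.e. low-dimensional coordinates for each point plus a lossy map back to the original space. an autoencoder bottleneck is the same structural picture, just learned and nonlinear.

```python
import numpy as np

# toy data: 100 points in 5 dimensions (made up for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)  # center the data, as PCA assumes

# classic PCA via SVD: keep the top-2 principal directions
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:2].T  # the "latent space": each point as 2 coordinates

# decode back from the latent coordinates (lossy, like a decoder)
X_hat = Z @ Vt[:2]
print("reconstruction error:", np.linalg.norm(X - X_hat))
```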
