
Asshole Design (web edition)


Similar to an “Asshole Design” gripe fest but with a specific focus on #enshitified websites. It’s also a discussion on how to improve your web UX.

Share your strategies for how to deshitify the web here (or at !disenshittify@lemmy.cafe)

Related:
!enshitification@slrpnk.net

Prefix Tags

[ew]: prefix for posts about a specific “Enshitified Website”.


A chatbot erroneously told a traveler that he qualified for free travel in a particular situation. I don’t recall the exact circumstances, but it was something like a last-minute trip for a funeral. The airline then denied him the free ticket. He sued. The court found that the chatbot represents the company, so the company is legally bound by the agreements the bot makes.

It’s interesting to note that some companies now present an agreement that you must click to accept before talking to their chatbot. E.g., from Flixbus:

You are interacting with an automated chatbot. The information provided is for general guidance only and is not binding. If you require further clarification or additional information, please contact a member of our staff directly or check out our terms and conditions and privacy notice.

(emphasis mine)

I’m not in Canada, so that claim may well be true where I am. I just wonder whether such an agreement is enforceable in Europe.

[–] JeeBaiChow@lemmy.world 4 points 3 weeks ago (1 children)

So companies now have bots on their official customer engagement channels, but what the bots say is not binding. Semi-autonomous vehicles are ferrying passengers, but are not liable in a crash. Bots are writing legal papers, but the precedents they cite may or may not exist. Term papers are being written by bots, but are factually inaccurate or blatantly false. Recently an AI summary claimed Novak Djokovic had beaten Carlos Alcaraz in the US Open, when they are actually only meeting tonight. Under what circumstances can I trust AI?

Someone tell me once again how this thing is supposed to replace humans and make our jobs easier...? Anyone?

[–] activistPnk@slrpnk.net 2 points 3 weeks ago

Which jurisdiction are you referring to, exactly?

Is Canada the only country to make companies responsible for what their bots agree to?

[–] Hirom@beehaw.org 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Either it's dependable and useful, or it's not.

When they realize there's a chatbot giving unreliable info on their website, they should scrap or replace it. Not add fine print explaining that it's unreliable and that people shouldn't rely on it. Stop wasting everyone's time.

[–] activistPnk@slrpnk.net 1 points 3 weeks ago* (last edited 3 weeks ago)

Either it’s dependable and useful, or it’s not.

Or, worse yet, it causes injury.

When they realize there’s a chatbot giving unreliable info on their website, they should scrap or replace it.

That’s not what I’m finding. You overestimate their competence. But even if they are competent enough to scrap the bot, that only fixes future outcomes; it does not remedy past blunders.

Not add fine print explaining that it’s unreliable and that people shouldn’t rely on it.

You seem not to be grasping the post. Adding a disclaimer is in fact what they have done here.

In Canada, the disclaimer itself would be misinfo (that is, the bot’s info IS legally binding, because the court ruled that the airline had to comply with the bot’s words).

Stop wasting everyone’s time.

Stop wasting everyone’s time with uninformative emotional drivel.