FaceDeer

joined 2 years ago
[–] FaceDeer@fedia.io 7 points 1 year ago (4 children)

If they feel less need to add proper alt-text because people's browsers are doing a better job anyway, I don't see why that's a problem. The end result is better alt text.

[–] FaceDeer@fedia.io 4 points 1 year ago* (last edited 1 year ago)

I'd expect it wouldn't be too hard to expand the context fed into the AI from just the pixels to include adjacent text as well. Multimodal AIs can accept both kinds of input. Might as well start with the basics, though.
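A minimal sketch of what that might look like, assuming an OpenAI-style chat completions payload (the model name, prompt wording, and helper function are illustrative assumptions, and nothing is actually sent anywhere here): the image and the surrounding page text go into the same request, so the model can use both when writing the alt text.

```python
# Sketch: pairing an image with its adjacent page text in one multimodal
# request. Model name and prompt are illustrative; the payload is only
# constructed, never sent.

def build_alt_text_request(image_url: str, adjacent_text: str) -> dict:
    """Assemble a chat-completions-style payload combining image + context."""
    return {
        "model": "gpt-4o-mini",  # hypothetical multimodal model choice
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Write concise alt text for this image. "
                            "Surrounding page text for context:\n"
                            + adjacent_text
                        ),
                    },
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_alt_text_request(
    "https://example.com/chart.png",
    "Figure 2 shows quarterly revenue growth from 2020 to 2023.",
)
```

Feeding the adjacent text this way costs almost nothing extra, which is why starting pixels-only and layering context on later seems plausible.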

[–] FaceDeer@fedia.io 33 points 1 year ago (3 children)

This is in a community specifically on the subject of Reddit.

[–] FaceDeer@fedia.io 25 points 1 year ago (3 children)

Yeah, nothing against Ernest but developing and running kbin is just too big to be a one-man show.

[–] FaceDeer@fedia.io 5 points 1 year ago

There's going to be bubbles everywhere. I've been called a troll and downvoted heavily in various communities because I don't hate Microsoft or AI in general, for example.

[–] FaceDeer@fedia.io 42 points 1 year ago (10 children)

If you want to get away from the Lemmy codebase entirely I can vouch that mBin works quite nicely. I've been on fedia.io for months now and only once or twice hit some kind of technical problem, which was resolved quickly.

[–] FaceDeer@fedia.io 37 points 1 year ago

I would imagine that if an admin is doing this, the modlog could simply be faked; you wouldn't be able to trust anything that the instance reports to the outside world.

[–] FaceDeer@fedia.io 17 points 1 year ago* (last edited 1 year ago) (1 children)

It is true AI, it's just not AGI. Artificial General Intelligence is the sort of thing you see on Star Trek. AI is a much broader term that encompasses large language models, as well as even simpler things like pathfinding algorithms or OCR. The term "AI" has been in use for this kind of thing since 1956; it's not some sudden new marketing buzzword that's being misapplied. Indeed, it's the people insisting that LLMs are not AI who are attempting to redefine a word that's already been in use for a very long time.
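The point that "AI" covers even simple techniques can be made concrete: a plain breadth-first-search pathfinder is a textbook AI algorithm, writable in a few lines (the grid and coordinates here are just illustrative).

```python
# Breadth-first search pathfinding: a classic, simple "AI" technique.
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a grid of 0 (open) / 1 (wall), or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # visited set + path reconstruction
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = bfs_path(grid, (0, 0), (2, 0))  # route around the wall row
```

Nobody would call this AGI, but it falls squarely inside the term "AI" as it has been used since the field was founded.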

You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.

Reminds me of the classic quote from Charles Babbage:

"On two occasions I have been asked, – "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question"

How is the chatbot supposed to know that the information it's been given is wrong?

If you were talking with a human and they thought something was true that wasn't actually true, do you not count them as an intelligence any more?

[–] FaceDeer@fedia.io 22 points 1 year ago (1 children)

You're falling into a no true Scotsman fallacy. There are plenty of uses for recent AI developments, I use them quite frequently myself. Why are those uses not "true" uses?

[–] FaceDeer@fedia.io 8 points 1 year ago (1 children)

You used an LLM for one of the things it is specifically not good at. Dismissing its overall value on that basis is like complaining that your snowmobile is bad at making its way up and down your basement stairs, and so it is therefore useless.

[–] FaceDeer@fedia.io 3 points 1 year ago* (last edited 1 year ago)

Yup. Ironically, it only hurts non-AI-users.
