Fuck AI

Source (Bluesky)

[–] jjjalljs@ttrpg.network 2 points 6 days ago (1 children)

Oh, I see the point you're making.

I assumed that the information was there to be found, and a regular search would have returned it. Thus it would not have taken hours.

Personally I don't really trust the LLMs to synthesize disparate sources.

[–] pixxelkick@lemmy.world 3 points 6 days ago

> Personally I don’t really trust the LLMs to synthesize disparate sources.

The #1 best use case for LLMs is using them as extremely powerful fuzzy searchers over very large datasets, like hunting down published papers on a topic.

Don't actually use their output as the basis for reasoning; use them to find the original articles.

For example, as a software dev, I often use them to find the specific documentation I need. I then go read the actual documentation, but the LLM is exceptionally fast at locating the right document for me.

Basically, using them as a tool to look up and find resources is key, and it's why I was able to find documentation on my pet's symptoms so fast. It would have taken me ages to find those esoteric published papers on my own; there's so much to sift through, especially when many papers cover huge amounts of info and what I'm looking for is one small piece of info in one of them.

But with an LLM I can trim down the search space instantly to a way way smaller set, and then go through that by hand. Thousands of papers turn into a couple in a matter of seconds.
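To make that workflow concrete, here is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and example topic are made-up placeholders rather than anything from the comments. The point is only the shape of the loop: the model's answer is used to shortlist sources, and the shortlisted sources are then read by hand.

```python
# Minimal sketch of the "LLM as fuzzy search, human as reader" workflow described
# above. Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name, prompt wording, and
# example topic are placeholders, not details from the thread.
from openai import OpenAI

client = OpenAI()

# Hypothetical topic; in practice this would be the symptom, error message, or
# API behaviour you are trying to track down.
topic = "rare symptom X in domestic cats"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do for this sketch
    messages=[
        {
            "role": "user",
            "content": (
                "List published papers or official documentation pages likely to "
                f"cover: {topic}. Give only titles, authors or publishers, and "
                "where to find them. Do not summarise their contents."
            ),
        }
    ],
)

# Treat the output purely as a list of leads that narrows the search space.
# Each candidate still has to be located and read in the original source;
# nothing the model says here is used as a fact on its own.
print(response.choices[0].message.content)
```

The same shape works with any model or client: the LLM output never feeds directly into a conclusion, it only decides which originals are worth opening.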