Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's make this about [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn't give you the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. Do this too often and you will get a vacation to touch grass, away from this community, for 1 or more days. Repeat offenses will result in a perma-ban.
6. Defend your opinion
This is a bit of a mix of rules 4 and 5 to help foster higher quality posts. You are expected to defend your unpopular opinion in the post body. We don't expect a whole manifesto (please, no manifestos), but you should at least provide some details as to why you hold the position you do.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
LLM is just a slow way to do things that have better ways to do them.
Or to have an expensive autocorrect do your thinking.
Upvoted. It’s utterly useless.
So you agree that pressing a button to bring up a box you can query with natural language is a good feature, you just think the LLM part is slow and computationally inefficient? I could agree with that if there were something better proposed. I just see an LLM as a good fit for this because of how dynamic it is, and with the addition of tools to handle specific tasks in a deterministic fashion, it's a powerful tool for users.
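A minimal sketch of what I mean by deterministic tools (names are made up for illustration): the model only picks a tool and its arguments, while the actual work is done by ordinary code that always gives the same output for the same input.

```python
def get_word_count(text: str) -> int:
    """Deterministic task: same input always gives the same output."""
    return len(text.split())

def to_upper(text: str) -> str:
    """Another trivial deterministic task."""
    return text.upper()

# Registry that the model's chosen tool name is matched against.
TOOLS = {
    "get_word_count": get_word_count,
    "to_upper": to_upper,
}

def dispatch(tool_name: str, **kwargs):
    """Run whichever tool the model asked for; refuse anything unknown."""
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)

# e.g. if the model replies with {"tool": "get_word_count", "text": "hello world"}:
print(dispatch("get_word_count", text="hello world"))  # 2
```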
Clearly you haven't worked with one.
It's great for getting detailed references on code, or finding sources for info that would take a LOT longer otherwise.
Not who you're responding to, but I used one extensively in a recent work project. It was a matter of necessity, as I didn't know how to word my question in the technical terms specific to the product, and it was something that was just perfect for search engines to go "I think you actually mean this completely different thing". There was also a looming deadline.
Being able to search using natural language, especially when you know conceptually what you're looking for but not the product- or system-specific technical term, is useful.
Being able to get disparate information that is related to your issue but spread across multiple pages of documentation in one spot is good too.
But detailed references on code? Reliable sources?
I have extensive technical background. I had a middling amount of background in the systems of this project, but no experience with the specific aspects this project touched. I had to double check every answer it gave me due to how critical what I was working on was.
Every single response I got had a significant error, oversight, or massive concealed footgun. Some were resolved by further prompting. Most were resolved by me using my own knowledge to work from what it gave me back to things I could search on my own, and then find ways to non-destructively confirm the information or poke around in it myself.
Maybe I didn't prompt it right. Maybe the LLM I used wasn't the best choice for my needs.
But I find the attitude of singing praises without massive fucking warnings and caveats to be highly dangerous.
Great response.
It’s great until you realize it’s led you down the garden path and the stuff it’s telling you about doesn’t exist.
It’s horrendously untrustworthy.
I've spent many, many hours working with LLMs to produce code. It's an addictive loop, like pulling a slot machine lever. You forget what you're actually trying to accomplish; you just need the code to work. It's kinda scary. But the deeper you get, the worse the code gets. And eventually you realize the LLM doesn't know what it's talking about. Not sometimes. Ever.
It has been useful for me with poorly documented libraries, though I don't have it generate more than code snippets or maybe small utilities.
It's more of an API search engine to me. I find it's about 80% correct but it's easier to search for a specific method to make sure it does what you expect than scroll through pages of generated class documentation, half of which look like internal implementation details I won't need to care about unless I'm really digging into it as a power user.
Also, even if the method isn't correct, or is more convoluted to use than a more direct one, it's usually in the same module as the correct one.
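For example, this is the kind of sanity check I mean before trusting a suggestion; Python and `textwrap.shorten` here are just stand-ins for whatever module and method the LLM points you at:

```python
import inspect
import textwrap

# Method name the LLM suggested (placeholder for this example).
suggested = "shorten"

if hasattr(textwrap, suggested):
    fn = getattr(textwrap, suggested)
    print(inspect.signature(fn))   # check the parameters it really takes
    print(inspect.getdoc(fn))      # and what it actually does
else:
    # The real method is usually at least in the same module,
    # so list what's there and look for something close.
    print([name for name in dir(textwrap) if not name.startswith("_")])
```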
That’s some funny shit.
Clearly you’ve not been fact checking the shit it hallucinates.
Skill issue. I'm better at retrieving and then actioning real and pertinent information than you and an AI combined, guaranteed.
Maybe. It adds to the list of sources you have to check, but I've found I still have to manually check whether a source is actually on topic rather than only tangentially related to what I'm writing about. But that's fair enough, because otherwise it'd be like cheating, having whole essays written for you.
I know it's perhaps unreasonable to ask, but if you can share examples/anecdotes of this I'd like to see them, to better understand how people are utilising LLMs.