Off My Chest
RULES:
I am looking for mods!
1. The "good" part of our community means we are pro-empathy and anti-harassment. However, we don't intend to make this a "safe space" where everyone has to be a saint. Sh*t happens, and life is messy. That's why we get things off our chests.
2. Bigotry is not allowed. That includes racism, sexism, ableism, homophobia, transphobia, xenophobia, and religiophobia. (If you want to vent about religion, that's fine; but religion is not inherently evil.)
3. Frustrated, venting, or angry posts are still welcome.
4. Posts and comments that bait, threaten, or incite harassment are not allowed.
5. If anyone offers mental, medical, or professional advice here, please remember to take it with a grain of salt. Seek out real professionals if needed.
6. Please put NSFW behind NSFW tags.
How do you feel about LLMs such as ChatGPT being used to find birth defects, flag problematic readings in radiology, spot design flaws in architecture/engineering, or find performance bottlenecks in code?
LLMs can't do most of the things you list (radiology readings, birth defects...). Those are language models; it's in the name.
What you're thinking of are generally neural networks trained specifically to categorize those particular things. You can hand those tasks to ChatGPT and it will hallucinate an answer that sounds correct to someone who doesn't know better.
The fact that everything gets labeled "AI" makes people like you greatly overestimate what ChatGPT can actually do.
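To make the distinction concrete: a task-specific classifier (like the dedicated models used for radiology triage) is just a model fit to one narrow labeled categorization task, nothing like a chatbot. This is a toy sketch of that idea using plain logistic regression in NumPy on entirely synthetic data; the "scan features" and cluster centers here are made up for illustration.

```python
import numpy as np

# Synthetic "scan features": class 0 clusters near -1, class 1 near +1.
# Purely illustrative data, NOT real medical inputs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (200, 5)),
               rng.normal(+1, 0.5, (200, 5))])
y = np.array([0] * 200 + [1] * 200)

# A minimal task-specific classifier: logistic regression trained
# by plain gradient descent on the log-loss.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)  # gradient step on weights
    b -= 0.1 * np.mean(p - y)            # gradient step on bias

preds = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
acc = np.mean(preds == y)
print(f"accuracy on the one narrow task it was trained for: {acc:.2f}")
```

The point of the sketch: this model does exactly one categorization job it was trained on, and nothing else. Ask it anything outside that, and it has no mechanism to answer at all, whereas an LLM will happily generate plausible-sounding text either way.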
Those aren't LLMs. Except maybe the code one.
Design flaws in engineering? Do you have a source for that? (A practical one, not some experimental PR stunt.)