
robotics and AI

A community dedicated to advances in artificial intelligence and robotics.

We love the weird, the crazy, and the science. Feel free to share articles on the latest advances and even papers you find interesting.

[–] CamilleMellom@mander.xyz 3 points 2 years ago

In the study, physicians found more inaccuracies and irrelevant information in answers provided by Google’s Med-PaLM and Med-PaLM 2 than in those of other doctors.

It’s a bit like every other use of AI IMO: the challenge is to make people understand that it’s a fancy information retrieval system, and thus it is flawed and not to be blindly trusted. There was a study on its use in professional settings that showed that models such as ChatGPT helped low performers much more than high performers (who saw barely any improvement from the model). If this model is used to help less competent doctors (no judgement, they could just be beginning their careers) while maintaining a certain degree of doubt, then that could be very good.

However, the ramifications of a wrong diagnosis from the AI are quite scary, especially considering that AI tends to repeat the biases of its training dataset, and even curated data is not free of bias.