CamilleMellom

joined 2 years ago
MODERATOR OF
[–] CamilleMellom@mander.xyz 3 points 2 years ago

Such a stupid lie. A lot of policies today are based on this idea of nudges. I know my company tried to improve safety mindsets through that « science », and the fact that it is all fake quite literally killed people.

[–] CamilleMellom@mander.xyz 3 points 2 years ago

I read that paper and it’s really really incredible. The results are super impressive.

[–] CamilleMellom@mander.xyz 2 points 2 years ago

That’s a fairly terrifying scenario. Like putting open doors into our brains

[–] CamilleMellom@mander.xyz 4 points 2 years ago (1 children)

Thank you so much! I was also advised to use dish soap with water :). Is that good?

[–] CamilleMellom@mander.xyz 4 points 2 years ago

Thanks! I know what to google now!

[–] CamilleMellom@mander.xyz 1 points 2 years ago

I’m sorry my answer triggered something; it wasn’t meant to. In my experience people here are nice and friendly, so I hope you manage to feel safe and welcomed.

[–] CamilleMellom@mander.xyz 0 points 2 years ago* (last edited 2 years ago) (2 children)

What 😅? Ok I guess 😅

I was just saying it’s hard to make a company spend more “just” for the environment, but it’s still important to do.

[–] CamilleMellom@mander.xyz 0 points 2 years ago (4 children)

I’m trying at mine and it goes exactly as you would think…

[–] CamilleMellom@mander.xyz 4 points 2 years ago

I have no good advice for this but I’m sorry this is happening to you :(. That’s really not fair and very inconsiderate.

[–] CamilleMellom@mander.xyz 3 points 2 years ago

In the study, physicians found more inaccuracies and irrelevant information in answers provided by Google’s Med-PaLM and Med-PaLM 2 than in those of other doctors.

It’s a bit like every other use of AI IMO: the challenge is to make people understand that it’s a fancy information retrieval system, and thus flawed and not to be blindly trusted. There was a study on use in professional settings that showed that models such as ChatGPT helped low performers much more than high performers (who saw barely any improvement from the model). If this model is used to help less competent doctors (no judgement, they could be beginning their careers) while maintaining a certain degree of doubt, then that could be very good.

However, the ramifications of a wrong diagnosis from the AI are quite scary, especially considering that AIs tend to repeat the biases of their training dataset, and even curated data is not exempt from bias.

[–] CamilleMellom@mander.xyz 5 points 2 years ago

I would advise not training your own model, but instead using tools like langchain and chroma in combination with an open model like gpt4all or falcon :).

So in general explore langchain!
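The pattern those tools implement is retrieval-augmented generation: embed your documents, retrieve the most similar ones for a query, and feed them to the model as context. A minimal stdlib-only sketch of the retrieval step (toy bag-of-words embeddings and cosine similarity stand in for what langchain/chroma and a real embedding model would do; the example documents are made up):

```python
# Sketch of the retrieval step in retrieval-augmented generation (RAG).
# Toy bag-of-words "embeddings" + cosine similarity; in practice
# langchain/chroma handle embedding, storage, and retrieval, and the
# retrieved text is prepended to the prompt sent to the LLM.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words counts."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


docs = [
    "Falcon is an open large language model.",
    "Chroma is a vector database for embeddings.",
    "Hiking and climbing are great outdoor hobbies.",
]
context = retrieve("which vector database stores embeddings?", docs, k=1)
# The retrieved passage would then be stitched into the model prompt:
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: ..."
print(context[0])
```

The point is that no training happens anywhere: your own documents only pass through embedding and nearest-neighbour search, which is why this route is so much cheaper than fine-tuning a model.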

[–] CamilleMellom@mander.xyz 2 points 2 years ago

Thanks, I hadn’t realized that!

 

Synthetic data is often used to train models because real data is hard to collect. But now that we have a lot of data generated by AI, we are realizing it may not be as valuable.

 

A community about the latest advances in robotics and AI. Focused on the science and state-of-the-art.

Robotics and AI

!robotics_and_ai@mander.xyz

or mander.xyz/c/robotics_and_ai

 

I'm not sure if I shouldn't mark this one as NSFW X).

I don't often feel like science went too far, but necrobiotics might be a step too far... I tend to agree with this take against it.

 

Hi everyone! Guess I’ll make an intro post to try to find some cool people and cool communities. In the real world I’m the head of a research lab for a company within a university (hopefully the best of both worlds instead of the worst of both), and my research focuses on machine learning and robotics. Otherwise I like math, the environment, and nature. I saw quite a few communities on mander about nature, but not so much about math, machine learning, and robotics?

Outside of work nothing better than a good hike, a nice climbing session, or an evening playing video games :).
