this post was submitted on 05 Jul 2023
23 points (92.6% liked)

Technology

The top quartile of funds selected by the AI model generated 2.1x the original investment versus an industry average of 1.85x.
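Taking the quoted multiples at face value, the implied edge is easy to compute (figures are from the excerpt above; the interpretation as relative outperformance is mine):

```python
# Relative outperformance implied by the quoted return multiples.
ai_multiple = 2.10       # top quartile of AI-selected funds
industry_multiple = 1.85  # industry average

edge = ai_multiple / industry_multiple - 1
print(f"{edge:.1%}")  # roughly 13.5% more return per dollar invested
```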

[–] Olap@lemmy.world 2 points 2 years ago (2 children)

For now. Companies are quickly going to have prospectuses, quarterly reports, and annual statements sanitised after this.

And that's what AI is really going to do: make the world more grey. Lemmy instance owners: ban LLMs now!

[–] tal@kbin.social 2 points 2 years ago

I think that there are other possible risks.

Exploiting AI fragility is a serious concern. A given "AI" today is vastly simpler than a human. It may operate well under certain assumptions, such as that statements are written to influence human investors rather than AI models. What happens if I craft and inject information specifically to swing AI-driven investments?

I remember going through an article a while back on AI for serious applications, like military use, and one point was that unless you are very rigorously careful about where you pull your training data from -- and a lot of people training models are not -- an adversary may be able to attack an AI-driven system by poisoning its training data.
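The poisoning attack described above can be sketched in a toy example (not from the article; the classifier, data, and labels here are all hypothetical): a nearest-centroid classifier flips its prediction after an adversary slips a handful of mislabeled points into its training set.

```python
# Toy sketch of training-data poisoning: a nearest-centroid
# classifier changes its answer once a few mislabeled points
# are injected into one class's training data.

def centroid(points):
    # Mean of a list of 2D points.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(x, train):
    # Return the label whose class centroid is closest to x.
    best = None
    for label, pts in train.items():
        cx, cy = centroid(pts)
        d = (x[0] - cx) ** 2 + (x[1] - cy) ** 2
        if best is None or d < best[0]:
            best = (d, label)
    return best[1]

clean = {
    "up":   [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)],
    "down": [(-1.0, -1.0), (-1.1, -0.9), (-0.9, -1.2)],
}
query = (0.2, 0.2)
print(classify(query, clean))  # -> 'up'

# The adversary injects three mislabeled points near the query
# into the "down" class, dragging its centroid toward the query.
poisoned = {
    "up":   clean["up"],
    "down": clean["down"] + [(0.3, 0.3), (0.2, 0.1), (0.1, 0.2)],
}
print(classify(query, poisoned))  # -> 'down'
```

Three points out of nine are enough to flip the outcome here, which is the worry: the attacker never touches the model, only the data it learns from.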

[–] jsqribe@feedly.j-cloud.uk 1 points 2 years ago

@RemindMe@mstdn.social 3 days