sailor_sega_saturn

joined 2 years ago

πŸ™ƒπŸ™ƒπŸ™ƒ

[–] sailor_sega_saturn@awful.systems 14 points 16 hours ago* (last edited 16 hours ago) (4 children)

NotAwfulTech and AwfulTech converged with some ffmpeg drama on twitter over the past few days, starting here and still ongoing. This is about an AI-generated security report by Google's "Big Sleep" (with no corresponding Google-authored fix, AI or otherwise). Hackernews discussed it here. Looking at ffmpeg's security page, there have been around 24 Big Sleep reports fixed.

ffmpeg pointed out a lot of stuff along the lines of:

  • They are volunteers
  • They don't have enough money
  • Certain companies that do use ffmpeg and file security reports also have a lot of money
  • Certain ffmpeg developers are willing to enter consulting roles for companies in exchange for money
  • Their product has no warranty
  • Reviewing LLM generated security bugs royally sucks
  • They're really just in this for the video codecs, more so than for treating every single use-after-free bug as a drop-everything emergency
  • Making the first 20 frames of certain Rebel Assault videos slightly more accurate is awesome
  • Think it could be more secure? Patches welcome.
  • They did fix the security report
  • They do take security reports seriously
  • You should not run ffmpeg "in production" if you don't know what you're doing.

All very reasonable points, but from the reactions to their tweets you'd think they had proposed killing puppies or something.

A lot of people seem to forget this part of open source software licenses:

BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW

Or that venerable old C code will have memory safety issues for that matter.

It's weird that people are freaking out about some UAFs in a C library. This should really be dealt with in enterprise environments via sandboxing / filesystem containers / ASLR / control-flow integrity / non-executable memory enforcement / only compiling the codecs you need... and oh gee, a lot of those improvements could be upstreamed!
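To make the sandboxing point concrete, here's a minimal sketch of what that can look like (my own illustration, assuming Linux and libseccomp; nothing here is from the thread or from ffmpeg itself): do all the privileged setup first, then drop to a tiny syscall allowlist before the process ever parses untrusted bytes.

```c
/* Hypothetical sketch: sandbox a decode worker with libseccomp (Linux).
 * Not ffmpeg code -- just an illustration of "open files first, then
 * lock down before touching attacker-controlled input".
 * Build: gcc sandbox.c -lseccomp
 */
#include <seccomp.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void enter_sandbox(void)
{
    /* Any syscall not explicitly allowed kills the process. */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
    if (!ctx) {
        fprintf(stderr, "seccomp_init failed\n");
        exit(1);
    }

    /* A pure read-decode-write loop over already-open descriptors needs
     * very little; open(), connect(), execve() etc. stay blocked. */
    int allowed[] = {
        SCMP_SYS(read),  SCMP_SYS(write),  SCMP_SYS(brk),
        SCMP_SYS(mmap),  SCMP_SYS(munmap), SCMP_SYS(mremap),
        SCMP_SYS(futex), SCMP_SYS(exit_group),
    };
    for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++) {
        if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, allowed[i], 0) < 0) {
            fprintf(stderr, "seccomp_rule_add failed\n");
            exit(1);
        }
    }

    if (seccomp_load(ctx) < 0) {
        fprintf(stderr, "seccomp_load failed\n");
        exit(1);
    }
    seccomp_release(ctx);
}

int main(void)
{
    /* Open inputs/outputs and initialize the decoder *before* this point. */
    enter_sandbox();

    /* From here on, only parse untrusted bytes. A UAF in the parser now
     * lands inside a process that can't open files or spawn anything. */
    char buf[4096];
    ssize_t n;
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n); /* stand-in for real decode work */
    return 0;
}
```

The allowlist would obviously need tuning per codec and per libc, and real deployments layer this with the other items above (containers, CFI, building only the codecs you need) -- none of which requires volunteer maintainers to treat every fuzzer-found UAF as an emergency.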

[–] sailor_sega_saturn@awful.systems 12 points 5 days ago* (last edited 5 days ago) (2 children)

Grokipedia just dropped: https://grokipedia.com/

It's a bunch of LLM slop that someone encouraged to be right wing with varying degrees of success. I won't copy paste any slop here, but to give you an idea:

  • Grokipedia's article on Wikipedia uses the word "ideological" or "ideologically" 23 times (compared with two uses in Wikipedia's own article about itself).
  • Any articles about transgender topics tend to mix in lots of anti-transgender misinformation / slant, and use phrases like "rapid-onset gender dysphoria" or "biological males". The last paragraph of the article "The Wachowskis" is downright unhinged.
  • The articles tend to be long and meandering. I doubt even Grokipedia proponents will ultimately get much enjoyment out of it.

Also certain articles have this at the bottom:

The content is adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License.

[–] sailor_sega_saturn@awful.systems 8 points 1 week ago (3 children)

Check out the graphics on their homepage. It has that terrible "scroll-driven" web design, but the graphics look like placeholder art cooked up by a programmer.

Usually these sorts of VC-bait companies at least hire a graphic designer, but I guess that's not actually necessary.

[–] sailor_sega_saturn@awful.systems 8 points 1 week ago (6 children)

Crypto Investor Proposes 450-Foot Statue of Greek God on Alcatraz Island is a story making the rounds in the press lately and aaaaaah I hate it. I'd say something more coherent than that but it's already given me quite a headache.

He has a personal website as well as a website for his stupid statue idea. Both are buggy and ugly -- apparently after setting aside $450 million for a dumb statue he has none left for good website coding.

[–] sailor_sega_saturn@awful.systems 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Yet another billboard.

https://www.reddit.com/r/bayarea/comments/1ob2l2o/replacement_ai_billboard_in_san_francisco_who/

https://replacement.ai/

This time the website is a remarkably polished satire and I almost liked it... but the email it encourages you to send to your congressperson is pretty heavy on doomer talking points and light on actual good ideas (but maybe I'm being too picky?):

I am a constituent living in your district, and I am writing to express my urgent concerns about the lack of strong guardrails for advanced AI technologies to protect families, communities, and children.

As you may know, companies are releasing increasingly powerful AI systems without meaningful oversight, and we simply cannot rely on them to police themselves when the stakes are this high. While AI has the potential to do remarkable things, it also poses serious risks such as the manipulation of children, the enablement of bioweapons, the creation of deepfakes, and significant unemployment. These risks are too great to overlook, and we need to ensure that safety measures are in place.

I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections, as local efforts can serve as crucial safeguards.

Thank you for your time and attention to this critical issue.

I'm starting to think some of these tech skeptics are only pretending to be skeptics.

[–] sailor_sega_saturn@awful.systems 9 points 2 weeks ago* (last edited 2 weeks ago)

The latest in the long line of human-hostile billboards:

https://www.reddit.com/r/bayarea/comments/1o8s3lz/humanity_had_a_good_run_billboard/

https://dearworld.ai/

This is positioning itself as an AI doomer website, but it could also be an attempt at viral marketing. We'll see, I guess.

[–] sailor_sega_saturn@awful.systems 5 points 3 weeks ago* (last edited 3 weeks ago)

That Wikipedia article is cursed:

For instance, in discussions on climate change mitigation, countries with lesser contributions to greenhouse gas emissions might still benefit from global efforts to reduce emissions, enjoying a stable climate without proportionally shouldering the costs of emission reductions.

[–] sailor_sega_saturn@awful.systems 6 points 3 weeks ago (1 children)

I'm getting a lot of questions already answered by my "before anyone asks I'm pro LGBTQ and pro immigrant" shirt.

[–] sailor_sega_saturn@awful.systems 14 points 3 weeks ago* (last edited 3 weeks ago) (6 children)

New AI alignment problem just dropped: https://xcancel.com/elonmusk/status/1976304803744501775

Best I can do now is try to make sure that at least one AI is truth-seeking and not a super woke nanny with an iron fist that wants to turn everyone into diverse women 😬

Edit: It only just now occurred to me that he's probably whining about generative AI rather than an army of superintelligent robots marching across the earth transing people, but I'm leaving my comment.

I tend to think of Toys (1992) for these sorts of themes, though I haven't watched the film from start to finish since I was a kid. It's about the militarization of a wealthy family's toy factory and has a lot of scenes that stuck with me.

It's a Christmas family movie that reviewed horribly, so it definitely counts as a cult classic, but those who like it tend to really like it.

 

You may remember this YouTuber from such famous videos as "Harder Drive", "Uppestcase and Lowestcase Letters", or "30 Weird Chess Algorithms". He tends to put out videos around once a year, often about not-awful machine learning.

This time it's a video about solving a horrible high-dimensional optimization problem involving convex polyhedra, as well as 100%-clearing Call of Duty: Black Ops 6.

https://www.youtube.com/watch?v=QH4MviUE0_s

 

https://www.reuters.com/technology/artificial-intelligence/openai-co-founder-sutskevers-new-safety-focused-ai-startup-ssi-raises-1-billion-2024-09-04/

http://web.archive.org/web/20240904174555/https://ssi.inc/

I have nothing witty or insightful to say, but figured this probably deserved a post. I flipped a coin between sneerclub and techtakes.

They aren't interested in anything besides "superintelligence", which strikes me as an optimistic business strategy. If you are "cracked", you can join them:

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

 

Saw the title and knew I had to post here. Not quite as big of a self-own as Square Enix selling Tomb Raider for a blockchain / AI pivot, but amusing nonetheless.

Join the excitement of the Olympic Games Paris 2024 with nWay's officially licensed, commemorative Paris 2024 NFT Digital Pin collection!

You can claim a legendary or epic pin showcasing the Paris 2024 mascot holding a flag and waving. You can add these digital gems to your collection through Magic Eden’s friendly NFT marketplace as part of Coinbase's Onchain Summer event. Be sure to have an ETH L2 Base-supported wallet to secure yours today!

Remember when companies let you download wallpapers or something instead of making you figure out what the heck an ETH L2 Base-supported wallet is?

I remember.

 

Follow-up to https://awful.systems/post/1109610 (which I need to go read now because I completely overlooked this)

Now OpenAI has responded to Elon Musk's lawsuit with an email dump containing a bunch of weird nerd startup funding drama: https://openai.com/blog/openai-elon-musk

Choice quote from OpenAI:

As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

OpenAI has learned how to redact text properly now, though. A pity, really.

 

OK OK old news I know. But this is a metal cover of a bitconnect speech that I found pretty amusing: https://www.youtube.com/watch?v=iZ-Ayj-ht_I

 

OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

Orange discuss: https://news.ycombinator.com/item?id=39207291

I don't have any particular section to call out. May post thoughts ~~tomorrow~~ today, it's after midnight oh gosh, but wanted to post since I knew y'all'd be interested in this.

Terrorists could use autocorrect according to OpenAI! Discuss!

 

#1 We're All Gonna Make It: https://www.youtube.com/watch?v=yp0diaVLPrQ

#2 Ethereum: https://www.facebook.com/randizberg/videos/nobodyme-ok-heres-another-music-video-had-a-blast-on-this-collab-with-hila-the-k/531145045349722/

#3 Hello This Is Defi: https://twitter.com/randizuckerberg/status/1494416366710910992

Surgeon General's Warning: watching all of these back to back may make your brain ooze out of your nose.

 

Don't mind me I'm just here to silently scream into the void

Edit: I'm no good at linking to HN apparently, made link more stable.
