FaceDeer

joined 2 years ago
[–] FaceDeer@fedia.io 0 points 1 year ago

Just in case people think this is a literal excerpt from the article (that was my first impression) the actual survey results were:

33% are very likely to trust
32% are somewhat likely to trust
21% are neutral
14% express some level of distrust

[–] FaceDeer@fedia.io 6 points 1 year ago (1 children)

Or, you're in a bubble and are surprised to discover that most people aren't in it with you.

[–] FaceDeer@fedia.io 4 points 1 year ago (7 children)

But we need to attract a hip young new audience! The sort of audience that doesn't care about Star Trek, and just wants teen drama and unprofessional nonsense!

[–] FaceDeer@fedia.io 23 points 1 year ago

And of course the Russians try to blame everyone but themselves.

The obvious evidence that this was a Kh-101 aside, even if it had been a stray anti-air missile, this damage would still be the direct result of Russia firing on civilian targets. Russia's attempt to deflect blame is utterly pathetic.

[–] FaceDeer@fedia.io 3 points 1 year ago

There's a lot of outright rejection of the possibilities of AI these days, I think because it's turning out to be so capable. People are getting frightened of it and so jump to denial as a coping mechanism.

A couple of weeks ago I read about an LLM developed to translate source code into intermediate representations (a step along the way to full compilation). When I went hunting for a reference to refresh my memory, I found this article from March about exactly what's being discussed here: an LLM that translates assembly language into high-level source code. It looks like this one's just a proof of concept rather than something immediately practical, but prove the concept it does.

I wonder if there are research teams out there sitting on more advanced models right now, fretting about how big a bombshell it'll be when this gets out.

[–] FaceDeer@fedia.io 15 points 1 year ago (6 children)

We're back to "privacy is a good thing even if it enables 'criminals'"? Yesterday there was rather a lot of negativity towards GNU Taler and other means of transferring money privately because they enable tax evasion and such.

[–] FaceDeer@fedia.io 11 points 1 year ago (7 children)

As others have mentioned, it's possible but very complicated. Decompilers produce code that isn't very readable for humans.

I am indeed awaiting the big news headlines that will for some reason catch everyone by surprise when an LLM comes along that's trained to "translate" machine code into a nice, easily comprehensible high-level programming language. It's going to be a really big development; even though it won't make programs legally "open source," it'll effectively make everything source-available.
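To illustrate what I mean about readability (a hypothetical sketch, not output from any particular decompiler or from the article's model): a decompiler typically recovers something like the first function below, with synthetic names and no hint of intent, while the original source looked like the second. The LLM's job would be to get you from the former back to something like the latter.

```c
/* Hypothetical decompiler-style output: names and types are synthetic,
   and the original intent of the code is lost. */
long sub_401620(long a1, long a2)
{
    long v3 = 0;
    long v4;
    for (v4 = 0; v4 < a2; ++v4)
        v3 += *(long *)(a1 + 8 * v4);  /* raw pointer arithmetic, assumes 8-byte longs */
    return v3;
}

/* What the original, human-written source might have looked like. */
long sum_scores(const long *scores, long count)
{
    long total = 0;
    for (long i = 0; i < count; ++i)
        total += scores[i];
    return total;
}
```

Both compile to essentially the same machine code; the difference is entirely in how much a human can get out of reading them.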

[–] FaceDeer@fedia.io 3 points 1 year ago (2 children)

Past results are no guarantee of future performance.

[–] FaceDeer@fedia.io 10 points 1 year ago (1 children)

Just because it worked out in the end doesn't mean it was a good idea to try.

[–] FaceDeer@fedia.io 2 points 1 year ago

DAI has been around for six and a half years at this point.

How exactly is its "scam" supposed to work?

[–] FaceDeer@fedia.io 15 points 1 year ago (7 children)

I'd love to hear about any studies explaining the mechanism of human cognition.

Right now it's looking pretty neural-net-like to me. That's kind of where we got the idea for neural nets from in the first place.

[–] FaceDeer@fedia.io 29 points 1 year ago (31 children)

The meme would work just the same with the "machine learning" label replaced with "human cognition."
