semibreve42

joined 2 years ago
[–] semibreve42@lemmy.dupper.net 6 points 2 years ago (2 children)

If a large corp wants to do what you’re suggesting, they don’t need to launch a big announced project.

They can spin up a federated instance with just one user and no references to who owns it, then have patsy accounts on other instances subscribe to their instance and get all the data they want sent to their semi secret instance.

It would be very difficult to identify this in a large, healthy federation with tons of users and lots of small personal instances.
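In ActivityPub terms, the "subscribe" step is just an ordinary Follow activity addressed to the community's actor, so nothing about it looks unusual on the wire. Roughly like this (instance names here are made up for illustration):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Follow",
  "actor": "https://big-instance.example/u/patsy",
  "object": "https://semi-secret.example/c/somecommunity"
}
```

Once the Follow is accepted, the semi-secret instance receives the community's activity stream like any other subscriber.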

[–] semibreve42@lemmy.dupper.net 4 points 2 years ago (3 children)

Post it on relevant Reddit threads?

[–] semibreve42@lemmy.dupper.net 9 points 2 years ago (3 children)

In the US, with slander and libel, there are two standards.

If someone is a public figure, they have to show "actual malice" - that the statement was made knowing it was false, or with reckless disregard for the truth - in order to be successful. That's the scenario you're describing.

If you are not a public figure, that higher bar doesn't apply - you can sue over a false statement by showing harm to your reputation or similar.

So the answer on that turns on whether Christian Selig is a public figure or not - I do not know the answer to that question.

[–] semibreve42@lemmy.dupper.net 0 points 2 years ago (1 children)

Super cool approach. I wouldn't have guessed it would be that effective if someone had explained it to me without the data.

I'm curious how easy it is to "defeat". If you take an AI generated text that is successfully identified with high confidence and superficially edit it to include something an LLM wouldn't usually generate (like a few spelling errors), is that enough to push the text out of high confidence?

I ask because I work in higher ed and have been sitting on the sidelines watching the chaos. My understanding is that there's probably no way to automate LLM detection with high enough certainty for it to be used as cheat detection in an academic setting - the false positive rate is way too high.
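For concreteness, here's a toy sketch of the kind of superficial edit I mean - just swapping adjacent letters in a few words. (The detector itself is hypothetical; this only shows the perturbation side.)

```python
import random

def inject_typos(text, rate=0.03, seed=0):
    """Swap two adjacent letters in a fraction of words, mimicking human spelling slips."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        if len(word) > 3 and rng.random() < rate:
            i = rng.randrange(len(word) - 1)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        out.append(word)
    return " ".join(out)

# Hypothetical usage against some detector (not a real library):
#   score_before = detector.confidence(ai_text)
#   score_after  = detector.confidence(inject_typos(ai_text))
```

The interesting question is how far `score_after` drops for a perturbation this cheap.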

[–] semibreve42@lemmy.dupper.net 2 points 2 years ago

It’s not quite identical, but Sniper Elite 5 may scratch the itch for you.

[–] semibreve42@lemmy.dupper.net 3 points 2 years ago (1 children)

Thank you for the answer.

Any suggestions on further reading?

[–] semibreve42@lemmy.dupper.net 1 point 2 years ago

Not sure then, sorry :/

[–] semibreve42@lemmy.dupper.net 2 points 2 years ago (4 children)

Yeah. Did you take the container down and bring it back up, and see if the issue is resolved?

[–] semibreve42@lemmy.dupper.net 1 point 2 years ago (6 children)

Does your docker network setup have the lemmy server on an internal network with no external access? The default docker config is set up that way and caused what you're describing for me. The fastest fix is to comment out the "internal: true" line in the docker-compose file, but you may want to consider the security implications of that.
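For reference, the relevant bit of the compose file looks roughly like this (service and network names may differ in your setup):

```yaml
networks:
  lemmyinternal:
    driver: bridge
    # internal: true   # commenting this out lets the lemmy service reach external networks
  lemmyexternalproxy: {}

services:
  lemmy:
    networks:
      - lemmyinternal
```

With `internal: true` set, containers on that network can talk to each other but have no route out, which is what produces the symptom described above.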

[–] semibreve42@lemmy.dupper.net 3 points 2 years ago (1 children)

I had no idea that subreddit existed… if you start a m4/3 community here I’ll join and submit some content.

[–] semibreve42@lemmy.dupper.net 3 points 2 years ago* (last edited 2 years ago)

Mostly I'm depending on the reverse proxy, yes.

Otherwise there's no critical data on the box that could cause a problem for me if the server was owned and everything exfiltrated. Worst case, if I had to completely wipe the box, it would be annoying but not worse than that.

[–] semibreve42@lemmy.dupper.net 9 points 2 years ago (7 children)

Interesting article, thank you for sharing.

I almost stopped reading at the octopus analogy because I think it's pretty obviously flawed, and I assumed the rest of the article might be as well, but it wasn't.

A question I have: the article states as fact that the human mind is much more complex than, and functions differently from, an LLM. My understanding is that we still don't have a great consensus on how our own brains operate - how we actually think. Is that out of date? I'm not suggesting we're all fundamentally "meat LLMs", to simplify extremely, but I also wasn't aware we'd disproven that.

If anyone has some good reading on the above to point to I'd love to get links!
