Wish I had the nerve to blow up a data centre. But spending the rest of my life in prison seems a steep price to pay for setting a company back only a few months.
Comic Strips
Comic Strips is a community for those who love comic stories.
The rules are simple:
- The post can be a single image, an image gallery, or a link to a specific comic hosted on another site (the author's website, for instance).
- The comic must be a complete story.
- If it is an external link, it must be to a specific story, not to the root of the site.
- You may post your own comics or comics made by others.
- If you are posting a comic of your own, a maximum of one per week is allowed (I know, your comics are great, but this rule helps avoid spam).
- The comic can be in any language, but if it's not in English, OP must include an English translation in the post's 'body' field (note: you don't need to select a specific language when posting a comic).
- Be polite.
- AI-generated comics aren't allowed.
- Adult content is not allowed. This community aims to be fun for people of all ages.
Web of links
- !linuxmemes@lemmy.world: "I use Arch btw"
- !memes@lemmy.world: memes (you don't say!)
Thank you very much! The red dot is likely smaller...
Though I neither appreciate nor agree with the bomb part! ^^
The work reminded me of the following paper:
Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model’s weights during training, and whether those memorized data can be extracted in the model’s outputs.
While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models...
We investigate this question using a two-phase procedure: (1) an initial probe to test for extraction feasibility, which sometimes uses a Best-of-N (BoN) jailbreak, followed by (2) iterative continuation prompts to attempt to extract the book.
We evaluate our procedure on four production LLMs: Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3, and we measure extraction success with a score computed from a block-based approximation of longest common substring...
Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs...
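The abstract mentions a score computed from a "block-based approximation of longest common substring" but gives no details in this excerpt. A minimal sketch of how such an approximation *could* work (the block size, scoring rule, and function name are my assumptions, not the paper's actual method):

```python
def block_lcs_score(reference: str, output: str, block_size: int = 50) -> float:
    """Approximate longest-common-substring overlap between a reference
    text and a model's output.

    Hypothetical sketch: split the reference into fixed-size blocks,
    mark which blocks appear verbatim in the output, and score the
    longest run of consecutively matched blocks as a fraction of the
    reference length. Exact substring matching on blocks is much cheaper
    than a true longest-common-substring computation.
    """
    blocks = [reference[i:i + block_size]
              for i in range(0, len(reference), block_size)]
    hits = [block in output for block in blocks]

    # Longest run of consecutive matched blocks.
    best = current = 0
    for hit in hits:
        current = current + 1 if hit else 0
        best = max(best, current)

    # Clamp: the final block may be shorter than block_size.
    return min(1.0, best * block_size / max(len(reference), 1))
```

A fully memorized passage would score near 1.0, while output with no verbatim overlap scores 0.0; the block granularity trades precision for speed on book-length references.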
Source 🕊
I'm anti-copyright and anti-corporation.
These ridiculous datacentres consume large volumes of resources purely to benefit the companies, which are closing off human-made content for their own profit.
As long as copyrights exist to restrict me, I'm adamant that they restrict billionaires too.
If they want to abolish copyright altogether, I'm listening. Otherwise, they should pay statutory damages for every work they are pirating with those LLMs.
The problem is that it won't. We essentially already have the best-case scenario: AI slop is non-copyrightable, so if Disney, for example, tries to generate a slop movie, everyone is free to distribute it, and they can't really make any money off it. Extending copyright almost always ends up benefiting corporations, not hurting them.
While many believe that LLMs do not memorize much of their training data
It's sad that even researchers are using language that personifies LLMs...