Indeed. And any modern AI training system extensively curates the training data that gets fed to the model, often processing it through other AIs to generate synthetic data from it. The days of early ChatGPT, when LLMs were trained by just dumping giant piles of random text on them and hoping they'd figure it out somehow, are long past.
This reminds me of Nightshade, the supposed anti-art-AI technique that can be defeated simply by resizing the image (something every art AI training pipeline does as a matter of course). It may make people "feel better," but it's not going to have any real impact on anything.
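To see why resizing is so destructive to pixel-level tricks, here's a toy, pure-Python sketch using nearest-neighbour downscaling: most source pixels simply never get sampled, so a perturbation confined to a few pixels vanishes. (Real pipelines use bicubic or Lanczos resampling, which blends pixels rather than dropping them; this is an illustrative simplification, not Nightshade's actual threat model.)

```python
def downscale(pixels, new_w, new_h):
    """Nearest-neighbour downscale of a 2D list-of-lists 'image'."""
    h, w = len(pixels), len(pixels[0])
    return [
        [pixels[y * h // new_h][x * w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# A 4x4 "image" of zeros with one perturbed pixel.
img = [[0] * 4 for _ in range(4)]
img[1][1] = 99  # the "poisoned" pixel

small = downscale(img, 2, 2)
print(small)  # → [[0, 0], [0, 0]] — the perturbation never gets sampled
```

A blending resampler would dilute the perturbation across neighbours instead of dropping it outright, but either way the carefully crafted per-pixel pattern doesn't survive intact.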
A bot that's ignoring robots.txt is likely also pretending to be human. If your site has valuable content that you want to show to humans, how do you tell the humans from the bots?