doesn’t it follow that AI-generated CSAM can only exist if the AI was trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.
That’s not exactly how it works.
The model can “understand” different concepts and combine them without having seen that combination beforehand.
As for the training thing, that would more likely be a LoRA. LoRAs are like add-ons you attach to a model to make it draw certain things better (a specific character, a pose, etc.); they aren’t part of the base model.
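The “add-on” framing can be made concrete: a LoRA stores only a small low-rank update that gets added on top of the frozen base weights at inference time, so the base model itself is never modified. A toy sketch of that idea (made-up dimensions, not a real model):

```python
import numpy as np

# Toy illustration of the LoRA idea: a frozen base weight matrix W
# plus a small low-rank "add-on" (B @ A) that can be attached or
# detached without ever changing W itself. Dimensions are invented.
d_out, d_in, rank = 8, 8, 2

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen base-model weight
A = rng.standard_normal((rank, d_in))   # trainable LoRA factor
B = np.zeros((d_out, rank))             # initialised to zero: no effect yet

def forward(x, use_lora=False, scale=1.0):
    """Apply the layer; the LoRA term is an optional additive correction."""
    y = W @ x
    if use_lora:
        y = y + scale * (B @ (A @ x))
    return y

x = rng.standard_normal(d_in)
# With B still zero, attaching the LoRA changes nothing:
assert np.allclose(forward(x), forward(x, use_lora=True))
# Only A and B (2 * rank * d params) would be trained, never W.
```

The point of the sketch: the two small factors hold everything the add-on learned, which is why a LoRA can be distributed and applied separately from the base model.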