Doesn’t it follow that AI-generated CSAM can only be produced if the AI has been trained on CSAM?
This article even explicitly says as much.
My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.
The model hasn’t necessarily been trained on CSAM. Instead, people create things called LoRAs (low-rank adaptations): small add-on weight files that steer a model’s image output so it gets better at producing very specific content it may have struggled with before. For example, I recently downloaded some that help Stable Diffusion create better images of battleships from Warhammer 40k. My guess is that criminals are training their own versions for kiddy porn etc. A rough sketch of what applying a LoRA looks like is below.
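For anyone curious, here’s a minimal sketch of the mechanism using Hugging Face’s diffusers library, sticking with the benign battleship example. The model ID and the LoRA file name are placeholders for illustration, not specific checkpoints:

```python
# Minimal sketch: applying a LoRA on top of a base Stable Diffusion model
# with Hugging Face's diffusers library. The base model's weights stay
# unchanged; the LoRA is a small set of extra weights loaded at inference.
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (illustrative model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Load LoRA weights from a local file. "wh40k_battleships.safetensors" is a
# made-up name standing in for any community-trained LoRA.
pipe.load_lora_weights(".", weight_name="wh40k_battleships.safetensors")

# The base model may never have seen this niche subject during training;
# the LoRA's low-rank weight updates nudge it toward the style it was
# fine-tuned on.
image = pipe(
    "a gothic battleship in orbit, warhammer 40k style",
    num_inference_steps=30,
).images[0]
image.save("battleship.png")
```

The point is that the LoRA file is tiny compared to the base model, which is exactly why they’re cheap to train on a small set of images and easy to share separately from the model itself.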