- cross-posted to:
- [email protected]
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It's the first AI technology aimed at exposing unreported CSAM at scale.
It differs in that it's a fundamentally different kind of model. This is a classification model; it has no generative capabilities. Even if you got hold of the model and its weights and tried to reverse-engineer an "input" that it would classify as CSAM, the result would most likely look like pure noise to you.
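To illustrate the point (this is a generic, hypothetical sketch, not Thorn's or Hive's actual model), a classifier maps an image to a single score and has no decoder path that could emit images. "Reverse-engineering an input" amounts to gradient ascent on pixels to maximize that score, which with a plain classifier tends to yield adversarial noise rather than a coherent picture:

```python
import torch
import torch.nn as nn

class ContentClassifier(nn.Module):
    """Toy binary classifier: image in, flag probability out. No generator."""
    def __init__(self):
        super().__init__()
        # Tiny convolutional backbone standing in for a real production backbone.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: "flag" vs "don't flag"

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # a probability, not an image

model = ContentClassifier().eval()

# Gradient ascent on a random image to maximize the flag score.
# There is no generative pathway to invert, so the optimized input
# stays essentially noise.
x = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    loss = -model(x).mean()   # push the classifier's score upward
    loss.backward()
    opt.step()
    x.data.clamp_(0.0, 1.0)   # keep pixel values in a valid range

print("final score:", model(x).item())
```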
Moron
Generate porn, classify the output: the result is very young-looking models.
Moron
So you need to have a model that generates CP to begin with. Flawless reasoning there.
Look, it’s clear you have no clue what you’re talking about. Stop demonstrating it, moron.