> "The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims," it says, "revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity."
The word "inherently" there seems like a big stretch to me. I see how it could be harmful to them, but I also see an argument for how such AI generated material is a substitute for the actual CSAM. Has this actually been studied, or is it a taboo topic for policy research?
AI-generated "CSAM" is the perfect form of kompromat. Any computer with a GPU can generate images that an American judge can find unpalatable. Once tarred by the sex offender brush and cooled by 20+ years in prison for possession, any individual will be effectively destroyed. Of course, no real children are being abused, which makes it all the more ludicrous.
Between the 3D-printed weapons and the AI CSAM, this year is already shaping up to be wild in terms of misuses of technology. I suppose that's one downside of adoption.
It will be interesting to see how this pans out in terms of the First Amendment. Without a victim, it's hard to say how the courts will rule. They could hold that banning it is technically unconstitutional but allow it anyway for the sake of the general public, similar to the Supreme Court's ruling on DUI checkpoints.
If it's AI-generated, it is fundamentally not CSAM.
The reason we shifted to the terminology "CSAM", away from "child pornography", is specifically to indicate that it is Child Sexual Abuse Material: that is, an actual child was sexually abused to make it.
You can call it child porn if you really want, but do not call something that never involved the abuse of a real, living, flesh-and-blood child "CSAM". (Or "CSEM", with "Exploitation" rather than "Abuse", which is used in some circles.) This includes drawings, CG animations, written descriptions, videos where such acts are simulated with a consenting (or, tbh, non-consenting; it can be horrific, illegal, and unquestionably sexual assault without being CSAM) adult, as well as anything AI-generated.
These kinds of distinctions in terminology are important, and yes I will die on this hill.
In British Columbia, both text-based accounts (real and fictional, such as stories) and drawings of underage sexual activity are illegal (basically any sort of depiction, even if it just comes out of your mouth).
Under new law, cops bust famous cartoonist for AI-generated CSAM
The word "inherently" there seems like a big stretch to me. I see how it could be harmful to them, but I also see an argument for how such AI generated material is a substitute for the actual CSAM. Has this actually been studied, or is it a taboo topic for policy research?
The reason we shifted to the terminology "CSAM", away from "child pornography", is specifically to indicate that it is Child Sexual Abuse Material: that is, an actual child was sexually abused to make it.
You can call it child porn if you really want, but do not call something that never involved the abuse of a real, living, flesh-and-blood child "CSAM". (Or "CSEM"—"Exploitation" rather than "Abuse"—which is used in some circles.) This includes drawings, CG animations, written descriptions, videos where such acts are simulated with a consenting (or, tbh, non consenting—it can be horrific, illegal, and unquestionably sexual assault without being CSAM) adult, as well as anything AI-generated.
These kinds of distinctions in terminology are important, and yes I will die on this hill.
So California is just starting to catch up.