
Can AI Child Porn Be Stopped?

The Supreme Court has held that child pornography is an exception to the First Amendment, and few would argue otherwise. But it also limited the exception to actual kiddie porn, not fake computer-generated porn where no child was sexually abused.

The court held, 6 to 3, that the Child Pornography Prevention Act is overly broad and unconstitutional, despite its supporters’ arguments that computer-generated smut depicting children could stimulate pedophiles to molest youngsters.

“The sexual abuse of a child is a most serious crime and an act repugnant to the moral instincts of a decent people,” Justice Anthony M. Kennedy wrote in the majority decision. Nevertheless, he said, if the 1996 law were allowed to stand, the Constitution’s First Amendment right to free speech would be “turned upside down.”

Even then, however, C.J. Rehnquist realized that technology would soon come up with new and worse ways to create virtual porn, leaving real children at risk.

Chief Justice William H. Rehnquist wrote the dissent. “Congress has a compelling interest in ensuring the ability to enforce prohibitions of actual child pornography, and we should defer to its findings that rapidly advancing technology soon will make it all but impossible to do so,” he wrote.

And while the Court held that part of the Child Pornography Prevention Act of 1996 was unconstitutional, it left a third section intact.

The High Court voided two sections of the law, but a third section was not challenged and is still in force. It bans some computer alterations of innocent pictures of children: grafting a child’s school picture onto a naked body, for example.

With the now-ubiquitous availability of AI, we’re not only there, but inundated with AI-generated fakes that use the face of a real child atop naked, and often sexual, images. Because of the ease with which these images can be generated, the problem has become overwhelming.

The images are indistinguishable from real ones, experts say, making it tougher to distinguish an actual victim from a fake one. “The investigations are way more challenging,” said Lt. Robin Richards, the commander of the Los Angeles Police Department’s Internet Crimes Against Children task force. “It takes time to investigate, and then once we are knee-deep in the investigation, it’s A.I., and then what do we do with this going forward?”

Law enforcement agencies, understaffed and underfunded, have already struggled to keep pace as rapid advances in technology have allowed child sexual abuse imagery to flourish at a startling rate. Images and videos, enabled by smartphone cameras, the dark web, social media and messaging applications, ricochet across the internet.

Investigation, not to mention prosecution, is not only difficult because of the ease of creation and sheer volume of images involved, but because technology has put up significant, perhaps insurmountable, hurdles.

The use of artificial intelligence has complicated other aspects of tracking child sex abuse. Typically, known material is assigned a string of numbers computed from its content, a hash that amounts to a digital fingerprint, which is used to detect and remove illicit content. If the known images and videos are modified, the material appears new and is no longer associated with the digital fingerprint.
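To make the mechanism concrete, here is a minimal Python sketch of hash-based matching, using a cryptographic hash purely for illustration. Real detection systems, such as Microsoft’s PhotoDNA, use perceptual hashes that tolerate minor edits, but the dynamic is the same: any change the fingerprint doesn’t survive makes the material look brand new to the matching system.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that serves as the content's fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for a known image; a real file's contents would go here.
original = b"\x89PNG...bytes of a known image..."
modified = original + b"\x00"  # a trivial one-byte alteration

print(fingerprint(original))  # matches the entry in the known-material database
print(fingerprint(modified))  # entirely different digest, so no match is found
```

The toy example overstates the fragility (perceptual hashes are built to survive small edits), but AI-generated material is novel by construction, so there is no database entry for it to match in the first place.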

While United States IP addresses are hard enough to track, and those fingerprints change with every modification of an image, much of the problem derives from outside the United States, beyond the reach of our law enforcement in any event.

Adding to those challenges is the fact that while the law requires tech companies to report illegal material if it is discovered, it does not require them to actively seek it out.

And therein lies the rub: if laws and their enforcers can’t stop the creators of AI-generated child porn, they can reach the transmitters of these images.

While more than 90 percent of CSAM [child sexual abuse material] reported to NCMEC is uploaded in countries outside the United States, the vast majority of it is found on, and reported by, U.S.-based online platforms, including Meta’s Facebook and Instagram, Google, Snapchat, Discord and TikTok.

And the leading “experts” dealing with this problem are not known for their concern for the “cultish” First Amendment.

Wednesday’s Senate hearing will test whether lawmakers can turn bipartisan agreement that CSAM is a problem into meaningful legislation, said Mary Anne Franks, professor at George Washington University Law School and president of the Cyber Civil Rights Initiative.

“No one is really out there advocating for the First Amendment rights of sexual predators,” she said. The difficulty lies in crafting laws that would compel tech companies to more proactively police their platforms without chilling a much wider range of legal online expression.

The implications for vague and overbroad laws are one thing. Should that adorable picture of little Timmy playing with his rubber ducky in the bathtub you sent to Aunt Sadie land you in federal prison for half a decade? Should tween Timmy be sent to reform school for putting Hannah Montana’s face on Taylor Swift’s naked body? But coopting internet enterprises as the pornography police upon pain of prosecution or liability presents a huge incentive for Meta, etc., to shut down anything that remotely seems wrong to its algos. And when it does, to whom do you complain?

Much like Franks’ last jihad, revenge porn, which meshes unsurprisingly well with her latest foray into internet censorship, free speech takes a distant back seat to the fears and harms generated by AI fake child porn. As she correctly notes, crafting laws that don’t violate the First Amendment will be difficult, if not impossible. But which will Congress give away, the First Amendment or AI-generated fake child porn?
