The filing, posted on September 19, is heavily redacted, and Hive cofounder and CEO Kevin Guo told MIT Technology Review that he couldn't discuss the details of the contract, but confirmed that it involves use of the company's AI detection algorithms for child sexual abuse material (CSAM).
The filing cites data from the National Center for Missing and Exploited Children, which reported a 1,325% increase in incidents involving generative AI in 2024. "The sheer volume of digital content circulating online necessitates the use of automated tools to process and analyze data efficiently," the filing reads.
The first priority of child exploitation investigators is to find and stop any abuse currently happening, but the flood of AI-generated CSAM has made it difficult for investigators to know whether images depict a real victim currently at risk. A tool that could successfully flag real victims would be a huge help when they try to prioritize cases.
Identifying AI-generated images "ensures that investigative resources are focused on cases involving real victims, maximizing the program's impact and safeguarding vulnerable individuals," the filing reads.
Hive AI offers AI tools that create videos and images, as well as a range of content moderation tools that can flag violence, spam, and sexual material and even identify celebrities. In December, MIT Technology Review reported that the company was selling its deepfake-detection technology to the US military.