Senator Marsha Blackburn (R-TN) has launched a formal inquiry into eight of the world's largest technology companies, demanding answers about their efforts to combat Child Sexual Abuse Material (CSAM). The probe targets Meta, X, TikTok, Google, Microsoft, Snap, Amazon, and Apple, and follows critical reports from the National Center for Missing and Exploited Children (NCMEC) alleging significant deficiencies in how these platforms report illegal content.
At the heart of the inquiry are concerns that current detection systems are inadequate for the new threats posed by generative artificial intelligence. According to NCMEC, which serves as the national clearinghouse for CSAM reports, generative AI can create novel, photorealistic abusive content that bypasses traditional detection methods. These systems often rely on matching digital fingerprints (hashes) of known illegal images against a database of previously identified material, a technique rendered ineffective against entirely new, AI-generated imagery that has no existing hash to match.
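To illustrate the limitation NCMEC describes, the sketch below shows the general shape of hash-based matching using an exact cryptographic hash. This is a simplified, hypothetical example: production systems such as Microsoft's PhotoDNA use perceptual hashing that tolerates resizing and re-encoding, but the core weakness is the same in either case, since any genuinely new image has no counterpart in the database of known hashes.

```python
import hashlib

# Hypothetical database of hashes of previously identified images.
# (Illustrative placeholder bytes only.)
known_hashes = {
    hashlib.sha256(b"previously-identified-image-bytes").hexdigest(),
}

def is_known_match(image_bytes: bytes) -> bool:
    """Return True only if this exact file matches a known entry."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

# An exact copy of a known image is flagged:
print(is_known_match(b"previously-identified-image-bytes"))  # True
# Novel content, such as a newly generated image, is not:
print(is_known_match(b"entirely-new-image-bytes"))           # False
```

Because the lookup depends entirely on the image (or something visually close to it) already being in the database, hash matching can only catch recirculated material, which is why newly generated content slips past it.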
In a letter to the tech giants, Senator Blackburn requested detailed information on their current detection technologies, reporting statistics, and policies specifically addressing AI-generated CSAM. The inquiry seeks to understand what proactive measures are being taken to prevent platforms from being used to create or distribute this content, placing direct pressure on Silicon Valley to enhance its safety protocols.
This investigation is part of a broader legislative push to increase tech company accountability for harmful content. It aligns with ongoing debates surrounding laws like the EARN IT Act and the Kids Online Safety Act (KOSA), which aim to create new legal responsibilities for platforms to protect children. The outcome of the inquiry could influence future regulation and force significant changes in how major technology companies police their services.