Sen. Josh Hawley (R‑MO) opened Wednesday's Senate Judiciary Committee Crime and Terrorism Subcommittee hearing on AI with a stinging rebuke: "They knew exactly what they were doing. They pirated these materials willfully, as the idea of pirating copyrighted works percolated through Meta." He detailed internal warnings ignored by management: "This is not trivial," one employee cautioned. Another shared an article on "the probability of getting arrested for using torrents, illegal downloads, in the United States."
Bestselling author David Baldacci described the personal toll: "Every single one of my books was presented to me … in three seconds. It really felt like I had been robbed of everything of my entire adult life that I had worked on." He joined authors and legal experts in testifying that AI firms ingested over 200 terabytes of copyrighted text, transforming creative work into unlicensed training fodder.
Hawley flatly called Meta's approach "criminal conduct" rather than "aggressive business." Citing evidence that the company relied on off‑site, non‑company servers, he warned: "Meta trained its AI model to lie to users about what data it had been trained on." This unchecked torrent of data undercuts property rights and offloads costs onto creators, and ultimately onto taxpayers forced to underwrite enforcement.
Unchecked copyright theft rewards tech giants while penalizing the creators and innovators who play by the rules. A security‑first approach demands clear legal guardrails and accountability, lest the U.S. lose its competitive edge to rivals willing to flout the law. With senators and creators alike in agreement, Congress faces a choice: defend the rule of law or let Big Tech write its own copyright rules.