Key points:
- AI governance groups are urging the U.S. to create a centralized database to track serious AI failures.
- Supporters argue incident reporting could improve safety, trust and legal defensibility.
- Skeptics warn overly broad mandates could increase compliance costs and litigation risk.
The push for stronger oversight of artificial intelligence is moving beyond principles and into infrastructure, as support grows for a federal database to track when AI systems fail. Advocates say a centralized reporting mechanism could help companies, regulators and the public identify emerging risks before they escalate.
The proposal, detailed in a recent Bloomberg Law report, draws comparisons to long-standing federal systems that track aviation accidents and medical-device failures. The idea is to create a similar national repository for AI incidents, allowing organizations to report breakdowns, near-misses and high-consequence events.
The Future Society, an AI governance nonprofit, is among the groups advocating for the database. Its U.S. AI governance director, Caroline Jeanmaire, argues that the absence of centralized reporting prevents companies from learning from one another’s failures. Without shared data, she said, organizations are left to manage risks in isolation, deepening what she described as a “trust deficit” around AI systems.
Momentum is building as AI-related incidents increase. Data from the Organisation for Economic Co-operation and Development’s publicly available AI Incidents Monitor shows reported AI incidents rose by nearly 50% between April and October. High-profile missteps in the U.S. have included private conversations with large language models surfacing in search results and autonomous agents taking unintended actions.
Under the proposal, reporting obligations would be tiered. Mandatory disclosures would apply to high-impact events involving deaths, critical infrastructure disruptions or chemical, biological, radiological or nuclear risks. Lower-level issues, such as operational failures or near-misses, would be reported voluntarily, creating a broader dataset without imposing uniform mandates.
The concept aligns with language in the White House’s AI Action Plan, which calls for incorporating AI incident response into existing public- and private-sector frameworks. The plan states that the federal government should promote standardized approaches to AI incident response, signaling openness to more formal reporting mechanisms.
Not everyone is convinced. Adam Thierer, a senior fellow at the R Street Institute, cautioned that overly expansive disclosure requirements could burden developers and produce low-value information. He also warned that mandatory reporting could expose companies to litigation risk if incident data is later used against them.
Proponents counter that transparency can reduce, rather than increase, legal exposure. Jeanmaire argues that companies participating in industry-wide safety monitoring can demonstrate reasonable care and due diligence if disputes arise, particularly where they can show they learned from reported incidents and improved their systems.
Industry executives see parallels with cybersecurity. Ben Colman, CEO of deepfake detection firm Reality Defender, noted that hidden breaches tend to surface eventually, often to the detriment of consumers and responsible actors. In his view, disclosure requirements would encourage better behavior across the AI supply chain by making failures harder to conceal.
Researchers echo that point. Sean McGregor, who helped launch the open-source AI Incident Database, argues that consistent government-backed data collection is essential for public trust. Without it, he said, the actions of the least responsible companies risk shaping perceptions of the entire sector.