Calls Grow For A Federal AI Incident Database As Corporate Risk Concerns Mount

Support is building for a national AI incident database to track failures and near-misses, as companies, regulators and researchers seek clearer visibility into emerging risks.

Key points:

  • AI governance groups are urging the U.S. to create a centralized database to track serious AI failures.
  • Supporters argue incident reporting could improve safety, trust and legal defensibility.
  • Skeptics warn overly broad mandates could increase compliance costs and litigation risk.

The push for stronger oversight of artificial intelligence is moving beyond principles and into infrastructure, as support grows for a federal database to track when AI systems fail. Advocates say a centralized reporting mechanism could help companies, regulators and the public identify emerging risks before they escalate.

The proposal, detailed in a recent Bloomberg Law report, draws comparisons to long-standing federal systems that track aviation accidents and medical-device failures. The idea is to create a similar national repository for AI incidents, allowing organizations to report breakdowns, near-misses and high-consequence events.

The Future Society, an AI governance nonprofit, is among the groups advocating for the database. Its U.S. AI governance director, Caroline Jeanmarie, argues that the absence of centralized reporting prevents companies from learning from one another’s failures. Without shared data, she said, organizations are left to manage risks in isolation, deepening what she described as a “trust deficit” around AI systems.

Momentum is building as AI-related incidents increase. Data from the Organisation for Economic Co-operation and Development's publicly available AI incident monitoring initiative shows reported AI incidents rose by nearly 50% between April and October. High-profile missteps in the U.S. have included private conversations with large language models appearing in search results and autonomous agents taking unintended actions.

Under the proposal, reporting obligations would be tiered. Mandatory disclosures would apply to high-impact events involving deaths, critical infrastructure disruptions or chemical, biological, radiological or nuclear risks. Lower-level issues, such as operational failures or near-misses, would be reported voluntarily, creating a broader dataset without imposing uniform mandates.

The concept aligns with language in the White House’s AI Action Plan, which calls for incorporating AI incident response into existing public- and private-sector frameworks. The plan states that the federal government should promote standardized approaches to AI incident response, signaling openness to more formal reporting mechanisms.

Not everyone is convinced. Adam Thierer, a senior fellow at the R Street Institute, cautioned that overly expansive disclosure requirements could burden developers and produce low-value information. He also warned that mandatory reporting could expose companies to litigation risk if incident data is later used against them.

Proponents counter that transparency can reduce, rather than increase, legal exposure. Jeanmarie argues that companies participating in industry-wide safety monitoring can demonstrate reasonable care and due diligence if disputes arise, particularly where they can show they learned from reported incidents and improved their systems.

Industry executives see parallels with cybersecurity. Ben Colman, CEO of deepfake detection firm Reality Defender, noted that hidden breaches tend to surface eventually, often to the detriment of consumers and responsible actors. In his view, disclosure requirements would encourage better behavior across the AI supply chain by making failures harder to conceal.

Researchers echo that point. Sean McGregor, who helped launch the open-source AI Incident Database, argues that consistent government-backed data collection is essential for public trust. Without it, he said, the actions of the least responsible companies risk shaping perceptions of the entire sector.

 
