Key points:
- Norm Ai announced a research partnership with Stanford’s CodeX center on agentic AI.
- The collaboration explores how existing law applies to AI agents in high-stakes use.
- Focus includes compliance AI agents supervising business AI agents with human oversight.
Regulatory AI developer Norm Ai has launched a research partnership with Stanford's CodeX center to study governance models for agentic AI systems, Law.com reported. The center, jointly run by Stanford Law School and the university's computer science department, will work with Norm Ai to explore how existing legal frameworks can be applied to AI agents as they increasingly take on roles once performed by human decision-makers.
Founder and CEO John Nay said the initiative aims to answer pressing questions about responsibility and oversight as AI enters high-stakes domains. Norm Ai already develops agentic systems on its no-code Legal Engineering Automation Platform, which conducts first-pass reviews of regulatory requirements, company policies, and legal obligations.
The research will consider whether human governance frameworks can be adapted to regulate autonomous AI agents. Nay described a future in which AI bifurcates into business agents, which perform functions such as marketing and customer service, and compliance agents, which monitor and supervise the business agents to keep them within regulatory and ethical boundaries.
“You need, basically, the legal and compliance AI agents to supervise and monitor and guide the business AI agents,” Nay said. He added that routing nuanced and subjective issues for human oversight is a core challenge of the partnership’s work.
The collaboration builds on CodeX’s expanding portfolio of industry ties. Earlier this year, the program partnered with Davis Wright Tremaine to co-develop AI-powered legal tools. Norm Ai, for its part, recently raised $48 million in a funding round led by private equity firm Coatue, signaling growing investor interest in regulatory-focused AI applications.