Two experts argue that flexible, management-based AI regulation – likened to a leash rather than guardrails – may be the only viable way to responsibly guide the rapidly evolving technology.
Key Points at a Glance
- Traditional guardrail regulations may be too rigid for dynamic AI systems
- Researchers propose a “leash” model: adaptable, management-based oversight
- The model allows innovation while requiring firms to maintain internal risk-mitigation systems
- Examples include risks from autonomous vehicles, social media algorithms, and biased AI systems
As artificial intelligence becomes increasingly embedded in everyday life – from healthcare diagnostics and autonomous vehicles to chatbots and social media algorithms – the question of how best to regulate it grows ever more urgent. A provocative new paper, published in the journal Risk Analysis, suggests a paradigm shift in how we think about AI oversight: not rigid guardrails, but leashes.
Authors Cary Coglianese, a law professor at the University of Pennsylvania, and Colton R. Crum, a doctoral candidate in computer science at the University of Notre Dame, argue that traditional prescriptive regulations are ill-suited for such a fast-moving, heterogeneous technology. Their alternative? Management-based regulation – a model that requires firms to develop their own internal risk governance systems, tailored to how they deploy AI.
Unlike guardrails, which confine movement, leashes allow for flexibility while maintaining control. “Just as physical leashes used when walking a dog through a neighborhood allow for a range of movement and exploration,” they write, AI leashes can give companies the room to innovate, discover, and adapt – without letting their technologies spiral into harm.
To illustrate the stakes, the authors point to real-world AI risks. In autonomous vehicles, failure to properly manage machine learning can result in fatal crashes. On social media, AI algorithms may inadvertently promote content linked to self-harm or suicide. And bias baked into AI systems – from hiring tools to facial recognition – continues to raise deep ethical and legal concerns.
The leash model would require firms using AI in these domains to implement robust internal mechanisms designed to anticipate, monitor, and reduce harm. This isn’t a passive framework; it demands active risk management and accountability from within organizations, not just from external oversight bodies.
Coglianese and Crum contend that such an approach is uniquely suited to AI’s evolving nature. Management-based regulation is inherently more responsive and dynamic than prescriptive rules: it doesn’t attempt to guess every future risk, but instead establishes a culture of anticipatory responsibility within the entities that create and deploy AI.
By reframing AI regulation from static rules to adaptive responsibility, the leash model doesn’t just promise better safety outcomes. It may also foster an environment in which technological innovation can flourish without abandoning caution. In a world where AI is already outpacing existing frameworks, this might be the kind of smart, nuanced thinking regulation needs.
Source: Society for Risk Analysis