Opinion: The AI Revolution Needs Guardrails, Not a Green Light
Dr. Rebecca Stone is a professor of technology ethics at MIT and former advisor to the European Commission on AI policy.
The recent breakthroughs in artificial intelligence—from DeepSeek’s efficient models to OpenAI’s increasingly capable systems—have reignited the debate about AI regulation. As someone who has spent two decades studying the societal impacts of technology, I believe we are at a critical inflection point.
The Current Moment
We are witnessing the fastest technological transformation in human history. AI systems are now capable of:
- Generating synthetic media indistinguishable from authentic content
- Enabling sophisticated cyberattacks
- Making consequential decisions in healthcare, finance, and criminal justice
- Potentially accelerating scientific discovery beyond human comprehension
Yet our regulatory frameworks remain stuck in the pre-AI era.
The Case for Action
Some argue that regulation will stifle innovation. They point to the dynamism of the tech industry and warn against “killing the goose that lays the golden eggs.”
I find this argument unpersuasive for several reasons:
First, other high-impact technologies—from pharmaceuticals to aviation—are heavily regulated, and those industries continue to innovate.
Second, the absence of clear rules creates uncertainty that actually hinders responsible innovation. Companies don’t know what’s permissible, so they either proceed recklessly or hold back unnecessarily.
Third, the costs of getting AI wrong are potentially catastrophic. We cannot treat this as a normal technology deployment.
What Regulation Should Look Like
Effective AI governance should include:
- Mandatory safety testing for high-capability systems before deployment
- Transparency requirements about training data and model capabilities
- Clear liability frameworks for AI-caused harms
- International coordination to prevent a race to the bottom
- Public investment in AI safety research
The Democratic Imperative
Perhaps most importantly, decisions about AI’s role in society should not be made solely by a handful of technology companies. These are fundamentally democratic questions that require public deliberation.
We don’t let pharmaceutical companies decide which drugs are safe. We shouldn’t let AI labs decide which AI systems are safe.
The Window Is Closing
The time to act is now. Every day that passes without meaningful governance makes it harder to course-correct later. The technology is advancing faster than our institutions can adapt.
We have a choice: establish sensible guardrails today, or face much more disruptive interventions tomorrow when problems become undeniable. I know which future I prefer.