The Berlin Post

Opinion

Opinion: The AI Revolution Needs Guardrails, Not a Green Light

By Dr. Rebecca Stone
Photo: Robot and human interaction (Unsplash / Possessed Photography)
As AI capabilities advance rapidly, we must establish clear rules of the road before it's too late. The time for meaningful regulation is now.

Dr. Rebecca Stone is a professor of technology ethics at MIT and former advisor to the European Commission on AI policy.

The recent breakthroughs in artificial intelligence—from DeepSeek’s efficient models to OpenAI’s increasingly capable systems—have reignited the debate about AI regulation. As someone who has spent two decades studying the societal impacts of technology, I believe we are at a critical inflection point.

The Current Moment

We are witnessing one of the fastest technological transformations in history. AI systems can now write fluent prose and working code, pass professional examinations, and reason through complex problems.

Yet our regulatory frameworks remain stuck in the pre-AI era.

The Case for Action

Some argue that regulation will stifle innovation. They point to the dynamism of the tech industry and warn against “killing the goose that lays the golden eggs.”

I find this argument unpersuasive for several reasons:

First, other high-impact technologies—from pharmaceuticals to aviation—are heavily regulated, and those industries continue to innovate.

Second, the absence of clear rules creates uncertainty that actually hinders responsible innovation. Companies don’t know what’s permissible, so they either proceed recklessly or hold back unnecessarily.

Third, the costs of getting AI wrong are potentially catastrophic and may be irreversible. We cannot treat this as a routine technology rollout.

What Regulation Should Look Like

Effective AI governance should include:

  1. Mandatory safety testing for high-capability systems before deployment
  2. Transparency requirements about training data and model capabilities
  3. Clear liability frameworks for AI-caused harms
  4. International coordination to prevent a race to the bottom
  5. Public investment in AI safety research

The Democratic Imperative

Perhaps most importantly, decisions about AI’s role in society should not be made solely by a handful of technology companies. These are fundamentally democratic questions that require public deliberation.

We don’t let pharmaceutical companies decide which drugs are safe. We shouldn’t let AI labs decide which AI systems are safe.

The Window Is Closing

The time to act is now. Every day that passes without meaningful governance makes it harder to course-correct later. The technology is advancing faster than our institutions can adapt.

We have a choice: establish sensible guardrails today, or face much more disruptive interventions tomorrow when problems become undeniable. I know which future I prefer.