The 30-day problem: can democracies move fast enough on AI?
Master of Public Policy student Lorna Enow examines whether democracies can keep pace with AI developments without weakening democratic oversight.
In early January 2026, the Pentagon issued a directive that reveals a deeper anxiety within democratic governments worldwide: a requirement that the latest artificial intelligence models be deployed within 30 days of their public release.
On its face, this is a question of military efficiency. But beneath it lies a deeper issue troubling policymakers from Washington to Brussels to Tokyo – can democracies, with their deliberative processes and institutional safeguards, move quickly enough to compete in the age of AI?
The concern is straightforward. Authoritarian regimes can mandate AI deployment across government systems without parliamentary debate, public consultation, or judicial review. China’s government can decide on Monday to integrate a new AI model into its surveillance apparatus and have it running nationwide by Friday. Democratic governments, meanwhile, must navigate privacy laws, legislative oversight, procurement regulations, and public accountability–processes that can stretch across months or years.
This creates what some strategists call the “democracy speed gap” – and it is fuelling an uncomfortable debate: does AI-first governance require democracies to compromise the very institutional checks that define them?
The autocratic advantage–or is it?
The apparent efficiency of authoritarian AI governance is real. China has deployed facial recognition systems across entire cities, integrated AI into social credit systems, and mandated algorithmic content moderation at scales democratic governments have struggled to match.
Yet this speed comes with significant costs that democratic leaders should resist romanticising. Authoritarian AI systems often fail spectacularly because they lack independent oversight and open scrutiny. China’s COVID-tracking apps, for example, generated false positives that trapped millions in lockdowns. Without effective mechanisms for challenge or review, these failures compound.
The speed of deployment also doesn’t necessarily translate to better outcomes. Democratic processes–messy and slow as they can be–often surface risks that rushed implementation overlooks. The EU’s lengthy deliberations over its AI Act are a case in point. The resulting framework addresses issues of bias, transparency, and accountability in ways China’s rapid deployment simply does not.
How democracies are adapting
Rather than abandoning democratic norms, leading democracies are experimenting with ways to preserve accountability whilst accelerating decision-making.
In the United States, this has taken the form of what might be described as “agile procurement within guardrails”. The Pentagon’s strategy maintains Congressional oversight whilst using mechanisms like the Joint Acceleration Reserve to speed resource allocation. The 30-day deployment mandate applies to frontier AI models, but these still operate within existing legal frameworks.
The European Union has taken a different approach: regulatory pre-clearance. Rather than approving each AI deployment individually, the AI Act establishes risk categories with predefined requirements. High-risk systems face stringent oversight, but lower-risk applications can deploy rapidly once they meet baseline standards.
Japan and South Korea offer yet another model: public-private hybrid governance. Both countries have established AI councils that bring together government officials, technical experts, and industry representatives to make rapid decisions within democratically established boundaries.
None of these approaches perfectly resolves the speed problem, but they suggest democracies aren’t simply paralysed by their own processes.
The openness dividend
Democratic governance may actually confer advantages that aren’t immediately obvious in speed comparisons. The Pentagon strategy explicitly relies on “America’s private sector” and the “hundreds of billions in private sector capital investment” in AI infrastructure. This only works because democratic systems have cultivated dynamic, innovative commercial AI sectors through property rights, academic freedom, and competitive markets.
China’s AI sector, whilst formidable, operates under constraints that may limit long-term innovation. Researchers self-censor to avoid political repercussions, and companies operate with the knowledge that the state can requisition their technology without compensation. The result is that whilst China can deploy AI rapidly, it increasingly struggles to develop frontier models that match Western capabilities.
Democratic governance also enables international cooperation that authoritarian regimes cannot easily replicate. The Pentagon strategy references leveraging “allies and partners” for AI development. NATO members and democratic allies share technology and data because they trust each other’s governance systems. Authoritarian states face much greater barriers to building equivalent networks.
Rethinking democratic speed
Still, democracies cannot be complacent. The 30-day mandate reflects a genuine concern that traditional procurement timelines–sometimes stretching across years–are incompatible with the pace of AI development. Democracies need to rethink not whether to maintain oversight, but how to maintain it more efficiently.
Several reforms merit consideration. Outcome-based regulation could replace process-heavy approval systems: instead of specifying exactly how AI systems must be built, regulations could define required performance standards and let developers innovate towards them. Standing expert bodies could be given delegated authority to make rapid technical decisions within democratically established boundaries – rather than requiring full legislative approval for each AI deployment, elected representatives could set broad parameters and empower technical experts to act within them, subject to audit and review. And adaptive legislative frameworks, such as “regulatory sandboxes”, could allow new applications to be tested under temporary rules that then inform permanent legislation.
The real question
The 30-day problem ultimately asks whether democracies will respond to AI competition by becoming more like their authoritarian rivals–centralising power, reducing oversight, accelerating deployment at the expense of deliberation–or whether they’ll find distinctly democratic approaches to speed.
The answer matters not only for technological competition, but for the future of democratic governance itself. If democracies conclude they must abandon institutional checks to compete, they risk winning the AI race whilst losing the system of government that made their innovation possible in the first place.
The evidence suggests a third path exists: democracies can be faster without abandoning what makes them democratic – through institutional creativity, willingness to reform outdated processes, and confidence that openness and accountability are features, not bugs, in long-term technological competition.
Framing the choice as “speed or democracy” is a false binary that serves authoritarian narratives more than democratic interests. The real challenge is building democratic institutions fit for the speed of modern technology.