The Midterms Won't Decide Whether AI Governance Accelerates
They'll Decide What Kind of Acceleration You Face
- Feb 21
- 12 min read

The key takeaway is that governance doesn't simplify after the midterms - it fragments. You go from one regulatory line to many, each heading in a different direction at a different speed. That is the compliance nightmare this article warns boards about: not one new regulation to adapt to, but a patchwork of inconsistent obligations across dozens of jurisdictions.
I. The Paradox Hiding in Plain Sight
In the summer of 2025, a county commission meeting in rural Missouri ran three hours over schedule. The agenda item was not immigration, not taxes, not schools. It was a proposed 200-megawatt data centre, one of dozens being fast-tracked across red-state America to feed the insatiable compute demands of frontier AI models. Residents, many of them self-described Trump voters who had cheered the administration's promise to make America the global AI capital, lined up at the microphone to object. They worried about groundwater depletion, electricity price surges, construction noise, and something harder to articulate: the sense that a technology they did not fully understand was being imposed on their communities without consent.
That scene has since been replicated in church halls, planning board meetings, and state capitols from Florida to Montana. And it captures a tension at the heart of American AI policy that neither party has yet been forced to resolve - a tension that the 2026 midterm elections are about to blow wide open.
The paradox is simple to state and fiendishly difficult to govern. The Trump White House has pursued the most aggressively pro-industry AI posture of any administration in history. Executive Order 14179, signed in January 2025, explicitly revoked the Biden-era AI safety framework and directed federal agencies to pursue what it called a "minimally burdensome" approach to AI regulation. The accompanying AI Action Plan reads less like a governance document and more like an industrial strategy memo: remove barriers, accelerate deployment, ensure American dominance. Build, baby, build.
Yet the voters who put this administration in office are, by nearly every measure, deeply anxious about the technology it is championing. Polling conducted by Public First for the Financial Times found that roughly 60 per cent of Trump voters are concerned about the impact of AI on their lives. A Fox News survey recorded similar unease: 60 per cent of all voters said AI was moving too fast and with too little oversight. Research compiled by Public Citizen across multiple polling cycles shows that approximately 80 per cent of voters, across party lines, favour stronger regulation of artificial intelligence.
These are not marginal numbers. They represent a structural misalignment between federal policy and public sentiment that would be remarkable in any domain. In a domain as fast-moving and poorly understood as AI, it is a governance time bomb.
II. The State-Level Insurgency
To understand why the midterms matter, you first have to understand where AI governance is actually happening in America - and it is not in Washington.
Since the start of 2025, state legislatures have introduced hundreds of AI-related bills. Many of these are emerging from Republican-controlled chambers, driven not by progressive tech scepticism but by conservative populism: the belief that Big Tech has grown too powerful, that ordinary citizens deserve standing to challenge algorithmic harms, and that communities should have the right to refuse infrastructure projects they did not ask for.
The substance of these bills varies enormously. Some are narrow and sensible - child safety measures targeting deepfakes and AI-generated exploitation material, transparency requirements for AI used in hiring or insurance underwriting, disclosure mandates for AI-generated political advertising. Others are broad and potentially disruptive - developer liability frameworks that would make model creators responsible for downstream harms, moratoriums on data centre construction pending environmental review, and outright bans on certain AI applications in law enforcement and public benefits administration.
What unites them is a direction of travel that runs directly counter to federal policy. While the White House talks about removing guardrails, statehouses are building them. While federal agencies have been instructed to treat AI regulation as a drag on competitiveness, state attorneys general are sharpening their enforcement tools. While the administration pursues what amounts to voluntary self-governance by industry, state legislators are creating statutory obligations with real teeth.
The legal scholars at White & Case noted this divergence early, observing that the executive order's push for a unified national framework was partly designed to pre-empt exactly this kind of state-level proliferation. But pre-emption requires congressional action, and Congress - consumed by other priorities and deeply divided on technology policy - has shown no appetite for comprehensive AI legislation. The result is a regulatory vacuum at the federal level that the states are rushing to fill.
For enterprises, this is not an abstract constitutional question. It is an operational one. A company deploying AI across multiple states now faces what the Financial Times aptly described as an environment where American AI laws risk becoming "more European than Europe's" - not because any single state regime is as comprehensive as the EU AI Act, but because the patchwork of inconsistent state requirements may in aggregate be more burdensome, more unpredictable, and more expensive to comply with than a single comprehensive federal framework would have been.
III. Why November Changes the Calculus
This brings us to the midterms, and to the question that boards and general counsel should be modelling right now: what happens when the grassroots anger that is already driving state-level action gets channelled into electoral outcomes?
The conventional wisdom in Washington is that midterms are a referendum on the incumbent president, and that the party in the White House almost always loses seats. If that pattern holds, November 2026 will see Democratic gains - and with them, potentially, a more sympathetic federal posture toward AI regulation.
But this analysis misses the more consequential dynamic. The threat to the current federal approach is not primarily from the left. It is from within the president's own coalition.
Republican primary campaigns are already surfacing candidates who are running explicitly against the AI agenda - not against artificial intelligence per se, but against the version of AI policy that privileges Silicon Valley over Main Street, that fast-tracks infrastructure without local consent, that prioritises corporate returns over community resilience. These are candidates who understand that their voters' anxiety about AI is real, visceral, and electorally potent.
Pro-AI super PACs have noticed. Reporting by Wired has documented how technology-aligned political action committees are already spending heavily to shape midterm outcomes, recognising that the composition of the next Congress - and, critically, the next class of state legislators - will determine how aggressively America regulates AI in the second half of this decade.
The scenario that should concern enterprise leaders most is not a dramatic shift in federal policy. It is the election of a cohort of populist Republicans at the state level who are ideologically committed to local control, deeply sceptical of Big Tech, and armed with genuine constituent anger about the costs and disruptions of the AI buildout. These legislators will not be constrained by the White House's preference for light-touch governance. They will be rewarded by their voters for doing the opposite.
Fortune magazine captured the emerging picture when it observed that AI is simultaneously powering economic growth under this administration while provoking exactly the kind of voter backlash that tends to produce aggressive legislative responses. That is the definition of a political football, and when both parties are competing to demonstrate that they take voters' concerns seriously, the result is almost always more rules, not fewer.
IV. The Fragmentation Problem
I spent fifteen years working in information security, and if there is one lesson I carried from the operational side to the analytical side, it is this: the hardest governance challenges are not the ones that come from a single, clear regulatory framework. They are the ones that come from fragmentation.
Fragmentation is what kept CISOs awake at night during the rise of data privacy regulation. It is what made the patchwork of state breach notification laws - 50 states, 50 different requirements - a compliance nightmare long before anyone was thinking about GDPR. And it is precisely the dynamic that is now emerging in AI governance.
Consider the compliance landscape that a multinational enterprise may face by 2027 or 2028 if current trends continue. At the federal level, you will have a regime that is light on prescriptive requirements but heavy on sector-specific agency guidance - some of which will conflict between agencies. Overlaying that, you will have a patchwork of state laws that differ in scope, in definitions of key terms like "high-risk AI system" or "automated decision-making," in disclosure and transparency requirements, in liability frameworks, and in enforcement mechanisms.
Some states will have adopted developer liability. Others will focus on deployer responsibility. Some will require algorithmic impact assessments. Others will mandate opt-out rights. Some will restrict AI in specific domains - hiring, lending, insurance, law enforcement - while leaving others unregulated. And sitting above all of this, for any company with European operations, will be the EU AI Act with its own risk classifications and compliance timelines.
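To make that divergence concrete, here is a minimal sketch of what encoding state-by-state obligations as structured data might look like. Everything in it - the state names, the field choices, the definitions - is a hypothetical illustration, not a summary of any actual statute.

```python
# Illustrative only: encoding divergent state AI obligations as structured
# data. State names, definitions, and requirements below are hypothetical,
# not summaries of real legislation.
from dataclasses import dataclass, field

@dataclass
class StateAIRegime:
    state: str
    high_risk_definition: str         # how the statute defines "high-risk AI"
    liability_target: str             # "developer", "deployer", or "both"
    impact_assessment_required: bool  # algorithmic impact assessments
    opt_out_rights: bool              # consumer opt-out of automated decisions
    restricted_domains: list[str] = field(default_factory=list)

regimes = [
    StateAIRegime("State A", "substantially influences a consequential decision",
                  "developer", True, False, ["hiring", "lending"]),
    StateAIRegime("State B", "fully automated decision-making",
                  "deployer", False, True, ["insurance", "law enforcement"]),
]

# The same model deployed in both states already faces two different
# liability targets and two different assessment duties:
for r in regimes:
    print(f"{r.state}: liability on {r.liability_target}, "
          f"impact assessment required: {r.impact_assessment_required}")
```

The point is not the code. It is that the same model deployed in just two states already triggers two different sets of obligations, and the real landscape has fifty.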
Add to this the emerging challenge of agentic AI - systems that do not merely recommend decisions but take them autonomously. An AI agent that negotiates a contract, approves a loan, or routes a supply chain operates across multiple jurisdictions in the course of a single transaction. When that agent causes harm, the question of which state's law applies, which liability framework governs, and which regulator has jurisdiction is not a theoretical exercise. It is a live legal question that no current framework adequately answers.
This is not a hypothetical. It is the trajectory we are on. And the midterms will accelerate it - regardless of whether the outcome is red, blue, or split.
If Democrats gain ground, you can expect more comprehensive state-level AI bills modelled on the EU approach. If populist Republicans gain ground, you can expect more targeted but equally consequential legislation focused on data centres, community consent, child safety, and developer liability. If the result is mixed, you get both - which may be the most challenging outcome of all.
V. Paper Governance Will Not Save You
This is where I want to speak directly to the boards, the general counsel, and the enterprise risk leaders who are the intended audience for this analysis.
Most of the AI governance frameworks I see in corporate settings today are what I would call paper governance. They consist of policies, principles, ethics statements, and committee charters. They describe what the organisation intends to do. They articulate values. They assign responsibilities in the abstract. And they are, with respect, almost entirely inadequate for the regulatory environment that is emerging.
Paper governance was sufficient when AI regulation was aspirational - when the question was whether rules would come, not what form they would take. That era is over. The question now is not whether your organisation will face binding AI obligations. It is how many different, potentially conflicting obligations you will face, across how many jurisdictions, on what timeline. What boards need - and what very few have - is what I call a control architecture: a demonstrable, auditable, operationally embedded system for knowing what AI is being used in the organisation, how it makes decisions, what data it consumes, what risks it introduces, and how those risks are being monitored and mitigated in practice. Not in a policy document. In the actual infrastructure.
A control architecture does not replace governance principles. But it makes those principles enforceable, testable, and - critically - portable across regulatory regimes. If you know what models you are running, where they are deployed, what data flows into and out of them, and what guardrails are in place at the operational level, then adapting to a new state law, a new EU requirement, or a new enforcement priority becomes an engineering problem rather than an existential one.
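As a hedged illustration of what "knowing what models you are running" means at the operational level, here is a minimal sketch of a model registry in Python. The ModelRecord fields and the exposure query are assumptions chosen for this example, not a reference to any particular product or standard.

```python
# An illustrative sketch of the "control architecture" idea: a registry
# recording what is running, where, on what data, with what guardrails.
# All field names and example values are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    model_id: str
    owner: str                 # accountable team or individual
    deployed_in: list[str]     # jurisdictions where the model makes decisions
    data_inputs: list[str]     # upstream data sources it consumes
    decision_domain: str       # e.g. "hiring", "credit", "content moderation"
    guardrails: list[str]      # operational controls actually in place
    last_reviewed: datetime

registry: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    registry[record.model_id] = record

def exposure(jurisdiction: str) -> list[ModelRecord]:
    """Which models make decisions in this jurisdiction? This is the query
    that a new state law turns from a fire drill into a lookup."""
    return [r for r in registry.values() if jurisdiction in r.deployed_in]

register(ModelRecord(
    model_id="credit-scoring-v3",
    owner="risk-analytics",
    deployed_in=["Texas", "Colorado"],
    data_inputs=["bureau_feed", "application_form"],
    decision_domain="credit",
    guardrails=["human review above threshold", "bias monitoring"],
    last_reviewed=datetime(2026, 1, 15, tzinfo=timezone.utc),
))

print([r.model_id for r in exposure("Colorado")])
```

The design choice that matters is that the registry lives in infrastructure, queryable on demand, rather than in a policy annex that is out of date the day it is published.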
Without that architecture, every new regulation is a fire drill. Every state-level bill that advances becomes a board-level crisis. Every enforcement action becomes an exercise in forensic archaeology, trying to reconstruct what the AI was actually doing from scattered logs, incomplete documentation, and the memory of engineers who may no longer be with the organisation.
I saw this firsthand as a CISO. When a regulator came knocking about a data breach, the organisations that could produce a clear, auditable trail of what data went where, who had access, and what controls were in place resolved the inquiry in weeks. The organisations that could not spent months, sometimes years, in adversarial, expensive, reputation-damaging investigations. The quality of the underlying security was often similar. The difference was the architecture of visibility and control.
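For AI systems, the equivalent of that auditable trail is a decision log. Here is a minimal sketch, with illustrative field choices and a simple hash chain so that gaps in the record are detectable; the structure is an assumption for this example, not a compliance standard.

```python
# Illustrative only: an append-only audit record emitted for every automated
# decision, so "what was the AI actually doing?" can be answered from logs
# rather than from the memory of departed engineers.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_id: str, model_version: str, inputs: dict,
                 decision: str, prev_hash: str) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Digest rather than raw inputs, to avoid duplicating sensitive data:
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "decision": decision,
        "prev_hash": prev_hash,  # chaining makes missing records detectable
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

r1 = audit_record("credit-scoring-v3", "3.2.1",
                  {"applicant": "a-482"}, "refer to human review",
                  prev_hash="genesis")
r2 = audit_record("credit-scoring-v3", "3.2.1",
                  {"applicant": "a-483"}, "approve",
                  prev_hash=r1["hash"])
print(r2["hash"][:16])
```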
AI governance is heading to the same destination, but the surface area is vastly larger. A cybersecurity breach affects data. An AI governance failure can affect hiring decisions, credit approvals, insurance pricing, medical diagnoses, content moderation at scale, and the behaviour of autonomous systems operating in the physical world. The reputational, legal, and human costs of getting this wrong are correspondingly greater.
I have seen this pattern before. It is exactly what happened with data privacy before GDPR. The organisations that treated privacy as a paper exercise spent years and millions of pounds retrofitting compliance after the regulation landed. The organisations that had built genuine data governance infrastructure - that knew where their data was, how it flowed, and who had access - adapted within months.
AI governance is now at that same inflection point, but the stakes are higher because the technology moves faster, the risk surface is broader, and the regulatory landscape is fragmenting more aggressively than anything we saw in data privacy.
VI. Three Vectors of Pressure
Let me be specific about what I believe boards should be preparing for. The governance pressure on enterprises is converging along three vectors, and the midterms will intensify all three.
First, the federal-state tension. Regardless of the White House's stated preference for a unified national framework, the practical reality is a diverging patchwork. Boards need to map their AI deployment footprint against the specific legislative landscape in every state where they operate, and they need to be doing this continuously, not annually. The legislative calendar in AI moves faster than most compliance functions are designed to handle.
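A minimal sketch of that continuous mapping, assuming a hypothetical feed of advancing state bills and a simple record of where each model operates:

```python
# Illustrative only: cross-referencing where AI systems operate against a
# feed of advancing state bills, so a new obligation surfaces as an alert
# rather than a surprise. The deployment records and bill feed below are
# hypothetical placeholders.
deployments = [
    {"model_id": "credit-scoring-v3", "state": "Colorado", "domain": "credit"},
    {"model_id": "resume-screen-v1", "state": "Illinois", "domain": "hiring"},
]

pending_bills = [
    {"state": "Colorado", "domain": "credit", "status": "passed committee"},
    {"state": "Montana", "domain": "data centres", "status": "introduced"},
]

def legislative_alerts(deployments: list[dict], bills: list[dict]) -> list[tuple]:
    """Flag every deployment whose state and decision domain intersect
    with an advancing bill."""
    return [
        (d["model_id"], b["state"], b["status"])
        for b in bills
        for d in deployments
        if d["state"] == b["state"] and d["domain"] == b["domain"]
    ]

for model_id, state, status in legislative_alerts(deployments, pending_bills):
    print(f"Review {model_id}: bill in {state} has {status}")
```

In practice the bill feed would come from a legislative tracking service and the deployment records from the model registry, but the design point stands: the cross-check is a query you run daily, not a memo you write annually.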
Second, the political-grassroots tension. The gap between what political leaders say about AI and what voters actually want is widening. This creates unpredictable enforcement risk. A state attorney general who senses an opportunity to ride populist anger toward Big Tech can become a de facto regulator overnight, using existing consumer protection statutes to bring actions that no one in the C-suite anticipated. We saw exactly this dynamic play out in data privacy with state AG enforcement under unfair and deceptive practices laws long before comprehensive privacy statutes existed.
Third, the tension between paper governance and operational control. Regulators - whether state, federal, or international - are becoming increasingly sophisticated about the difference between a company that has an AI ethics policy and a company that can demonstrate, under examination, that it actually knows what its AI systems are doing. The EU AI Act's emphasis on conformity assessments and technical documentation is a harbinger. State-level regimes will follow. The days when a board could satisfy its governance obligations with a well-drafted policy statement are ending.
VII. The Election Is Not the Risk. The Architecture Gap Is.
I want to close with a provocation that I hope will reframe how enterprise leaders think about the next twelve months.
The midterms are not the risk. The midterms are a catalyst. They will accelerate dynamics that are already in motion. They will surface political energy that is already present. They will translate voter anxiety into legislative action that is already being drafted.
The actual risk is the gap between the pace of regulatory acceleration and the pace at which enterprises are building the capacity to respond to it. That gap, the architecture gap, is what will determine which organisations emerge from the next regulatory cycle as leaders and which spend the next five years in reactive compliance mode, haemorrhaging resources and board attention on problems they should have seen coming.
The Financial Times piece that prompted this analysis captured something important: the people who were supposed to be AI's biggest champions - the MAGA faithful in the heartlands - are becoming some of its most effective sceptics. That is not a political curiosity. It is a governance signal. It tells you that the political coalition underpinning the current light-touch federal approach is unstable, and that the centrifugal forces pushing toward more regulation, more liability, and more local control are stronger than Washington acknowledges.
Whether November delivers a red wave, a blue wave, or a muddled middle, the direction is the same: more scrutiny, more fragmentation, more liability exposure, and more pressure for demonstrable control over data, models, and autonomous agent behaviour.
The question for boards is not who will win the midterms. It is whether, by the time the results come in, they will have built the governance architecture that can survive the outcome - whichever way it breaks.
That means conducting an honest assessment of what you actually know about the AI systems operating within your enterprise, not what your policy says you should know, but what you can demonstrate under examination. It means mapping your exposure to the specific state-level legislative landscape across every jurisdiction where you operate. It means investing in the unglamorous but essential work of model inventories, data lineage tracking, decision audit trails, and automated compliance monitoring. And it means doing all of this with the understanding that the regulatory landscape twelve months from now will look materially different from the one you face today.
Because the lesson of every previous technology governance cycle, from data privacy to cybersecurity to financial regulation, is the same. The organisations that prosper are not the ones that predict the political weather. They are the ones that build structures capable of standing in any storm.
That work cannot wait for election night. It should have started already.