2 September 2025 · U.S. Federal

Reading the Action Plan: Industrial Strategy as AI Policy

The Trump administration's America's AI Action Plan, released on 23 July, is being read as a deregulatory document. It is, in fact, a procurement and preemption document. The distinction matters for what clients should be planning around over the next twelve to eighteen months.

The Trump administration's America's AI Action Plan, released on 23 July alongside an Executive Order on Accelerating Federal Permitting of Data Center Infrastructure and a companion order on Promoting the Export of the American AI Technology Stack, is the most consequential federal AI policy document since the rescission of Executive Order 14110 in January. Most coverage has read it as a deregulatory document — the long-awaited counterpoint to the Biden order's reporting-and-evaluation architecture. That framing is not wrong, but it is partial. The Action Plan is also, and in our view more importantly, an industrial-policy and procurement document. It uses federal purchasing power, federal land, federal compute, and federal export-control authority as the operative regulatory levers, in lieu of the safety-evaluation and disclosure regime the previous administration favored. The shift in lever is what clients should be planning around.

The Plan organizes its ninety recommendations around three pillars: accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security. Each pillar comes with a set of specific directives to named agencies, most with deadlines between sixty and three hundred sixty days. The document is considerably more operational than the policy statements that emerged in the first six months of the administration, and several of its directives are already in active implementation. We focus on the four areas that, on present indications, will most directly affect our clients.

Federal procurement as regulatory instrument

The most consequential single mechanism in the Action Plan is the proposed update to the Federal Acquisition Regulation, directed by the Plan and the accompanying Executive Order on Preventing Woke AI in the Federal Government. Federal procurement of AI products from foundation model providers is to be conditioned on a set of provider attestations and contract terms. The relevant directive instructs the FAR Council, in coordination with OSTP, OMB, and the Department of Commerce, to publish a proposed rule within one hundred eighty days. The practical timeline is therefore a proposed rule in the first quarter of 2026, with a likely effective date around mid-2026.

The substantive content of the proposed FAR rule is not yet public, but the directives in the Plan and the executive order describe its principal features. Federal AI procurement from large language model providers is to require contractor representations about model training practices, content moderation policies, and the absence of specified categories of ideological orientation in model outputs. The rule is also to require disclosure of model capabilities relevant to specified national security considerations, and to incorporate by reference forthcoming Department of Defense and Intelligence Community standards for AI procurement. The contractual mechanism — flow-down clauses, audit rights, termination for default — is familiar from existing federal procurement, but its application to foundation model providers is novel.

Why does federal procurement matter? The federal government's direct annual spend on AI products is not large by the standards of the frontier laboratories' revenue bases — current estimates place it in the low single-digit billions — but the indirect effects are substantial. State and local governments, federally regulated industries, and federal contractors typically align their procurement standards with federal practice; federal procurement specifications become, in effect, industry standards. The FAR rule, if it adopts the substantive provisions the Plan and the Executive Order indicate, will therefore be a quasi-regulatory instrument for the U.S. market in a way that no statute is likely to be in the near term.

The federal government's direct AI spend is modest. Its regulatory leverage through procurement is not. The Plan uses one to substitute for the other.

Preemption

The Plan's most direct intervention into the state-versus-federal architecture is its recommendation that federal agencies condition certain federal funding streams on the absence of state AI laws that the administration regards as overly restrictive. The relevant directive instructs the Office of Management and Budget, in coordination with the Departments of Commerce and Justice, to identify federal funding streams whose conditions can include a requirement that recipient jurisdictions not enforce specified categories of state AI regulation. The list of categories is not in the Plan itself; it is delegated to a forthcoming OMB circular.

This is the mechanism that several commentators have described as backdoor preemption, and the description is accurate so far as it goes. It is not, in our reading, a strong form of preemption — federal funding conditions of this type face well-known constitutional constraints under South Dakota v. Dole and its progeny, and the categories of state AI law that could plausibly be conditioned on are narrower than the Plan's drafters appear to assume — but it is a real one. Several states have indicated, at the attorney-general level, that they intend to challenge the conditions if they are imposed. The Massachusetts and California attorneys general have jointly indicated that they regard the conditions as coercive within the meaning of NFIB v. Sebelius. We expect litigation; we expect it to be slow; we expect the funding conditions to be in effect for at least the first several months of dispute.

The preemption posture matters most for clients whose compliance architecture has been built around the state-level statutes the administration is most likely to target. California's SB 53, signed by Governor Newsom at the end of the legislative session (we will discuss it separately in a forthcoming viewpoint), is one such statute. Colorado's SB 24-205 is another, as is the New York algorithmic decision-making statute. We are advising clients to treat these statutes as operative — they remain in force regardless of federal funding-condition pressure on the states that adopted them — but to monitor the compliance posture carefully as the federal-state relationship is contested.

Infrastructure

The Plan's infrastructure pillar contains the most-discussed components of the package: streamlined NEPA review for AI-relevant data centers and the transmission lines that serve them; categorical exclusions for certain federal-land grid interconnections; expanded federal-loan-guarantee capacity for advanced nuclear and geothermal projects whose anchor offtake is AI infrastructure; and the establishment of an AI Infrastructure Coordinator within OSTP with cross-agency authority over permitting decisions affecting frontier compute build-out.

These provisions have received substantial coverage and we will not repeat the analysis at length. Three points are relevant for clients. First, the permitting acceleration is real but not unconstrained. The categorical exclusions cover a defined set of facilities, and the Coordinator's authority does not override the Clean Water Act, the Endangered Species Act, or tribal consultation requirements. The infrastructure build-out will proceed faster than it would have without the Plan, but not as fast as some industry forecasts suggest.

Second, the loan-guarantee capacity is the lever that will, in our assessment, do the most actual work. The capital structure of advanced nuclear and large geothermal projects has not, historically, been compatible with utility-grade offtake from AI infrastructure customers; the offtake duration is shorter than the project life, and the construction risk is concentrated in the early years. A meaningful federal loan-guarantee program can reshape that capital structure. The Department of Energy's expanded loan office is the agency to watch.

Third, the Plan's silence on data-center water consumption is, in our view, the most likely point at which the infrastructure pillar will encounter durable political resistance. Several of the jurisdictions in which AI infrastructure expansion is most concentrated — central Texas, Phoenix, parts of Virginia — are also jurisdictions in which water-rights questions are politically active. The federal preemption posture sketched in the Plan does not reach water rights, which are reserved to the states under long-established doctrine. Clients planning data-center expansion in these regions should expect state and county-level water permitting to be the binding constraint, not federal permitting.

Export controls and the international stack

The Plan's third pillar restructures the U.S. export-control posture on advanced computing items and AI models. The Biden-era diffusion framework, which had created a three-tier system of country groupings with differing access to advanced computing hardware, is to be replaced with a bilateral negotiation model: tier-one access on bilateral terms negotiated through the Department of Commerce, in coordination with the State Department and the relevant intelligence agencies. The Plan directs the Bureau of Industry and Security to publish revised Commerce Control List entries for advanced AI accelerators within ninety days and a revised export enforcement framework within one hundred eighty days.

The substantive direction of the new export-control posture is more permissive on hardware exports to specified bilateral partners — most prominently the Gulf states, with whom the administration has been negotiating since the spring — and more restrictive on model-weight exports to jurisdictions of concern. The model-weight provisions are a substantial expansion of U.S. export-control practice; to date, the United States has not, in general, regulated the export of foundation model weights as such. The Plan signals that the new framework will.

For frontier laboratories, the model-weight provisions are the immediate compliance problem. The current state of practice — public release of certain model weights, API access to others, sale of weights to specified enterprise customers — has been developed in the absence of an explicit export-control framework. Bringing that practice into compliance with the forthcoming framework will, in most cases, require changes to license terms, customer onboarding processes, and (in some cases) the architecture of model serving. We are advising clients to begin that work now, on the basis of the indicative information in the Plan and the Department of Commerce's recent public statements, rather than waiting for the final framework to issue.

What the Plan does not contain

Three notable absences are worth flagging. First, the Plan does not contain a successor to the EO 14110 reporting requirements for dual-use foundation model training runs. The previous reporting regime is, on the current text, simply gone. Whether some functional equivalent will be reconstituted through the procurement mechanism — a contractor representation about training practices, in lieu of a direct reporting requirement — is not yet clear. Our reading of conversations with people involved in the FAR drafting is that some functional equivalent is contemplated, but its scope will be narrower than the EO 14110 regime's was.

Second, the Plan does not contain a coherent successor to the AI Safety Institute's pre-deployment evaluation function. The Institute itself remains in place administratively, but its remit is being redefined under the new name Center for AI Standards and Innovation, with a focus on standards development and innovation support rather than safety evaluation. Pre-deployment evaluation of frontier models, if it continues in the federal system, will likely be located in the Department of Defense and the Intelligence Community, with a narrower scope and a classified posture. The civilian, public-facing safety evaluation function is, for the moment, vacated.

Third, the Plan is largely silent on the AI workforce questions — training and re-skilling, immigration policy for high-skilled AI workers, university funding for AI programs — that occupied a substantial portion of EO 14110. Several of the relevant provisions of the previous regime have been folded into the broader workforce and immigration policies of the administration; the Plan itself does not treat them as AI-specific concerns. This is a substantive policy choice, and one that we expect to attract criticism from the academic AI community.


What we are advising clients to do

For frontier laboratories whose principal market is the United States, the immediate work is twofold. First, prepare for the FAR rulemaking comment period, which we expect to open in late 2025 or early 2026. The proposed rule will be the operative federal regulatory instrument for the medium term, and the comment process is the principal channel through which its substantive provisions will be shaped. Second, evaluate the model-weight export-control implications of the forthcoming Commerce Control List entries, and prepare the operational changes that compliance will require.

For deployers of AI systems in federally regulated industries — financial services, healthcare, transportation — the FAR rule will also matter, indirectly. Federally regulated firms typically import federal procurement requirements into their own vendor management standards. A FAR rule that requires specific provider attestations will, within a relatively short period, become a de facto requirement of doing business with regulated U.S. enterprises. We are advising deployer clients to begin the vendor-management work now, rather than waiting for the rule to take effect.

For clients with significant non-U.S. operations, the principal planning problem is the interaction between the forthcoming FAR rule and the EU AI Act's general-purpose model obligations, which entered into application on 2 August. The two regimes do not align in their substantive requirements — the AI Act's training-data and evaluation obligations are more demanding than the Plan's are; the Plan's content-orientation attestations have no European analogue — and a global model offering will need to satisfy both. We are working with several clients on unified compliance architectures designed to meet both regimes' requirements without unnecessary duplication; the work is significant but tractable.

The Action Plan is not, in its essential character, a withdrawal of the federal government from AI policy. It is a relocation. The previous administration's instrument was executive-order direction to civilian agencies with reporting and evaluation outputs. The current administration's instrument is procurement, infrastructure finance, and export control. These are at least as consequential, possibly more so, but they operate on a different time horizon and require different forms of engagement from the regulated industry. Clients who treat the Plan as a deregulatory holiday will, in our view, be unprepared for the rulemaking that follows it.