Analyzing the Trump Administration’s National Policy Framework for AI

Kevin T. Frazier

Since the first day of this administration, President Trump has made clear that America’s success in the AI space requires a uniform, national strategy. His Day One Executive Order on AI called for the removal of “existing AI policies and directives that act as barriers to American AI innovation.” The AI Action Plan further clarified that “AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level.” In December 2025, the Executive Order on a National Policy Framework prioritized a national approach in light of increasing concerns about an emerging state patchwork while still allowing “states to continue to enforce existing, generally applicable law.” This December EO also outlined the need for a legislative framework that would ensure AI companies are “free to innovate without cumbersome regulation.” 

Today, the Trump administration released its official proposal for a single national AI framework.

Emphasizing a Light-Touch Approach to Allow AI Innovation to Flourish

Unlike the most recent legislative AI proposal offered by Senator Blackburn, the framework is not a call for a heavy-handed, all-encompassing federal AI statute but rather an allocation of regulatory authority in line with the Constitution’s intended distribution of powers between the states and the federal government. AI regulation is often framed as an all-or-nothing proposition, which fuels the mistaken perception that the White House is seeking to infringe on states’ authority to exercise their police powers. Yet, AI governance is best thought of as a five-layer cake involving energy, chips, infrastructure, models, and applications. Each layer may require a different mix of federal and state engagement based on the extent to which regulation at that layer has nationwide implications. The framework intends to ensure that the federal government leads on issues related to AI development because training, fine-tuning, and deploying models “is an inherently interstate phenomenon with key foreign policy and national security implications.” 

The direction to Congress provided by the framework is to “preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones.” That said, Congress is instructed not to interfere with state laws that seek to “protect children, prevent fraud, and protect consumers,” nor “state zoning laws, including state authorities to determine the placement of AI infrastructure.”

Beyond a call for a constitutionally sound approach to AI governance, the framework sets forth several specific policy recommendations in six areas:

Protecting Children and Empowering Parents 
Safeguarding and Strengthening American Communities
Respecting Intellectual Property Rights and Supporting Creators
Preventing Censorship and Protecting Free Speech
Enabling Innovation and Ensuring American AI Dominance
Educating Americans and Developing an AI-Ready Workforce

Cato has a deep bench of scholars from Jennifer Huddleston to David Inserra and many more in between who have explored these AI policy domains. Expect more in-depth analysis as everyone has a chance to dive into the framework’s provisions. For now, it’s worth highlighting some key aspects of each of these domains. 

Protecting Children and Empowering Parents

Unlike state laws that effectively invite labs to surveil users and might create greater privacy risks for both children and adults, the framework urges Congress to prioritize laws that put parents in the driver’s seat of how, when, and to what ends their child uses an AI tool. This approach carries the benefit of leaving sensitive decisions about which tools are appropriate to parents, not the federal government. Each child and each family is unique. Parents and other trusted adults, not policymakers, are in the best position to help kids and teens use technology in positive ways and respond to crises.

The framework also calls for “commercially reasonable, privacy protective, age-assurance requirements.” This recommendation can raise significant privacy and speech concerns for all users, as my Cato colleague Jennifer Huddleston has discussed. As made clear by a recent letter signed by hundreds of computer scientists around the world, there’s reason to believe that “privacy protective” age verification is an oxymoron. We will remain attentive to how Congress interprets and acts on this aspect of the framework.

Another key recommendation here is that Congress “avoid setting ambiguous standards about permissible content.” While many people want AI to be “moral,” the fact is that most Americans do not share the same morals. It’s not the role of the government to dictate who is “right” on sensitive questions and to compel labs to train their models in a manner that aligns with the preferences of some Americans over others.

Safeguarding and Strengthening American Communities

Leading on AI is a whole-of-nation endeavor. This section recognizes that AI success requires bringing small towns and small businesses along for the ride, while keeping an eye out for negative consequences on everyday Americans.

There are a few recommendations worth highlighting. First, there’s a reminder that there’s no AI exception to existing law. State AGs from New Jersey to California have recognized that expansive state laws on fraud, discrimination, and consumer protection can already be used to penalize bad actors. Rather than drafting new laws, the framework instructs Congress to consider “augment[ing] existing law enforcement efforts to combat AI-enabled impersonation scams and fraud that target vulnerable populations such as seniors.”

Second, the framework rightly notes that effective AI policy will require extensive technical expertise within the government. That’s why Congress should “ensure that the appropriate agencies within the national security enterprise possess sufficient technical capacity to understand frontier AI model capabilities and any associated national security considerations and establish plans to mitigate potential concerns.”

Finally, following the lead of states like Oklahoma, the framework embraces the idea of easing regulatory burdens on behind-the-meter power generation. This will play a key role in expanding the nation’s power supply by allowing companies greater autonomy to produce their own energy. It will also allow us to better compete with China as it races ahead in developing its own AI infrastructure.

Respecting Intellectual Property Rights and Supporting Creators

This section aims to strike a balance between lawful innovation and maintaining America’s leadership in a robust cultural economy. 

On the key issue of whether training on copyrighted data qualifies as fair use, the Trump administration largely punts by asking Congress not to interfere with judicial resolution of the question. As I will explain in more detail later, I think this is a flawed approach. Access to data is essential for AI innovation and research. The largest labs have built a data moat that lets them train new models faster than entrants. Until this question is resolved, AI startups will struggle to compete. Congress, relying on the first principles set out in the IP Clause, should clarify that training is a fair use. Waiting for courts to figure this out will only make it harder for new firms to get into this key market. 

On how best to support and sustain creators in this new AI economy, the framework directs Congress to exercise a little creativity itself. It encourages Congress to study “enabling licensing frameworks or collective rights systems for rights holders to collectively negotiate compensation from AI providers, without incurring antitrust liability.”

Preventing Censorship and Protecting Free Speech

This is the shortest section, but it carries huge ramifications, particularly in the wake of the dispute between Anthropic and the Department of War. As Cato’s work and brief have noted, such actions can raise significant First Amendment concerns. There is a live conversation about how the federal government balances its particular procurement needs and the constitutionally protected ability of private companies to develop tools that align with their own values and mission. 

It will be important to keep a close eye on how Congress responds to the two recommendations spelled out below: 

“Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.

Congress should provide an effective means for Americans to seek redress from the Federal Government for agency efforts to censor expression on AI platforms or dictate the information provided by an AI platform.”

How Congress chooses to interpret these recommendations will be worth watching. Given the Trump administration’s ongoing dispute with Anthropic, as well as its high-profile uses of the FCC, DHS, and FTC to influence the speech of private actors, these recommendations demonstrate an apparent hypocrisy. Cato scholars Jennifer Huddleston and David Inserra have written extensively on the worrying trend under both parties to weaponize government agencies against disfavored speech. Congress should take action to protect against government coercion of speech. But whether the Trump administration actually wants these protections to have teeth remains to be seen.

Enabling Innovation and Ensuring American AI Dominance

As noted from the outset, the Trump administration has been explicit about its desire for the US to be a global leader in AI. Pursuant to that goal, this section offers a few simple yet transformative policy recommendations. First, lean into regulatory sandboxes to foster a “try-first” mentality. This iterative and evidence-based approach to governance acknowledges that Americans will need to test and deploy AI tools to discover their risks and benefits. Second, recognizing the aforementioned need for more data, make more federal datasets available to innovators and researchers.

Educating Americans and Developing an AI-Ready Workforce

Finally, this section embraces a vision of the future in which all Americans thrive in the Age of AI. This future follows from adherence to recommendations on improving educational opportunities and retraining programs. As I testified before Congress, this is a pivotal and immediate issue. Congress must resist the temptation to replicate the paternalistic architecture of past workforce programs — the approved-provider lists, restricted vouchers, and compliance-heavy pipelines that move at the pace of bureaucracy rather than the pace of displacement. The better approach is to trust the worker. A displaced machinist in Tulsa knows her own barriers better than a program administrator in Washington ever will. That is why any AI-era workforce initiative worth its name should include direct, fast, unrestricted reemployment support — modest grants delivered within weeks, not months, with accountability tied to outcomes rather than process. Congress should also resist the instinct to define “AI readiness” narrowly. 

The goal is not to produce a nation of prompt engineers; it is to produce workers with the adaptability, foundational digital literacy, and economic runway to meet this moment on their own terms.

Conclusion

The framework is a starting point, not a final answer. Other Cato scholars and I will continue to dig into each of these domains — scrutinizing the details, flagging the tradeoffs, and identifying where Congress should push further and where it should pull back. In general, however, this approach signals that the administration remains committed to the idea that the light-touch approach to regulation that allowed America to lead in the internet age is the best path to continued leadership in the AI era.