# The Latest AI Bill’s 5 Biggest Flaws

Kevin T. Frazier and Jennifer Huddleston

Senator Marsha Blackburn (R‑TN) has published a discussion draft of an AI bill that supposedly aims to codify President Trump’s vision for the nation to achieve AI dominance. She went so far as to title the bill, “TRUMP AMERICA AI Act.” But the truth is, this proposal represents a dramatically different approach that would heavily regulate the industry, hinder entrepreneurship, and restrict speech. 

If such a proposal gains traction, it would represent a radical shift away from the light-touch approach that the Trump administration’s AI Action Plan largely supports, inserting the government into many aspects of AI regulation instead. (For those who wish to catch up on the AI Action Plan’s core components, it has been covered in previous work.) Far from a light-touch, pro-innovation approach that would make America’s tech sector the leader in this global market, the proposal is a kitchen sink of internet and AI regulation that could create more problems than it solves at a critical moment.

At 291 pages, the draft intends to cover what some have labeled the four Cs of AI policy: children, creators, conservatives, and communities. It includes the Kids Online Safety Act (KOSA) and NO FAKES Act in addition to myriad AI-specific provisions. Cato has covered many of these elements in the past. For example, Jennifer has written extensively on the concerns around speech and privacy in KOSA. And David Inserra has addressed the importance of a light-touch approach to AI governance when it comes to free expression, covering some of the issues implicated by the NO FAKES Act.

Analysis of the new and AI-specific aspects of the draft reveals that it is a poor path forward if the United States is going to lead on AI. While a few specific elements of the Trump administration’s AI Action Plan raise their own concerns, Senator Blackburn’s proposal would significantly shift the US away from the light-touch regulatory approach that has traditionally allowed it to flourish as a global leader in new technologies.

Below are the five biggest flaws and deviations, each of which raises serious concerns for the flourishing of American AI:

1. Places an onerous “duty of care” on AI developers that could unnecessarily slow design, development, and operation of AI

This call for a vague “duty of care” on AI development clashes with the fact that AI tools are nondeterministic and evolving rapidly. If enacted, this attempt to standardize AI training practices risks chilling innovation, including the research and development that could lead to more capable and reliable models, by leaving developers unsure whether their specific approach aligns with the latest case law.

Labs such as OpenAI rely on an iterative approach to deployment that allows for a mix of pre-deployment evaluations while acknowledging that some harms cannot be fully known until a model is broadly available. Once such harms are detected, labs can quickly and transparently make adjustments. This is exactly what happened when OpenAI released a model with sycophantic tendencies—users caught the behavior, informed the lab, and the lab responded. 

Under Senator Blackburn’s proposal, OpenAI and others in a similar position would have to serially delay deployment. Meanwhile, our adversaries will not be pressing pause. Countries including China will instead race ahead, releasing models sooner and increasing AI adoption across their populations.

2. Enacts burdensome requirements under the guise of protecting children that could instead censor speech and limit access to information

These onerous requirements supposedly advance the interests of young Americans but seem more likely to generate excessive fines against developers and annoy users. Two requirements stand out. The first is a fine of up to $100,000 on anyone who designs, develops, or makes available an AI chatbot that “promotes” or “encourages” suicide, non-suicidal self-injury, or imminent physical or sexual violence. While this provision rightly addresses the serious issue of harms associated with minors using AI, it bypasses other, less burdensome interventions, such as AI literacy initiatives, and hands courts a highly ambiguous line-drawing challenge.

While protecting young people from harm is a well-intentioned impulse, this type of burden is deeply problematic and could limit far more speech than intended, including content that could be helpful, such as how to find counseling or report abuse. Most platforms interpret such terms broadly, and the result can be to cut off access to critical information for the very people seeking help. Even in court, these categories would be highly subjective; unlike obscenity, they are not clearly defined as receiving lesser First Amendment protection. A judge would have to draw the line between an output that merely refers to imminent physical violence, such as advice on self-defense against an attacker or a description of any number of news stories and movies, and an output that “encourages” such violence. And automated moderation is notoriously prone to false positives; social media filters have flagged innocuous chess and gardening content, for example. As Brown v. Entertainment Merchants Association, in which the Supreme Court struck down a California law limiting minors’ access to violent video games, makes clear, the Court remains skeptical of such distinctions and limitations from the state, even when made under the guise of protecting children.

The second is a requirement that such chatbots notify users every 30 minutes that they are AI systems, not humans. No conclusive research shows that such notices have any positive effect on users; the odds seem higher that such frequent reminders will simply numb users to more meaningful notices.

3. Weaponizes the Copyright Act of 1976 to act as a barrier to AI development

The Intellectual Property Clause is explicit that the purpose of copyright and patent laws is “to promote the progress of science and useful arts.” Yet, this provision would ignore the underlying basis for copyright law as we know it today. The Copyright Act of 1976 would instead be used as a barrier to training AI tools that have the potential to democratize expertise the world over. 

What’s more, this provision carries a severe risk of hindering AI competition domestically and causing the United States to fall behind globally. China is moving full steam ahead on collecting data and sharing it with innovators and researchers, a strategic move that will allow the country to continue training and fine-tuning models with greater capabilities. US labs, by contrast, have been ringing the alarm about a shortage of quality data for years.

Rather than map out a national strategy for providing an ongoing supply of data to these leading firms, much of the training-data discourse has centered on litigation over copyright claims. In truth, no one but major studios wins from aggressive copyright enforcement. The creators whom Senator Blackburn allegedly had in mind when crafting this provision tend not to register their works for copyright protection, and even those who do likely lack the funds to pursue infringement claims.

The nation should have a framework for ensuring a vibrant cultural scene and robust support for the arts. (This is a nuanced issue, as Jennifer discusses in her Liberty University Law Review article.)

4. Responds to perceived bias against conservative figures in AI systems by requiring third-party audits to prevent discrimination based on political affiliation

It’s old news that some AI labs have released models specifically trained to advance certain ideologies and perspectives. Rather than mandating that all labs release homogeneous models that please everyone and, by extension, serve no one, policymakers should lean on competition. More models with myriad perspectives and tendencies will allow consumers to decide which best aligns with their values and preferences.

Instead, Senator Blackburn would not only compel labs to train their models in certain ways but also create an AI-audit industrial complex. This raises serious speech concerns and opens the door to significant government pressure on companies to design their tools in ways that align with political priorities. Given Anthropic’s recent experience with the Pentagon, such an opportunity seems ripe for abuse if government bureaucrats do not like a developer’s restrictions or a model’s preferences.

These actions could also backfire on the conservatives the bill claims to protect. First, they could prevent the development of products focused on specific values, such as one aimed at particular religious groups that promises to keep out inappropriate content. Second, as seen in the debate over the Florida and Texas laws challenged in the NetChoice cases, political viewpoint is a much broader category than many may initially perceive. The bill could force an AI to provide anti-Semitic or homophobic content even if it was designed for a Jewish or LGBTQ audience.

Mandated audits thus raise concerns about a chilling effect, could infringe on developers’ expressive choices in product design, and could limit the options available to the very groups they claim to help.

5. Harms competition by enabling the US attorney general, state attorneys general, and private actors to sue AI developers under defective design, failure-to-warn, express warranty, and unreasonably dangerous or defective product theories for harms caused by their systems

As mentioned, training an AI model is not akin to designing and releasing a new car, because models are nondeterministic. An AI developer may have a general purpose in mind for a design or make different decisions about how a model should weigh various inputs, but the model will still behave in unpredictable and, in many respects, unknowable ways. While labs should be held accountable for knowingly releasing flawed tools, existing law likely already allows for that: state attorneys general have broad authority to shield the public from unfair and deceptive acts or practices. A better approach would focus on clarifying enforcement of existing law rather than trying to squeeze AI models into a paradigm that saddles them with misguided liability for their users’ choices.

This proposal is especially flawed because it allows private actors to file suit. Rather than confining enforcement authority to state attorneys general, the bill invites litigators to target the well-intentioned yet resource-strapped innovators we’re counting on to build the future. Whereas big labs can absorb such suits, smaller players may face difficult choices about litigation costs, and those costs mean fewer resources for developing or improving a product even if the company ultimately wins in court. Such litigation could make it harder for smaller players to compete in the long run.

Conclusion

These flaws alone should give pause, given the broad impact this proposal could have on innovators and consumers; yet this analysis is far from exhaustive. Many other shortsighted provisions warrant alarm from all those seeking to advance an AI agenda that aligns with America’s reputation as an innovative and free society. For instance, Senator Blackburn would have Congress sunset Section 230. (That is a bad idea for free expression and innovation for many reasons, including ones covered in Jennifer’s and David’s previous work.) Hopefully, any successful federal policy framework for AI will reflect an optimistic and light-touch approach, not one that burdens entrepreneurship and hurts both consumers and innovators.