Senator Cruz Leads on AI
By James Erwin
Yesterday, Senator Ted Cruz moved to codify President Donald Trump’s AI Action Plan with legislation. At a Commerce Committee hearing with White House Office of Science and Technology Policy Director Michael Kratsios, Senator Cruz introduced a five-pillar framework for a comprehensive AI package, along with a regulatory sandbox bill to serve as a starting point.
Only the sandbox bill, known as the SANDBOX Act, has been written. The other pillars are broad principles intended to result in legislation shortly, all of which can be bundled into one big, beautiful AI package. The five pillars are:
I. Unleash American innovation and long-term growth
This includes the SANDBOX Act, streamlined permitting for AI infrastructure like data centers and power plants, and opening the vast trove of federal data to AI model training. The SANDBOX Act, modeled on similar state legislation in Utah, would allow developers to request waivers for regulations that inhibit the development of artificial intelligence. It also creates a sort of reverse CRA process, or perhaps a regulatory rescission, whereby the OSTP director can identify an existing regulation as a barrier to innovation and send it to Congress for a repeal vote. This bill is a great first step that will accelerate the process of repealing outdated rules that applied to older technology.
But to power and enable all of this computing, we need more data centers and data. A lot more. Streamlining federal permitting and working with states to do the same will help meet this demand. Data centers will bring thousands of construction jobs and permanent engineering positions to the small towns where they will be built, since each facility requires several acres of land. The Nuclear Regulatory Commission will also have to become much more cooperative and have some of its authority curtailed to enable small nuclear reactors to come online, providing cheaper electricity to both data centers and nearby towns. More gas and oil permitting will also be necessary.
Training the models, on the other hand, requires access to data. This raises a whole other debate about privacy and copyright. Who has access to what works to train language models? What books, movies, or personal data will be available at no cost? Many publishers have dollar signs in their eyes about licensing content for training, and privacy advocates are understandably concerned about what algorithms are allowed to read. A partial solution is all of the data held by the federal government. Millions of pages of documents and non-sensitive information about people from personnel records, the Census, military or VA hospitals, the NIH – this is a dragon’s hoard of treasure for training, and the government serves no public interest by making it unavailable for language model training.
II. Protect Free Speech in the Age of AI
We have long supported efforts to stop government jawboning of social media companies, and this must not be allowed to happen to large language models. Furthermore, free speech must be included in NIST’s standards, and the U.S. government must stand up to censorship of Americans by foreign governments seeking to regulate AI and social media.
The Trump administration has taken steps to reverse the jawboning of the Biden administration, even if President Trump will not let the practice die out completely. The administration has also aggressively negotiated against other nations’ digital trade barriers, including censorship of Americans by foreign government officials, such as Brazilian judges.
III. Prevent a patchwork of burdensome AI regulations
Federal preemption of state AI laws is arguably the most critical priority that Senator Cruz and President Trump have both endorsed. As we have written before, AI developed in any one state will not stay there and remain subject solely to local jurisdictions. AI is plainly interstate commerce, and Congress should invoke the Commerce Clause to prevent an unnavigable patchwork of regulations restricting developers. Like the Internet Tax Freedom Act before it, a federal AI moratorium would assert Congress’s right to regulate AI, preventing hasty state action that could derail its potential.
And states are acting with (undue) haste. In California, last session’s Senate Bill 1047 would have imputed liability for “harmful” uses of AI to developers rather than the people who actually commit crimes with the technology, similar to the progressive attempt to hold gun manufacturers liable for mass shootings. California is now considering a bill to defeat the purpose of AI by requiring human oversight of all decision-making. Developers would even have to conduct annual impact assessments to avoid “bias” and “harms” after algorithms are deployed. Federal preemption is urgently needed, which can be attached to a comprehensive package as either a temporary or permanent moratorium. The ITFA also began as a temporary moratorium that was repeatedly renewed until it was made permanent once everyone was used to it.
During the hearing, Chairman Cruz reaffirmed his support for a moratorium and pressed Mr. Kratsios to do the same:
Chairman Cruz: “Why does the Administration believe that state AI laws and regulations such as those in California and Colorado pose a threat to AI deployment and innovation in the United States? And does the Administration support preemption of those laws?”
Mr. Kratsios: “A patchwork of state regulations is anti-innovation. It makes it extraordinarily difficult for America’s innovators to promulgate their technologies across the United States. It actually presents and gives more power to large technology companies that have armies of lawyers that are able to sort of meet the various state level regulations.”
Additionally, the framework calls for the U.S. government to fight foreign regulation of AI that might affect Americans. Excessive foreign regulation of AI must be combated by the U.S. using our economic and soft-power leverage, just as the Trump administration is doing with digital trade barriers and censorship of Americans.
IV. Stop nefarious uses of AI against Americans
This can be a nebulous concept and invites a level of regulation with which we are uncomfortable, but the Cruz framework appropriately constrains the potential censorship or privacy violations involved. Congress should update existing statutes to protect against digital impersonation scams and frauds and expand the principles of Senator Cruz’s already-passed Take It Down Act. These are appropriately narrow policies to prevent things that should be crimes, rather than making crimes out of activities that could potentially cause someone distress.
V. Defend human value and dignity
The framework calls for reinvigorating bioethical considerations in federal policy and outlawing AI-driven eugenics and other threats to human dignity and flourishing. This should go without saying, but eugenics is bad. New gene-editing possibilities, however, will likely reopen this debate. Bioethics has not been a subject of national debate since the George W. Bush administration, at least until vaccine mandates were publicly litigated during COVID. Those who value human dignity need to come to this debate armed with strong arguments, which this framework aims to provide.
In all, Cruz’s is a very promising framework. The SANDBOX Act is a good bill on its own, but bolstered by further legislation adhering to these principles, it will be a generational accomplishment to get over the finish line. Godspeed to the good Senator.