Artificial intelligence may be a massive industry in the United States, but the regulatory framework governing it varies depending on the jurisdiction. Although most states have passed laws regulating AI’s development and use, there are no comprehensive federal AI laws.
U.S. President Donald Trump signed an executive order on 11 December aiming to block states from enforcing the AI regulations they have passed. The order seeks to create a “minimally burdensome” AI regulatory framework, claiming that regionally dependent rules would impede AI’s development in the country.
However, it does not have the same force as legislation and would likely face legal challenges. Congress has also not shown interest in passing similar laws.
The U.S. AI market’s size was $54.09 billion in 2024, according to Fortune Business Insights, and is expected to have reached $66.42 billion in 2025. Most of the world’s largest and most prominent AI-related companies are based in the U.S., including Nvidia, OpenAI, Meta, Alphabet, and Oracle.
What are the current laws?
There are currently no comprehensive federal laws regulating AI’s development, with this instead being left to state governments.
All 50 states’ legislatures, as well as Puerto Rico, the Virgin Islands, and Washington, DC, introduced AI regulations in 2025. Thirty-eight states had passed laws regulating AI as of July 2025.
California’s AI regulations, passed in September, require developers of major AI models to disclose risk assessments. Colorado and Illinois, meanwhile, have passed statutes that would prevent employers from using predictive AI tools that result in discrimination.
Republican-governed states have also enacted AI regulations, including Texas’ ban on using AI systems to manipulate behaviour, discriminate, or create illegal deepfakes.
AI companies, however, have heavily lobbied the Trump administration to cut regulations on the technology. Nvidia CEO Jensen Huang met with Trump in November, saying inconsistent laws across states would threaten AI’s development in the U.S.
In July, the Trump administration published its ‘America’s AI Action Plan’, a set of policy recommendations for governing the industry. It encourages minimal regulation, limiting AI-related discretionary funds for states with stronger laws, and avoiding government contracts for AI models the administration considers ideologically biased, similar to the eventual executive order.
“To maintain global leadership in AI, America’s private sector must be unencumbered by bureaucratic red tape,” according to the policy plan.

What’s in the executive order?
The executive order aims to block states from enforcing their own AI regulations, arguing that extensive regulation would stifle AI development and investment.
“State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups,” the order claims.
It directs the Department of Justice to create a litigation task force to challenge state AI laws that the administration views as burdensome. The administration does not plan to challenge every state AI law, according to Trump’s special advisor for AI and cryptocurrency David Sacks.
However, these challenges could fail in court. The Department of Justice is expected to argue that these state laws violate the federal government’s right to overrule conflicting state laws or regulate interstate commerce, according to law firm Gibson Dunn, but “neither would likely succeed”.
The order also asks the Department of Commerce to study whether it could withhold federal funding for programs like rural broadband from states with comprehensive AI laws. Legal precedent holds that conditioning funds in this way cannot be used to compel states to pass or repeal laws, per Gibson Dunn.
Additionally, it alleges that some state laws “are increasingly responsible for requiring entities to embed ideological bias within models”. The order would require the Federal Trade Commission to issue a policy statement saying laws that impact AI models’ outputs, including Colorado’s statutes, would violate the agency’s rules against deceptive practices.
What happens next?
The executive order does not have the force of congressional legislation and is likely to face legal challenges that could halt its implementation.
“This EO is going to hit a brick wall in the courts,” said Brad Carson, president of AI regulation advocacy nonprofit Americans for Responsible Innovation.
Although the order calls for a regulatory framework for AI to be set out in partnership with Congress, Congress has so far been unwilling to pass provisions that would forestall the enforcement of state AI laws.
In July, it near-unanimously voted down a 10-year moratorium on enforcing state AI regulations that would have been part of the ‘One Big Beautiful Bill’ spending package, and in December it declined to add the same clause to an annual defence policy bill despite Trump’s demands.
Democratic senators have also introduced a bill that would prevent this executive order from being enforced. “While I am confident that the courts will strike down Trump’s illegal power grab, Congress has a responsibility to assert its legislative authority and block this Executive Order,” said Massachusetts Senator Ed Markey, the bill’s sponsor.
Even if aspects of this executive order do enter into effect, however, state AI laws covering certain issues would remain in place. The order exempts regulations targeting child safety, data centre infrastructure, state government AI contracts, and “other topics as shall be determined”.
