University of Maryland survey found that support for AI regulation crosses party lines
In sum – what you need to know:
–United on AI – Large majorities of U.S. citizens, regardless of party affiliation, are wary of unregulated artificial intelligence and want to see AI regulation of some sort that limits or guides the technology’s development.
–Concern about competitiveness – While survey participants said they do not want government AI regulation to stifle development, and are concerned about China outpacing the U.S. in AI, they were more concerned about the potential harms of AI going unaddressed until after the fact.
A newly released survey from the University of Maryland has found that there is broad support across party lines for government regulation of artificial intelligence.
The survey was fielded from July 30 through August 7, 2025, among 1,202 adults nationwide, by the Program for Public Consultation (PPC) at the University of Maryland’s School of Public Policy.
“Clearly Americans are seriously concerned about the current and potential harms from AI,” said Steven Kull, director of PPC. “And while the public is wary of government regulation, they are clearly more wary of the unconstrained development and use of AI.”
The survey found that while majorities of 59-69% of those surveyed agreed that they didn’t want to stifle AI innovation (and thought the private sector could move faster than the government to address risks), significantly larger majorities (77-84%) wanted to see responsible innovation, agreeing that it was better to be preventive than reactive about the potential harms of AI.

“Instead of letting things out into the world and then reacting based on flaws, they must be functioning to their best ability beforehand,” one participant told PPC as part of the survey on AI regulation.
The PPC survey found that bipartisan majorities continue to favor ideas such as government certification of AI to evaluate whether models violate regulations, have security vulnerabilities or make biased decisions (79% support overall, with 84% support among Republicans and 81% support among Democrats), and allowing government audits of AI software that is already in use, with companies being required to fix any problems (78% support, including 82% among Republicans and 78% among Democrats).
The PPC survey also found support for requiring companies to disclose to the government, upon request, how decision-making AI was trained. There was also very high support (80% overall, with support stronger among Republicans) for prohibiting, or requiring clear labeling of, entirely fabricated deepfakes in political advertising.
The findings were in line with a previous survey conducted by PPC in the spring of 2024, when a survey of more than 3,600 registered voters concluded that “very large bipartisan majorities favor giving the federal government broad powers to regulate artificial intelligence.”
Other attempts to take the temperature of U.S. residents on how or whether they think AI should be regulated have had similar findings. In April of this year, Pew Research surveyed both experts and laypeople about AI and found that while experts were generally more positive about the potential impacts of AI, about six out of 10 U.S. adults, and 56% of AI experts surveyed, were more concerned about the U.S. government not going far enough in regulating AI, than they were about it going too far in regulation.
Separately, a June 2025 survey of more than 1,000 registered voters, commissioned by tech advocacy organization TechNet and conducted by Morning Consult, found that most respondents were concerned about China outpacing the United States in AI development, and preferred a national AI regulatory framework over state-dominated regulation of AI.
However, the Trump administration recently released a federal AI Action Plan that takes a deliberately light hand with any limitations on AI, and orders a repeal of existing federal regulations, or reexamination of legal decisions, that “unnecessarily hinder” AI development or deployment. It also focuses on international export of American AI technology, framing the largest AI ecosystem as the one that will set global standards.
“Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race,” wrote authors Michael Kratsios, assistant to the president for science and technology; David Sacks, special advisor for AI and crypto; and Marco Rubio — identified as assistant to the president for national security affairs, rather than as secretary of state — in the introduction to the AI Action Plan.
“AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level,” the plan said. “The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” However, the plan also directs agency heads with AI-related federal funding to “limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”
The AI Action Plan directs the National Institute of Standards and Technology (NIST) to revamp its AI risk management framework to eliminate references to misinformation, DEI and climate change. It outlines promotion of a “try-first” AI climate for American businesses; directs federal agencies to prioritize AI skill development and investment in AI adoption, including in next-generation manufacturing to “usher in a new industrial renaissance” and in AI-enabled science and data sets; and drives adoption of AI within government, including in the Department of Defense.