The US AI race won’t be won by speed alone — without trust and regulation, dominance could collapse before it begins.

America’s AI Action Plan states in no uncertain terms that to maintain America’s dominance in AI, we must “remove red tape and onerous regulation.” Seeking to differentiate itself from its predecessor, the Trump administration has argued that restricting AI development with onerous regulation “would not only unfairly benefit incumbents… it would mean paralyzing one of the most promising technologies we have seen in generations,” which is why President Trump rescinded what it calls the Biden administration’s dangerous actions on his very first day in office.
But what happens to trust and security when the focus is on accelerating innovation without appropriate guardrails? It is the age-old struggle between regulation and innovation, a constant balancing act where leaders must decide how to remove barriers without stifling the very safeguards that keep technology sustainable as it scales.
Having spent nearly 15 years advising clients ranging from startups to Fortune 50 companies on how to adapt new technologies to existing and new legal frameworks, I understand the frustration with bureaucratic obstacles that slow innovation. This is particularly true when the government passes laws that do not reflect technological or operational realities. Case in point, during Mark Zuckerberg’s 2018 Senate hearing, the senators’ questions laid bare a striking lack of familiarity with even the basic workings of the Internet, underscoring the disconnect between policymakers and the technologies they seek to regulate.
But the reality is that speed without safeguards rarely ends well. To truly sustain American dominance in AI, citizens must trust the technology. They must also feel secure allowing their data to be used for training, since data is the lifeblood of AI and the development of large language models cannot advance without it. In the absence of regulation that protects this trust, the foundations of US leadership in AI will begin to weaken.
The choice before America is not a stark one between so-called “innovation-killing regulation” and unchecked “freedom-first governance.” That is a false dichotomy. The real path forward is crafting thoughtful, well-designed regulations that provide the durable foundation on which innovation can scale.
Innovation versus regulation
Having worked with clients in heavily regulated industries like advertising, healthcare and defense, I can tell you that oversight isn’t inherently anti-innovation. Regulation done thoughtfully can accelerate adoption, because it builds confidence among users, employees and investors.
Without strong safeguards against threats like adversarial attacks, data misuse or intellectual property theft, large-scale adoption becomes difficult. No one wants to deploy an AI tool only to discover later that it leaked sensitive data, exposed proprietary IP or became a new attack surface for adversaries. Beyond the immediate operational and security fallout, there’s also the risk of lawsuits over data misuse, regulatory penalties or contractual breaches. For many organizations, the uncertainty of those risks is enough to slow or even halt adoption until stronger safeguards are in place. In fact, a recent Forrester report shows that data privacy and security concerns remain the biggest barrier to generative AI adoption. Building trustworthy AI requires attention to privacy, cybersecurity and AI governance.
AI isn’t just a race for speed; it’s a race for trust
AI isn’t just about faster chips, bigger models or who gets to market first. It’s about whether enterprises, governments and individuals feel confident enough to use it in the first place. The hesitation around DeepSeek, a Chinese artificial intelligence system, illustrates this point, as many potential users and governments remain wary due to unresolved privacy and cybersecurity concerns that undermine trust in the system and threaten national security.
We don’t have to speculate about what happens when trust is ignored. The crypto industry offers a cautionary tale for revolutionary technologies. Without regulation tailored to the unique nature of blockchain, the space was plagued by cyberattacks, privacy failures, security breaches and widespread illicit use. Now, as regulators begin clarifying the legal landscape, such as by requiring regular public disclosures and compliance with anti-money laundering and export control laws, many in the industry argue that digital assets can finally gain legitimacy and move into the financial mainstream.
When trust collapses, adoption stalls, and regulation becomes reactive rather than strategic. By the time governments step in to restore confidence, the damage to innovation momentum can be severe and long-lasting.
Beyond finance, blockchain adoption in other sectors reveals the same pattern. A study about blockchain use cases in healthcare, for example, revealed that the promise of secure, patient-centric data management has run headfirst into barriers around privacy, security, scalability and cost. Blockchain adoption in healthcare has stalled as privacy and security gaps, high data volumes, lack of standardization and limited interoperability can make it costly, inefficient and often noncompliant with regulations such as the EU’s General Data Protection Regulation.
More than a decade on, blockchain’s potential remains real, but its trajectory shows how the absence of early safeguards and strategic regulation can delay legitimacy and adoption.
Privacy and security as strategic assets, not red tape
Taking a page from the experience with blockchain and crypto, where the lack of regulatory clarity delayed adoption, the United States now has an opportunity to shape an approach to AI in which privacy and cybersecurity are treated as strategic assets.
I recommend the following strategic steps:
- Embedding cybersecurity and privacy from the start. Just as “privacy by design” became a foundational best practice for data protection, “AI governance by design”, as reflected in NIST AI Risk Management Framework, calls for embedding cybersecurity and privacy into the earliest stages of AI development rather than adding them later.
- Treating red-teaming and adversarial testing as competitive advantages. In AI testing, “red teaming” refers to simulating the tactics of an attacker in order to probe for weaknesses, a practice borrowed from cybersecurity. It is critical because it helps ensure that an AI system functions as intended and does not expose vulnerabilities that could undermine its reliability, security or trustworthiness.
- Incentivizing public–private collaboration. Public-private collaboration is essential to advancing sensible AI regulation because it brings together the complementary strengths of government and industry. Governments provide oversight, funding and access to public data, while companies contribute technical expertise, innovation and market solutions. By working together, these partnerships help close resource and knowledge gaps, establish shared ethical standards and ensure that AI is developed in a way that is both globally inclusive and locally accountable.
- Building regulatory alignment across borders Harmonizing AI laws across borders is critical because fragmented regulations slow innovation, weaken safety and limit equitable access. A healthcare algorithm that meets EU data governance standards might still violate certain US state laws or face export restrictions in China, making global deployment difficult. Startups and smaller firms with limited resources to address complex regulatory regimes are hit hardest, while larger enterprises gain an advantage navigating the patchwork of rules.
- Building a federal privacy framework. The absence of a unified federal privacy statute has left the United States with a patchwork of state and local rules governing AI and data protection. As new regulations emerge at the state level, and in some cases even at the municipal level, businesses face compliance challenges. For AI companies that rely heavily on data, this fragmented landscape creates inefficiencies, higher legal costs and operational uncertainty, underscoring the urgency of a single, nationwide standard.
Final thoughts: We need a balanced approach
The path forward requires recognizing that America’s AI Action Plan sets ambitious goals for technological sovereignty and market leadership, but achieving these goals demands more than deregulatory enthusiasm. It requires building infrastructure that enables widespread and sustainable AI adoption.
The organizations and countries that understand this dynamic will capture the largest share of AI’s economic benefits.
Reducing oversight doesn’t remove responsibility. In a world where AI models can make choices, produce content and influence public opinion, a balanced approach to governance is essential.
This article is published as part of the Foundry Expert Contributor Network.
Want to join?