Right now, the U.S. is rewriting the rules for the age of artificial intelligence—and the changes will shape not only the speed of innovation but also the values, safeguards, and power structures that govern it for decades to come.
In July, the government published its “Winning the Race: America’s AI Action Plan,” framing artificial intelligence as the next monumental leap in human progress. The rhetoric is sweeping: AI will deliver “an Industrial Revolution, an information revolution, and a renaissance—all at once.” It promises to revolutionize materials science, medicine, energy, education, and even the decoding of ancient texts. The vision is not just about new tools—it’s about unlocking “a golden age of human flourishing,” securing U.S. economic dominance, and protecting national security.
The key, according to the plan, is to “achieve and maintain unquestioned and unchallenged global technological dominance.”
The Three Pillars of “Winning the Race”
The plan’s path to this dominance rests on three pillars:
- Accelerate AI innovation
- Build American AI infrastructure
- Lead in international AI diplomacy and security
On paper, the vision is exciting, bold, and dynamic: slash regulatory “red tape” to ignite innovation, scale up America’s AI infrastructure with chips, power, and skilled talent, and export a U.S.-designed AI stack as the global default—backed by ironclad security. The message is clear: move fast, build big, and set the standard before anyone else does.
The Allure of the Upside
It’s easy to see why supporters are excited. Startups and tech giants could innovate at lightning speed without being slowed by compliance costs. Open-source AI communities could see their models become the global gold standard. Infrastructure builders—from chip foundries to energy companies—could break ground faster than ever. Even defense contractors stand to benefit from a secure, U.S.-led AI stack adopted by allies.
Proponents point to a future of rapid breakthroughs, booming job creation, and a decisive lead in setting the rules of global AI. From their perspective, this is America’s moonshot moment, and a decisive one in the global AI arms race.
But Here’s the Catch: “Cutting Red Tape” Means Cutting Ethics
From the perspective of someone deeply committed to the responsible implementation of AI—and who approaches the concept of “human flourishing” with grounded reasoning rather than as a feel-good slogan—the most alarming aspect of this plan is how “cutting red tape” has become a euphemism for dismantling ethical safeguards.
The plan’s first pillar, “Accelerating AI Innovation,” outlines 15 broad recommendations. The first two set the tone for its ethical stance on innovation:
- Remove “onerous” regulations
- Ensure “Frontier AI” protects free speech and American values
One of the administration’s earliest moves after taking office was to roll back Biden-era executive orders that had established frameworks for risk mitigation and ethical oversight in AI—measures viewed as obstacles to innovation. Biden’s Executive Order 14110 aimed to advance AI in ways that were safe, ethical, and equitable. It required rigorous pre-deployment testing, continuous monitoring to prevent misuse, bias, and harm, and a commitment to fairness and accountability. It explicitly rejected using AI to entrench discrimination and called for active engagement with affected communities.
Those sound principles have now been abandoned. In their place, the Winning the Race action plan is paired with a fresh executive order titled “Preventing Woke AI in the Federal Government.”
As part of this shift, the NIST AI Risk Management Framework—a voluntary guide for identifying and managing AI risks—will be stripped of references to misinformation, diversity, equity, inclusion (DEI), and climate change.
In addition, a new “Unbiased AI Principles” standard will be applied to federal contracts. Under this policy, the federal government will only invest in, or contract with, companies whose AI systems are deemed ideologically neutral. These standards require that large language models (LLMs) be:
- Truth-seeking: providing factual information, prioritizing historical accuracy, scientific inquiry, and acknowledging uncertainty.
- Ideologically neutral: avoiding partisan or ideological “dogmas” such as DEI unless directly prompted by the user.
Even if these principles were technically achievable with today’s LLMs—which they are not—they still collapse under their own logic. You cannot demand “truth” while forbidding discussion of climate change. You cannot claim “neutrality” and self-righteously trumpet “freedom of speech” while preemptively erasing entire categories of discourse. And you certainly cannot denounce “top-down ideological bias” while aligning with platforms whose AI systems already filter every output through a single political lens.
The Hypocrisy in Practice
Case in point: Truth Social’s newly launched LLM chatbot, owned by the leader of the current administration and powered by Perplexity AI—a company already criticized for ignoring web data-protection protocols. Testing by Wired found the chatbot sourced all answers exclusively from conservative outlets. Perplexity defended this as “source selection,” saying developers can filter information for their audience. But this is exactly the kind of ideological shaping the Action Plan claims to oppose. The irony is hard to miss.
The Risks of Deregulated AI
That these requirements apply only to federal use, not to private companies, is sadly little consolation. The reality is that when speed is rewarded, competitive pressure pushes everyone toward the same deregulatory model—especially when major players like OpenAI are signing government contracts—and they have done exactly that since the publication of the new AI Action Plan.
The problem is that responsible innovation takes time:
- Time to address algorithmic bias, often baked into training data or introduced unconsciously by developer teams.
- Time to reduce hallucinations and misinformation, which remain major unsolved challenges in LLMs.
- Time to mitigate harms like toxic online discourse, echo chambers, and erosion of trust in information.
Stripping away these safeguards isn’t just risky for individuals—it’s dangerous for the integrity of democracy and society as a whole.
Why “Human Flourishing” Risks Becoming Empty Rhetoric
Throughout the plan, “human flourishing” is used as a catch-all justification for deregulation. But without guardrails, the phrase risks becoming an empty slogan—a feel-good cover for a race to deploy powerful AI without ensuring it’s safe, equitable, or accountable.
True human flourishing isn’t just about faster innovation. It’s about innovation that benefits society without amplifying existing harms. And that requires deliberate, often inconvenient, governance.
What’s Really at Stake
The U.S. can lead in AI. But leadership isn’t just about being first—it’s about setting a standard worth following. A strategy that sacrifices ethics for speed might deliver short-term wins but risks long-term trust, stability, and global credibility.
Once the norms are set—both domestically and internationally—they’ll be hard to undo. If those norms are built around partisan selectivity, deregulation, and an absence of safeguards, we may find ourselves in an AI-powered future that’s fast, powerful, and deeply flawed.
The future of AI in America is being written right now. If we want that future to be not just innovative but also safe, fair, and genuinely beneficial, we need to question whether “winning the race” should come at the cost of the safety rails that make technological dominance worth having.

Alexandra Frye
The Digital Ethos Group
Alexandra Frye edits the Technology & Society blog, where she brings philosophy into conversations about tech and AI. With a background in advertising and a master’s in philosophy focused on tech ethics, she now works as a responsible AI consultant and advocate.