🔆 The US AI Action Plan: Three Core Pillars
The Trump Administration's latest approach to AI, age bias lawsuits, and leaked ChatGPT threads.
🗞️ Issue 82 // ⏱️ Read Time: 5 min
In this week's newsletter
What we’re talking about: “America's AI Action Plan,” a 28-page strategy that reshapes how the U.S. approaches AI development, regulation, and global competition.
How it’s relevant: This plan affects every organization using AI, from federal procurement rules to state regulatory authority, marking the biggest shift in U.S. AI governance since the technology entered mainstream use.
Why it matters: The plan represents a philosophical reversal from safety-first to competition-first AI policy, with implications that could ripple through business, civil rights, and international relations for years to come.
Hello 👋
The U.S. approach to AI has taken a dramatic turn. Where the Biden Administration emphasized safety testing and careful risk evaluation, the Trump Administration's new AI Action Plan declares a different priority: "win the AI race" against China, with safety regulations framed as obstacles to victory.
Released on July 23rd, 2025, the plan explicitly presents this as a binary choice: either accept the risks of rapid AI deployment or lose technological leadership to adversaries. For business leaders, this shift creates immediate compliance challenges and strategic decisions that could define competitive advantage for years.
Today, we’ll cover what the plan says, the near-term business impact, cross-industry responses, and a few potential outcomes.
What the Plan Says
America’s AI Action Plan organizes around three core pillars:
Pillar I: Accelerate AI Innovation focuses on removing regulatory barriers:
Rescinding Biden-era AI safety requirements
Updating federal procurement to exclude AI systems that reference "misinformation, Diversity, Equity, and Inclusion, and climate change."
Cutting federal funding to states with "burdensome AI regulations"
Encouraging open-source and open-weight AI
🚨 Claiming objectivity and free speech while actively excluding references that don’t align with the White House’s views is an inherent contradiction.
Pillar II: Build American AI Infrastructure promises rapid infrastructure development:
New environmental exemptions specifically for AI infrastructure
Federal lands made available for data center construction
Streamlined permitting that bypasses traditional environmental reviews
Workforce training programs for AI-related jobs
🌎 Want to read more about the environmental impact of AI? Check out our 3-part series.
Pillar III: Lead in International AI Diplomacy and Security targets global competition:
"Full-stack AI export packages" for allies willing to join America's AI alliance
Enhanced export controls on AI chips and technology to China
Leveraging international standards bodies to promote "American values"
Evaluating frontier AI models for national security risks
ℹ️ The full AI tech stack includes hardware, models, software, applications, and standards.
While we’re not covering this today, it’s important to know that the U.S. AI Action Plan fits into a larger global narrative, including the subsequent release of China’s AI Action Plan.
The Business Impact: What Changes ASAP
Three executive orders were signed alongside the plan:
Preventing Woke AI in the Federal Government → Federal Contracting Changes:
Government agencies must evaluate AI vendors for "ideological bias." This affects billions in federal contracts and creates potential conflicts for companies operating across states with varying AI regulations. Timeline: Government guidance by November 20, 2025
Accelerating Federal Permitting of Data Center Infrastructure → Expedited Infrastructure:
Data center developers gain expedited permitting and federal land access, while facing new security requirements around foreign technology. Timeline: EPA guidance by January 19, 2026
Promoting the Export of the American AI Technology Stack → Deepening Reliance:
Bundled U.S. AI exports could deepen international reliance on American technology. European tech businesses argue this could channel significant European investment into US industries rather than supporting European capacity building. Timeline: American AI Exports Program by November 20, 2025
How does the shift in the U.S. approach to AI impact your organization’s strategic priorities?
Industry Reactions: The Divide
👍 Support: TechNet (aka "Tech's most powerful advocacy group") praised the plan's focus on “removing regulatory barriers to innovation.” NetChoice called it "night and day" compared to the previous administration's "command and control" approach.
👎 Opposition: The ACLU called the state preemption effort "harmful" and legally questionable, and researchers worry that eliminating references to climate change and bias undermines scientific integrity.
What Happens Next
Business leaders face a fundamental choice:
Optimize for unregulated federal incentives,
Maintain stakeholder trust through responsible AI practices,
Or try to navigate both simultaneously.
The plan's most striking contradiction lies in promising "objective" AI while making explicitly ideological choices about what topics AI systems can consider. This reveals the challenge of technology governance: who defines objectivity, and should governments make that determination?
Big tech news of the week…
⚖️ A U.S. District Court judge recently ordered Workday, the leading provider of enterprise cloud applications for finance and human resources, to provide a full list of customers who enabled HiredScore AI features in their hiring process, following a collective action lawsuit alleging age bias against candidates aged 40 and up.
🛑 Anthropic revoked OpenAI’s API access to its Claude family of AI models, citing violations of its terms of service, which explicitly prohibit customers from using its AI tools to build competing products, train rival models, or reverse engineer its systems.
🔓 ChatGPT users discovered that shared chats were surfacing in Google search results, even ones with sensitive or private information. While ChatGPT doesn’t display users’ identities, some users may have identified themselves by sharing highly specific personal information during the chats. OpenAI quickly disabled the "Make public" toggle that enabled sharing, and it's now working with search engines to de-index exposed chats.
Until next time.
On behalf of Team Lumiera
Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.
Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.
What did you think of today's newsletter?

Disclaimer: Lumiera is not a registered investment, legal, or tax advisor, or a broker/dealer. All investment/financial opinions expressed by Lumiera and its authors are for informational purposes only, and do not constitute or imply an endorsement of any third party's products or services. Information was obtained from third-party sources, which we believe to be reliable but not guaranteed for accuracy or completeness.