🔆 When Governments Become AI-First
AI's growing role in government, newly blocked bots, and the most relevant satirical startup.
🗞️ Issue 77 // ⏱️ Read Time: 7 min
In this week's newsletter
What we’re talking about: The growing adoption of AI tools by government agencies and officials worldwide, moving beyond policy frameworks to actual daily implementation across defense, intelligence, and public services.
How it’s relevant: Governments globally are heavily invested in exploring AI technologies, driven by both the opportunity to improve citizens' lives and the inherent risk of falling behind other nations in this critical field.
Why it matters: AI has profound potential to reshape public services, national security, economic landscapes, and international relations, while introducing complex ethical, societal, and geopolitical challenges that will ripple through every system.
Hello 👋
If you’ve ever stood in line to renew your driver's license or waited weeks for a permit from a local government agency, you’re aware of how outdated some government systems can be. The idea of using AI to streamline these processes and provide more efficient services to citizens sounds like a no-brainer.
But the rise of AI adoption in government isn’t only about digitizing backlogs of analog documents and deploying public-facing chatbots on government websites. It’s also about intelligence gathering and threat detection, cybersecurity, autonomous weapons systems, and more.
These aren’t trivial use cases; the stakes are high and the risks are higher.
The Strategic Imperative Behind Government AI Adoption
This isn't just about efficiency gains; it's about national competitiveness. AI has become a key area of strategic competition, with major global powers like the United States and China vying for leadership. Many government leaders believe that embracing and mastering AI will provide a competitive advantage over other nations and protect them from potential conflicts.
The numbers reflect this urgency. Since 2024, more than 90,000 users across over 3,500 U.S. federal, state, and local government agencies have exchanged over 18 million messages using ChatGPT to support daily work. Twenty of 23 major federal agencies reported about 1,200 current and planned AI use cases, spanning everything from drone analysis to dataset processing.
This phenomenon isn’t unique to the US. Globally, there are over 1,000 AI policy initiatives from 69 countries, territories, and the EU, including almost 800 governance initiatives. The European Union announced its €200 billion "AI Continent Action Plan," while the UAE is investing tens of billions to become the world's first fully AI-powered government by 2027.
The Three Pillars of Government AI Adoption
Enhancing Public Services and Internal Government Efficiency: Governments are increasingly adopting AI to streamline public administration, improve citizen services, and increase overall efficiency. We’re seeing:
Virtual assistants for government employees to help locate departmental information
Public-facing chatbots that allow citizens to search for information on government websites using natural language
AI assisting in modernizing legacy IT systems by identifying bugs and converting to newer coding languages
Intelligence, Surveillance, and Reconnaissance (ISR) & Decision Support: Governments use AI to process vast amounts of intelligence data faster than human analysts. Examples include:
The U.S. Defense Intelligence Agency's SABLE SPEAR program identifies illicit activities that traditional methods miss
China's use of an enhanced military LLM to improve decision-making during military operations
Computer vision enables facial recognition and surveillance footage analysis to identify suspects in criminal cases
Autonomous and Unmanned Systems: A significant focus for governments, particularly in defense, is the development and deployment of autonomous and unmanned systems. This includes:
Air platforms → Various types of unmanned aerial vehicles (UAVs) for reconnaissance and strike missions
Underwater platforms → Autonomous underwater vehicles (AUVs)
Lethal autonomous capabilities → The ability for any of these platforms to engage and kill targets without human control or authorization, known as Lethal Autonomous Weapon Systems (LAWS)
Army of None
These autonomous defense systems can use AI tools to derive behavior from data, allowing them to make independent decisions or adjust their behavior as circumstances change. This leads to what defense expert Paul Scharre calls an "army of none": a military force where machines make life-or-death decisions without human intervention. His book of the same name examines the potential benefits and profound risks of delegating such critical decisions to algorithms.
While this all might seem like science fiction, it’s very much the current reality. We're seeing these technologies tested in real-time. Ukraine has become what some call a "war lab" because its ongoing conflict with Russia provides a live testing ground for developing and refining new military AI and autonomous technologies.
War serving as a driver for technical innovation isn't new, but we've entered a new era of collaboration between private tech companies and armed forces. The implications are staggering: immense power now lies in providing the AI technology embedded into the day-to-day operations of wartime governments. As Time magazine notes: "In conflicts waged with software and AI, where more military decisions are likely to be handed off to algorithms, tech companies stand to wield outsize power as independent actors."
As governments become increasingly dependent on private tech companies for critical AI capabilities, how should democratic societies balance national security needs with corporate accountability and public oversight?
Public Sector 🤝 Private Sector
The line between government AI capabilities and private sector innovation is increasingly blurred, with national defense strategies now fundamentally dependent on partnerships with tech companies. This collaboration raises critical questions about who controls the most powerful AI systems and whether private companies should play such central roles in national security decisions.
UK's 2025 Strategic Defence Review Pushes for Greater AI and Autonomy Integration: In June 2025, the UK government released its Strategic Defence Review, recommending a significant shift towards increased use of AI and autonomous technologies to modernize its armed forces, a shift that will rely heavily on collaboration with the private tech sector.
U.S. Military Expands Use of Generative AI for Intelligence: As of early 2025, U.S. Marines were actively experimenting with generative AI, a private sector innovation, through chatbot interfaces to scour intelligence for surveillance tasks, marking a new phase in the Pentagon's AI adoption.
Palantir, a data analytics firm, built its business by providing software to U.S. Immigration and Customs Enforcement (ICE), as well as the FBI, the Department of Defense, and various foreign intelligence agencies. It’s since been dubbed “The AI arms dealer of the 21st century” by Jacob Helberg, a national security expert.
Broader Challenges and Long-Term Considerations
Understanding AI in government means grappling with two roles at once: government as a user of AI with the potential to improve citizens' lives, and government as a regulator that legislates how private companies and individuals use it.
Although military AI is a strategically prioritized and rapidly growing area, a significant portion of government AI use is dedicated to non-military applications. AI is being widely integrated to enhance efficiency, improve public services, and address societal challenges.
However, even if not used in directly dangerous ways, the widespread integration of AI applications in government systems inherently introduces heightened exposure to data breaches and cyberattacks, which can severely compromise public services and national security. As AI is fundamentally software, it is susceptible to malicious exploitation, hacking, or even reprogramming by adversaries.
And while AI promises to streamline public services and enhance efficiency, a critical challenge is ensuring equitable access and unbiased service delivery for all citizens. AI models, which are trained on human-generated data, can inadvertently inherit and perpetuate existing human biases and prejudices, potentially leading to discriminatory behaviors or disproportionately affecting certain communities in areas like law enforcement or public resource allocation.
Staying Aware
As we race to develop the next groundbreaking tech solution, it’s important to remember that technological advances and infrastructure can support (and historically have supported) future governmental and defense use cases.
Because “the biggest tech companies have become geopolitical actors with as much wealth and power as most countries,” there is an obligation on business leaders and developers to think proactively and responsibly about whose hands their next big idea could land in.
Big tech news of the week…
🍎 Apple weighs using Anthropic or OpenAI to power Siri, a significant reversal from its longstanding approach of relying almost exclusively on its own in-house AI models.
⚖️ The U.S. Senate has removed a controversial proposal from a major budget bill that would have banned states from regulating AI for 10 years. The ban was supported by major tech companies like Google, OpenAI, Meta, and Microsoft, who argued that state-level rules could stifle innovation and make the U.S. less competitive globally.
☁️ Millions of websites, including Sky News and The Associated Press, can now automatically block AI bots from accessing their content without permission. This change is due to a major update by Cloudflare, a leading internet infrastructure company that supports about 20% of all online content.
♀️ Nitya Kuthiala and Louis Barclay teamed up to release this satirical, dystopian startup that highlights the growing problem of adult deepfakes targeting women. A must-see.
Until next time.
On behalf of Team Lumiera
Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.
Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.
What did you think of today's newsletter?

Disclaimer: Lumiera is not a registered investment, legal, or tax advisor, or a broker/dealer. All investment/financial opinions expressed by Lumiera and its authors are for informational purposes only, and do not constitute or imply an endorsement of any third party's products or services. Information was obtained from third-party sources, which we believe to be reliable but not guaranteed for accuracy or completeness.