
Why AI security looks different across the UK, France, Germany, and Australia
Globally, 88% of companies regularly use AI in at least one business function—a 10% increase from the previous year. But as organizations race to adopt new capabilities, we’ve found that the rigor and maturity of AI governance vary widely by region.
The third edition of our State of Trust report reveals how leading AI adopters outside the U.S.—from the UK to Germany, France, and Australia—are approaching AI security and governance in distinct ways. Together, these regional patterns offer a window into how a country’s AI investment and approach to regulation may impact the way businesses there are managing AI risk.
UK: The assurance accelerators
In the UK, the private and public sectors have invested more than £78 billion (~$104.7 billion) in AI over the last three years, primarily to build data centers and create new jobs.
A comprehensive AI bill is expected to be presented in the UK Parliament in 2026 to govern critical issues, including AI safety and the use of copyrighted material for AI training. The country currently regulates AI through existing laws (like GDPR) and cross-sector regulatory guidelines.
Against this backdrop of growing investment and regulatory direction, UK businesses appear more confident in adopting AI tools for security use cases. Our report found that:
- 81% of security teams in the UK report they are increasing their use of AI in security programs
- 78% of UK organizations say AI and automation are helping to reduce burnout for security and compliance teams
“Automating workflows like identity reviews and incident response can help teams keep up with the increased pace of threats and make security work more sustainable.” — Khush Kashyap, Senior Director of GRC at Vanta
France: The cautious innovators
At the beginning of 2025, France announced it had secured funding of more than €109 billion (~$128 billion) in AI for the next few years. Despite this investment, France does not yet have its own standalone AI regulation in place.
That lack of policy clarity may reflect, or contribute to, France’s slower pace of AI adoption for security:
- 70% of French companies say their security team is increasing its use of AI in security programs, 11 percentage points lower than the UK
France has also been slower to adopt agentic AI:
- Only 62% of French companies say they’re actively using or planning to use agentic AI this year, compared to 75% globally
Across all regions, 65% of organizations say their use of agentic AI outpaces their understanding of it. The gap between usage and understanding could play a role in France’s cautiousness around AI adoption in general.
“AI agents can be very powerful—but only if humans are defining the rules and reviewing what they do.” — Khush Kashyap, Senior Director of GRC at Vanta
Germany: The risk-management realists
Germany was among the first countries to create a national AI strategy in 2018. It also pledged more public investment in AI than the UK or France. This year, Germany reached roughly €10.3 billion (~$12.1 billion) in AI investments, combining €1.6 billion in government funding with multiyear investments from Google (€5.5B) and Microsoft (€3.2B).
But investment alone doesn’t guarantee adoption. Germany now lags behind some European peers when it comes to operationalizing AI securely, particularly in areas like oversight and third-party risk management.
Our State of Trust report found the following:
- 68% of German organizations have a formal process for assessing and managing third-party risk (the lowest of all countries surveyed)
- 65% of German businesses say they know how to determine whether their data has been impacted by a third-party breach (lower than in all other countries surveyed)
Australia: The manual-evidence marathoners
Australia ranks fourth globally in GenAI usage, yet the country’s investment of AU$460 million (~$305 million) in AI and related initiatives is smaller than what other major global players have committed.
Australia also lacks comprehensive, AI-specific legislation governing system autonomy or usage. Our report found:
- 40% of organizations in Australia have frameworks to govern AI system autonomy (compared to 53% in the U.S.)
- 65% of organizations in Australia say they spend more time proving security than improving it (9 percentage points higher than the UK)
“AI adoption can amplify a solid security and compliance program, but it won’t fix fundamental gaps.” — Khush Kashyap, Senior Director of GRC at Vanta
Trustworthy AI starts with a strong foundation
AI investment and adoption are accelerating worldwide—but local realities and culture shape how companies view and use AI tools. Differences in regulatory clarity and funding levels also influence how confidently businesses can deploy and secure AI.
But when AI initiatives are grounded in strong security fundamentals, businesses are better positioned to effectively manage AI risk, no matter the region.
The State of Trust Report
Compliance shouldn’t just be for show—yet 61% of security teams admit they’re too busy posturing instead of protecting. Get the report to see more trends in trust.















