Why is there so much urgency around adopting generative AI now?
Leaders see generative AI (GenAI) as a way to transform how their organizations work, compete, and innovate, so there is strong pressure to move quickly.
The data in the report shows this clearly:
- 75% of global knowledge workers are already using GenAI.
- Only 5% of surveyed organizations are neither using nor planning to use or develop GenAI.
- 95% are either already using GenAI, developing GenAI apps, or planning to do so.
- 66% of respondents say their organizations are both using GenAI applications and either developing or planning to develop their own GenAI apps.
Organizations are applying GenAI to:
- Improve productivity and reduce costs
- Increase revenue and drive business innovation
- Enhance customer service with AI assistants
- Optimize operations (for example, production processes or energy use)
- Support high‑value scenarios like rare disease research, fraud detection, and personalized learning
Many companies are not stopping at a single project. Among organizations that are developing GenAI apps, the average number of apps in development or already built is 13.9. The top reasons for building their own or customized GenAI apps include:
- Driving business innovation (58%)
- Maintaining control of data (55%)
- Integrating with existing systems (52%)
- Managing cost and scalability versus SaaS (49%)
- Meeting compliance and regulatory requirements (44%)
In short, GenAI is being used to rethink core processes and services, and adoption is already mainstream rather than experimental. That pace, however, is creating new security expectations and pressures for security and risk leaders.
What are the main security risks organizations worry about with GenAI?
Security and risk leaders are trying to balance the push for AI innovation with a clear set of concerns. These fall into two broad groups: amplified existing risks and emerging AI‑specific risks.
For companies using GenAI, top concerns include:
- Leakage of sensitive data (63%)
- Sensitive data being overshared, giving users access they should not have (60%)
- Inappropriate use or exposure of personal data (55%)
- Inaccurate insights generated by GenAI (43%)
- Harmful or biased outputs (41%)
For companies developing or customizing GenAI apps, the picture is similar but with added complexity:
- Data leakage and exfiltration (60%)
- Inappropriate use of personal data (50%)
- Violations of regulations (42%)
- Lack of visibility into AI components and vulnerabilities (42%)
- Over‑permissioned access granted to AI apps (36%)
Two amplified risks stand out:
1) Data oversharing and breaches
- Users may see data they are not authorized to access because of weak labeling or access controls.
- Rapid rollout of tools without adequate training can lead to people sharing sensitive data without realizing the implications.
2) Shadow IT and “bring your own AI” (BYOAI)
- 78% of AI users are bringing their own AI tools to work.
- Employees may paste source code, meeting notes, or spreadsheets into third‑party tools outside IT’s visibility, increasing the risk of data leakage.
On top of that, several emerging AI‑specific risks are gaining attention:
- Hallucinations: Models generating false or misleading information, which is especially risky in healthcare, finance, and legal contexts.
- Harmful content: Offensive, dangerous, or non‑compliant outputs, including deepfakes and fabricated content.
- Model theft: Illegal copying or theft of proprietary models, undermining competitive advantage and potentially exposing sensitive data.
- Prompt injection attacks: Malicious prompts that cause the model to reveal confidential information or behave in unintended ways.
- Training data poisoning: Tampering with training data to introduce vulnerabilities, bias, or backdoors.
- Excessive agency: AI systems given too much autonomy performing unintended or harmful actions.
- Regulatory compliance: New regulations such as the EU AI Act introduce significant penalties (up to 35 million euros or 7% of annual turnover), while 62% of business leaders say they do not understand which AI regulations apply to their sector.
Overall, leaders are less worried about whether AI will be used and more focused on how to use it in a way that protects data, complies with regulation, and maintains trust.
How can organizations say a secure ‘yes’ to AI adoption?
Organizations are looking for a way to support AI innovation without compromising security. The research shows that 95% of security and risk leaders agree their company needs security measures in place for AI apps—including third‑party SaaS, enterprise‑ready, and custom‑built apps—within the next 12–24 months.
The report outlines a security transformation path built around four key steps:
1) Form a dedicated security team for AI
- Establish a focused group responsible for AI security strategy, governance, and oversight.
- Bring together security, IT, data, and risk stakeholders so AI initiatives are not happening in isolation.
2) Optimize resources to secure GenAI
- Prioritize where to invest time and budget based on the organization’s AI use cases and risk profile.
- Address high‑impact issues such as data classification, access controls, monitoring, and user training for AI tools.
3) Implement a Zero Trust strategy for AI
- Apply Zero Trust principles—verify explicitly, use least‑privilege access, and assume breach—to AI systems.
- Treat AI components (models, orchestrators, plug‑ins, data sources) as part of the broader attack surface that needs identity, access, and network controls.
4) Adopt a comprehensive security solution for AI
- Use integrated security capabilities that cover:
- Data protection and loss prevention for AI prompts and outputs
- Visibility into AI components, vulnerabilities, and access patterns
- Protection against threats such as prompt injection, model abuse, and supply chain issues
- Ensure coverage across all AI application types: third‑party SaaS, enterprise‑ready tools, and custom‑built apps.
In parallel, organizations should:
- Address data oversharing by tightening access controls and improving data labeling.
- Reduce shadow IT and BYOAI risk by offering approved AI tools and clear usage guidelines.
- Build awareness of AI‑specific risks (hallucinations, harmful content, model theft, training data poisoning) into security training and governance.
By treating AI as a first‑class security domain—rather than an add‑on—organizations can reimagine processes and services with GenAI while maintaining a risk posture that security and risk leaders are comfortable endorsing.