AI has the potential to bring significant changes to business. However, AI initiatives are often hindered by issues related to AI data governance, security, and identity management. While data concerns and identity management are not the only things slowing down AI initiatives, they are among the largest roadblocks to an AI transformation journey.
Addressing risks related to data governance, security, and identity in AI initiatives requires a structured approach. Which is why the Open Worldwide Application Security Project (OWASP) has created frameworks such as the OWASP Top 10 for LLM 1 and the OWASP LLM AI Cybersecurity and Governance Checklist 2.
OWASP Top 10 for Large Language Model (LLM)
The OWASP Top 10 for LLM lists critical vulnerabilities such as prompt injection (crafting AI inputs that manipulate outputs) and insecure prompt handling (lack of LLM-generated content validation). These vulnerabilities risk data leaks, compliance penalties, and reputation damage. The 2025 OWASP Top 10 highlights threats such as supply chain (risks related to the use of third-party data and components), and excessive agency (over permissive autonomy). The 2025 Top 10 highlights risks that delay AI projects as security teams work to add and validate third parties, and adjust agent, model, and data permissions.
OWASP created the LLM Security and Governance Checklist to offer actionable guidance to mitigate the risks related to the Top 10 and to accelerate AI adoption. The checklist advocates for the creation of cross-functional teams to help avoid security bottlenecks that can slow an AI deployment. Key recommendations of the checklist include access control to help limit who can use the application, access to training data, and data sanitization controls (scrubbing inputs/outputs to control unintended data exposure).
In addition to mitigations for technical risks, the checklist also provides operational safeguards consisting of adversarial testing to replicate attacks and monitoring model behavior to identify misuse or drift. The checklist ensures early integration of controls to avoid delays from audits, remediation, or rework. This integration aims to ensure data security, protect business operations, and support AI transformation.
Data governance: The foundation of trustworthy AI
Data governance plays a fundamental role in AI systems. The quality and management of data directly affect the reliability, safety, and fairness of outputs generated by an AI system. Poor data quality risks creating unreliable, dangerous, and biased AI systems. Poor data governance can lead to costly rework and negatively affect stakeholders’ perception of AI projects, which can result in the postponement or avoidance of future AI projects.
Internal stakeholder perception of AI projects aside, mishaps related to sensitive data can have external stakeholder effects by eroding customer trust, damaging a company’s reputation, or diminishing brand value. Additionally, there are regulatory compliance concerns. Improper handling of data can lead to financial and legal consequences under regulations such as GDPR, HIPAA, and CCPA.
Stakeholder perception and regulatory concerns are strategic examples of how poor data governance can become a barrier to AI adoption. A tactical barrier to AI adoption is the presence of data silos often present in organizations. These silos complicate the process of finding relevant information for AI projects, which can result in delays to project timelines. These silos often have unclear ownership. When there is ambiguity around responsibility for access controls and data quality audits, it can lead to inefficient data collection and confusion related to responsible and appropriate data use. Additionally, these data silos often have unclear policies and manual approval processes leading to lengthy delays for data access and stalled AI projects.
Data governance is important due to risks such as training data poisoning and sensitive data exposure, which arise from inadequate practices, risks highlighted in the OWASP Top 10 for LLM. Further, to improve data governance for AI systems, the OWASP LLM Governance and Security Checklist for LLM recommends maintaining clear data governance policies along with records of access controls and data provenance. These recommendations aim to help companies improve data governance practices.
Data security: The cornerstone of responsible and resilient AI innovation
The next critical topic is data security, which is protecting governed and managed data from misuse, unauthorized access, and breaches. Effective data security ensures that only the right users and non-human identities have access to data and AI infrastructure. Appropriate access control mechanisms minimize the risk of data leaks, model manipulation, or unauthorized use. Data security ensures clear records of who accessed what and when, which is crucial for accountability and compliance. Appropriate access controls reduce vulnerabilities to threats, whereas poorly managed access controls significantly increase the risk of malicious or accidental misuse of infrastructure and data.
Additionally, data security challenges also present barriers to AI projects. If not managed thoughtfully, complex permissions can slow access rights configuration and management, often adding weeks or months to an AI project’s timeline. In some cases, a lack of proper access controls further complicates approvals on how and what data can be responsibly used for an AI initiative. Manual or undocumented approval processes can also delay AI projects, increasing costs and negatively impacting perceptions of their value.
Strong AI data security practices are referenced in the OWASP Top 10 for LLM. Risks such as sensitive data disclosure and data and model poisoning both stem from poor data security practices. Least privilege access is crucial for both the data used to train the model, data used to ground the model responses, and the new knowledge generated from responses to user prompts. Defining roles and responsibilities, along with conducting regular access reviews, helps reduce the risks of data disclosure and the introduction of compromised data.
Identity matters: Strengthening AI security through robust identity management
Identity management provides control over who can access AI applications, sensitive data, and AI models. Effective identity management ensures that only authorized individuals and non-human identities interact with data and systems. Additionally, identity management provides a clear record of who did what for accountability and compliance. Overly broad or poorly managed identity management increases the risk of AI system misuse, unintended model modifications, data breaches, compliance headaches, and other issues.
Companies often have complex structures, which can cause delays to AI projects if access rights are not set up and maintained effectively. Complexities like data security issues can delay access and slow AI projects. Additionally, companies often lack an automated, scalable identity management program, increasing the risk of human error, such as failing to revoke access for departed personnel or granting excessive access privileges.
The OWASP Top 10 for LLM highlights excessive agency and system prompt leakage as direct effects of weak identity and access management. An AI system that can be manipulated to provide unintended responses and context can potentially expose and then remove or poison sensitive data. This represents a real-world threat to an AI system and is directly tied to poorly managed identity and access management. Further, the OWASP LLM Governance and Security checklist emphasizes clear AI-specific role-based dynamic access controls, rigorous identity and access management processes, regular access reviews and audit logging, and activity monitoring.
Where to start: Enabling AI projects to transform business
1. Create a cross-functional AI governance team
- Include leaders from IT, security, compliance, and business units.
- Assign clear ownership for data, security, and identity decisions.
2. Align with OWASP LLM Checklist
- Add the checklist to your AI development process.
- Regularly review and update as AI and regulations evolve.
3. Map your data and access
- Use data catalogs and access inventories to understand what data you have and who can use it.
- Maintain an up-to-date identity directory to track and manage who has access to data and AI systems, ensuring only authorized users are granted permission.
4. Automate where possible
- Use modern tools to automate data quality checks, security scans, and access reviews.
- Automate identity and access management, enabling real-time enforcement of least privilege, adaptive authentication, and immediate revocation of access when roles or risk profiles change.
5. Communicate and train
- Ensure all stakeholders understand the importance of governance, security, and identity in AI.
- Provide ongoing training on new risks and best practices.
How GDT can support AI data governance, security, & identity management
As a full-service IT solutions provider with deep expertise across security and AI, GDT can help your organization address the risks related to data governance, security, and identity in AI initiatives. An excellent place to start is with a GDT AI Security Workshop. This half-day, interactive workshop provides your organization with a clear, actionable pathway to securing AI by identifying vulnerabilities, aligning with global standards, and defining methods to build trust with stakeholders. Schedule your complimentary workshop or download the GDT AI Security Workshop datasheet to learn more.
- https://genai.owasp.org/llm-top-10/ ↩︎
- https://genai.owasp.org/resource/llm-applications-cybersecurity-and-governance-checklist-english/ ↩︎
Author
-
Casey Gager is a principal solutions architect on the GDT Cybersecurity Team, where he works with clients to scale, improve, and transform their cybersecurity posture. Projects range from Zero Trust, network modernization, and AI to identity, data security, program and compliance, and technology optimization. Casey has over 25 years of experience in IT and cybersecurity, working for corporations and startups as well as IT solutions providers like GDT. In addition to many highly respected cybersecurity, privacy, and business-focused certifications, he holds an MBA from the University of Connecticut and a master's degree in information security and assurance from Norwich University. In his spare time, Casey enjoys spending time with his wife and dogs, running and weightlifting, and reading books on diverse subjects.
View all posts