Skip to main content

Enterprise AI Security: What Your CISO Needs to Know

AI introduces new attack surfaces and data risks. A practical security framework for enterprise AI deployments.
22 August 2023·8 min read
John Li
John Li
Chief Technology Officer
Your team has been using ChatGPT for months. Your organisation is planning its first AI deployment. Your CISO is losing sleep. Here's the security framework that should exist before any enterprise AI goes into production.

What You Need to Know

  • Consumer AI tools (ChatGPT, Bard) and enterprise AI deployments have fundamentally different security profiles. Conflating them creates either unnecessary fear or dangerous complacency.
  • AI introduces three new attack surfaces that traditional security frameworks don't cover: prompt injection, data leakage through model interactions, and hallucination-based misinformation.
  • The biggest immediate risk isn't an AI-specific attack. It's employees putting sensitive data into consumer AI tools without understanding the data handling implications.
  • Enterprise AI security isn't a separate discipline. It's an extension of existing information security practices (data classification, access control, monitoring, incident response) applied to a new category of system.
  • Start with a simple AI usage policy. Build toward a full governance framework as your AI deployments mature.
65%
of enterprise employees have used consumer AI tools for work without formal approval
Source: Salesforce, State of IT Report, 2023

The Three Security Domains

1. Shadow AI: Your Immediate Risk

Before you worry about securing AI deployments, address the AI that's already in your organisation: the consumer tools your team is using without governance.
The risk: Employees paste confidential data into ChatGPT to summarise documents, draft communications, or analyse data. In most configurations, this data may be used to train future models. Your competitive intelligence, client data, and internal strategy could end up in a training dataset.
The fix:
  • Acknowledge it. Banning consumer AI is ineffective. People use personal devices. Instead, set clear policies about what data can and can't be shared with consumer AI tools.
  • Classify data for AI. Extend your existing data classification to include an "AI-shareable" tier. Public information? Fine. Client data? Never. Internal strategy? Probably not.
  • Provide alternatives. If you ban ChatGPT, provide an approved alternative. Azure OpenAI Service, for example, offers the same models with enterprise data guarantees. Your data is not used for training.
  • Monitor and educate. Regular security awareness training that specifically covers AI data handling.

2. Deployment Security: Building AI Safely

When you build and deploy AI systems, standard security practices apply, plus AI-specific additions.
Data security:
  • AI models should run in your controlled environment (private cloud, VPC, or on-premises)
  • Training and inference data never leaves your security perimeter
  • Access controls apply to AI systems the same way they apply to databases: role-based, audited, monitored
  • Data pipelines need the same security review as any data integration
Model security:
  • Prompt injection. Attackers craft inputs designed to make the AI ignore its instructions and follow theirs instead. Mitigation: input validation, output filtering, model-level guardrails, and never giving AI systems more permissions than necessary.
  • Data poisoning. If your AI learns from user feedback, malicious inputs could skew its behaviour over time. Mitigation: review and validate feedback before incorporating it, maintain clean baseline datasets.
  • Model theft. Fine-tuned models represent significant IP. Treat model artifacts with the same security as source code: version control, access logging, encrypted storage.
Infrastructure security:
  • GPU instances are expensive. Secure them against crypto-mining hijacking
  • API endpoints need rate limiting, authentication, and monitoring
  • Log all AI interactions for audit trails (essential for governance)

3. Output Security: What the AI Produces

The most novel security challenge with AI is that the output itself can be a risk.
Hallucination risk: The AI confidently states something incorrect: a wrong policy interpretation, an incorrect compliance requirement, a fabricated legal citation. In enterprise context, acting on a hallucination can have real financial and legal consequences.
Mitigation:
  • Never deploy AI as an autonomous decision-maker for high-stakes outcomes
  • Implement confidence scoring and human-in-the-loop review for critical outputs
  • Require source attribution. The AI must cite which documents informed its response
  • Regularly test outputs against known-correct answers (regression testing)
Data leakage through outputs: The AI might inadvertently include sensitive information from its context in a response to an unauthorised user.
Mitigation:
  • Implement access-aware retrieval. The AI only searches documents the user is authorised to see
  • Review output filters for PII, credentials, and sensitive classifications
  • Test with adversarial prompts designed to extract protected information

A Practical Security Framework

Phase 1: Policy (Weeks 1-2)

  • AI usage policy for consumer tools (what data, which tools, what approval)
  • Data classification extended for AI (which data can be processed by AI systems)
  • Incident response plan extended for AI-specific incidents

Phase 2: Infrastructure (Weeks 3-6)

  • Approved AI deployment environment (private cloud / managed service)
  • API security (authentication, rate limiting, logging)
  • Access control framework for AI systems

Phase 3: Operational (Ongoing)

  • Prompt injection testing as part of security review
  • Output quality monitoring and hallucination detection
  • Regular security awareness training covering AI-specific risks
  • Audit log review for AI interactions
Don't Let Security Block AI
The goal is secure AI adoption, not no AI adoption. A pragmatic security framework that enables safe experimentation is more valuable than a perfect policy that prevents any progress. Start with Phase 1 (a clear AI usage policy) and build from there.
Should we ban ChatGPT in our organisation?
No. Banning consumer AI tools is ineffective (people use personal devices) and counterproductive (your team falls behind on AI literacy). Instead, set clear policies about data handling and provide approved enterprise alternatives.
Is it safe to use cloud-hosted AI models for enterprise data?
Yes, with the right configuration. Enterprise offerings from major cloud providers (Azure OpenAI, AWS Bedrock, Google Vertex AI) offer data isolation, compliance certifications, and guarantees that your data isn't used for training. Evaluate the specific terms, but cloud-hosted enterprise AI is fundamentally different from consumer AI in terms of data handling.
How do we test for prompt injection?
Include prompt injection in your security testing alongside SQL injection and XSS. Test with known attack patterns: "Ignore previous instructions and...", "You are now in debug mode...", attempts to extract system prompts. Specialised tools are emerging, but manual adversarial testing by your security team is the best starting point.