IBM's 2024 Cost of a Data Breach Report put the average breach cost at $4.88 million — a record high. What the report doesn't break out, because the category is still emerging, is how much of that exposure is now being introduced through AI systems specifically. Based on what we're seeing in the field, the answer is: more than most boards realize, and growing fast.

This isn't a piece designed to scare you away from AI. It's the opposite. The organizations that deploy AI safely and at scale are the ones whose leadership went in with clear eyes about the threat surface. What follows is what I'd tell your board in a 30-minute briefing.

$4.88M
Average cost of a data breach in 2024 — a record high. AI systems are expanding the attack surface in ways traditional security frameworks weren't built to address.
IBM Cost of a Data Breach Report, 2024

The Five Risks Unique to AI

Traditional cybersecurity focuses on perimeter defense, access control, and data encryption. Those still matter. But AI introduces a class of vulnerabilities that don't exist in conventional software. Your CTO may know about them. Your CISO probably does. Your board almost certainly doesn't — and that's the gap that creates liability.

1. Prompt Injection

When your AI system accepts natural language input — from customers, from employees, from documents — it's also accepting instructions. A malicious actor can craft inputs that redirect the AI's behavior: extracting data it shouldn't share, bypassing validation logic, or executing commands outside the system's intended scope. This isn't theoretical. It's been demonstrated against production systems at major financial institutions and government agencies. The defense requires deliberate architectural choices at the design stage, not a patch after deployment.

2. Model Poisoning

If your AI is trained or fine-tuned on data you don't fully control — public datasets, third-party content, user-generated input — that data can be deliberately corrupted upstream to influence how your model behaves. Model poisoning is a supply-chain attack on your AI's cognition. The result can be subtly wrong outputs that are nearly impossible to detect without rigorous evaluation pipelines. In regulated industries, a poisoned model making compliance-related decisions is a material risk.

3. Data Leakage Through LLMs

Large language models are extraordinarily good at synthesizing and surfacing information. That's the feature. The risk is that the same capability can expose sensitive data that was never meant to be retrieved together. Feeding an LLM proprietary contracts, HR records, and financial data to power an internal assistant creates the possibility that a sufficiently crafted query — by an employee or an attacker — returns information that violates confidentiality, regulatory requirements, or both. Access control at the data layer is necessary but not sufficient when the retrieval mechanism is a reasoning engine.

4. Shadow AI

Your employees are already using AI. Some of it is sanctioned. Most of it isn't. Shadow AI — employees pasting sensitive data into ChatGPT, using personal AI accounts for work tasks, or deploying unapproved tools in their workflows — is the AI equivalent of shadow IT, and it's already inside your perimeter. The gap between what your organization officially allows and what your workforce is actually doing is where your most significant near-term exposure lives.

Shadow AI Is Already in Your Organization

Studies suggest that over 60% of employees at enterprise companies use AI tools not approved by their IT department. Before approving an enterprise AI budget, your board should ask: what is already happening without approval? The answer will be more concerning than the deployment you're about to greenlight.

5. Compliance Drift

AI systems change. Models are updated by vendors, fine-tuning shifts behavior over time, and integrations evolve. What was compliant with HIPAA, SOX, or GDPR when you deployed may not remain compliant six months later — not because your policies changed, but because your AI did. Most organizations have no mechanism to detect this drift. A compliance posture that relies on a one-time audit of an AI system is not a compliance posture; it's a liability.

What Good AI Governance Looks Like

Governance isn't bureaucracy for its own sake. It's the operational structure that lets you move fast without creating hidden liabilities. In the organizations we've worked with that do this well, governance has three components:

01
An AI Inventory
Every AI system in use — sanctioned or not — is documented, categorized by data sensitivity, and assigned an owner. This is not optional if you operate in a regulated industry. It is the foundation of every other governance capability.
02
A Data Classification Policy That Covers AI
Existing data classification policies almost universally predate AI and don't account for it. What data can be sent to an external LLM? What requires on-premises processing? These decisions need to be explicit, written, and enforced technically — not just as policy.
03
Continuous Evaluation, Not One-Time Audits
AI systems need ongoing behavioral monitoring the same way networks need continuous intrusion detection. Automated evaluation pipelines that test model outputs against expected behavior, flagging drift before it becomes a compliance failure or a customer incident.

The Questions Every Board Should Be Asking

Before approving AI budgets or deployment timelines, boards should get clear answers to these questions from their CTO and CISO. If the answers are vague, that's the finding.

  • What data are we feeding into AI systems, and where does that data go? If the answer involves any third-party cloud model, the next question is: what are the data retention and training policies of that vendor?
  • How are we detecting and managing shadow AI? Not preventing it — detecting it. You cannot manage what you cannot see.
  • What happens when our AI system is wrong in a consequential way? The answer should include a detection mechanism, a remediation process, and a liability analysis.
  • Who is responsible when an AI-assisted decision causes harm? This is a governance question, not just a legal one. The answer needs to exist before the incident, not after.
  • How will our AI compliance posture be maintained as models are updated? If the answer is "we'll revisit this annually," that's not an answer.
🔍
Why Our Background Matters Here

DevThing's founding team includes direct experience at DHS and CISA — the federal agencies responsible for protecting the nation's critical infrastructure from exactly these kinds of emerging threats. We've seen what sophisticated adversaries do with new attack surfaces. We build AI systems with that threat model in mind from the first line of code, not as an afterthought. That's a different posture than a consulting firm that added "AI security" to their pitch deck in 2024.

The Bottom Line

The organizations getting AI wrong on security aren't reckless. They're moving fast with incomplete frameworks. The solution isn't to slow down — it's to build the right structure before you scale. That means a board that asks hard questions before approving budgets, a CTO who has documented answers to those questions, and an implementation partner who treats security as a first-class design constraint rather than a compliance checkbox.

The $4.88 million average breach cost doesn't account for regulatory fines, reputational damage, or the operational cost of unwinding a compromised AI system. The organizations that get this right will have a durable competitive advantage. The ones that don't will spend the next several years cleaning up messes that were entirely preventable.

Security isn't a feature you add to AI. It's the foundation you build on. Every shortcut you take during deployment gets paid back with interest during an incident.