Here is the problem nobody talks about openly: a standard AI deployment (the kind you get from most vendors, most consultancies, and most internal IT teams) violates compliance frameworks by default. Not sometimes. By default. The way large language models handle data, retain context, route requests through third-party APIs, and log interactions is fundamentally incompatible with HIPAA, SOX, and PCI-DSS requirements unless someone has deliberately engineered around every one of those incompatibilities.
Most organizations don't discover this until they're already in production. At that point, the options are expensive: rip out the AI system, spend months retrofitting controls that were never designed in, or quietly hope the auditors don't look closely enough. None of those options are acceptable. All of them are avoidable.
This guide is for technology leaders and compliance officers who need to understand what compliance actually demands from AI systems: not the vendor's marketing language about being "HIPAA-ready," but the specific technical and operational controls each framework requires.
When you send Protected Health Information to a cloud-based AI API without a signed Business Associate Agreement, you have committed a HIPAA violation. When your AI system processes payment card data through an unscoped endpoint, you have expanded your PCI-DSS audit surface. These violations happen silently, at scale, from day one of a naive deployment. Awareness is the first control.
HIPAA and AI in Healthcare
The Health Insurance Portability and Accountability Act was not written with AI in mind, but its requirements map directly onto AI system design in ways that are precise and unambiguous. The core issue is Protected Health Information (any individually identifiable health data), and the rule is simple: you cannot send PHI to any third party that has not signed a Business Associate Agreement.
Every major AI API provider (OpenAI, Anthropic, Google, Microsoft Azure) has a BAA available, but signing one is not sufficient on its own. The BAA governs how data is handled at the provider level. You are still responsible for what data you send, how you send it, and what your system does with the responses. A BAA does not make your architecture compliant. It makes one link in the chain covered. Every other link is still your responsibility.
Common HIPAA violation patterns include:
- Sending full patient records to an AI API when only a subset of fields is needed.
- Using a general-purpose AI assistant without a BAA to summarize clinical notes.
- Storing AI conversation logs in systems not included in your HIPAA risk analysis.
- Allowing AI-generated outputs containing PHI to flow into unsecured downstream systems.
- Training or fine-tuning models on patient data without explicit de-identification.
Audit logging under HIPAA requires that every access to PHI be recorded: who accessed it, when, from what system, and what action was taken. For AI systems, this means logging every query that included PHI and every response that contained or derived from PHI. Most off-the-shelf AI tooling does not produce audit logs in this format by default. You have to build that layer explicitly.
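A minimal sketch of what that logging layer can look like, assuming a simple JSON-lines sink; all function and field names here are illustrative, not a standard API:

```python
import json
import datetime

def audit_phi_access(user_id, source_system, action, contains_phi, log_sink):
    """Append a HIPAA-style audit record: who, when, from what system,
    what action, and whether PHI was involved.

    `log_sink` is any object with a write() method (file, queue, etc.).
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "source_system": source_system,
        "action": action,
        "contains_phi": contains_phi,
    }
    # One JSON object per line keeps the log easy to ship and parse.
    log_sink.write(json.dumps(record) + "\n")
    return record
```

In practice this wrapper sits in front of every AI call, so no query containing PHI can reach the model without leaving a record behind.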
HIPAA's Minimum Necessary standard requires that you only use or disclose the minimum PHI needed to accomplish the intended purpose. For AI systems, this means pre-processing patient data to strip all fields not relevant to the specific query before the data ever reaches an AI model. Build this filtering layer before your AI integration, not after.
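The Minimum Necessary filter can be as simple as an allowlist per query type. A sketch, with hypothetical field names and query types:

```python
# Minimum Necessary filter: pass only the fields a given query type needs.
# The field names and query types below are illustrative assumptions.
ALLOWED_FIELDS = {
    "medication_review": {"medications", "allergies", "age"},
    "discharge_summary": {"diagnosis", "procedures", "discharge_date"},
}

def minimum_necessary(record: dict, query_type: str) -> dict:
    """Strip every field not on the allowlist for this query type.

    Unknown query types get an empty allowlist, so nothing passes through
    by default (fail closed).
    """
    allowed = ALLOWED_FIELDS.get(query_type, set())
    return {k: v for k, v in record.items() if k in allowed}
```

The important design choice is the default: an unrecognized query type should yield nothing, not everything.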
The Breach Notification Rule adds another dimension. If your AI system experiences a breach involving PHI (including unauthorized model access, prompt injection that exfiltrates data, or a misconfigured API endpoint), you have 60 days from discovery to notify affected individuals and HHS. For large breaches affecting 500 or more individuals in a state, media notification is also required. AI systems that handle PHI must be included in your incident response plan, with breach scenarios specifically modeled.
SOX Requirements for Financial AI
The Sarbanes-Oxley Act was written in the aftermath of Enron and WorldCom, with the specific goal of ensuring that financial reporting is accurate, auditable, and tamper-resistant. When AI systems participate in financial processes (generating reports, processing transactions, making classification decisions about accounts), they become part of the audit trail and must meet the same standards as any other financial system.
Section 404 of SOX requires that management assess and report on the effectiveness of internal controls over financial reporting. For AI systems, this creates a documentation burden that most teams don't anticipate. You must be able to demonstrate that the AI system produces consistent, reproducible outputs for a given set of inputs, or, if outputs are probabilistic (as they are for most LLM-based systems), that you have controls in place to validate those outputs before they affect financial records.
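One workable control is deterministic validation applied to every AI-generated entry before it can touch financial records. A sketch, assuming hypothetical journal-entry fields and rules:

```python
def validate_journal_entry(entry: dict, chart_of_accounts: set) -> list:
    """Return a list of validation failures for an AI-generated journal entry.

    An empty list means the entry may proceed to human review; any failure
    blocks it. Field names and rules here are illustrative.
    """
    failures = []
    debits = sum(l["amount"] for l in entry["lines"] if l["side"] == "debit")
    credits = sum(l["amount"] for l in entry["lines"] if l["side"] == "credit")
    if debits != credits:
        failures.append("unbalanced: debits != credits")
    for line in entry["lines"]:
        if line["account"] not in chart_of_accounts:
            failures.append(f"unknown account: {line['account']}")
        if line["amount"] <= 0:
            failures.append("non-positive amount")
    return failures
```

The point is that even when the model's output varies, the checks applied to it do not, and the results of those checks are what the auditor sees.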
Common SOX violation patterns include:
- Using an AI system to generate journal entries without a human review and approval step.
- Allowing AI to modify financial records directly without segregation-of-duties controls.
- Deploying AI system updates without change management documentation.
- Using AI outputs in financial reports without validation audit trails.
- Failing to include AI systems in your IT general controls assessment.
Segregation of duties is perhaps the most operationally challenging SOX requirement for AI systems. In traditional financial systems, the person who initiates a transaction cannot also be the person who approves it. For AI systems, the equivalent control is ensuring that the AI cannot both generate a financial output and mark that output as approved. Every AI-generated financial action requires a human approval step, and that approval must be logged separately from the AI's action.
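That control can be enforced in code rather than policy. A minimal sketch of an approval gate, with illustrative class and field names:

```python
class ApprovalGate:
    """Hold AI-generated financial actions in a pending state until a
    principal other than the generator approves them.

    A sketch: in production the audit log would be a write-once external
    system, not an in-memory list.
    """
    def __init__(self):
        self._pending = {}
        self.audit_log = []

    def submit(self, action_id, generated_by, payload):
        # The AI can only ever create pending actions, never approved ones.
        self._pending[action_id] = {"generated_by": generated_by,
                                    "payload": payload}
        self.audit_log.append(("generated", action_id, generated_by))

    def approve(self, action_id, approver):
        action = self._pending[action_id]
        if approver == action["generated_by"]:
            raise PermissionError("approver must differ from generator")
        # Approval is logged as a separate event from generation.
        self.audit_log.append(("approved", action_id, approver))
        return self._pending.pop(action_id)["payload"]
```

Note that generation and approval produce two distinct log events, which is exactly the separation the auditor needs to see.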
Change management is equally demanding. Every change to an AI system that affects financial reporting (including model updates, prompt changes, configuration adjustments, and integration modifications) must be documented, reviewed, and approved through your change management process. This is operationally painful for teams accustomed to deploying AI updates continuously. The discipline has to be built into the deployment pipeline from the start.
For any AI system touching financial data, implement write-once audit logging from day one. Every AI query, every response, every downstream action, and every human approval must be logged to a system that neither the AI nor the application layer can modify. This is the control your auditors will look for first.
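One way to make a log tamper-evident at the application layer is hash chaining, where each entry commits to the hash of the previous one. A sketch; real deployments would back this with WORM storage or an external immutable log service:

```python
import hashlib
import json

class HashChainedLog:
    """Append-only log where each entry includes the previous entry's hash,
    so any after-the-fact edit breaks the chain and is detectable."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = json.dumps(payload, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self.entries.append({"payload": payload,
                             "prev": prev_hash,
                             "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain; False means some entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            body = json.dumps(e["payload"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Hash chaining does not prevent tampering by itself; it makes tampering provable, which is what "tamper-evident" means to an auditor.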
PCI-DSS for Payment Systems
The Payment Card Industry Data Security Standard is the most technically prescriptive of the three frameworks, and it creates a unique challenge for AI systems: scope. PCI-DSS scope includes any system that stores, processes, or transmits cardholder data, and "processes" is interpreted broadly. If your AI system receives, analyzes, or routes data that includes PANs (Primary Account Numbers), expiration dates, CVV codes, or cardholder names in combination with account numbers, that AI system is in scope.
Being in scope means meeting all twelve PCI-DSS requirements for that system. For most AI deployments, the practical answer is scope reduction: architect the system so that cardholder data never reaches the AI layer. Use tokenization at the point of data entry, so the AI system only ever sees tokens, not raw card data. This is not a workaround; it is the recommended approach.
Common PCI-DSS violation patterns include:
- Building an AI customer service system that can access full card numbers "for verification purposes" (this puts your entire AI infrastructure in scope).
- Logging AI conversation transcripts that may contain card data spoken by customers.
- Using a shared AI platform for both in-scope and out-of-scope data without network segmentation.
- Failing to document AI systems in your cardholder data flow diagram.
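The transcript-logging risk in particular can be mitigated with a detection-and-redaction pass before anything is stored or sent onward. A heuristic sketch using the Luhn checksum to reduce false positives; this is not a substitute for a real DLP solution:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum used by payment card numbers."""
    total, double = 0, False
    for d in reversed(digits):
        n = int(d)
        if double:
            n *= 2
            if n > 9:
                n -= 9
        total += n
        double = not double
    return total % 10 == 0

def redact_pans(text: str) -> str:
    """Replace likely PANs (13-19 digit runs, optionally separated by
    spaces or hyphens, that pass the Luhn check) with a placeholder."""
    def repl(m):
        digits = re.sub(r"[ -]", "", m.group(0))
        return "[REDACTED-PAN]" if luhn_valid(digits) else m.group(0)
    # Match a digit followed by 12-18 more digits with optional separators.
    return re.sub(r"\b\d(?:[ -]?\d){12,18}\b", repl, text)
```

Running every transcript through a pass like this before it hits the log store keeps spoken card numbers out of systems you never intended to be in scope.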
PCI-DSS Requirement 10 mandates logging and monitoring of all access to network resources and cardholder data, with log retention of at least 12 months (3 months immediately available). For AI systems, this means your logging infrastructure must be able to capture and retain query logs at the same scale the AI operates, potentially millions of requests per day, while keeping them tamper-evident and available for forensic review.
Design your data flow so that PANs are tokenized before they enter any AI processing pipeline. The AI should never see raw card numbers. If the use case genuinely requires processing card data, that processing should happen in an isolated, fully hardened PCI-compliant environment before results are passed to the AI layer. Reducing scope is always cheaper than expanding controls.
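A tokenization boundary can be sketched as follows, with the vault class standing in for a hardened service inside the PCI-scoped environment; all names are illustrative:

```python
import secrets

class TokenVault:
    """Swap PANs for opaque tokens at the point of entry so the AI layer
    never sees raw card data. A sketch: production systems use a dedicated
    vault service inside the PCI-scoped environment, not an in-memory dict.
    """
    def __init__(self):
        self._vault = {}  # token -> PAN; lives only inside PCI scope

    def tokenize(self, pan: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Callable only from inside the PCI-scoped environment.
        return self._vault[token]

def prepare_for_ai(payment_record: dict, vault: TokenVault) -> dict:
    """Everything the AI layer receives has the PAN already replaced."""
    safe = dict(payment_record)
    safe["pan"] = vault.tokenize(safe["pan"])
    return safe
```

Because the token carries no mathematical relationship to the PAN, every system downstream of `prepare_for_ai` stays out of PCI-DSS scope.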
Designing Compliant AI From Day One
Retrofitting compliance controls onto a deployed AI system is one of the most expensive, time-consuming, and organizationally painful things you can do in regulated technology. Every control that wasn't designed in has to be surgically added after the fact, often requiring architectural changes that break existing integrations. The design approach below applies across all three regimes and should be executed before any production deployment.
DevThing's Compliance-First Approach
Every AI system we build for a regulated client is designed against the compliance requirements from the first architecture conversation. We do not add compliance controls at the end. We do not offer "compliance-ready" AI systems that require you to implement the hard parts yourself. We design and deliver systems where the compliance architecture is part of the system architecture: inseparable, documented, and audit-ready.
Our team includes people who have operated under federal compliance requirements in environments where the consequences of failure were not a fine or a failed audit, but something considerably more serious. That background shapes how we think about regulated AI deployments. Compliance is not a checkbox exercise for us. It is an engineering discipline.
"You cannot make a non-compliant AI system compliant by writing a policy. You have to build compliance into the architecture. Every other approach is theater โ and auditors have seen all the theater."
Fred Lackey, DevThing LLC
If you are deploying AI in healthcare, financial services, or any payment-adjacent environment, the question is not whether you need compliance architecture. The question is whether you want to build it correctly from the start or pay two to three times as much to fix it after the auditors find the gaps. We have seen both paths. One of them ends better.