HIPAA-aware AI for medical, dental, and chiropractic practices
If you're a healthcare practice considering AI, you have specific obligations the average vendor may not understand. Here's the short version.
Healthcare practices come to us with a specific concern that other industries don't share at the same intensity: HIPAA. The good news is that AI deployments can be HIPAA-aware without much additional engineering. The bad news is that vendors who don't know what they're doing will get this wrong, and the consequences of getting it wrong are real.
The fundamental requirement is that protected health information (PHI) is handled only by entities that have a Business Associate Agreement (BAA) with the practice and that have implemented the technical and administrative controls HIPAA requires. This applies to any system that touches PHI, including AI systems and the model providers behind them.
Major model providers have HIPAA-eligible offerings now. OpenAI, Anthropic, Google, and AWS Bedrock all provide enterprise tiers with BAAs and the appropriate controls. The architecture we deploy uses these tiers, configures them correctly, and ensures that PHI is processed only through them. We sign a BAA with the practice. We document the architecture. We are accountable for the chain.
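As a rough illustration of what "PHI is processed only through them" means in practice, here is a minimal sketch of a routing guard. The provider names and registry structure are assumptions for illustration, not any vendor's real API:

```python
# Illustrative allowlist of BAA-covered enterprise endpoints.
# Names are hypothetical placeholders, not real product identifiers.
BAA_COVERED_PROVIDERS = {"openai-enterprise", "anthropic-enterprise", "aws-bedrock"}


def route_request(provider: str, contains_phi: bool) -> str:
    """Return the provider to use, refusing to send PHI anywhere
    that lacks a signed BAA."""
    if contains_phi and provider not in BAA_COVERED_PROVIDERS:
        raise PermissionError(f"No BAA on file for provider: {provider}")
    return provider
```

The point of the guard is that the compliance rule lives in code, not in a policy document nobody reads: a request carrying PHI simply cannot reach a consumer-tier endpoint.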
Beyond the formal compliance posture, we apply additional layers. PHI minimization — the AI receives only the data it actually needs to answer the question, not the full patient record. Audit logging — every PHI access is recorded: who touched what, and when. Access controls — staff see only what their role requires. Retention policies — PHI in conversation logs is retained per your written policy, not by vendor default.
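The first two layers above, minimization and audit logging, can be sketched in a few lines. Field names and the log shape here are illustrative assumptions, not our production schema:

```python
from datetime import datetime, timezone

# Hypothetical in-memory audit trail; a real deployment would write
# to append-only, tamper-evident storage.
AUDIT_LOG: list[dict] = []


def minimize(record: dict, needed_fields: set[str]) -> dict:
    """PHI minimization: return only the fields this query needs."""
    return {k: v for k, v in record.items() if k in needed_fields}


def access_phi(user: str, record: dict, needed_fields: set[str]) -> dict:
    """Fetch a minimized view of a record and log the access."""
    subset = minimize(record, needed_fields)
    AUDIT_LOG.append({
        "user": user,
        "fields": sorted(subset),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return subset
```

For example, a scheduling question that needs a patient's allergy list would receive only the `allergies` field; the SSN and date of birth never leave the record store, and the access is logged either way.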
We do not recommend deploying AI in healthcare with vendors that won't sign a BAA, won't talk specifics about their architecture, or answer compliance questions vaguely. The risk-cost ratio is unfavorable. We're happy to be a second opinion if you've gotten a proposal that doesn't read right.
