If you're evaluating voice AI solutions for your healthcare practice, HIPAA compliance should be your first question — not your last.
Many vendors claim "HIPAA-compliant AI," but the reality is more nuanced. True HIPAA readiness requires architectural decisions, vendor partnerships, operational controls, and ongoing monitoring that most generic AI chatbots simply don't provide.
Key takeaways:
- HIPAA doesn't certify products — covered entities must verify that vendors have BAAs with all subprocessors
- Every vendor who touches PHI — including speech-to-text and LLM providers — needs an executed BAA
- "Zero-retention" AI policies are the gold standard: PHI should never be used for model training
- Immutable audit logs (not just database logs) are required for compliant incident response
- ClaireMed provides security documentation, sample BAAs, and penetration test reports on request
What "HIPAA Compliance" Actually Means for Voice AI
HIPAA doesn't certify products. There's no "HIPAA certification" you can point to. Instead, covered entities (healthcare providers, health plans) and their business associates (vendors who handle PHI) must implement safeguards to protect patient data.
For voice AI, this means:
Technical safeguards
- Encryption: TLS 1.3 in transit, AES-256 at rest
- Access controls: Role-based access, identity verification
- Audit logging: Immutable logs of all PHI access
- Integrity controls: Ensuring data isn't altered or destroyed improperly
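To make the access-control safeguard concrete, here is a minimal sketch of a role-based check before any PHI read. The role names, permission sets, and record fields are hypothetical examples, not ClaireMed's actual schema:

```python
# Minimal role-based access check for PHI reads (illustrative only;
# roles and record fields are hypothetical, not a vendor's real schema).

# Each role maps to the PHI fields it may read ("minimum necessary").
ROLE_PERMISSIONS = {
    "scheduler": {"name", "callback_number", "appointment_time"},
    "billing": {"name", "insurance_id", "balance"},
    "clinical": {"name", "callback_number", "transcript", "insurance_id"},
}

def read_phi(role: str, record: dict, fields: set) -> dict:
    """Return only the requested fields this role is allowed to see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    denied = fields - allowed
    if denied:
        raise PermissionError(f"role {role!r} may not read {sorted(denied)}")
    return {field: record[field] for field in fields}

record = {"name": "Jane Doe", "callback_number": "555-0100",
          "insurance_id": "XYZ123", "appointment_time": "09:30"}
print(read_phi("scheduler", record, {"name", "appointment_time"}))
```

The point of the pattern: a denied read fails loudly (and would be audit-logged) rather than silently returning extra PHI.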
Administrative safeguards
- Business Associate Agreements (BAAs): Required with every vendor who touches PHI
- Risk assessments: Regular evaluation of security vulnerabilities
- Staff training: Your team needs to know how to use the system safely
- Incident response: Plans for handling breaches
Physical safeguards
- Data center security: Where is patient data stored? Who has access?
- Device controls: How are recordings and transcripts accessed?
- Disaster recovery: Backups and redundancy for PHI
The 8 Questions Every Practice Manager Should Ask
1. "Do you have executed BAAs with all subprocessors?"
Why this matters: Voice AI systems use multiple vendors — speech-to-text providers, LLMs, telephony services, cloud storage. Each one needs a BAA if they handle PHI.
ClaireMed's answer: Yes. We have executed BAAs with AWS (HIPAA-eligible services: S3, RDS, Lambda, KMS, CloudTrail), Twilio (Security Edition with HIPAA BAA), and all AI vendors with zero-retention policies (PHI never used for model training).
2. "Where is patient data stored and for how long?"
Why this matters: HIPAA's "minimum necessary" standard limits how PHI is used and disclosed, and every extra month of retained data expands your breach surface. Storing call recordings forever is a liability, not a feature.
ClaireMed's answer:
- Call recordings: 90 days (configurable), stored in S3 with Object Lock (immutable)
- Transcripts: 90 days, redacted for PII
- Metadata (call duration, routing): 12 months for analytics
- After retention period: Automatic deletion via lifecycle policies
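The retention schedule above amounts to a small set of lifecycle rules: each data class gets a purge date computed from its creation date. A minimal sketch (the 90-day and 12-month windows come from the text; the code and class names are illustrative, not ClaireMed's implementation):

```python
# Sketch: per-data-class retention rules and automatic purge dates.
# The 90-day / 12-month windows come from the text; in production this
# would be enforced by storage lifecycle policies, not application code.
from datetime import date, timedelta

RETENTION_DAYS = {
    "call_recording": 90,   # held under Object Lock until expiry
    "transcript": 90,       # stored redacted for PII
    "call_metadata": 365,   # duration, routing info, kept for analytics
}

def purge_date(data_class: str, created: date) -> date:
    """Date on which a lifecycle policy should delete this object."""
    return created + timedelta(days=RETENTION_DAYS[data_class])

print(purge_date("call_recording", date(2025, 1, 1)))  # 2025-04-01
```

Encoding retention as data rather than ad-hoc cron jobs makes the policy auditable: the same table drives deletion and answers "how long do you keep X?" in a vendor review.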
3. "How do you handle identity verification?"
Why this matters: Before discussing appointments, billing, or medical records, you must verify the caller is who they claim to be. Many voice AI systems skip this step.
ClaireMed's answer: Multi-factor verification options — date of birth + ZIP code, last 4 of medical record number, optional OTP via SMS, configurable per practice and call type.
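The two-factor policy above (for example, DOB plus ZIP) can be sketched as a constant-time comparison against salted hashes, so verification factors are never stored in plaintext. This is an illustrative sketch with demo data, not ClaireMed's implementation:

```python
# Sketch of caller identity verification before any PHI is discussed.
# The DOB + ZIP factor pair mirrors the text; storage and salting
# details are hypothetical.
import hashlib
import hmac

def _h(value: str) -> bytes:
    # Store only salted hashes of verification factors, never plaintext.
    return hashlib.sha256(b"per-practice-salt:" + value.encode()).digest()

PATIENT_FACTORS = {  # keyed by medical record number (demo data only)
    "MRN-1001": {"dob": _h("1980-03-14"), "zip": _h("94110")},
}

def verify_caller(mrn: str, dob: str, zip_code: str) -> bool:
    stored = PATIENT_FACTORS.get(mrn)
    if stored is None:
        return False
    ok_dob = hmac.compare_digest(stored["dob"], _h(dob))
    ok_zip = hmac.compare_digest(stored["zip"], _h(zip_code))
    return ok_dob and ok_zip  # both factors must match

print(verify_caller("MRN-1001", "1980-03-14", "94110"))  # True
```

`hmac.compare_digest` avoids timing side channels; an unknown MRN fails the same way as a wrong factor, so the system leaks nothing about which part was incorrect.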
4. "What happens if the AI detects an emergency?"
Why this matters: If a caller says "chest pain" or "suicidal thoughts," your AI needs immediate escalation protocols — not a scheduling bot offering next week's appointments.
ClaireMed's answer: Emergency keyword detection ("911," "emergency," "chest pain," "bleeding," "overdose," etc.) triggers an immediate response: "This sounds like an emergency. I'm connecting you to 911 / on-call provider right now." The caller is never left on hold or in voicemail.
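At its simplest, the escalation trigger described above is keyword spotting on the live transcript. A minimal sketch, using the example terms from the text (production systems would pair this with an intent classifier, and this is not ClaireMed's actual detector):

```python
# Sketch of emergency keyword detection on live call transcripts.
# The keyword list mirrors the examples in the text; the matching
# logic is illustrative only.
import re

EMERGENCY_TERMS = ["911", "emergency", "chest pain", "bleeding",
                   "overdose", "suicidal"]
# \b word boundaries keep terms from matching inside longer words;
# real systems typically combine keywords with an intent model.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, EMERGENCY_TERMS)) + r")\b",
    re.IGNORECASE,
)

def detect_emergency(utterance: str):
    """Return the first emergency term heard, or None."""
    match = PATTERN.search(utterance)
    return match.group(1).lower() if match else None

print(detect_emergency("I've had chest pain since this morning"))  # chest pain
print(detect_emergency("I'd like to reschedule my appointment"))   # None
```

The key design property is that detection short-circuits everything else: a hit routes straight to 911 or the on-call provider, never to the scheduling flow.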
5. "Do you have audit logs? Are they immutable?"
Why this matters: In case of a complaint or breach investigation, you need complete, tamper-proof logs of who accessed what and when.
ClaireMed's answer: Immutable audit logs via S3 Object Lock (cannot be deleted or modified, even by admins), CloudTrail logging for every API call and access event, and 7-year retention — beyond HIPAA's six-year documentation requirement.
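"Tamper-evident" can be illustrated with a hash chain: each log entry's hash covers the previous entry, so altering any record breaks every hash after it. This is a teaching sketch — in practice the immutability described above comes from storage-level controls like S3 Object Lock, not application code:

```python
# Sketch of a tamper-evident audit log: each entry's hash covers the
# previous entry's hash, so any edit breaks the chain. Illustrative
# only; production immutability comes from write-once storage.
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"actor": "agent-7", "action": "read", "record": "MRN-1001"})
append_entry(log, {"actor": "agent-7", "action": "update", "record": "MRN-1001"})
print(verify_chain(log))              # True
log[0]["event"]["action"] = "delete"  # simulate tampering
print(verify_chain(log))              # False
```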
6. "What are your non-clinical boundaries?"
Why this matters: Voice AI should never provide medical advice, but drawing the line between "medical" and "administrative" questions is tricky.
ClaireMed's answer: Strict boundaries by design. Any question that crosses into clinical territory triggers automatic escalation: "That's a great question for your provider. Let me connect you with our clinical team."
7. "How do you train your AI? Will my patient data be used for training?"
Why this matters: Many AI companies use customer data to improve their models. For healthcare, this is unacceptable.
ClaireMed's answer: Zero-retention policies with all AI vendors (contractual commitment: no training on PHI). If we train models, it's on synthetic data or fully de-identified samples. You own your data — we never share it.
8. "Can I see your risk assessment and security documentation?"
Why this matters: As a covered entity, you're responsible for vetting your business associates, and that requires documentation.
ClaireMed's answer: We provide a security whitepaper, sample BAA for review, penetration testing reports on request, and SOC 2 Type II (in progress, available Q2 2026).
Common HIPAA Myths About Voice AI
ClaireMed's HIPAA Architecture
Infrastructure layer
AWS HIPAA-eligible services (S3, RDS, Lambda, KMS, CloudTrail), Twilio Security Edition (telephony with HIPAA BAA), and VPC isolation so one practice's data never mingles with another's.
Application layer
End-to-end encryption (TLS 1.3 in transit, AES-256 at rest), role-based access (agents only access data needed for their role), identity verification before discussing PHI.
Operational layer
Zero-retention AI policies (no training on PHI), immutable audit logs (S3 Object Lock, 7-year retention), breach notification within 60 days per HIPAA.
Monitoring layer
Weekly performance metrics (call volume, routing accuracy, abandonment rate), quarterly security reviews (vulnerability scans, penetration testing), annual risk assessments required for BAA renewal.
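The weekly metrics in the monitoring layer reduce to a few ratios over call records. A minimal sketch (the field names on the call records are hypothetical, not ClaireMed's data model):

```python
# Sketch of the weekly metrics review: call volume, abandonment rate,
# and routing accuracy computed from call records. Field names are
# hypothetical, chosen for illustration.
def weekly_metrics(calls: list) -> dict:
    total = len(calls)
    abandoned = sum(1 for c in calls if c["abandoned"])
    routed_ok = sum(1 for c in calls if c["routed_correctly"])
    return {
        "call_volume": total,
        "abandonment_rate": abandoned / total if total else 0.0,
        "routing_accuracy": routed_ok / total if total else 0.0,
    }

calls = [
    {"abandoned": False, "routed_correctly": True},
    {"abandoned": True,  "routed_correctly": False},
    {"abandoned": False, "routed_correctly": True},
    {"abandoned": False, "routed_correctly": True},
]
print(weekly_metrics(calls))
# {'call_volume': 4, 'abandonment_rate': 0.25, 'routing_accuracy': 0.75}
```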
What Your Practice Should Do
Before selecting a vendor:
- Ask for their security whitepaper and BAA template
- Verify they have BAAs with all subprocessors (not just the main vendor)
- Check retention policies — shorter is better, and automatic deletion beats relying on manual cleanup
- Test emergency scenarios ("What if a caller says 'chest pain'?")
- Review audit logging — is it immutable? How long is it retained?
After implementation:
- Train your staff on how to use the system safely
- Review weekly metrics (are calls being handled appropriately?)
- Conduct quarterly audits (spot-check call recordings for compliance)
- Update your HIPAA risk assessment to include the voice AI system
If a breach occurs:
- Immediate investigation: what happened, how many patients affected
- Vendor notification within 24 hours per BAA
- Breach notification: affected patients within 60 days; HHS and media notice if 500+ patients are affected (smaller breaches go in an annual report to HHS)
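The Breach Notification Rule thresholds above can be written down as simple decision logic, which is a useful way to sanity-check an incident response plan. The thresholds reflect 45 CFR §§164.404–408; the function itself is an illustration, not legal advice:

```python
# Sketch of HIPAA Breach Notification Rule obligations as decision
# logic. Thresholds follow 45 CFR 164.404-408; illustrative only --
# consult counsel for an actual incident.
def notification_obligations(patients_affected: int) -> dict:
    return {
        # Affected individuals: always, within 60 days of discovery.
        "individuals_within_60_days": True,
        # HHS: within 60 days for 500+ patients, else via annual report.
        "hhs": ("within_60_days" if patients_affected >= 500
                else "annual_report"),
        # Media notice applies to breaches affecting 500+ residents
        # of a state or jurisdiction.
        "media_notice": patients_affected >= 500,
    }

print(notification_obligations(1200))
print(notification_obligations(40))
```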
The Bottom Line
HIPAA compliance for voice AI isn't optional — it's foundational.
Don't settle for vendors who say "we're working on it" or "it's coming soon." Your practice's reputation and your patients' trust depend on getting this right from day one.