HIPAA Compliant App Development: What Your Infrastructure Must Handle
Last updated: March 2026
Most guides on HIPAA-compliant app development are written by dev agencies trying to sell you an engagement, or by compliance consultants who have never shipped a production healthcare app. The result is advice that conflates two very different problems.
HIPAA compliance in software has two layers: infrastructure and application. The infrastructure layer covers encryption at rest, network isolation, audit log storage, patch management, and BAA coverage for your hosting stack. The application layer covers the controls you write in code: access management, session handling, PHI logging behavior, and integrity controls in your data model. These are distinct responsibilities. What your platform handles and what you build in your application are not the same question, and most HIPAA content never separates them.
This guide does. It covers what the Security Rule actually requires, what your infrastructure must handle for you to inherit, and what you have to build in your application regardless of which platform you use.
What HIPAA requires of app developers (the relevant parts)
HIPAA has three main rules. All three apply to business associates, which is the category most digital health startups fall into.
The Privacy Rule governs how PHI can be used and disclosed. For engineers, it surfaces primarily as two requirements: BAAs with every vendor that touches PHI, and the minimum necessary standard (transmit only the PHI required for the task, not a full record when a patient identifier will do).
The Security Rule is where most engineering work lives. It requires administrative, physical, and technical safeguards for electronic PHI, documented policies for each, and retention of all compliance documentation for a minimum of six years. The Security Rule is also risk-based: you select controls proportionate to your risk profile, document your reasoning, and revisit it when your environment changes.
Within the Security Rule, access control expectations extend beyond authentication. Systems must enforce least privilege, meaning users and services only have access to the minimum PHI necessary to perform their function. Separation of duties must also be implemented where appropriate, preventing any single individual from having unchecked control over critical operations such as access provisioning, audit log management, or data modification.
The Breach Notification Rule specifies what constitutes a reportable breach and who gets notified. For business associates, the obligation is to notify your covered entity customer without unreasonable delay, and no later than 60 days from discovery. Many BAAs specify 24-72 hours. Know which applies to you before something goes wrong.
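The interplay between the regulatory cap and a contractual BAA window can be made concrete with a small sketch. This is illustrative only: the 72-hour window shown is an assumed contractual term, not a HIPAA requirement, and the actual clock mechanics in your BAA may differ.

```python
from datetime import datetime, timedelta
from typing import Optional

# HIPAA's Breach Notification Rule caps business-associate notification at
# 60 days from discovery; many BAAs impose a shorter contractual window.
REGULATORY_LIMIT = timedelta(days=60)

def notification_deadline(discovered_at: datetime,
                          baa_window: Optional[timedelta] = None) -> datetime:
    """Return the earliest deadline that applies: the stricter of the
    regulatory 60-day cap and any contractual BAA window."""
    windows = [REGULATORY_LIMIT]
    if baa_window is not None:
        windows.append(baa_window)
    return discovered_at + min(windows)

# A 72-hour BAA clause beats the regulatory cap:
deadline = notification_deadline(datetime(2026, 3, 1, 9, 0), timedelta(hours=72))
```

The point of writing it down: when a BAA window exists, it almost always governs, and the 60-day figure is a ceiling, not a target.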
The Security Rule is where your architecture decisions matter most. The rest of this guide focuses there.
The technical safeguards you must build in your app
The Security Rule's technical safeguards break down into five categories. Some of these are genuinely infrastructure concerns. Others you have to implement in application code. This section covers the ones that belong to you.
Access controls
HIPAA requires that each user have a unique identifier for accessing systems that contain PHI. Shared accounts are not compliant. If your application lets two people log in as "admin@company.com," that's a violation regardless of what your infrastructure does.
Required: unique user identification for anyone who can access PHI.
Addressable (but expected in practice): automatic session timeout after inactivity. Fifteen minutes is the common standard for web applications. "Addressable" under the Security Rule doesn't mean optional. It means you assess whether it's appropriate for your environment and either implement it or document why an equivalent control is sufficient. For healthcare applications with live patient data, automatic logoff will always be appropriate.
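The inactivity check itself is simple; what matters is running it on every request and destroying the session when it fires. A minimal sketch, framework-agnostic (the function and constant names are illustrative):

```python
import time
from typing import Optional

SESSION_TIMEOUT_SECONDS = 15 * 60  # the common standard for web apps handling PHI

def session_expired(last_activity_ts: float, now: Optional[float] = None) -> bool:
    """True once the session has been idle past the timeout; the caller must
    then destroy the session and require re-authentication before serving
    anything that touches PHI."""
    now = time.time() if now is None else now
    return (now - last_activity_ts) > SESSION_TIMEOUT_SECONDS
```

In practice this lives in request middleware: check before handling, update the last-activity timestamp after.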
Also required: a documented emergency access procedure. This is the process for accessing PHI when the normal access mechanism fails, for example when the primary administrator is unavailable during an incident. Most startups skip this until an auditor asks for it.
Audit controls
This section covers what your application code must log. For the infrastructure layer (immutable storage, log drains, platform-level activity logs), see Hosting Requirements. For retention periods, protection requirements, and what auditors check, see Audit Log Retention.
This is one of the most commonly under-built controls in early-stage healthcare apps. HIPAA requires hardware, software, and procedural mechanisms to record and examine access and activity in systems that contain PHI. At the application level, that means audit logs: who accessed which PHI, what action they took, and when.
What must be logged:
Authentication events (login, logout, failed attempts)
PHI access (which records were viewed, by whom)
PHI modifications (creates, updates, deletes)
Permission changes (who gained or lost access to what)
Administrative actions that affect PHI visibility
These logs must be retained for a minimum of six years and protected against unauthorized modification. The logs themselves must be immutable. An audit log that the application itself can overwrite provides no compliance value.
If your application stores audit logs in the same database your application can write to, you have a problem.
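The event shape matters more than the mechanism. A minimal sketch of a structured audit event, assuming a write-only sink (the field and action names are illustrative, not a standard):

```python
import io
import json
import time
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AuditEvent:
    """The minimum context a reviewer needs: who, what, which record, when."""
    actor_id: str    # unique user identifier, never a shared account
    action: str      # e.g. "phi.view", "phi.update", "auth.login_failed"
    resource: str    # opaque record ID only -- no PHI values in the event itself
    timestamp: float

def emit(event: AuditEvent, sink) -> None:
    """Serialize one event per line to an append-only sink. In production the
    sink should be a log drain the application cannot read back or rewrite."""
    sink.write(json.dumps(asdict(event)) + "\n")

sink = io.StringIO()  # stand-in for a real append-only drain
emit(AuditEvent("user-7f2", "phi.view", "record-19c4", time.time()), sink)
```

Note that the event carries record identifiers, not PHI values: the audit log should tell you *that* a record was accessed without itself becoming a PHI store.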
Integrity controls
HIPAA requires mechanisms to ensure that ePHI has not been altered or destroyed in an unauthorized manner. At the application layer, this means access controls that prevent unauthorized writes, logging of all modifications with sufficient context to detect tampering, and database-level constraints that prevent corruption. It does not require cryptographic checksums on every record, but it does require that you can detect and demonstrate that unauthorized modification did not occur.
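One optional way to make modification logs tamper-evident is hash chaining: each entry's hash covers the previous entry's hash, so a retroactive edit breaks every hash after it. This is a sketch of the technique, not a HIPAA requirement; it is one way to satisfy the "detect and demonstrate" expectation above.

```python
import hashlib
import json

def chain_entry(prev_hash: str, record: dict) -> dict:
    """Append-only log entry whose hash covers the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "record": record, "hash": digest}

def verify_chain(entries: list) -> bool:
    """Recompute every hash and link; False means something was altered."""
    for i, e in enumerate(entries):
        payload = json.dumps(e["record"], sort_keys=True)
        if e["hash"] != hashlib.sha256((e["prev"] + payload).encode()).hexdigest():
            return False
        if i > 0 and e["prev"] != entries[i - 1]["hash"]:
            return False
    return True
```

The chain only proves integrity if the head hash is anchored somewhere the application can't write, which is exactly the infrastructure separation discussed later.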
Session management and transmission security
PHI transmitted over a network must be protected against unauthorized interception. TLS 1.2 or higher is the current standard; TLS 1.0 and 1.1 are not acceptable. This is primarily an infrastructure concern if your platform handles TLS termination, but if your application makes outbound API calls to third-party services that handle PHI (an EHR integration, a lab results vendor), you are responsible for ensuring those connections use acceptable protocols.
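For outbound calls, enforcing the protocol floor is a few lines with the standard library. A sketch using Python's `ssl` module (the EHR URL is a placeholder):

```python
import ssl

# A client context that refuses TLS 1.0/1.1 for outbound PHI traffic while
# keeping certificate and hostname verification on (the secure defaults).
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Pass `ctx` to anything that accepts an SSLContext, e.g.:
#   urllib.request.urlopen("https://ehr.example.test/api", context=ctx)
```

The same idea applies in any language: pin the minimum TLS version in one shared client configuration rather than trusting each integration to get it right.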
PHI handling in application code
A few failure modes that are entirely your responsibility, regardless of infrastructure:
Logs. Application logs that capture request bodies, error details, or debug information can easily capture PHI. A stack trace that includes a patient record ID, a name, or a date of birth is a PHI exposure. Structure your logging so that PHI never appears in application logs by default. This requires deliberate design, not just good intentions.
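One concrete pattern for "by default" is a redaction filter attached to every logger. This is a backstop, not the primary control (the primary control is never passing PHI to the logger), and the key denylist shown is illustrative:

```python
import logging

PHI_KEYS = {"name", "dob", "email", "mrn", "ssn"}  # extend for your data model

class RedactPHIFilter(logging.Filter):
    """Replace values for known PHI keys when log arguments are dict-shaped.

    A denylist is inherently incomplete; treat this as defense in depth
    behind a convention of logging opaque IDs only.
    """
    def filter(self, record: logging.LogRecord) -> bool:
        if isinstance(record.args, dict):
            record.args = {k: ("[REDACTED]" if k in PHI_KEYS else v)
                           for k, v in record.args.items()}
        return True  # never drop the record, only scrub it
```

Attach it once to the root logger so no code path can opt out silently.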
Error messages. Error responses that return PHI (a "user not found" message that echoes back an email address, a validation error that includes a record ID) are a frequent source of unintentional disclosure. Audit your error handling for PHI leakage before launch.
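A common fix is to return a generic message plus an opaque reference ID, keeping the identifying detail server-side. A minimal sketch (the function shape and field names are illustrative):

```python
import logging
import uuid

log = logging.getLogger("internal")

def client_error(status: int, public_message: str, internal_detail: str) -> dict:
    """Build a client-safe error body. The PHI-bearing detail goes only to
    server-side logs; the reference ID lets support correlate the two
    without echoing identifiers back to the client."""
    ref = uuid.uuid4().hex
    log.error("error ref=%s detail=%s", ref, internal_detail)  # stays internal
    return {"status": status, "error": public_message, "ref": ref}

# Instead of echoing the lookup key back to the client:
resp = client_error(404, "Resource not found",
                    "user lookup miss for email jane@example.test")
```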
Third-party scripts and SDKs. Analytics tools, error monitoring platforms, and customer support SDKs that run in your application or server environment can capture PHI if not configured carefully. Any of these that touch PHI require a BAA. Most don't offer one.
What your infrastructure must handle
The controls above are yours to build. The following are the responsibility of your infrastructure platform. If you're running on AWS directly, you're building and maintaining these yourself. If you're on a managed HIPAA-compliant platform, you inherit them.
Encryption at rest. ePHI stored on disk must be encrypted. AES-256 is the current standard. This applies to your application database, file storage, backups, and any other persistent storage layer. At the infrastructure level, this means encrypted EBS volumes, encrypted RDS instances, encrypted S3 buckets, and documented evidence that encryption is enabled on every volume, not just the ones you remember to configure.
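"Documented evidence on every volume" is an auditable check, not a belief. A sketch of the check against the response shape of boto3's `ec2.describe_volumes()`, run here against a literal so the logic is testable offline (the volume IDs are made up):

```python
def unencrypted_volumes(describe_volumes_response: dict) -> list:
    """Return IDs of EBS volumes that do not have encryption at rest enabled."""
    return [v["VolumeId"]
            for v in describe_volumes_response.get("Volumes", [])
            if not v.get("Encrypted", False)]

sample = {"Volumes": [
    {"VolumeId": "vol-0a1", "Encrypted": True},
    {"VolumeId": "vol-0b2", "Encrypted": False},  # this one fails the audit
]}
print(unencrypted_volumes(sample))  # ['vol-0b2']
```

Run a check like this on a schedule and keep its output: an empty result, dated, is exactly the evidence an auditor asks for.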
Encryption in transit within your infrastructure. TLS between your application and your database, between services in your environment, and between your environment and external systems. The infrastructure layer should enforce this by default, not rely on each developer configuring it correctly.
Network isolation. ePHI environments should be isolated at the network level. Production databases should not be reachable from the public internet. Internal services should communicate over private networks. This is a deployment architecture concern, not something you configure in application code.
Vulnerability management. Your infrastructure components (the OS, web server, database, and supporting packages) require regular patching and monitoring for known vulnerabilities. On a self-managed stack, this is your team's ongoing responsibility. On a managed platform, the provider handles OS-level patching, and you're responsible for your application dependencies.
Audit log storage. The infrastructure layer should provide tamper-evident storage for audit logs that your application cannot modify. This is the separation that makes audit logs meaningful. If your application and your audit log storage share credentials, you don't have an audit log. You have a record your application could alter.
BAA coverage. Your hosting provider, managed database provider, and any infrastructure service that could process or store your application data must sign a BAA. AWS will sign a BAA, but it covers only the services and configurations you've explicitly set up as HIPAA-eligible. Getting it right requires navigating their Shared Responsibility Model carefully. A managed platform like Aptible provides a single BAA that covers the infrastructure layer, so you don't have to identify and cover each underlying service individually.
The practical difference: on AWS without a managed platform, you're responsible for inventorying every service that touches PHI and ensuring it's covered. That's 10-15 services before you write application code, and it grows as your architecture does.
Business associate agreements: more than your hosting provider
A BAA is required with every vendor that handles PHI on your behalf. Most startups cover the obvious ones (hosting, managed database) and miss the rest.
Go through your full vendor stack and ask: could this service receive PHI?
Error monitoring (Sentry, Datadog, Rollbar): if your application sends error context that includes request data, yes
Log management (Papertrail, Loggly, Elastic): if application logs could contain PHI, yes
Customer support tools (Intercom, Zendesk): if support conversations involve patient data, yes
Email service providers: if your application sends emails containing PHI, yes
Analytics platforms: if event data includes PHI, yes
Video conferencing (for telehealth): almost certainly yes
Some of these vendors offer BAAs. Many don't. If a vendor won't sign a BAA, you cannot use them in a context where they could receive PHI. That's not negotiable.
Getting a BAA from a vendor isn't just about collecting a signature. The BAA defines what security obligations the vendor has accepted. A BAA from a vendor with no security certifications and no evidence of controls is a piece of paper. When evaluating vendors, ask for their SOC 2 Type II report alongside the BAA. The report is evidence that the BAA means something.
For a full breakdown of what to look for in a BAA, see What is a HIPAA BAA.
Common mistakes that cause HIPAA failures in healthcare apps
These are the patterns that show up in breach reports and enforcement actions. All of them are preventable.
"AWS is HIPAA compliant, so our stack is HIPAA compliant." AWS offers HIPAA-eligible services, but compliance is your responsibility under their Shared Responsibility Model. An unencrypted S3 bucket, an RDS instance without encryption at rest enabled, or a security group that exposes your database to the public internet are all your problem, not AWS's. HIPAA eligibility from a cloud provider is a necessary condition, not a sufficient one.
PHI in application logs. This is how most accidental PHI exposures happen. A developer adds debug logging for a patient lookup endpoint. The logs go to a third-party log management service that didn't sign a BAA. Now you have a PHI exposure and a BAA violation at the same time. Audit your logging configuration before launch and on every significant code change.
Missing BAAs with secondary vendors. Teams are diligent about getting a BAA from their hosting provider and then integrate Sentry, add Intercom, connect a Twilio SMS notification flow, and never ask whether PHI could flow through any of them.
Shared administrative accounts. Unique user identification is required. If your ops team shares a database admin account, you cannot produce a meaningful audit trail for that account's activity. You also can't revoke one person's access without changing credentials for everyone.
No audit logging for PHI access. Most healthcare applications log errors and events. Far fewer log every PHI access event with sufficient context to reconstruct who accessed what and when. Audit logs are not just a compliance checkbox. They're what lets you scope a breach when one occurs.
PHI in error messages returned to clients. Error responses that echo back record identifiers, patient names, or other identifying information are a common source of PHI exposure. They're also the kind of thing that shows up in security research.
No documented emergency access procedure. The Security Rule requires one. It doesn't have to be elaborate, but "we'd figure it out" is not a compliant answer.
How to structure development for HIPAA compliance
HIPAA compliance isn't a feature you add at the end. The decisions that are hardest to retrofit (data model, access control architecture, audit logging, encryption scheme) are easiest to get right before you write the first line of production code.
Start with data and system classification. Before you build, identify where PHI exists, where it's stored, and how it flows through your system. That means databases, APIs, logs, and third-party integrations. Classify systems based on whether they store, process, or transmit PHI. For early-stage teams, this doesn't need to be formal. A simple map of data flows and PHI touchpoints is sufficient to start. This is one of the most commonly skipped steps in early healthcare applications. Teams focus on infrastructure and access controls but lack a clear picture of where PHI actually flows, which leads to missed controls, accidental exposure, and gaps in BAA coverage. It also enables everything else on this list: you can't do a meaningful risk assessment without knowing which parts of your system are in scope.
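The map can literally start as a data structure in your repo. A minimal sketch of a PHI touchpoint inventory; the system names are illustrative, and a real inventory would also record data flows between systems:

```python
# Each system is classified by whether it stores, processes, or transmits PHI.
PHI_MAP = {
    "postgres/patients": {"stores": True,  "processes": True,  "transmits": False},
    "api/records":       {"stores": False, "processes": True,  "transmits": True},
    "logs/app":          {"stores": False, "processes": False, "transmits": False},
    "vendor/sentry":     {"stores": False, "processes": False, "transmits": False},
}

def in_scope(system: str) -> bool:
    """A system is in HIPAA scope if it stores, processes, or transmits PHI."""
    return any(PHI_MAP[system].values())
```

Even this much gives you the scoping input a risk assessment needs, and a diff-able record of when PHI started (or stopped) flowing somewhere.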
Start with a risk assessment. Document what risks exist at each point in your PHI flows and what controls you're implementing in response. This is a required component of the Security Rule, and it's also a useful design exercise. Teams that skip it often build without knowing which parts of their system are in scope.
Design for minimum necessary access. Your data model and access control architecture should reflect the principle that each user, service, and component accesses only the PHI it needs to function. This is easier to enforce with role-based access control designed in from the start than bolted on after the fact.
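Designed in from the start, minimum necessary access reduces to a deny-by-default permission check. A sketch with illustrative roles and actions:

```python
# Role -> permitted actions on PHI; roles and action names are illustrative.
ROLE_PERMISSIONS = {
    "clinician": {"phi.view", "phi.update"},
    "billing":   {"phi.view_billing_fields"},
    "support":   set(),  # support works from de-identified data by default
}

def authorize(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions get nothing."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design choice that matters is the default: an unrecognized role or action must fail closed, so a new feature can never grant PHI access by omission.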
Document your security architecture. HIPAA requires documented policies and procedures for each safeguard. This doesn't mean writing a 50-page document before launch, but it does mean having written records of how your controls work, why you chose them, and how they're maintained. Those records need to be retained for six years.
Test before you launch. Penetration testing before your first production PHI deployment surfaces the vulnerabilities that code review misses. Many healthcare enterprise customers will ask for evidence of penetration testing as part of their vendor approval process.
Build an incident response plan. Before you have a breach, know how you'll detect it, who gets notified and when, how you'll scope which PHI was involved, and what your notification obligations are under your BAAs. A breach during which you're also figuring out your process for the first time is worse in every possible way.
HIPAA-compliant app development checklist
| Requirement | Layer | Notes |
|---|---|---|
| Encryption at rest (AES-256) | Infrastructure | Must cover database, backups, file storage |
| Encryption in transit (TLS 1.2+) | Infrastructure + App | Infrastructure handles internal; app is responsible for outbound connections |
| Network isolation | Infrastructure | Databases not publicly reachable; internal services on private network |
| Unique user identification | Application | No shared accounts for PHI access |
| Role-based access controls | Application | Least-privilege access enforced in code |
| Automatic session logoff | Application | 15-minute inactivity timeout is the common standard |
| Emergency access procedure | Administrative | Documented; doesn't need to be elaborate |
| PHI audit logs (6-year retention) | Application + Infrastructure | Application generates; infrastructure provides tamper-evident storage |
| Secure deletion / disposal of PHI | Application + Administrative | PHI must be disposed of in a manner that renders it unrecoverable; document the process |
| Immutable log storage | Infrastructure | Application cannot modify its own audit logs |
| PHI excluded from application logs | Application | Requires deliberate engineering |
| BAA: hosting provider | Vendor management | Must cover infrastructure layer |
| BAA: managed database | Vendor management | Including backups |
| BAA: all secondary vendors that touch PHI | Vendor management | Logging, monitoring, support, email, analytics |
| Risk assessment (documented) | Administrative | Required; must be updated when environment changes |
| Security officer designated | Administrative | Named individual; needn't be a dedicated full-time role at early stage |
| Workforce training records | Administrative | All staff with PHI access; retained 6 years |
| Incident response plan | Administrative | Must exist before a breach, not after |
| Vulnerability management | Infrastructure | OS and dependency patching |
| Penetration testing | Application + Infrastructure | Before launch; annually or after significant changes |
| Documentation retention (6 years) | Administrative | Policies, risk assessments, training records, audit logs |
FAQs
Do I need HIPAA compliance for my healthcare app?
If your application processes, stores, or transmits protected health information on behalf of a covered entity (a healthcare provider, health plan, or clearinghouse), you are a business associate and HIPAA applies to you. The practical test: if a hospital, clinic, or insurance company is your customer and patient data flows through your platform, you're in scope. If you're building a consumer wellness app with no connection to the healthcare system, HIPAA may not apply, but you should confirm with an attorney.
Can I build a HIPAA-compliant app directly on AWS?
Yes, but the compliance burden is substantially higher than on a managed platform. AWS signs a BAA and offers HIPAA-eligible services, but the Shared Responsibility Model means you are responsible for configuring every service correctly, maintaining encryption across every storage layer, handling OS-level patching, and inventorying every AWS service that touches PHI to ensure it's covered under your BAA. Teams that underestimate this scope regularly find gaps during audits or customer security reviews.
What's the difference between HIPAA-eligible and HIPAA-compliant?
HIPAA-eligible refers to cloud services or platforms that meet the conditions necessary to be used for PHI under HIPAA. It's a vendor status claim. HIPAA-compliant refers to an organization's posture: that it has implemented the required safeguards, documented them, and can demonstrate compliance. A HIPAA-eligible hosting provider is a prerequisite. Your own compliance is a separate question.
Does my mobile app need to be HIPAA compliant?
If the mobile app accesses or stores PHI, yes: the same technical safeguards apply. This means encrypted local storage if PHI is stored on device, session timeout, unique user authentication, and audit logging of PHI access. PHI should generally not be stored on device at all unless there's a specific reason to do so, as it expands your attack surface and your compliance scope.
How much does HIPAA compliant app development cost?
The largest cost variable is how much of the infrastructure compliance layer you inherit from your platform versus build and maintain yourself. Teams on managed HIPAA infrastructure typically spend $1,000-$3,000/month on hosting, with a one-time investment of $10,000-$30,000 to establish initial policies, documentation, and legal review. Teams managing compliance infrastructure themselves spend more in engineering time, ongoing maintenance, and evidence-generation work than they typically account for at the start.
Do I need a penetration test before launch?
HIPAA doesn't explicitly require penetration testing, but it is part of a reasonable risk management program and most healthcare enterprise customers will ask for evidence of it during vendor review. Consider it a prerequisite for any customer who takes their own compliance seriously.
Next steps
If you're starting from scratch on infrastructure: HIPAA Compliant Hosting: what to look for in a platform and how Aptible covers the infrastructure layer
If you need a complete control checklist: HIPAA Compliance Checklist: all required controls with ownership mapped between platform and application
If you're reviewing or negotiating vendor BAAs: What is a HIPAA BAA: what terms matter and what to look for before signing
If you're building with AI or LLMs: HIPAA-Compliant AI: BAA requirements, audit logging patterns, and AI gateway architecture for healthcare
Aptible is a HIPAA-compliant deployment platform built for digital health. Every environment includes a signed BAA, AES-256 encryption at rest, TLS in transit, network isolation, tamper-evident audit logging, and HITRUST R2-certified controls. Your team builds the application layer; Aptible handles the infrastructure. See how Aptible works
This guide is for informational purposes only. Aptible is not a law firm, and nothing here constitutes legal advice. Consult an attorney for advice specific to your organization's compliance obligations.
For a full breakdown of what to look for in a BAA, see What is a HIPAA BAA.
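The vendor review above reduces to a simple inventory check you can rerun whenever a new integration lands. A sketch, with hypothetical vendor data (the field names are illustrative):

```python
def baa_gaps(vendors):
    """Flag vendors that could receive PHI but have no signed BAA."""
    return [v["name"] for v in vendors
            if v["could_receive_phi"] and not v["baa_signed"]]

vendors = [
    {"name": "hosting",          "could_receive_phi": True,  "baa_signed": True},
    {"name": "error-monitoring", "could_receive_phi": True,  "baa_signed": False},
    {"name": "billing",          "could_receive_phi": False, "baa_signed": False},
]
print(baa_gaps(vendors))  # → ['error-monitoring']
```

Anything this flags either needs a signed BAA or needs to be removed from every PHI-carrying path.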
Common mistakes that cause HIPAA failures in healthcare apps
These are the patterns that show up in breach reports and enforcement actions. All of them are preventable.
"AWS is HIPAA compliant, so our stack is HIPAA compliant." AWS offers HIPAA-eligible services, but compliance is your responsibility under their Shared Responsibility Model. An unencrypted S3 bucket, an RDS instance without encryption at rest enabled, or a security group that exposes your database to the public internet are all your problem, not AWS's. HIPAA eligibility from a cloud provider is a necessary condition, not a sufficient one.
PHI in application logs. This is how most accidental PHI exposures happen. A developer adds debug logging for a patient lookup endpoint. The logs go to a third-party log management service that didn't sign a BAA. Now you have a PHI exposure and a BAA violation at the same time. Audit your logging configuration before launch and on every significant code change.
Missing BAAs with secondary vendors. Teams are diligent about getting a BAA from their hosting provider and then integrate Sentry, add Intercom, connect a Twilio SMS notification flow, and never ask whether PHI could flow through any of them.
Shared administrative accounts. Unique user identification is required. If your ops team shares a database admin account, you cannot produce a meaningful audit trail for that account's activity. You also can't revoke one person's access without changing credentials for everyone.
No audit logging for PHI access. Most healthcare applications log errors and events. Far fewer log every PHI access event with sufficient context to reconstruct who accessed what and when. Audit logs are not just a compliance checkbox. They're what lets you scope a breach when one occurs.
PHI in error messages returned to clients. Error responses that echo back record identifiers, patient names, or other identifying information are a common source of PHI exposure. They're also the kind of flaw that surfaces in public security disclosures.
No documented emergency access procedure. The Security Rule requires one. It doesn't have to be elaborate, but "we'd figure it out" is not a compliant answer.
How to structure development for HIPAA compliance
HIPAA compliance isn't a feature you add at the end. The decisions that are hardest to retrofit (data model, access control architecture, audit logging, encryption scheme) are easiest to get right before you write the first line of production code.
Start with data and system classification. Before you build, identify where PHI exists, where it's stored, and how it flows through your system. That means databases, APIs, logs, and third-party integrations. Classify systems based on whether they store, process, or transmit PHI. For early-stage teams, this doesn't need to be formal. A simple map of data flows and PHI touchpoints is sufficient to start. This is one of the most commonly skipped steps in early healthcare applications. Teams focus on infrastructure and access controls but lack a clear picture of where PHI actually flows, which leads to missed controls, accidental exposure, and gaps in BAA coverage. It also enables everything else on this list: you can't do a meaningful risk assessment without knowing which parts of your system are in scope.
Follow with a risk assessment. Document what risks exist at each point in your PHI flows and what controls you're implementing in response. This is a required component of the Security Rule, and it's also a useful design exercise.
Design for minimum necessary access. Your data model and access control architecture should reflect the principle that each user, service, and component accesses only the PHI it needs to function. This is easier to enforce with role-based access control designed in from the start than bolted on after the fact.
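One concrete way to encode minimum necessary access is to make the serializer role-aware, so a field the role doesn't need never leaves the data layer. A sketch with a hypothetical role-to-fields map:

```python
# Hypothetical role-to-fields map: each role sees only what it needs.
ROLE_FIELDS = {
    "clinician": {"name", "dob", "diagnosis", "medications"},
    "billing":   {"name", "insurance_id"},
    "support":   {"name"},
}

def serialize_patient(record, role):
    """Return only the PHI fields the given role is allowed to see."""
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles get nothing
    return {k: v for k, v in record.items() if k in allowed}

patient = {"name": "A. Example", "dob": "1980-01-01",
           "diagnosis": "J45.909", "medications": "albuterol",
           "insurance_id": "X123"}
print(serialize_patient(patient, "billing"))
# → {'name': 'A. Example', 'insurance_id': 'X123'}
```

Because the filter sits in one place, adding a role or tightening a role's view is a one-line change rather than an audit of every endpoint.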
Document your security architecture. HIPAA requires documented policies and procedures for each safeguard. This doesn't mean writing a 50-page document before launch, but it does mean having written records of how your controls work, why you chose them, and how they're maintained. Those records need to be retained for six years.
Test before you launch. Penetration testing before your first production PHI deployment surfaces the vulnerabilities that code review misses. Many healthcare enterprise customers will ask for evidence of penetration testing as part of their vendor approval process.
Build an incident response plan. Before you have a breach, know how you'll detect it, who gets notified and when, how you'll scope which PHI was involved, and what your notification obligations are under your BAAs. A breach during which you're also figuring out your process for the first time is worse in every possible way.
HIPAA-compliant app development checklist
| Requirement | Layer | Notes |
|---|---|---|
| Encryption at rest (AES-256) | Infrastructure | Must cover database, backups, file storage |
| Encryption in transit (TLS 1.2+) | Infrastructure + App | Infrastructure handles internal traffic; app is responsible for outbound connections |
| Network isolation | Infrastructure | Databases not publicly reachable; internal services on private network |
| Unique user identification | Application | No shared accounts for PHI access |
| Role-based access controls | Application | Least-privilege access enforced in code |
| Automatic session logoff | Application | 15-minute inactivity timeout is a common standard |
| Emergency access procedure | Administrative | Documented; doesn't need to be elaborate |
| PHI audit logs (6-year retention) | Application + Infrastructure | Application generates; infrastructure provides tamper-evident storage |
| Secure deletion / disposal of PHI | Application + Administrative | PHI must be disposed of in a manner that renders it unrecoverable; document the process |
| Immutable log storage | Infrastructure | Application cannot modify its own audit logs |
| PHI excluded from application logs | Application | Requires deliberate engineering |
| BAA: hosting provider | Vendor management | Must cover infrastructure layer |
| BAA: managed database | Vendor management | Including backups |
| BAA: all secondary vendors that touch PHI | Vendor management | Logging, monitoring, support, email, analytics |
| Risk assessment (documented) | Administrative | Required; must be updated when environment changes |
| Security officer designated | Administrative | Named individual; need not be a dedicated role at an early stage |
| Workforce training records | Administrative | All staff with PHI access; retained 6 years |
| Incident response plan | Administrative | Must exist before a breach, not after |
| Vulnerability management | Infrastructure | OS and dependency patching |
| Penetration testing | Application + Infrastructure | Before launch; annually or after significant changes |
| Documentation retention (6 years) | Administrative | Policies, risk assessments, training records, audit logs |
FAQs
Do I need HIPAA compliance for my healthcare app?
If your application processes, stores, or transmits protected health information on behalf of a covered entity (a healthcare provider, health plan, or clearinghouse), you are a business associate and HIPAA applies to you. The practical test: if a hospital, clinic, or insurance company is your customer and patient data flows through your platform, you're covered. If you're building a consumer wellness app with no connection to the healthcare system, HIPAA may not apply, but you should confirm with an attorney.
Can I build a HIPAA-compliant app directly on AWS?
Yes, but the compliance burden is substantially higher than on a managed platform. AWS signs a BAA and offers HIPAA-eligible services, but the Shared Responsibility Model means you are responsible for configuring every service correctly, maintaining encryption across every storage layer, handling OS-level patching, and inventorying every AWS service that touches PHI to ensure it's covered under your BAA. Teams that underestimate this scope regularly find gaps during audits or customer security reviews.
What's the difference between HIPAA-eligible and HIPAA-compliant?
HIPAA-eligible refers to cloud services or platforms that meet the conditions necessary to be used for PHI under HIPAA. It's a vendor status claim. HIPAA-compliant refers to an organization's posture: that it has implemented the required safeguards, documented them, and can demonstrate compliance. A HIPAA-eligible hosting provider is a prerequisite. Your own compliance is a separate question.
Does my mobile app need to be HIPAA compliant?
If the mobile app accesses or stores PHI, yes: the same technical safeguards apply. This means encrypted local storage if PHI is stored on device, session timeout, unique user authentication, and audit logging of PHI access. PHI should generally not be stored on device at all unless there's a specific reason to do so, as it expands your attack surface and your compliance scope.
How much does HIPAA compliant app development cost?
The largest cost variable is how much of the infrastructure compliance layer you inherit from your platform versus build and maintain yourself. Teams on managed HIPAA infrastructure typically spend $1,000-$3,000/month on hosting, with a one-time investment of $10,000-$30,000 to establish initial policies, documentation, and legal review. Teams managing compliance infrastructure themselves spend more in engineering time, ongoing maintenance, and evidence-generation work than they typically account for at the start.
Do I need a penetration test before launch?
HIPAA doesn't explicitly require penetration testing, but it is part of a reasonable risk management program and most healthcare enterprise customers will ask for evidence of it during vendor review. Consider it a prerequisite for any customer who takes their own compliance seriously.
Next steps
If you're starting from scratch on infrastructure: HIPAA Compliant Hosting: what to look for in a platform and how Aptible covers the infrastructure layer
If you need a complete control checklist: HIPAA Compliance Checklist: all required controls with ownership mapped between platform and application
If you're reviewing or negotiating vendor BAAs: What is a HIPAA BAA: what terms matter and what to look for before signing
If you're building with AI or LLMs: HIPAA-Compliant AI: BAA requirements, audit logging patterns, and AI gateway architecture for healthcare
Aptible is a HIPAA-compliant deployment platform built for digital health. Every environment includes a signed BAA, AES-256 encryption at rest, TLS in transit, network isolation, tamper-evident audit logging, and HITRUST R2-certified controls. Your team builds the application layer; Aptible handles the infrastructure. See how Aptible works
This guide is for informational purposes only. Aptible is not a law firm, and nothing here constitutes legal advice. Consult an attorney for advice specific to your organization's compliance obligations.
Within the Security Rule, access control expectations extend beyond authentication. Systems must enforce least privilege, meaning users and services only have access to the minimum PHI necessary to perform their function. Separation of duties must also be implemented where appropriate, preventing any single individual from having unchecked control over critical operations such as access provisioning, audit log management, or data modification.
The Breach Notification Rule specifies what constitutes a reportable breach and who gets notified. For business associates, the obligation is to notify your covered entity customer without unreasonable delay, and no later than 60 days from discovery. Many BAAs specify 24-72 hours. Know which applies to you before something goes wrong.
The Security Rule is where your architecture decisions matter most. The rest of this guide focuses there.
The technical safeguards you must build in your app
The Security Rule's technical safeguards break down into five categories. Some of these are genuinely infrastructure concerns. Others you have to implement in application code. This section covers the ones that belong to you.
Access controls
HIPAA requires that each user have a unique identifier for accessing systems that contain PHI. Shared accounts are not compliant. If your application lets two people log in as "admin@company.com," that's a violation regardless of what your infrastructure does.
Required: unique user identification for anyone who can access PHI.
Addressable (but expected in practice): automatic session timeout after inactivity. Fifteen minutes is the common standard for web applications. "Addressable" under the Security Rule doesn't mean optional. It means you assess whether it's appropriate for your environment and either implement it or document why an equivalent control is sufficient. For healthcare applications with live patient data, automatic logoff will always be appropriate.
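The timeout check itself is a few lines. A minimal sketch of the inactivity rule, assuming you track a last-activity timestamp per session:

```python
from datetime import datetime, timedelta, timezone

INACTIVITY_LIMIT = timedelta(minutes=15)  # common standard for web apps

def session_expired(last_activity, now=None):
    """True if the session has been idle longer than the inactivity limit."""
    now = now or datetime.now(timezone.utc)
    return now - last_activity > INACTIVITY_LIMIT

start = datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)
print(session_expired(start, now=start + timedelta(minutes=14)))  # False
print(session_expired(start, now=start + timedelta(minutes=16)))  # True
```

The important part isn't the arithmetic but where it runs: enforce it server-side on every request, not only in client-side JavaScript a user can disable.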
Also required: a documented emergency access procedure. This is the process for accessing PHI when the normal access mechanism fails, for example when the primary administrator is unavailable during an incident. Most startups skip this until an auditor asks for it.
Audit controls
This section covers what your application code must log. For the infrastructure layer (immutable storage, log drains, platform-level activity logs), see Hosting Requirements. For retention periods, protection requirements, and what auditors check, see Audit Log Retention.
This is one of the most commonly under-built controls in early-stage healthcare apps. HIPAA requires hardware, software, and procedural mechanisms to record and examine access and activity in systems that contain PHI. At the application level, that means audit logs: who accessed which PHI, what action they took, and when.
What must be logged:
Authentication events (login, logout, failed attempts)
PHI access (which records were viewed, by whom)
PHI modifications (creates, updates, deletes)
Permission changes (who gained or lost access to what)
Administrative actions that affect PHI visibility
These logs must be retained for a minimum of six years and protected against unauthorized modification. The logs themselves must be immutable. An audit log that the application itself can overwrite provides no compliance value.
If your audit logs live in a database your application can write to, you have a problem.
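The event list above maps naturally to one structured record per access. A sketch of the minimum context worth capturing (the field names are illustrative, not a standard):

```python
from datetime import datetime, timezone

REQUIRED_FIELDS = ("actor_id", "action", "resource_type", "resource_id", "timestamp")

def audit_event(actor_id, action, resource_type, resource_id):
    """Build one audit record with enough context to reconstruct
    who accessed what, what they did, and when."""
    return {
        "actor_id": actor_id,            # unique user ID, never a shared account
        "action": action,                # e.g. "view", "create", "update", "delete"
        "resource_type": resource_type,  # e.g. "patient_record"
        "resource_id": resource_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

event = audit_event("u-17", "view", "patient_record", "pt-204")
print(all(field in event for field in REQUIRED_FIELDS))  # True
```

Note the record identifies the PHI by resource ID; it should not contain the PHI itself, or the audit log becomes another store you have to protect at the same level.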
Integrity controls
HIPAA requires mechanisms to ensure that ePHI has not been altered or destroyed in an unauthorized manner. At the application layer, this means access controls that prevent unauthorized writes, logging of all modifications with sufficient context to detect tampering, and database-level constraints that prevent corruption. It does not require cryptographic checksums on every record, but it does require that you can detect and demonstrate that unauthorized modification did not occur.
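Where you do want modification detection, one lightweight approach is to persist a digest alongside each record at write time and verify it on read. This is one illustrative design, not a requirement of the rule:

```python
import hashlib
import json

def record_digest(record):
    """Deterministic SHA-256 over the record's canonical JSON form."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

stored = {"patient_id": "pt-7", "allergies": ["penicillin"]}
saved_digest = record_digest(stored)   # persisted with the record at write time

# On read: a mismatch means the row changed outside the audited write path.
stored["allergies"].append("latex")    # simulated unaudited modification
print(record_digest(stored) == saved_digest)  # False
```

The digest only has teeth if the legitimate write path is the only code that updates it; an attacker who can rewrite both row and digest defeats it, which is why it complements, rather than replaces, access controls and modification logging.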
Session management and transmission security
PHI transmitted over a network must be protected against unauthorized interception. TLS 1.2 or higher is the current standard; TLS 1.0 and 1.1 are not acceptable. This is primarily an infrastructure concern if your platform handles TLS termination, but if your application makes outbound API calls to third-party services that handle PHI (an EHR integration, a lab results vendor), you are responsible for ensuring those connections use acceptable protocols.
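For outbound connections your code controls, you can enforce the protocol floor in the TLS context rather than trusting the peer's defaults. A sketch using Python's standard library:

```python
import ssl

def strict_tls_context():
    """A client-side TLS context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()            # certificate verification on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0/1.1
    return ctx

ctx = strict_tls_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
# Pass `ctx` to http.client.HTTPSConnection(..., context=ctx) or
# urllib.request.urlopen(..., context=ctx) for outbound PHI traffic.
```

Most HTTP libraries accept a context or equivalent setting; the point is to set the floor once in shared code instead of per call site.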
PHI handling in application code
A few failure modes that are entirely your responsibility, regardless of infrastructure:
Logs. Application logs that capture request bodies, error details, or debug information can easily capture PHI. A stack trace that includes a patient record ID, a name, or a date of birth is a PHI exposure. Structure your logging so that PHI never appears in application logs by default. This requires deliberate design, not just good intentions.
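"Never by default" is enforceable with a redaction layer in the logging pipeline itself. A minimal sketch using Python's `logging` filter hook, with an illustrative denylist of PHI field names (your own keys will differ):

```python
import logging

PHI_KEYS = {"name", "dob", "ssn", "mrn", "email"}   # illustrative denylist

class RedactPHIFilter(logging.Filter):
    """Redact values of known PHI keys in dict-style log arguments."""
    def filter(self, record):
        if isinstance(record.args, dict):
            record.args = {k: ("[REDACTED]" if k in PHI_KEYS else v)
                           for k, v in record.args.items()}
        return True  # keep the record, now scrubbed

logger = logging.getLogger("app")
logger.addFilter(RedactPHIFilter())
# logger.info("lookup for %(mrn)s by %(actor)s", {"mrn": "mrn-204", "actor": "u1"})
# -> emitted line reads "lookup for [REDACTED] by u1"
```

A denylist like this is a backstop, not a design: it only catches fields you anticipated, so the primary control is still keeping PHI out of log statements in the first place.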
Error messages. Error responses that return PHI (a "user not found" message that echoes back an email address, a validation error that includes a record ID) are a frequent source of unintentional disclosure. Audit your error handling for PHI leakage before launch.
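The safe pattern is to return a generic message plus an opaque reference ID, and keep the detail server-side where support can look it up. A sketch (the shape of the response body is illustrative):

```python
import uuid

def client_error_response(internal_detail, server_log):
    """Return a PHI-free error body; keep the detail in server-side logs only."""
    ref = uuid.uuid4().hex[:8]                            # opaque correlation ID
    server_log.append({"ref": ref, "detail": internal_detail})  # stays internal
    return {"error": "Request could not be completed.", "ref": ref}

server_log = []
body = client_error_response("no patient with email a@example.com", server_log)
print(body["error"])  # generic text; no identifier is echoed back to the client
```

The reference ID gives support staff and incident responders a way to correlate a user report with the internal detail without ever transmitting PHI in the response.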
Third-party scripts and SDKs. Analytics tools, error monitoring platforms, and customer support SDKs that run in your application or server environment can capture PHI if not configured carefully. Any of these that touch PHI require a BAA. Most don't offer one.
What your infrastructure must handle
The controls above are yours to build. The following are the responsibility of your infrastructure platform. If you're running on AWS directly, you're building and maintaining these yourself. If you're on a managed HIPAA-compliant platform, you inherit them.
Encryption at rest. ePHI stored on disk must be encrypted. AES-256 is the current standard. This applies to your application database, file storage, backups, and any other persistent storage layer. At the infrastructure level, this means encrypted EBS volumes, encrypted RDS instances, encrypted S3 buckets, and documented evidence that encryption is enabled on every volume, not just the ones you remember to configure.
Encryption in transit within your infrastructure. TLS between your application and your database, between services in your environment, and between your environment and external systems. The infrastructure layer should enforce this by default, not rely on each developer configuring it correctly.
Network isolation. ePHI environments should be isolated at the network level. Production databases should not be reachable from the public internet. Internal services should communicate over private networks. This is a deployment architecture concern, not something you configure in application code.
Vulnerability management. Your infrastructure components (the OS, web server, database, and supporting packages) require regular patching and monitoring for known vulnerabilities. On a self-managed stack, this is your team's ongoing responsibility. On a managed platform, the provider handles OS-level patching, and you're responsible for your application dependencies.
Audit log storage. The infrastructure layer should provide tamper-evident storage for audit logs that your application cannot modify. This is the separation that makes audit logs meaningful. If your application and your audit log storage share credentials, you don't have an audit log. You have a record your application could alter.
BAA coverage. Your hosting provider, managed database provider, and any infrastructure service that could process or store your application data must sign a BAA. AWS will sign a BAA, but it covers only the services and configurations you've explicitly set up as HIPAA-eligible. Getting it right requires navigating their Shared Responsibility Model carefully. A managed platform like Aptible provides a single BAA that covers the infrastructure layer, so you don't have to identify and cover each underlying service individually.
The practical difference: on AWS without a managed platform, you're responsible for inventorying every service that touches PHI and ensuring it's covered. That's 10-15 services before you write application code, and it grows as your architecture does.
Business associate agreements: more than your hosting provider
A BAA is required with every vendor that handles PHI on your behalf. Most startups cover the obvious ones (hosting, managed database) and miss the rest.
Go through your full vendor stack and ask: could this service receive PHI?
Error monitoring (Sentry, Datadog, Rollbar): if your application sends error context that includes request data, yes
Log management (Papertrail, Loggly, Elastic): if application logs could contain PHI, yes
Customer support tools (Intercom, Zendesk): if support conversations involve patient data, yes
Email service providers: if your application sends emails containing PHI, yes
Analytics platforms: if event data includes PHI, yes
Video conferencing (for telehealth): almost certainly yes
Some of these vendors offer BAAs. Many don't. If a vendor won't sign a BAA, you cannot use them in a context where they could receive PHI. That's not negotiable.
Getting a BAA from a vendor isn't just about collecting a signature. The BAA defines what security obligations the vendor has accepted. A BAA from a vendor with no security certifications and no evidence of controls is a piece of paper. When evaluating vendors, ask for their SOC 2 Type II report alongside the BAA. The report is evidence that the BAA means something.
For a full breakdown of what to look for in a BAA, see What is a HIPAA BAA.
Common mistakes that cause HIPAA failures in healthcare apps
These are the patterns that show up in breach reports and enforcement actions. All of them are preventable.
"AWS is HIPAA compliant, so our stack is HIPAA compliant." AWS offers HIPAA-eligible services, but compliance is your responsibility under their Shared Responsibility Model. An unencrypted S3 bucket, an RDS instance without encryption at rest enabled, or a security group that exposes your database to the public internet are all your problem, not AWS's. HIPAA eligibility from a cloud provider is a necessary condition, not a sufficient one.
PHI in application logs. This is how most accidental PHI exposures happen. A developer adds debug logging for a patient lookup endpoint. The logs go to a third-party log management service that didn't sign a BAA. Now you have a PHI exposure and a BAA violation at the same time. Audit your logging configuration before launch and on every significant code change.
Missing BAAs with secondary vendors. Teams are diligent about getting a BAA from their hosting provider and then integrate Sentry, add Intercom, connect a Twilio SMS notification flow, and never ask whether PHI could flow through any of them.
Shared administrative accounts. Unique user identification is required. If your ops team shares a database admin account, you cannot produce a meaningful audit trail for that account's activity. You also can't revoke one person's access without changing credentials for everyone.
No audit logging for PHI access. Most healthcare applications log errors and events. Far fewer log every PHI access event with sufficient context to reconstruct who accessed what and when. Audit logs are not just a compliance checkbox. They're what lets you scope a breach when one occurs.
PHI in error messages returned to clients. Error responses that echo back record identifiers, patient names, or other identifying information are a common source of PHI exposure. They're also the kind of thing that shows up in security research.
No documented emergency access procedure. The Security Rule requires one. It doesn't have to be elaborate, but "we'd figure it out" is not a compliant answer.
How to structure development for HIPAA compliance
HIPAA compliance isn't a feature you add at the end. The decisions that are hardest to retrofit (data model, access control architecture, audit logging, encryption scheme) are easiest to get right before you write the first line of production code.
Start with data and system classification. Before you build, identify where PHI exists, where it's stored, and how it flows through your system. That means databases, APIs, logs, and third-party integrations. Classify systems based on whether they store, process, or transmit PHI. For early-stage teams, this doesn't need to be formal. A simple map of data flows and PHI touchpoints is sufficient to start. This is one of the most commonly skipped steps in early healthcare applications. Teams focus on infrastructure and access controls but lack a clear picture of where PHI actually flows, which leads to missed controls, accidental exposure, and gaps in BAA coverage. It also enables everything else on this list: you can't do a meaningful risk assessment without knowing which parts of your system are in scope.
Start with a risk assessment. Document what risks exist at each point in your PHI flows and what controls you're implementing in response. This is a required component of the Security Rule, and it's also a useful design exercise. Teams that skip it often build without knowing which parts of their system are in scope.
Design for minimum necessary access. Your data model and access control architecture should reflect the principle that each user, service, and component accesses only the PHI it needs to function. This is easier to enforce with role-based access control designed in from the start than bolted on after the fact.
Document your security architecture. HIPAA requires documented policies and procedures for each safeguard. This doesn't mean writing a 50-page document before launch, but it does mean having written records of how your controls work, why you chose them, and how they're maintained. Those records need to be retained for six years.
Test before you launch. Penetration testing before your first production PHI deployment surfaces the vulnerabilities that code review misses. Many healthcare enterprise customers will ask for evidence of penetration testing as part of their vendor approval process.
Build an incident response plan. Before you have a breach, know how you'll detect it, who gets notified and when, how you'll scope which PHI was involved, and what your notification obligations are under your BAAs. A breach during which you're also figuring out your process for the first time is worse in every possible way.
HIPAA-compliant app development checklist
Requirement | Layer | Notes |
|---|---|---|
Encryption at rest (AES-256) | Infrastructure | Must cover database, backups, file storage |
Encryption in transit (TLS 1.2+) | Infrastructure + App | Infrastructure handles internal; app is responsible for outbound connections |
Network isolation | Infrastructure | Databases not publicly reachable; internal services on private network |
Unique user identification | Application | No shared accounts for PHI access |
Role-based access controls | Application | Least-privilege access enforced in code |
Automatic session logoff | Application | 15-minute inactivity timeout is common standard |
Emergency access procedure | Administrative | Documented; doesn't need to be elaborate |
PHI audit logs (6-year retention) | Application + Infrastructure | Application generates; infrastructure provides tamper-evident storage |
Secure deletion / disposal of PHI | Application + Administrative | PHI must be disposed of in a manner that renders it unrecoverable; document the process |
Immutable log storage | Infrastructure | Application cannot modify its own audit logs |
PHI excluded from application logs | Application | Requires deliberate engineering |
BAA: hosting provider | Vendor management | Must cover infrastructure layer |
BAA: managed database | Vendor management | Including backups |
BAA: all secondary vendors that touch PHI | Vendor management | Logging, monitoring, support, email, analytics |
Risk assessment (documented) | Administrative | Required; must be updated when environment changes |
Security officer designated | Administrative | Named individual; not a dedicated role at early stage |
Workforce training records | Administrative | All staff with PHI access; retained 6 years |
Incident response plan | Administrative | Must exist before a breach, not after |
Vulnerability management | Infrastructure | OS and dependency patching |
Penetration testing | Application + Infrastructure | Before launch; annually or after significant changes |
Documentation retention (6 years) | Administrative | Policies, risk assessments, training records, audit logs |
FAQs
Do I need HIPAA compliance for my healthcare app?
If your application processes, stores, or transmits protected health information on behalf of a covered entity (a healthcare provider, health plan, or clearinghouse), you are a business associate and HIPAA applies to you. The practical test: if a hospital, clinic, or insurance company is your customer and patient data flows through your platform, you're covered. If you're building a consumer wellness app with no connection to the healthcare system, HIPAA may not apply, but you should confirm with an attorney.
Can I build a HIPAA-compliant app directly on AWS?
Yes, but the compliance burden is substantially higher than on a managed platform. AWS signs a BAA and offers HIPAA-eligible services, but the Shared Responsibility Model means you are responsible for configuring every service correctly, maintaining encryption across every storage layer, handling OS-level patching, and inventorying every AWS service that touches PHI to ensure it's covered under your BAA. Teams that underestimate this scope regularly find gaps during audits or customer security reviews.
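As a sketch of what that inventorying can look like in practice: some teams keep a machine-checkable list of every service that touches PHI and diff it against the services their signed BAA covers, so a gap surfaces in CI rather than in an audit. Everything below (service names, inventory format) is hypothetical illustration, not an AWS API.

```python
# Hypothetical BAA-coverage check. The covered-services set would come from
# your signed BAA; the inventory from your architecture documentation.
BAA_COVERED_SERVICES = {"ec2", "s3", "rds", "kms", "cloudtrail"}

SERVICE_INVENTORY = [
    {"service": "s3", "touches_phi": True},
    {"service": "rds", "touches_phi": True},
    {"service": "ses", "touches_phi": True},    # gap: not in the BAA list
    {"service": "lambda", "touches_phi": False},  # no PHI, no BAA needed
]

def baa_gaps(inventory, covered):
    """Return services that touch PHI but aren't covered by the BAA."""
    return sorted(
        item["service"]
        for item in inventory
        if item["touches_phi"] and item["service"] not in covered
    )

print(baa_gaps(SERVICE_INVENTORY, BAA_COVERED_SERVICES))  # → ['ses']
```

The same structure works for any cloud provider; the point is that the inventory is explicit and reviewable rather than living in someone's head.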
What's the difference between HIPAA-eligible and HIPAA-compliant?
HIPAA-eligible refers to cloud services or platforms that meet the conditions necessary to be used for PHI under HIPAA. It's a vendor status claim. HIPAA-compliant refers to an organization's posture: that it has implemented the required safeguards, documented them, and can demonstrate compliance. A HIPAA-eligible hosting provider is a prerequisite. Your own compliance is a separate question.
Does my mobile app need to be HIPAA compliant?
If the mobile app accesses or stores PHI, yes: the same technical safeguards apply. This means encrypted local storage if PHI is stored on device, session timeout, unique user authentication, and audit logging of PHI access. PHI should generally not be stored on device at all unless there's a specific reason to do so, as it expands your attack surface and your compliance scope.
How much does HIPAA compliant app development cost?
The largest cost variable is how much of the infrastructure compliance layer you inherit from your platform versus build and maintain yourself. Teams on managed HIPAA infrastructure typically spend $1,000-$3,000/month on hosting, with a one-time investment of $10,000-$30,000 to establish initial policies, documentation, and legal review. Teams managing compliance infrastructure themselves spend more in engineering time, ongoing maintenance, and evidence-generation work than they typically account for at the start.
Do I need a penetration test before launch?
HIPAA doesn't explicitly require penetration testing, but it is part of a reasonable risk management program, and most healthcare enterprise customers will ask for evidence of it during vendor review. Treat it as a prerequisite for selling to any customer that takes its own compliance seriously.
Next steps
If you're starting from scratch on infrastructure: HIPAA Compliant Hosting: what to look for in a platform and how Aptible covers the infrastructure layer
If you need a complete control checklist: HIPAA Compliance Checklist: all required controls with ownership mapped between platform and application
If you're reviewing or negotiating vendor BAAs: What is a HIPAA BAA: what terms matter and what to look for before signing
If you're building with AI or LLMs: HIPAA-Compliant AI: BAA requirements, audit logging patterns, and AI gateway architecture for healthcare
Aptible is a HIPAA-compliant deployment platform built for digital health. Every environment includes a signed BAA, AES-256 encryption at rest, TLS in transit, network isolation, tamper-evident audit logging, and HITRUST R2-certified controls. Your team builds the application layer; Aptible handles the infrastructure. See how Aptible works
This guide is for informational purposes only. Aptible is not a law firm, and nothing here constitutes legal advice. Consult an attorney for advice specific to your organization's compliance obligations.
548 Market St #75826 San Francisco, CA 94104
© 2026. All rights reserved. Privacy Policy