AI features that use PHI need more than a provider BAA

Route all AI traffic through Aptible’s gateway and get BAA coverage, audit logging, de-identification, key management, and cost controls enforced automatically.

“Our view: AI in healthcare must be trustworthy, traceable, and controllable, and we won’t compromise security for speed.”

The hard part of using AI in digital health isn’t the AI. It’s everything around it.

To use LLMs with PHI in production, teams need infrastructure for:

  • BAA coverage with every AI provider handling PHI

  • Audit logging of prompts and responses

  • Secure storage for AI logs

  • Log export to long-term retention for audits and investigations

  • Key management across teams and environments

  • Model access controls to govern which systems can call which models

  • PHI and PII de-identification to limit exposure in LLM calls and logs without affecting response quality

  • Request inspection and traceability for compliance reviews

  • Budget and usage controls that stop requests when limits are reached

  • Capacity and availability management for production workloads

A BAA from OpenAI or Anthropic covers the provider’s liability. It doesn’t give you audit logging, de-identification, or access controls. Those are still your problem.

Aptible AI Gateway replaces that entire control layer with a single managed gateway.

Compliance at the gateway, not in application code

BAA coverage, audit logging of every request and response, encrypted storage, and no model training on PHI are enforced on every LLM call. The compliance layer lives at the gateway, not in custom application code.
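In practice, that means the application sends an ordinary LLM request to the gateway instead of the provider. A minimal sketch of the idea, assuming a hypothetical OpenAI-compatible gateway endpoint (the URL and key below are placeholders, not Aptible's actual interface; see the docs for the real setup):

```python
import json
import urllib.request

# Placeholder values: the real endpoint and scoped key come from your
# Aptible AI Gateway configuration.
GATEWAY_URL = "https://gateway.example.com/v1/chat/completions"
GATEWAY_KEY = "scoped-gateway-key"

def build_gateway_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an LLM call addressed to the gateway rather than the provider.

    Application code stays the same; audit logging, encryption, and
    de-identification are applied at the gateway before the provider
    ever sees the request.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {GATEWAY_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_gateway_request("gpt-4o-mini", "Summarize this discharge note.")
```

Swapping providers or models then only changes the `model` string; the compliance layer in front of it does not move.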

Key management and model governance

Organize usage with scopes for applications, teams, and environments. Restrict which models each scope can access and attribute every request for audit and cost visibility.
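The idea can be pictured as a small policy table. The scope names, models, and fields below are invented for illustration and are not Aptible's configuration schema:

```python
# Invented example policy: scope names, models, and budgets are illustrative
# only, not Aptible's actual configuration format.
SCOPE_POLICIES = {
    "prod-patient-app": {"models": {"gpt-4o"}, "monthly_budget_usd": 500},
    "dev-sandbox": {"models": {"gpt-4o-mini", "claude-3-haiku"}, "monthly_budget_usd": 50},
}

def model_allowed(scope: str, model: str) -> bool:
    """Return True only if the scope's policy lists the requested model."""
    policy = SCOPE_POLICIES.get(scope)
    return policy is not None and model in policy["models"]
```

Because every request carries its scope, the same lookup that governs model access also attributes usage and cost per application, team, or environment.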

PHI de-identification

Reduce the scope of PHI exposure by de-identifying sensitive data in requests and logs and restoring it only when needed. PHI is protected without breaking application logic or relying on manual safeguards.
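The round trip works roughly like this: sensitive spans are swapped for placeholder tokens before the LLM call, then restored in the response. This toy regex version only illustrates the concept; the gateway's actual de-identification is managed for you:

```python
import re

def deidentify(text: str, patterns: list[str]) -> tuple[str, dict[str, str]]:
    """Replace pattern matches with tokens; return safe text and a mapping."""
    mapping: dict[str, str] = {}

    def swap(match: re.Match) -> str:
        token = f"<PHI_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    for pattern in patterns:
        text = re.sub(pattern, swap, text)
    return text, mapping

def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values after the model responds."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

note = "Patient Jane Doe, MRN 884213, seen 2024-05-01."
safe, mapping = deidentify(note, [r"MRN \d+", r"\d{4}-\d{2}-\d{2}"])
# `safe` can now leave your boundary; `mapping` never does.
```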

Observability and verification

Inspect actual requests and responses, verify de-identification, and retain logs for compliance and incident review. Controls are visible and provable, not theoretical.

Cost and usage controls

Set budget limits per scope and set alerts or hard stops when thresholds are reached. Usage can be monitored in real time so teams can manage AI spending intentionally.
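On the application side, a hard stop simply looks like a refused request. A sketch of handling it gracefully, assuming (hypothetically, for this example) that the gateway signals an exhausted budget with HTTP 429:

```python
import urllib.error

def call_with_budget_guard(send, request):
    """Send a gateway request, turning a budget refusal into a clean failure.

    `send` is any callable that performs the HTTP call (for example,
    urllib.request.urlopen). Treating a budget hard stop as HTTP 429 is an
    assumption made for this sketch.
    """
    try:
        return send(request)
    except urllib.error.HTTPError as err:
        if err.code == 429:
            # Budget exhausted for this scope: fail fast instead of retrying.
            raise RuntimeError("AI budget limit reached for this scope") from err
        raise
```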

Production-grade reliability without managing AI infrastructure

Protocol translation, capacity management, and high availability are handled within the gateway. AI traffic runs on production-grade infrastructure designed for reliability and scale.

View docs

Managed AI fits into existing systems and workflows

Change models any time, no compliance review needed

Controls apply wherever AI is used, including internal dev work

All requests are logged, secured, and auditable

Developer workflows are part of the risk surface

AI risk doesn’t start and end with production features. Engineers use AI in debugging, data exploration, and internal workflows every day.

Without an approved path, it becomes difficult to explain and defend how AI is used across the organization when customers ask about PHI handling, data retention, or model training. AI Gateway provides a consistent way to use AI across production and developer workflows, with the same controls and auditability enforced automatically.

Change models without affecting risk

Choosing a model should be a product decision, not a compliance event.

With Aptible AI Gateway, adopting a new model doesn’t require redesigning controls or reopening compliance reviews. Model changes stay within the same managed control layer, so logging, encryption, and guardrails remain consistent as providers, tools, and usage evolve.

Built to evolve as AI workflows change

AI Gateway is designed as a control layer, not a point solution. As teams adopt agent workflows, developer tools, and new model interfaces, safeguards are extended at the gateway without changing your risk model, audit surface, or compliance posture.

No Reopening

Compliance reviews stay closed as technology evolves

Consistent Layer

Same controls across all models and workflows

Future-Proof

Architecture designed to adapt as AI evolves

View the Changelog

With direct APIs, every provider integration is a separate problem

Compliance controls

  • BAA coverage
    AI Gateway: one BAA covers all AI Gateway usage
    Direct APIs and DIY: separate BAAs per provider

  • Audit logging
    AI Gateway: prompts, responses, and metadata logged automatically
    Direct APIs and DIY: build and maintain your own logging pipeline

  • Log retention
    AI Gateway: log drain support for long-term storage
    Direct APIs and DIY: design and maintain your own export process

  • Encryption
    AI Gateway: enforced in transit and at rest
    Direct APIs and DIY: configure and verify per provider

  • No model training on PHI
    AI Gateway: enforced at the infrastructure layer
    Direct APIs and DIY: rely on provider policy and configuration

  • Audit readiness and breach investigation
    AI Gateway: logs and activity history available immediately for audits or incident investigation
    Direct APIs and DIY: reconstruct activity across systems during audits or breach reviews

Key management and governance

  • Key organization
    AI Gateway: scopes for apps, teams, and environments
    Direct APIs and DIY: manage keys individually through provider accounts

  • Model access controls
    AI Gateway: restrict models per scope
    Direct APIs and DIY: model access managed separately per provider and integration

  • Cost attribution
    AI Gateway: usage tracked per scope and model
    Direct APIs and DIY: aggregate bills with limited breakdown, spread across each provider's cost dashboard

Data protection

  • De-identification
    AI Gateway: PHI de-identification built in to reduce the scope of exposure
    Direct APIs and DIY: build and maintain your own NLP pipeline

  • Consistency across models
    AI Gateway: same controls regardless of provider
    Direct APIs and DIY: re-implement safeguards per integration

Observability

  • Request inspection
    AI Gateway: view actual prompts and responses
    Direct APIs and DIY: build dashboards or search raw logs

Cost and operations

  • Budget enforcement
    AI Gateway: set alerts and hard stops to limit spend
    Direct APIs and DIY: monitor spending manually

  • Protocol translation
    AI Gateway: same keys work across supported providers
    Direct APIs and DIY: maintain separate integrations

  • Capacity management
    AI Gateway: managed within the gateway
    Direct APIs and DIY: manage rate limits and availability yourself

  • Time to safe usage
    AI Gateway: immediate
    Direct APIs and DIY: weeks to months

Use Cases

Why teams choose Aptible

AI Gateway supports production AI features and internal workflows that may touch PHI while providing the governance, visibility, and controls required for regulated systems.

Ship AI features that use real patient data

Build LLM-powered workflows such as summarization, extraction, and care coordination that rely on full patient context. AI Gateway provides logging, de-identification, and compliance controls that allow teams to introduce AI features into regulated applications.

Separate production from experimentation

Create scopes for production, development, and internal workflows with different model access rules and budget limits. Teams can explore new models and ideas while keeping production environments stable and governed.

Provide clear evidence during audits and security reviews

Inspect prompts and responses, retain logs for compliance requirements, and demonstrate how PHI was handled across models and environments. AI Gateway gives teams the visibility needed to answer security and diligence questions confidently.

Control cost and operational risk

Set budget limits per scope, and receive alerts or stop requests entirely when thresholds are reached. Unlike cloud billing alerts, this cuts off usage before you exceed it. AI spending becomes predictable and governable, not just visible.

Keep shipping. Safety happens automatically.

Deploy in minutes.
