Aptible’s AI Gateway lets you access LLMs from OpenAI, Anthropic, and Amazon Bedrock through a single, compliant API. The gateway is designed for regulated industries: it provides HIPAA compliance with BAA coverage, automatic audit logging, and encryption, so your team can build AI-powered features without managing compliance infrastructure itself.

Getting Started

1. Sign up for the beta

Request access to the AI Gateway beta. Once approved, the AI Gateway features will be available at https://app.aptible.com/llm-keys.
2. Create an environment

LLM keys are scoped to environments, which is where you configure model access policies and monitor costs. You can use an existing environment or create a new one dedicated to your AI workloads.
3. Create an LLM key

Navigate to the AI Gateway section in the Aptible dashboard. You can create keys from:
  • The main AI Gateway > LLM Keys page
  • The environment details page under AI Gateway > LLM Keys
Give your key a descriptive name so you can identify its usage later when reviewing costs and request history.
Make sure to copy the key on the next screen; you won’t be able to see it again once you close this window.
4. Connect to the gateway

All requests go through a single endpoint:
https://llm-gateway-api.aptible.com
The gateway is compatible with the OpenAI API format. Prefix the model name with its provider (for example, anthropic/claude-sonnet-4-20250514). You can see the available models on the key details page.
curl -X POST \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer $YOUR_LLM_GATEWAY_KEY" \
     -d '{
           "model": "anthropic/claude-sonnet-4-20250514",
           "messages": [{"role": "user", "content": "Hello world!"}]
         }' \
     https://llm-gateway-api.aptible.com/chat/completions
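The curl request above can also be built in code. Below is a minimal Python sketch using only the standard library; build_chat_request is an illustrative helper (not part of any Aptible SDK), and it constructs the request without sending it:

```python
import json
import os
from urllib.request import Request, urlopen

# Gateway endpoint from the docs above.
GATEWAY_URL = "https://llm-gateway-api.aptible.com/chat/completions"

def build_chat_request(api_key: str, model: str, user_message: str) -> Request:
    """Build (but do not send) an OpenAI-format chat/completions request.

    `model` must be provider-prefixed, e.g. "anthropic/claude-sonnet-4-20250514".
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return Request(
        GATEWAY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(
    os.environ.get("YOUR_LLM_GATEWAY_KEY", "placeholder-key"),
    "anthropic/claude-sonnet-4-20250514",
    "Hello world!",
)
# With a real key set, send it with:
#   response = json.load(urlopen(req))
```

Because the gateway follows the OpenAI API format, existing OpenAI-compatible client libraries pointed at the gateway URL should also work, though that is worth verifying against the beta.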
5. See and manage key usage in the Aptible UI

Once your key is in use, you can monitor its activity from the key details page in the Aptible dashboard. View per-key costs for the current billing period, request history for the past week, and token consumption per request.

Supported Models

The AI Gateway supports models from three providers:
Provider          Example Models                                 Prefix
Anthropic         Claude Opus, Claude Sonnet                     bedrock/anthropic
OpenAI            GPT-5.2                                        openai/
Amazon Bedrock    Qwen, Llama, and other Bedrock-hosted models   bedrock/
Model availability may change during the beta. Check the LLM key details page for the current list of available models.

Features

Model Access Policies

Control which models your team can use by configuring model access policies at the environment level. Policies apply to all LLM keys within an environment, giving you centralized control over model usage across your applications and developers.

Cost Visibility

Track your LLM spend across your organization:
  • Organization-level costs — See total AI Gateway spend across all environments for the current billing period.
  • Environment-level costs — Break down spend by environment to understand which teams or applications are driving usage.
  • Per-key costs — View usage costs for each individual LLM key during the current billing period to identify which keys generate the most activity and find opportunities for cost optimization.

Request History

View request activity for each key over the past week. The key details page shows token consumption and associated costs for each request, giving you visibility into how your keys are being used.

Compliance

The AI Gateway provides HIPAA compliance out of the box:
  • BAA coverage — Aptible’s BAA covers all models and capabilities accessed through the gateway.
  • Audit logging — All LLM calls are automatically logged for compliance and auditing purposes.
  • No PHI training — LLM providers are prohibited from retaining or using PHI for model training.
  • Encryption — All data is encrypted in transit and at rest.

Coming Soon

We’re actively building new capabilities for the AI Gateway:
  • Request and response logs in the UI — View the full contents of your LLM requests and responses directly in the Aptible dashboard to help with prompt refinement and troubleshooting.
  • Log drains to Langfuse — Route your LLM logs to Langfuse for long-term storage and observability.
  • Data residency — Deploy the AI Gateway in specific regions to meet local data residency and compliance requirements.

Support and Feedback

We’d love to hear from you as you use the AI Gateway beta. If you have questions, run into issues, or have feature requests, contact us.