Heroku Alternatives
How to migrate off Heroku: a platform-agnostic guide (2026)
Last updated: March 2026
→ For technical steps specific to migrating to Aptible, see Aptible's Heroku migration docs.
→ If you are migrating from Heroku Shield specifically, see Migrating from Heroku Shield.
Most migration guides are destination-specific. This one covers the layer that actually gets teams into trouble: the inventory, the secrets trap, the observability gap, and the CI/CD decision that should be separated from the platform move.
TL;DR
Most migrations are not rewrites. They're platform abstraction replacements.
Small teams often complete migrations in days, not months.
AI-assisted Dockerization has meaningfully lowered the barrier to containerized builds.
The biggest risks are database cutover and workflow drift, not the platform move itself.
Build-time vs. run-time secrets is a genuine trap that catches teams off guard.
DNS-based cutover dramatically reduces rollback risk.
Separating migration from CI/CD modernization is the most important tactical decision you'll make.
A controlled migration is far safer than an emergency one.
How difficult is a Heroku migration?
Less difficult than most teams assume, but not for the reasons they assume. The application doesn't need to change. The deployment model does.
Migration effort by team profile
Solo or small team (one to three engineers). Measured in days for most applications. AI tools have made Dockerization substantially faster. If your application is a web process plus a database with a handful of add-ons, the migration is primarily inventory, Dockerfile creation, secrets migration, and DNS cutover. Minimal workflow disruption.
Mid-size SaaS team (four to fifteen engineers). Measured in weeks. CI/CD coordination required because multiple engineers are deploying. Database replication is likely necessary to minimize downtime. Worker processes need to be mapped carefully. The organizational coordination is often more time-consuming than the technical work.
Security-focused or regulated team. Multi-phase project. Networking redesign, RBAC restructuring, audit trail validation before and after cutover, and compliance documentation continuity all add scope. More internal alignment required because more stakeholders care about the outcome. If you're migrating from Heroku Shield specifically, see Migrating from Heroku Shield.
What migrating actually means
Platform abstractions, not application rewrites
You are not rewriting your application. You are replacing the platform's abstractions with the equivalent on your target platform.
What typically changes:
Build system: buildpacks become a Dockerfile or image-based deploy
Deployment workflow: git push heroku main becomes a CI-driven image deploy
Secrets handling: Heroku config vars map to your target platform's secrets management
Add-ons: each Heroku add-on needs an equivalent managed service
Networking and isolation model: Private Spaces or standard routing map to your target's networking primitives
What typically stays the same:
Application code
Process model (web, worker, and cron map cleanly across most platforms)
Database schema
Core runtime and dependencies
The work is mostly mapping and parity. The hard parts are usually not where teams expect them to be.
Step 1: full inventory
Don't start anything else until the inventory is complete. Migrations fail at cutover because of things that weren't documented at the start.
Procfile and services
Capture every process type running on Heroku: web processes, worker processes, scheduled jobs, one-off tasks, pre-release commands, and release phase commands. The Procfile is your starting point, but it may not reflect reality if process types have drifted or if scheduled jobs are managed outside Heroku Scheduler.
Document what each process does, what environment variables it requires, and whether it has any Heroku-specific behavior (like Heroku's pre-boot feature).
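As a concrete illustration, a hypothetical Rails app's Procfile might declare three process types, each of which needs an explicit equivalent on the target platform:

```procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -C config/sidekiq.yml
release: bundle exec rake db:migrate
```

Note that Heroku Scheduler jobs and one-off heroku run tasks won't appear in the Procfile; inventory those separately.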
Environment variables (the build-time trap)
This is the most common migration trap, and it's worth calling out explicitly before you start.
Heroku has two categories of secrets: run-time config vars (set via heroku config:set and injected at container startup) and build-time variables (injected during the build process, often through buildpack configuration or Heroku CI settings). These are different, and they're handled differently on every platform you might migrate to.
If your build process consumes secrets (npm auth tokens, private registry credentials, build-time API keys), document exactly where they live in Heroku and how your target platform handles build-time vs. run-time secret injection. Teams regularly complete a migration, push to production, and find that builds fail because the build environment on the new platform doesn't have access to credentials that Heroku was injecting silently during the build phase.
Run-time secrets to inventory: API keys, OAuth credentials, database URLs, service integration keys, any environment-specific configuration.
Build-time secrets to check for: private registry credentials, package manager auth tokens, build-time feature flags, CI-specific configuration.
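To make the build-time vs. run-time distinction concrete, here is a sketch of how it looks in a Dockerfile-based build. Names like npm_token are illustrative, and BuildKit secret mounts assume a reasonably recent Docker:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
# Build-time secret: mounted only for this RUN step and never written into
# an image layer. Supply it at build time with:
#   docker build --secret id=npm_token,src=./npm-token.txt .
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
COPY . .
# Run-time config (DATABASE_URL, API keys) is injected at container start
# by the platform's secret store -- do not bake it in here with ENV.
CMD ["node", "server.js"]
```

The point of the secret mount is that the token exists only during that one RUN step; a plain ARG would leak into the image history.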
Add-ons and external services
Inventory every add-on in your Heroku account. For each one, identify: what it does, which applications depend on it, how it's configured, whether your application code has Heroku-specific assumptions about how the add-on behaves, and what the equivalent managed service is on your target platform.
Common add-ons that need explicit mapping: Heroku Postgres, Heroku Data for Redis, Papertrail or other logging providers, New Relic or Scout for APM, SendGrid or Mailgun for email, Bonsai or Searchly for Elasticsearch, Cloudinary or S3-compatible storage.
Note which add-ons are tightly coupled to Heroku's provisioning model. The database and Redis are the most critical to plan carefully.
Networking and DNS
Document: all custom domains, current SSL configuration, DNS TTL values for all domains (lower these at least 48 hours before cutover), webhooks pointing to your Heroku endpoints, IP allowlisting rules that reference Heroku's IP ranges, and any inbound or outbound integrations that depend on specific IP addresses or endpoints.
DNS TTL is easy to forget and has the longest lead time of anything in this list. If your DNS TTL is set to 24 hours and you need to cut over DNS, you have up to a 24-hour rollback lag. Lower TTL to 60 seconds several days before your cutover window.
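You can check what resolvers currently see with dig; the second field of each answer line is the remaining TTL in seconds. The domains below are placeholders:

```shell
# Replace with your real production domains
dig +noall +answer app.example.com
dig +noall +answer www.example.com
```

Run this after lowering the TTL at your DNS provider to confirm the change has actually propagated before you rely on it for cutover.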
Step 2: map Heroku abstractions
Dynos to containers
Web dynos become web services or containers. Worker dynos become worker services or jobs. Most platforms support a similar process model.
Where the mapping gets complicated: autoscaling behavior. Heroku's built-in autoscaling for web dynos is driven by p95 response time, and popular autoscaling add-ons scale on request queue time. Your target platform may scale on CPU, memory, request count, or other signals. Document your current scaling policy and configure the equivalent explicitly on the target platform before cutover.
Buildpacks to Dockerfiles
Most modern platforms default to Docker. If your application uses buildpacks and doesn't have a Dockerfile, you'll need to create one. This is less daunting than it used to be.
AI coding assistants can generate a working Dockerfile from your existing Procfile and package manifest in minutes. The first pass won't be optimized (and shouldn't need to be for an initial migration), but it will typically produce a working build. The goal for migration is parity, not optimization.
If you want to avoid Docker entirely for an initial migration, Cloud Native Buildpacks (used by platforms like Heroku, Dokku, and some Render configurations) can produce container images from your existing buildpack setup. This is a useful bridge for teams that want to lift and shift first.
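If you take the buildpack route, the pack CLI from the Cloud Native Buildpacks project can produce a runnable OCI image locally. This is a sketch; it requires the pack CLI and a local Docker daemon, and the builder name is an assumption — choose one that matches your stack:

```shell
# Build an image from the app directory using a Heroku CNB builder
pack build myapp:migration --builder heroku/builder:24
# Heroku-style apps usually expect a PORT variable at run time
docker run --rm -e PORT=8080 -p 8080:8080 myapp:migration
```

If the image boots and serves traffic locally, you have a portable artifact you can push to any registry-based platform without writing a Dockerfile yet.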
Config vars to secrets management
Map every Heroku config var to the equivalent in your target platform's secrets management. Most platforms support environment variable injection at runtime. The differences show up in: how secrets are scoped across environments, whether secrets can be versioned or rotated, and how build-time secrets are handled (see the trap above).
Be explicit about staging vs. production secrets. A common mistake is sharing a secrets configuration between environments and discovering mid-migration that your staging environment is pointing at production databases or services.
Add-ons to managed services
Heroku Postgres to your target platform's managed Postgres (or a standalone RDS instance). Heroku Redis to managed Redis on the target platform. Logging add-ons to your target platform's log management (some platforms include this; others require a log drain to an external service).
For each add-on, document the connection string format differences. Heroku injects DATABASE_URL automatically. Your target platform may use a different variable name or a different URL format.
What surprises teams
These are the failure modes that appear consistently in post-migration retrospectives. None of them are catastrophic. All of them are avoidable with advance planning.
Build reproducibility
The most common failure mode in the first few days after migration: builds that pass locally and fail in the new CI environment. Common causes:
Build-time environment differences (the target CI environment doesn't have access to secrets that Heroku injected silently)
Hidden buildpack behavior (Heroku's buildpacks do things that aren't documented and don't appear in your Procfile)
Pre-release commands that assume Heroku-specific utilities or behaviors
Missing system dependencies (native extensions that Heroku's buildpack installed automatically)
The fix in all these cases is the same: run the build in the target environment explicitly before cutover, document every failure, and trace each one back to its source. Do not ship this to production until builds are reproducible end to end in the new environment.
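A minimal parity check, sketched here under the assumption of a Docker-based target (the smoke-test command is hypothetical and app-specific):

```shell
# Build from a clean slate so cached layers can't mask missing
# dependencies or silently-injected build-time secrets
docker build --no-cache -t myapp:parity-check .

# Boot the image and run a cheap smoke test before any cutover planning
docker run --rm myapp:parity-check ./bin/smoke-test
```

If this fails in ways your Heroku builds never did, each failure is pointing at a buildpack behavior or injected credential you haven't yet mapped.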
Workflow drift
Developers who are used to git push heroku main will need to change how they deploy. The degree of disruption depends on what you're migrating to. Some platforms support a similar git-push workflow. Others require pushing an image to a registry and triggering a deploy separately.
More significant workflow differences: rollback semantics (Heroku's rollback reverts to a previous slug; image-based rollbacks work differently), one-off console access (the equivalent of heroku run console varies by platform), and review app parity (if you rely on Heroku review apps, your target platform's equivalent may behave differently).
The advice here is consistent: preserve workflow first, modernize later. Get to parity with what you have on Heroku before redesigning how your team deploys. The disruption of changing both the platform and the deploy workflow simultaneously is organizational, not just technical.
Observability gaps
APM agents are frequently forgotten until after cutover. If you're using New Relic, Scout, Datadog, or another APM tool, verify that the agent works in your target environment before you switch traffic. Some APM agents rely on UDP for metrics, which may not be available in certain networking configurations. Others require specific environment variable names that differ from Heroku's conventions.
Log drain setup is also commonly deferred. On Heroku, log routing to an external provider is a single add-on configuration. On other platforms, you configure this differently, and the setup can take time to validate.
Configure observability in the new environment and verify it's working correctly before you cut over. Discovering that you have no metrics 20 minutes into a production cutover is an unpleasant experience.
Separate migration from modernization
This deserves its own section because it's the most actionable advice in this guide.
Migration and CI/CD redesign are two separate projects. Running them simultaneously is the most common reason Heroku migrations take longer and cost more than they should.
Option 1: lift and shift first. Replicate your current deploy workflow as closely as possible on the target platform. Minimize behavioral changes. The goal is: your team deploys the same way they did on Heroku, and nothing about the application changes. Get to that state first.
Once you're running in production on the new platform with no issues for a few weeks, then evaluate whether to modernize your CI/CD pipeline. You'll be making that decision from a stable baseline rather than under migration pressure.
Option 2: use migration to upgrade CI/CD. If you intentionally want to introduce GitHub Actions, add security scanning, enforce role-based deploy permissions, or redesign the environment promotion model, this can be done alongside the migration. But do it deliberately. Scope it explicitly. Communicate to the team that you're doing two things at once. And be prepared for the migration to take longer.
Do not accidentally attempt both simultaneously. The failure mode is: the migration doesn't complete, CI/CD is broken, you've introduced multiple variables into every failure, and no one is confident which change caused which problem.
Database migration strategy
The database migration is the highest-risk part of any Heroku migration. Treat it as the critical path.
Dump and restore
Best for small to medium databases (under 10GB, ideally) where a brief maintenance window is acceptable.
Steps: enable maintenance mode or stop writes to the application, capture a database backup using heroku pg:backups:capture, download the backup, restore it to the target database, validate row counts and critical data, update connection strings, and switch traffic.
Advantages: simple, reliable, easy to validate. Disadvantages: requires downtime while the restore runs. For large databases, the restore window can be substantial.
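The Heroku side of the dump-and-restore flow uses standard CLI commands. The app name is a placeholder, TARGET_DATABASE_URL is assumed to point at your new database, and the validation query is illustrative:

```shell
heroku pg:backups:capture --app myapp
heroku pg:backups:download --app myapp   # writes latest.dump locally

# --no-acl --no-owner avoids role mismatches between Heroku and the target
pg_restore --no-acl --no-owner -d "$TARGET_DATABASE_URL" latest.dump

# Spot-check critical tables before switching traffic
psql "$TARGET_DATABASE_URL" -c 'SELECT count(*) FROM users;'
```

Compare the row counts against the source database while writes are still frozen; a mismatch here is the cheapest possible time to find it.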
Logical replication
Best for larger databases or applications with minimal downtime tolerance.
High-level flow: set up continuous replication from Heroku Postgres to your target database, allow the replica to catch up to the primary, briefly freeze writes (seconds to minutes), validate replication sync, cut over connection strings, and monitor.
Heroku Postgres has a specific constraint worth knowing: direct replication control is not available to customers by default. If you need to set up a publication/subscription for logical replication, you may need to contact Heroku support to expose WAL archives in S3. This adds coordination overhead and timeline uncertainty. Plan for this explicitly, and initiate the support request well before your cutover window.
Downtime and rollback planning
Define your acceptable downtime window before you choose a migration strategy. Communicate it to stakeholders before cutover, not during.
Before any production database migration: confirm your backup is complete and restorable (test the restore, don't just assume), define the rollback trigger conditions explicitly (at what point do you revert?), freeze writes for a documented period before cutover, and have someone who isn't executing the migration monitoring error rates and availability.
Side-by-side migration: the recommended approach
The safest migration strategy for most teams:
Mirror one application in the target platform. Don't touch your existing Heroku deployment.
Keep your existing CI/CD workflow initially. Deploy to both environments from the same pipeline if possible, or deploy to the new environment manually to validate.
Use buildpacks or your Dockerfile on the target platform. Confirm builds are reproducible.
Attach your custom domain in the target platform in advance.
Lower DNS TTL 48 to 72 hours before cutover. Target 60 seconds.
Validate staging thoroughly. Reproduce your production environment as closely as possible and test it.
Complete the database migration (dump/restore or replication) according to your chosen strategy.
Flip DNS. Route traffic to the new platform.
Monitor aggressively for the first two to four hours. Error rates, latency, worker queue depth, database connection counts.
If rollback is needed: switch DNS back. If you used dump/restore, rollback is straightforward. If you used logical replication and haven't frozen the Heroku database, rollback is possible but requires careful sequencing.
DNS-based cutover is the key. When DNS is the traffic switch, rollback is as fast as changing a DNS record. If you cut over by modifying application configuration or load balancer rules, rollback is slower and more complex.
Migrating to AWS: what's different
AWS migrations follow the same inventory and mapping process, but the infrastructure rebuild scope is substantially larger.
Your target runtime model options: ECS with Fargate (recommended for most teams migrating off Heroku), EKS (significantly more complex), Elastic Beanstalk (simpler but limited), or Lambda and serverless functions (only appropriate for specific workload shapes).
What you'll build: VPC and subnet design, security groups, IAM roles and policies, an application load balancer, secrets management via AWS Secrets Manager or Parameter Store, a logging stack (CloudWatch or a third-party drain), and a monitoring stack. CI/CD redesign is almost always required, because AWS doesn't provide a deploy workflow out of the box.
The organizational implications: platform engineering ownership, on-call responsibility for infrastructure failures that didn't exist on Heroku, and the internal selling required to justify the operational complexity increase.
For teams with platform engineering capacity and specific networking or compliance requirements, this is the right path. For most Heroku teams, it's a significant scope increase. See DIY Cloud.
For regulated workloads: see the Shield migration guide
If you're currently running PHI workloads on Heroku Shield, your migration has additional scope: Private Space networking and isolation mapping, Shield database services and backup continuity, compliance documentation continuity during the transition, BAA transition timing (you need a new BAA with your target provider before you go live), audit trail validation before and after cutover, and worker and Redis job cutover sequencing.
Regulated workloads have less tolerance for configuration drift and require careful documentation of the control environment throughout the migration process.
For the full technical breakdown specific to Shield environments: Migrating from Heroku Shield.
Treat cutover like an incident
The cutover window is not routine. Treat it the way your team would treat a production incident.
Pre-cutover checklist:
Backup confirmed and restore tested
Secrets verified in target environment
Observability (logs, metrics, APM) confirmed active in target environment
Health checks configured and passing
Rollback triggers defined and documented in advance (not improvised during)
Stakeholders notified of the cutover window
Traffic switching:
DNS TTL lowered 48 to 72 hours in advance
Freeze writes or enable maintenance mode if using dump/restore database migration
Switch load balancer or DNS
Monitor error rates and latency in the first minutes after switch
Validate worker queues separately from web traffic
Rollback plan:
Revert DNS if rollback is triggered
Restore database from backup if writes occurred on the new platform after cutover
Communicate internally: who calls the rollback, what the criteria are, who is responsible for the postmortem
If rollback is triggered, conduct a postmortem before the next attempt
Common migration mistakes
These appear consistently enough to be worth listing explicitly:
Not inventorying add-ons before starting (discovering a dependency after cutover is painful)
Ignoring build-time secrets until builds fail in the new environment
Underestimating database size and restore time
Forgetting background jobs and worker processes until they're not running in production
Not planning rollback in advance (improvising rollback during an incident is a bad experience)
Leaving DNS TTL high until the day of cutover
Migrating during peak traffic hours
Changing CI/CD and platform simultaneously without scoping them as separate projects
Not testing observability in the new environment before going live
Migration safety checklist
Before cutover:
Inventory complete (applications, workers, jobs, add-ons, domains, secrets, dependencies)
Dockerfile verified and builds reproducible in target environment
Build-time secrets mapped and confirmed in target environment
Target environment provisioned and validated
Database migration strategy selected and tested (restore validated)
Backups confirmed and restore tested
Observability configured and verified in target environment
CI/CD decision documented (lift and shift, or modernize simultaneously with explicit scope)
Cutover window defined and communicated
DNS TTL lowered 48 to 72 hours before cutover
Rollback plan written (not improvised)
Rollback triggers defined
Stakeholders aligned on timing and communication plan
Migrating to Aptible
Aptible's platform model maps closely to Heroku's PaaS mental model: Docker-based deploys, Procfile compatibility, managed Postgres, private-by-default architecture, and built-in compliance infrastructure for regulated production workloads.
For teams migrating from Heroku's standard tier, the migration is typically one of the smoother paths because the operational model is similar and the compliance infrastructure is included rather than requiring separate configuration.
For a full technical walkthrough, see Aptible's Heroku migration docs.
If you're migrating from Heroku Shield and handling PHI, follow the dedicated guide: Migrating from Heroku Shield.
Next steps
If you haven't finalized your target platform: Read Compare Platforms before you start. The migration process looks similar across PaaS targets, but compliance requirements, database options, and operational model differences matter. Choosing the right target before you start executing prevents rework.
If you're migrating regulated workloads off Heroku Shield: See Migrating from Heroku Shield for the compliance-specific scope: BAA transition timing, isolation documentation, database migration for PHI environments, and audit continuity planning.
If you're still deciding whether to migrate: See Should You Migrate? This guide assumes migration is the right call. If you haven't made that decision yet, the framework there will help you make it deliberately rather than by default.
Heroku Alternatives
How to migrate off Heroku: a platform-agnostic guide (2026)
Last updated: March 2026
→ For technical steps specific to migrating to Aptible, see Aptible's Heroku migration docs.
→ If you are migrating from Heroku Shield specifically, see Migrating from Heroku Shield.
Most migration guides are destination-specific. This one covers the layer that actually gets teams into trouble: the inventory, the secrets trap, the observability gap, and the CI/CD decision that should be separated from the platform move.
TL;DR
Most migrations are not rewrites. They're platform abstraction replacements.
Small teams often complete migrations in days, not months.
AI-assisted Dockerization has meaningfully lowered the barrier to containerized builds.
The biggest risks are database cutover and workflow drift, not the platform move itself.
Build-time vs. run-time secrets is a genuine trap that catches teams off guard.
DNS-based cutover dramatically reduces rollback risk.
Separating migration from CI/CD modernization is the most important tactical decision you'll make.
A controlled migration is far safer than an emergency one.
How difficult is a Heroku migration?
Less difficult than most teams assume, but not for the reasons they assume. The application doesn't need to change. The deployment model does.
Migration effort by team profile
Solo or small team (one to three engineers). Measured in days for most applications. AI tools have made Dockerization substantially faster. If your application is a web process plus a database with a handful of add-ons, the migration is primarily inventory, Dockerfile creation, secrets migration, and DNS cutover. Minimal workflow disruption.
Mid-size SaaS team (four to fifteen engineers). Measured in weeks. CI/CD coordination required because multiple engineers are deploying. Database replication is likely necessary to minimize downtime. Worker processes need to be mapped carefully. The organizational coordination is often more time-consuming than the technical work.
Security-focused or regulated team. Multi-phase project. Networking redesign, RBAC restructuring, audit trail validation before and after cutover, and compliance documentation continuity all add scope. More internal alignment required because more stakeholders care about the outcome. If you're migrating from Heroku Shield specifically, see Migrating from Heroku Shield.
What migrating actually means
Platform abstractions, not application rewrites
You are not rewriting your application. You are replacing the platform's abstractions with the equivalent on your target platform.
What typically changes:
Build system: buildpacks become a Dockerfile or image-based deploy
Deployment workflow:
git push heroku mainbecomes a CI-driven image deploySecrets handling: Heroku config vars map to your target platform's secrets management
Add-ons: each Heroku add-on needs an equivalent managed service
Networking and isolation model: Private Spaces or standard routing map to your target's networking primitives
What typically stays the same:
Application code
Process model (web, worker, cron maps cleanly across most platforms)
Database schema
Core runtime and dependencies
The work is mostly mapping and parity. The hard parts are usually not where teams expect them to be.
Step 1: full inventory
Don't start anything else until the inventory is complete. Migrations fail at cutover because of things that weren't documented at the start.
Procfile and services
Capture every process type running on Heroku: web processes, worker processes, scheduled jobs, one-off tasks, pre-release commands, and release phase commands. The Procfile is your starting point, but it may not reflect reality if process types have drifted or if scheduled jobs are managed outside Heroku Scheduler.
Document what each process does, what environment variables it requires, and whether it has any Heroku-specific behavior (like Heroku's pre-boot feature).
Environment variables (the build-time trap)
This is the most common migration trap, and it's worth calling out explicitly before you start.
Heroku has two categories of secrets: run-time config vars (set via heroku config:set and injected at container startup) and build-time variables (injected during the build process, often through buildpack configuration or Heroku CI settings). These are different, and they're handled differently on every platform you might migrate to.
If your build process consumes secrets (npm auth tokens, private registry credentials, build-time API keys), document exactly where they live in Heroku and how your target platform handles build-time vs. run-time secret injection. Teams regularly complete a migration, push to production, and find that builds fail because the build environment on the new platform doesn't have access to credentials that Heroku was injecting silently during the build phase.
Run-time secrets to inventory: API keys, OAuth credentials, database URLs, service integration keys, any environment-specific configuration.
Build-time secrets to check for: private registry credentials, package manager auth tokens, build-time feature flags, CI-specific configuration.
Add-ons and external services
Inventory every add-on in your Heroku account. For each one, identify: what it does, which applications depend on it, how it's configured, whether your application code has Heroku-specific assumptions about how the add-on behaves, and what the equivalent managed service is on your target platform.
Common add-ons that need explicit mapping: Heroku Postgres, Heroku Data for Redis, Papertrail or other logging providers, New Relic or Scout for APM, Sendgrid or Mailgun for email, Bonsai or Searchly for Elasticsearch, Cloudinary or S3-compatible storage.
Note which add-ons are tightly coupled to Heroku's provisioning model. The database and Redis are the most critical to plan carefully.
Networking and DNS
Document: all custom domains, current SSL configuration, DNS TTL values for all domains (lower these at least 48 hours before cutover), webhooks pointing to your Heroku endpoints, IP allowlisting rules that reference Heroku's IP ranges, and any inbound or outbound integrations that depend on specific IP addresses or endpoints.
DNS TTL is easy to forget and has the longest setup time of anything in this list. If your DNS TTL is set to 24 hours and you need to cut over DNS, you have a 24-hour rollback lag. Lower TTL to 60 seconds several days before your cutover window.
Step 2: map Heroku abstractions
Dynos to containers
Web dynos become web services or containers. Worker dynos become worker services or jobs. Most platforms support a similar process model.
Where the mapping gets complicated: autoscaling behavior. Heroku's autoscaling for web dynos is based on queue depth. Your target platform may scale on CPU, memory, request count, or other signals. Document your current scaling policy and configure the equivalent explicitly on the target platform before cutover.
Buildpacks to Dockerfiles
Most modern platforms default to Docker. If your application uses buildpacks and doesn't have a Dockerfile, you'll need to create one. This is less daunting than it used to be.
AI coding assistants can generate a working Dockerfile from your existing Procfile and package manifest in minutes. The first pass won't be optimized (and shouldn't need to be for an initial migration), but it will typically produce a working build. The goal for migration is parity, not optimization.
If you want to avoid Docker entirely for an initial migration, Cloud Native Buildpacks (used by platforms like Heroku, Dokku, and some Render configurations) can produce container images from your existing buildpack setup. This is a useful bridge for teams that want to lift and shift first.
Config vars to secrets management
Map every Heroku config var to the equivalent in your target platform's secrets management. Most platforms support environment variable injection at runtime. The differences show up in: how secrets are scoped across environments, whether secrets can be versioned or rotated, and how build-time secrets are handled (see the trap above).
Be explicit about staging vs. production secrets. A common mistake is sharing a secrets configuration between environments and discovering mid-migration that your staging environment is pointing at production databases or services.
Add-ons to managed services
Heroku Postgres to your target platform's managed Postgres (or a standalone RDS instance). Heroku Redis to managed Redis on the target platform. Logging add-ons to your target platform's log management (some platforms include this; others require a log drain to an external service).
For each add-on, document the connection string format differences. Heroku injects DATABASE_URL automatically. Your target platform may use a different variable name or a different URL format.
What surprises teams
These are the failure modes that appear consistently in post-migration retrospectives. None of them are catastrophic. All of them are avoidable with advance planning.
Build reproducibility
The most common failure mode in the first few days after migration: builds that pass locally and fail in the new CI environment. Common causes:
Build-time environment differences (the target CI environment doesn't have access to secrets that Heroku injected silently)
Hidden buildpack behavior (Heroku's buildpacks do things that aren't documented and don't appear in your Procfile)
Pre-release commands that assume Heroku-specific utilities or behaviors
Missing system dependencies (native extensions that Heroku's buildpack installed automatically)
The fix in all these cases is the same: run the build in the target environment explicitly before cutover, document every failure, and trace each one back to its source. Do not ship this to production until builds are reproducible end to end in the new environment.
Workflow drift
Developers who are used to git push heroku main will need to change how they deploy. The degree of disruption depends on what you're migrating to. Some platforms support a similar git-push workflow. Others require pushing an image to a registry and triggering a deploy separately.
More significant workflow differences: rollback semantics (Heroku's rollback reverts to a previous slug; image-based rollbacks work differently), one-off console access (the equivalent of heroku run console varies by platform), and review app parity (if you rely on Heroku review apps, your target platform's equivalent may behave differently).
The advice here is consistent: preserve workflow first, modernize later. Get to parity with what you have on Heroku before redesigning how your team deploys. The disruption of changing both the platform and the deploy workflow simultaneously is organizational, not just technical.
Observability gaps
APM agents are frequently forgotten until after cutover. If you're using New Relic, Scout, Datadog, or another APM tool, verify that the agent works in your target environment before you switch traffic. Some APM agents rely on UDP for metrics, which may not be available in certain networking configurations. Others require specific environment variable names that differ from Heroku's conventions.
Log drain setup is also commonly deferred. On Heroku, log routing to an external provider is a single add-on configuration. On other platforms, you configure this differently, and the setup can take time to validate.
Configure observability in the new environment and verify it's working correctly before you cut over. Discovering that you have no metrics 20 minutes into a production cutover is an unpleasant experience.
Separate migration from modernization
This deserves its own section because it's the most actionable advice in this guide.
Migration and CI/CD redesign are two separate projects. Running them simultaneously is the most common reason Heroku migrations take longer and cost more than they should.
Option 1: lift and shift first. Replicate your current deploy workflow as closely as possible on the target platform. Minimize behavioral changes. The goal is: your team deploys the same way they did on Heroku, and nothing about the application changes. Get to that state first.
Once you're running in production on the new platform with no issues for a few weeks, then evaluate whether to modernize your CI/CD pipeline. You'll be making that decision from a stable baseline rather than under migration pressure.
Option 2: use migration to upgrade CI/CD. If you intentionally want to introduce GitHub Actions, add security scanning, enforce role-based deploy permissions, or redesign the environment promotion model, this can be done alongside the migration. But do it deliberately. Scope it explicitly. Communicate to the team that you're doing two things at once. And be prepared for the migration to take longer.
Do not accidentally attempt both simultaneously. The failure mode is: the migration doesn't complete, CI/CD is broken, you've introduced multiple variables into every failure, and no one is confident which change caused which problem.
Database migration strategy
The database migration is the highest-risk part of any Heroku migration. Treat it as the critical path.
Dump and restore
Best for small to medium databases (under 10GB, ideally) where a brief maintenance window is acceptable.
Steps: enable maintenance mode or stop writes to the application, capture a database backup using heroku pg:backups:capture, download the backup, restore it to the target database, validate row counts and critical data, update connection strings, and switch traffic.
Advantages: simple, reliable, easy to validate. Disadvantages: requires downtime while the restore runs. For large databases, the restore window can be substantial.
Logical replication
Best for larger databases or applications with minimal downtime tolerance.
High-level flow: set up continuous replication from Heroku Postgres to your target database, allow the replica to catch up to the primary, briefly freeze writes (seconds to minutes), validate replication sync, cut over connection strings, and monitor.
Heroku Postgres has a specific constraint worth knowing: direct replication control is not available to customers by default. If you need to set up a publication/subscription for logical replication, you may need to contact Heroku support to expose WAL archives in S3. This adds coordination overhead and timeline uncertainty. Plan for this explicitly, and initiate the support request well before your cutover window.
Downtime and rollback planning
Define your acceptable downtime window before you choose a migration strategy. Communicate it to stakeholders before cutover, not during.
Before any production database migration: confirm your backup is complete and restorable (test the restore, don't just assume), define the rollback trigger conditions explicitly (at what point do you revert?), freeze writes for a documented period before cutover, and have someone who isn't executing the migration monitoring error rates and availability.
Side-by-side migration: the recommended approach
The safest migration strategy for most teams:
Mirror one application in the target platform. Don't touch your existing Heroku deployment.
Keep your existing CI/CD workflow initially. Deploy to both environments from the same pipeline if possible, or deploy to the new environment manually to validate.
Use buildpacks or your Dockerfile on the target platform. Confirm builds are reproducible.
Attach your custom domain in the target platform in advance.
Lower DNS TTL 48 to 72 hours before cutover. Target 60 seconds.
Validate staging thoroughly. Reproduce your production environment as closely as possible and test it.
Complete the database migration (dump/restore or replication) according to your chosen strategy.
Flip DNS. Route traffic to the new platform.
Monitor aggressively for the first two to four hours. Error rates, latency, worker queue depth, database connection counts.
If rollback is needed: switch DNS back. If you used dump/restore, rollback is straightforward. If you used logical replication and haven't frozen the Heroku database, rollback is possible but requires careful sequencing.
DNS-based cutover is the key. When DNS is the traffic switch, rollback is as fast as changing a DNS record. If you cut over by modifying application configuration or load balancer rules, rollback is slower and more complex.
Migrating to AWS: what's different
AWS migrations follow the same inventory and mapping process, but the infrastructure rebuild scope is substantially larger.
Your target runtime model options: ECS with Fargate (recommended for most teams migrating off Heroku), EKS (significantly more complex), Elastic Beanstalk (simpler but limited), or Lambda and serverless functions (only appropriate for specific workload shapes).
What you'll build: VPC and subnet design, security groups, IAM roles and policies, an application load balancer, secrets management via AWS Secrets Manager or Parameter Store, a logging stack (CloudWatch or a third-party drain), and a monitoring stack. CI/CD redesign is almost always required, because AWS doesn't provide a deploy workflow out of the box.
The organizational implications: platform engineering ownership, on-call responsibility for infrastructure failures that didn't exist on Heroku, and the internal selling required to justify the operational complexity increase.
For teams with platform engineering capacity and specific networking or compliance requirements, this is the right path. For most Heroku teams, it's a significant scope increase. See DIY Cloud.
For regulated workloads: see the Shield migration guide
If you're currently running PHI workloads on Heroku Shield, your migration has additional scope: Private Space networking and isolation mapping, Shield database services and backup continuity, compliance documentation continuity during the transition, BAA transition timing (you need a new BAA with your target provider before you go live), audit trail validation before and after cutover, and worker and Redis job cutover sequencing.
Regulated workloads have less tolerance for configuration drift and require careful documentation of the control environment throughout the migration process.
For the full technical breakdown specific to Shield environments: Migrating from Heroku Shield.
Treat cutover like an incident
The cutover window is not routine. Treat it the way your team would treat a production incident.
Pre-cutover checklist:
Backup confirmed and restore tested
Secrets verified in target environment
Observability (logs, metrics, APM) confirmed active in target environment
Health checks configured and passing
Rollback triggers defined and documented in advance (not improvised during)
Stakeholders notified of the cutover window
Traffic switching:
DNS TTL lowered 48 to 72 hours before the window
Freeze writes or enable maintenance mode if using dump/restore database migration
Switch load balancer or DNS
Monitor error rates and latency in the first minutes after switch
Validate worker queues separately from web traffic
Rollback plan:
Revert DNS if rollback is triggered
Restore database from backup if writes occurred on the new platform after cutover
Communicate internally: who calls the rollback, what the criteria are, who is responsible for the postmortem
If rollback is triggered, conduct a postmortem before the next attempt
Common migration mistakes
These appear consistently enough to be worth listing explicitly:
Not inventorying add-ons before starting (discovering a dependency after cutover is painful)
Ignoring build-time secrets until builds fail in the new environment
Underestimating database size and restore time
Forgetting background jobs and worker processes until they're not running in production
Not planning rollback in advance (improvising rollback during an incident is a bad experience)
Leaving DNS TTL high until the day of cutover
Migrating during peak traffic hours
Changing CI/CD and platform simultaneously without scoping them as separate projects
Not testing observability in the new environment before going live
Migration safety checklist
Before cutover:
Inventory complete (applications, workers, jobs, add-ons, domains, secrets, dependencies)
Dockerfile verified and builds reproducible in target environment
Build-time secrets mapped and confirmed in target environment
Target environment provisioned and validated
Database migration strategy selected and tested (restore validated)
Backups confirmed and restore tested
Observability configured and verified in target environment
CI/CD decision documented (lift and shift, or modernize simultaneously with explicit scope)
Cutover window defined and communicated
DNS TTL lowered 48 to 72 hours before cutover
Rollback plan written (not improvised)
Rollback triggers defined
Stakeholders aligned on timing and communication plan
Migrating to Aptible
Aptible's platform model is close to Heroku's: Docker-based deploys, Procfile compatibility, managed Postgres, a private-by-default architecture, and built-in compliance infrastructure for regulated production workloads.
For teams migrating from Heroku's standard tier, the migration is typically one of the smoother paths because the operational model is similar and the compliance infrastructure is included rather than requiring separate configuration.
For a full technical walkthrough, see Aptible's Heroku migration docs.
If you're migrating from Heroku Shield and handling PHI, follow the dedicated guide: Migrating from Heroku Shield.
Next steps
If you haven't finalized your target platform: Read Compare Platforms before you start. The migration process looks similar across PaaS targets, but compliance requirements, database options, and operational model differences matter. Choosing the right target before you start executing prevents rework.
If you're migrating regulated workloads off Heroku Shield: See Migrating from Heroku Shield for the compliance-specific scope: BAA transition timing, isolation documentation, database migration for PHI environments, and audit continuity planning.
If you're still deciding whether to migrate: See Should You Migrate?. This guide assumes migration is the right call. If you haven't made that decision yet, the framework there will help you make it deliberately rather than by default.
How to migrate off Heroku: a platform-agnostic guide (2026)
Last updated: March 2026
→ For technical steps specific to migrating to Aptible, see Aptible's Heroku migration docs.
→ If you are migrating from Heroku Shield specifically, see Migrating from Heroku Shield.
Most migration guides are destination-specific. This one covers the layer that actually gets teams into trouble: the inventory, the secrets trap, the observability gap, and the CI/CD decision that should be separated from the platform move.
TL;DR
Most migrations are not rewrites. They're platform abstraction replacements.
Small teams often complete migrations in days, not months.
AI-assisted Dockerization has meaningfully lowered the barrier to containerized builds.
The biggest risks are database cutover and workflow drift, not the platform move itself.
Build-time vs. run-time secrets is a genuine trap that catches teams off guard.
DNS-based cutover dramatically reduces rollback risk.
Separating migration from CI/CD modernization is the most important tactical decision you'll make.
A controlled migration is far safer than an emergency one.
How difficult is a Heroku migration?
Less difficult than most teams assume, but not for the reasons they assume. The application doesn't need to change. The deployment model does.
Migration effort by team profile
Solo or small team (one to three engineers). Measured in days for most applications. AI tools have made Dockerization substantially faster. If your application is a web process plus a database with a handful of add-ons, the migration is primarily inventory, Dockerfile creation, secrets migration, and DNS cutover. Minimal workflow disruption.
Mid-size SaaS team (four to fifteen engineers). Measured in weeks. CI/CD coordination required because multiple engineers are deploying. Database replication is likely necessary to minimize downtime. Worker processes need to be mapped carefully. The organizational coordination is often more time-consuming than the technical work.
Security-focused or regulated team. Multi-phase project. Networking redesign, RBAC restructuring, audit trail validation before and after cutover, and compliance documentation continuity all add scope. More internal alignment required because more stakeholders care about the outcome. If you're migrating from Heroku Shield specifically, see Migrating from Heroku Shield.
What migrating actually means
Platform abstractions, not application rewrites
You are not rewriting your application. You are replacing the platform's abstractions with the equivalent on your target platform.
What typically changes:
Build system: buildpacks become a Dockerfile or image-based deploy
Deployment workflow:
git push heroku mainbecomes a CI-driven image deploySecrets handling: Heroku config vars map to your target platform's secrets management
Add-ons: each Heroku add-on needs an equivalent managed service
Networking and isolation model: Private Spaces or standard routing map to your target's networking primitives
What typically stays the same:
Application code
Process model (web, worker, cron maps cleanly across most platforms)
Database schema
Core runtime and dependencies
The work is mostly mapping and parity. The hard parts are usually not where teams expect them to be.
Step 1: full inventory
Don't start anything else until the inventory is complete. Migrations fail at cutover because of things that weren't documented at the start.
Procfile and services
Capture every process type running on Heroku: web processes, worker processes, scheduled jobs, one-off tasks, pre-release commands, and release phase commands. The Procfile is your starting point, but it may not reflect reality if process types have drifted or if scheduled jobs are managed outside Heroku Scheduler.
Document what each process does, what environment variables it requires, and whether it has any Heroku-specific behavior (like Heroku's pre-boot feature).
Environment variables (the build-time trap)
This is the most common migration trap, and it's worth calling out explicitly before you start.
Heroku has two categories of secrets: run-time config vars (set via heroku config:set and injected at container startup) and build-time variables (injected during the build process, often through buildpack configuration or Heroku CI settings). These are different, and they're handled differently on every platform you might migrate to.
If your build process consumes secrets (npm auth tokens, private registry credentials, build-time API keys), document exactly where they live in Heroku and how your target platform handles build-time vs. run-time secret injection. Teams regularly complete a migration, push to production, and find that builds fail because the build environment on the new platform doesn't have access to credentials that Heroku was injecting silently during the build phase.
Run-time secrets to inventory: API keys, OAuth credentials, database URLs, service integration keys, any environment-specific configuration.
Build-time secrets to check for: private registry credentials, package manager auth tokens, build-time feature flags, CI-specific configuration.
Add-ons and external services
Inventory every add-on in your Heroku account. For each one, identify: what it does, which applications depend on it, how it's configured, whether your application code has Heroku-specific assumptions about how the add-on behaves, and what the equivalent managed service is on your target platform.
Common add-ons that need explicit mapping: Heroku Postgres, Heroku Data for Redis, Papertrail or other logging providers, New Relic or Scout for APM, Sendgrid or Mailgun for email, Bonsai or Searchly for Elasticsearch, Cloudinary or S3-compatible storage.
Note which add-ons are tightly coupled to Heroku's provisioning model. The database and Redis are the most critical to plan carefully.
Networking and DNS
Document: all custom domains, current SSL configuration, DNS TTL values for all domains (lower these at least 48 hours before cutover), webhooks pointing to your Heroku endpoints, IP allowlisting rules that reference Heroku's IP ranges, and any inbound or outbound integrations that depend on specific IP addresses or endpoints.
DNS TTL is easy to forget and has the longest setup time of anything in this list. If your DNS TTL is set to 24 hours and you need to cut over DNS, you have a 24-hour rollback lag. Lower TTL to 60 seconds several days before your cutover window.
Step 2: map Heroku abstractions
Dynos to containers
Web dynos become web services or containers. Worker dynos become worker services or jobs. Most platforms support a similar process model.
Where the mapping gets complicated: autoscaling behavior. Heroku's autoscaling for web dynos is based on queue depth. Your target platform may scale on CPU, memory, request count, or other signals. Document your current scaling policy and configure the equivalent explicitly on the target platform before cutover.
Buildpacks to Dockerfiles
Most modern platforms default to Docker. If your application uses buildpacks and doesn't have a Dockerfile, you'll need to create one. This is less daunting than it used to be.
AI coding assistants can generate a working Dockerfile from your existing Procfile and package manifest in minutes. The first pass won't be optimized (and shouldn't need to be for an initial migration), but it will typically produce a working build. The goal for migration is parity, not optimization.
If you want to avoid Docker entirely for an initial migration, Cloud Native Buildpacks (used by platforms like Heroku, Dokku, and some Render configurations) can produce container images from your existing buildpack setup. This is a useful bridge for teams that want to lift and shift first.
Config vars to secrets management
Map every Heroku config var to the equivalent in your target platform's secrets management. Most platforms support environment variable injection at runtime. The differences show up in: how secrets are scoped across environments, whether secrets can be versioned or rotated, and how build-time secrets are handled (see the trap above).
Be explicit about staging vs. production secrets. A common mistake is sharing a secrets configuration between environments and discovering mid-migration that your staging environment is pointing at production databases or services.
Add-ons to managed services
Heroku Postgres to your target platform's managed Postgres (or a standalone RDS instance). Heroku Redis to managed Redis on the target platform. Logging add-ons to your target platform's log management (some platforms include this; others require a log drain to an external service).
For each add-on, document the connection string format differences. Heroku injects DATABASE_URL automatically. Your target platform may use a different variable name or a different URL format.
What surprises teams
These are the failure modes that appear consistently in post-migration retrospectives. None of them are catastrophic. All of them are avoidable with advance planning.
Build reproducibility
The most common failure mode in the first few days after migration: builds that pass locally and fail in the new CI environment. Common causes:
Build-time environment differences (the target CI environment doesn't have access to secrets that Heroku injected silently)
Hidden buildpack behavior (Heroku's buildpacks do things that aren't documented and don't appear in your Procfile)
Pre-release commands that assume Heroku-specific utilities or behaviors
Missing system dependencies (native extensions that Heroku's buildpack installed automatically)
The fix in all these cases is the same: run the build in the target environment explicitly before cutover, document every failure, and trace each one back to its source. Do not ship this to production until builds are reproducible end to end in the new environment.
Workflow drift
Developers who are used to git push heroku main will need to change how they deploy. The degree of disruption depends on what you're migrating to. Some platforms support a similar git-push workflow. Others require pushing an image to a registry and triggering a deploy separately.
More significant workflow differences: rollback semantics (Heroku's rollback reverts to a previous slug; image-based rollbacks work differently), one-off console access (the equivalent of heroku run console varies by platform), and review app parity (if you rely on Heroku review apps, your target platform's equivalent may behave differently).
The advice here is consistent: preserve workflow first, modernize later. Get to parity with what you have on Heroku before redesigning how your team deploys. The disruption of changing both the platform and the deploy workflow simultaneously is organizational, not just technical.
Observability gaps
APM agents are frequently forgotten until after cutover. If you're using New Relic, Scout, Datadog, or another APM tool, verify that the agent works in your target environment before you switch traffic. Some APM agents rely on UDP for metrics, which may not be available in certain networking configurations. Others require specific environment variable names that differ from Heroku's conventions.
Log drain setup is also commonly deferred. On Heroku, log routing to an external provider is a single add-on configuration. On other platforms, you configure this differently, and the setup can take time to validate.
Configure observability in the new environment and verify it's working correctly before you cut over. Discovering that you have no metrics 20 minutes into a production cutover is an unpleasant experience.
Separate migration from modernization
This deserves its own section because it's the most actionable advice in this guide.
Migration and CI/CD redesign are two separate projects. Running them simultaneously is the most common reason Heroku migrations take longer and cost more than they should.
Option 1: lift and shift first. Replicate your current deploy workflow as closely as possible on the target platform. Minimize behavioral changes. The goal is: your team deploys the same way they did on Heroku, and nothing about the application changes. Get to that state first.
Once you're running in production on the new platform with no issues for a few weeks, then evaluate whether to modernize your CI/CD pipeline. You'll be making that decision from a stable baseline rather than under migration pressure.
Option 2: use migration to upgrade CI/CD. If you intentionally want to introduce GitHub Actions, add security scanning, enforce role-based deploy permissions, or redesign the environment promotion model, this can be done alongside the migration. But do it deliberately. Scope it explicitly. Communicate to the team that you're doing two things at once. And be prepared for the migration to take longer.
Do not accidentally attempt both simultaneously. The failure mode is: the migration doesn't complete, CI/CD is broken, you've introduced multiple variables into every failure, and no one is confident which change caused which problem.
Database migration strategy
The database migration is the highest-risk part of any Heroku migration. Treat it as the critical path.
Dump and restore
Best for small to medium databases (under 10GB, ideally) where a brief maintenance window is acceptable.
Steps: enable maintenance mode or stop writes to the application, capture a database backup using heroku pg:backups:capture, download the backup, restore it to the target database, validate row counts and critical data, update connection strings, and switch traffic.
Advantages: simple, reliable, easy to validate. Disadvantages: requires downtime while the restore runs. For large databases, the restore window can be substantial.
Logical replication
Best for larger databases or applications with minimal downtime tolerance.
High-level flow: set up continuous replication from Heroku Postgres to your target database, allow the replica to catch up to the primary, briefly freeze writes (seconds to minutes), validate replication sync, cut over connection strings, and monitor.
Heroku Postgres has a specific constraint worth knowing: direct replication control is not available to customers by default. If you need to set up a publication/subscription for logical replication, you may need to contact Heroku support to expose WAL archives in S3. This adds coordination overhead and timeline uncertainty. Plan for this explicitly, and initiate the support request well before your cutover window.
Downtime and rollback planning
Define your acceptable downtime window before you choose a migration strategy. Communicate it to stakeholders before cutover, not during.
Before any production database migration: confirm your backup is complete and restorable (test the restore, don't just assume), define the rollback trigger conditions explicitly (at what point do you revert?), freeze writes for a documented period before cutover, and have someone who isn't executing the migration monitoring error rates and availability.
Side-by-side migration: the recommended approach
The safest migration strategy for most teams:
Mirror one application in the target platform. Don't touch your existing Heroku deployment.
Keep your existing CI/CD workflow initially. Deploy to both environments from the same pipeline if possible, or deploy to the new environment manually to validate.
Use buildpacks or your Dockerfile on the target platform. Confirm builds are reproducible.
Attach your custom domain in the target platform in advance.
Lower DNS TTL 48 to 72 hours before cutover. Target 60 seconds.
Validate staging thoroughly. Reproduce your production environment as closely as possible and test it.
Complete the database migration (dump/restore or replication) according to your chosen strategy.
Flip DNS. Route traffic to the new platform.
Monitor aggressively for the first two to four hours. Error rates, latency, worker queue depth, database connection counts.
If rollback is needed: switch DNS back. If you used dump/restore, rollback is straightforward. If you used logical replication and haven't frozen the Heroku database, rollback is possible but requires careful sequencing.
DNS-based cutover is the key. When DNS is the traffic switch, rollback is as fast as changing a DNS record. If you cut over by modifying application configuration or load balancer rules, rollback is slower and more complex.
Migrating to AWS: what's different
AWS migrations follow the same inventory and mapping process, but the infrastructure rebuild scope is substantially larger.
Your target runtime model options: ECS with Fargate (recommended for most teams migrating off Heroku), EKS (significantly more complex), Elastic Beanstalk (simpler but limited), or Lambda and serverless functions (only appropriate for specific workload shapes).
What you'll build: VPC and subnet design, security groups, IAM roles and policies, an application load balancer, secrets management via AWS Secrets Manager or Parameter Store, a logging stack (CloudWatch or a third-party drain), and a monitoring stack. CI/CD redesign is almost always required, because AWS doesn't provide a deploy workflow out of the box.
The organizational implications: platform engineering ownership, on-call responsibility for infrastructure failures that didn't exist on Heroku, and the internal selling required to justify the operational complexity increase.
For teams with platform engineering capacity and specific networking or compliance requirements, this is the right path. For most Heroku teams, it's a significant scope increase. See DIY Cloud.
For regulated workloads: see the Shield migration guide
If you're currently running PHI workloads on Heroku Shield, your migration has additional scope: Private Space networking and isolation mapping, Shield database services and backup continuity, compliance documentation continuity during the transition, BAA transition timing (you need a new BAA with your target provider before you go live), audit trail validation before and after cutover, and worker and Redis job cutover sequencing.
Regulated workloads have less tolerance for configuration drift and require careful documentation of the control environment throughout the migration process.
For the full technical breakdown specific to Shield environments: Migrating from Heroku Shield.
Treat cutover like an incident
The cutover window is not routine. Treat it the way your team would treat a production incident.
Pre-cutover checklist:
Backup confirmed and restore tested
Secrets verified in target environment
Observability (logs, metrics, APM) confirmed active in target environment
Health checks configured and passing
Rollback triggers defined and documented in advance (not improvised during)
Stakeholders notified of the cutover window
Traffic switching:
Lower DNS TTL was done 48 to 72 hours ago
Freeze writes or enable maintenance mode if using dump/restore database migration
Switch load balancer or DNS
Monitor error rates and latency in the first minutes after switch
Validate worker queues separately from web traffic
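The monitoring step works best when the rollback triggers are literally executable rather than vibes. A sketch of a trigger check; the thresholds here are purely illustrative, and the real values belong in your pre-agreed rollback plan:

```python
def should_roll_back(error_rate: float, p95_latency_ms: float,
                     max_error_rate: float = 0.05,
                     max_p95_ms: float = 2000.0) -> bool:
    """Evaluate pre-agreed rollback triggers against live metrics.

    The thresholds are placeholders. The point is that they are decided
    before the cutover window opens, not negotiated during it.
    """
    return error_rate > max_error_rate or p95_latency_ms > max_p95_ms

# Sampled a few minutes after the switch (hypothetical numbers):
print(should_roll_back(error_rate=0.02, p95_latency_ms=800))  # within triggers
print(should_roll_back(error_rate=0.12, p95_latency_ms=800))  # error rate breached
```

Whoever is named as the rollback decision-maker runs this check (or watches a dashboard encoding the same thresholds) and calls it; nobody else debates in the moment.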
Rollback plan:
Revert DNS if rollback is triggered
Restore database from backup if writes occurred on the new platform after cutover
Communicate internally: who calls the rollback, what the criteria are, who is responsible for the postmortem
If rollback is triggered, conduct a postmortem before the next attempt
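The rollback plan above can be sketched as an ordered runbook. The step names are illustrative; the branching is the point — whether writes landed on the new platform after cutover changes what a safe rollback requires:

```python
def rollback_steps(writes_on_new_platform: bool) -> list[str]:
    """Ordered rollback actions, following the plan described above."""
    steps = ["revert DNS to old platform"]
    if writes_on_new_platform:
        # Writes that landed on the new database after cutover do not
        # exist on the old one. Restore and reconcile before the old
        # platform serves traffic again, or those writes are lost.
        steps.append("restore database from pre-cutover backup")
        steps.append("reconcile or replay writes captured after cutover")
    steps.append("notify stakeholders and schedule the postmortem")
    return steps
```

If you froze writes for the cutover window (the dump-and-restore path), the first branch never fires, which is a large part of why that path is simpler to roll back.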
Common migration mistakes
These appear consistently enough to be worth listing explicitly:
Not inventorying add-ons before starting (discovering a dependency after cutover is painful)
Ignoring build-time secrets until builds fail in the new environment
Underestimating database size and restore time
Forgetting background jobs and worker processes until they're not running in production
Not planning rollback in advance (improvising rollback during an incident is a bad experience)
Leaving DNS TTL high until the day of cutover
Migrating during peak traffic hours
Changing CI/CD and platform simultaneously without scoping them as separate projects
Not testing observability in the new environment before going live
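The build-time secrets trap from the list above, in miniature: code that reads configuration eagerly at import time requires the variable to be present whenever the module is imported — including during image builds that import application code (asset precompilation, for instance) — while reading it lazily defers the requirement to run time. The variable name below is hypothetical:

```python
import os

# Eager (trap): evaluated at import time. If any build step imports
# this module, the build fails unless the secret is also injected into
# the build environment.
#
#   API_KEY = os.environ["THIRD_PARTY_API_KEY"]

def get_api_key() -> str:
    """Lazy: the secret is only required when first used, at run time."""
    return os.environ["THIRD_PARTY_API_KEY"]
```

Heroku's buildpacks quietly exposed config vars at build time, so this distinction was invisible; Docker builds make it explicit, which is why it surfaces during migration.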
Migration safety checklist
Before cutover:
Inventory complete (applications, workers, jobs, add-ons, domains, secrets, dependencies)
Dockerfile verified and builds reproducible in target environment
Build-time secrets mapped and confirmed in target environment
Target environment provisioned and validated
Database migration strategy selected and tested (restore validated)
Backups confirmed and restore tested
Observability configured and verified in target environment
CI/CD decision documented (lift and shift, or modernize simultaneously with explicit scope)
Cutover window defined and communicated
DNS TTL lowered 48 to 72 hours before cutover
Rollback plan written (not improvised)
Rollback triggers defined
Stakeholders aligned on timing and communication plan
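The whole pre-cutover checklist can be enforced as a readiness gate. In practice each item would be automated or signed off by a named owner; here they're booleans for illustration, and the item names are made up:

```python
# Illustrative checklist state; in a real migration each entry would be
# backed by an automated check or an explicit sign-off.
CHECKLIST = {
    "inventory_complete": True,
    "docker_builds_reproducible": True,
    "build_time_secrets_mapped": True,
    "database_restore_tested": True,
    "observability_verified": True,
    "dns_ttl_lowered": False,  # still pending in this example
    "rollback_plan_written": True,
}

def cutover_blockers(checklist: dict[str, bool]) -> list[str]:
    """Return every unmet item; cutover proceeds only on an empty list."""
    return [item for item, done in checklist.items() if not done]

print(cutover_blockers(CHECKLIST))  # ['dns_ttl_lowered']
```

A one-line gate like `if cutover_blockers(CHECKLIST): abort()` in the cutover runbook makes it structurally impossible to "just go" with an item outstanding.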
Migrating to Aptible
Aptible's platform maps closely to Heroku's PaaS mental model: Docker-based deploys, Procfile compatibility, managed Postgres, private-by-default architecture, and built-in compliance infrastructure for regulated production workloads.
For teams migrating from Heroku's standard tier, the migration is typically one of the smoother paths because the operational model is similar and the compliance infrastructure is included rather than requiring separate configuration.
For a full technical walkthrough, see Aptible's Heroku migration docs.
If you're migrating from Heroku Shield and handling PHI, follow the dedicated guide: Migrating from Heroku Shield.
Next steps
If you haven't finalized your target platform: Read Compare Platforms first. The migration process looks similar across PaaS targets, but differences in compliance requirements, database options, and operational model matter. Choosing the right target before you start executing prevents rework.
If you're migrating regulated workloads off Heroku Shield: See Migrating from Heroku Shield for the compliance-specific scope: BAA transition timing, isolation documentation, database migration for PHI environments, and audit continuity planning.
If you're still deciding whether to migrate: See Should You Migrate?. This guide assumes migration is the right call. If you haven't made that decision yet, the framework there will help you make it deliberately rather than by default.
548 Market St #75826 San Francisco, CA 94104
© 2026. All rights reserved. Privacy Policy