Thomas Orozco's Posts

Enclave now enforces a (generous) limit on process counts in containers

Thomas Orozco on May 23, 2018

Enclave now limits the number of processes running on your containers to 16384. For comparison, a full Linux host limits the process count to 32768 by default (although we do use higher limits on Enclave hosts).

As such, this limit is extremely unlikely to affect you as a customer, but will provide meaningful stability improvements to the platform.

That said, if you’d like to monitor your process counts across containers, and compare them to the limit, we’ve exposed process counts and limits in Metric Drains.

Read more

Enclave Security Scans now use Clair

Thomas Orozco on May 23, 2018

Considering Appcanary’s imminent shutdown, we are happy to announce that Enclave’s Security Scans now use Clair instead.

Clair is an open-source container vulnerability analysis platform from CoreOS. As an end user of Enclave’s Security Scans, you should find this change largely transparent.

Read more

Managed TLS Endpoints now support wildcard domains

Thomas Orozco on May 23, 2018

We are proud to announce that Managed TLS now supports wildcard certificates.

To set up a Managed TLS Endpoint using a wildcard certificate, simply use the wildcard format when specifying your Custom Domain (e.g. *.example.com).

Note that you’ll have to use dns-01 validation to validate a wildcard certificate. In any case, the Dashboard or CLI will walk you through the CNAMEs you need to create to proceed.
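For reference, the dns-01 challenge for a wildcard is typically satisfied by delegating the ACME challenge name with a CNAME record along these lines (the names here are illustrative; the Dashboard or CLI will show you the exact records to create):

```
_acme-challenge.example.com.  CNAME  <validation target provided by Aptible>
```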

Read more

Enclave's billing system has been updated to provide you with accurate billing projections

Thomas Orozco on May 23, 2018

We are proud to announce that we have overhauled our billing system in order to provide you with better visibility into your costs for Enclave and Gridiron.

Notably, our new billing platform provides you with the information you need to estimate and understand your costs:

  • Real-time billing projections and breakdowns.
  • Centralized access to historical invoices.
  • A listing of your contracted terms past and current.

As of this release, you can also manage multiple payment methods, add multiple billing contacts, and review your payments for past invoices.

Read more

CouchDB is now a supported Database on Enclave

Thomas Orozco on May 23, 2018

We are proud to announce that Enclave now supports CouchDB as a Database.

CouchDB is a replication-centric database, with capabilities for offline mobile sync.

Version 2.1 is currently supported. You can launch CouchDB Databases through the Dashboard or the CLI.

Read more

Include Procfiles and .aptible.yml directly in your Docker Image

Thomas Orozco on February 1, 2018

We’re proud to announce that you can now include a Procfile and a .aptible.yml file in your Docker images. This lets you use a Procfile or .aptible.yml file with Direct Docker Image Deploy without the need for a Companion Git Repository.

This change only impacts customers who are using Direct Docker Image Deploy and leveraging a Companion Git Repository to provide a Procfile or .aptible.yml file.

If that’s your case, follow the steps below to upgrade to this new method. The key benefit of upgrading is that you will no longer need to use both a Docker image and a git repository to deploy: the Docker image alone will suffice.

  1. If you are using a Procfile, include it in your Docker image at /.aptible/Procfile.

  2. If you are using a .aptible.yml file, include it in your Docker image at /.aptible/.aptible.yml.

  3. Build your Docker image, then run aptible deploy with the --git-detach flag. This will ensure your git repository is ignored going forward, and that your Procfile and .aptible.yml files are read from your Docker image instead. You’ll never need to interact with the git repository again.
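For steps 1 and 2, this usually comes down to two COPY lines in your Dockerfile (a sketch, assuming your Procfile and .aptible.yml sit at the root of your build context):

```dockerfile
# Enclave reads these two fixed paths from the image:
COPY Procfile /.aptible/Procfile
COPY .aptible.yml /.aptible/.aptible.yml
```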

With this change, using a Companion Git Repository is now deprecated. However, we are not planning on removing this feature, so you’re free to migrate on your schedule, when it’s convenient for you to do so.

Read more

InfluxDB is now a supported Database on Enclave

Thomas Orozco on January 16, 2018

We’re proud to announce that we have added support for InfluxDB as a database on Enclave.

InfluxDB is a high-performance time-series database, which we’ve been using ourselves for our Container Metrics. It works particularly well with Grafana to quickly create insightful Dashboards.

Adding InfluxDB as a Database on Enclave was motivated by the introduction of Metric Drains, but you can of course use it for any use case.

Like any other supported Database, you can launch an InfluxDB Database through the Dashboard, or using the CLI.

Read more

Introducing Metric Drains

Thomas Orozco on January 16, 2018

We are proud to announce the release of Enclave Metric Drains. Metric Drains are the metrics counterpart of Log Drains: you configure them as a destination, and Enclave will periodically publish metrics for your Containers to the Metric Drain.

As of today, supported metrics include CPU and RAM usage for all containers, and disk usage and I/O for databases. As for destinations, you can route metrics to InfluxDB (self-hosted on Enclave and third-party) and Datadog.

This feature greatly expands our previously-released Dashboard Container Metrics, and will be particularly useful for sophisticated use cases that require real-time or historical access to detailed metrics.

Indeed, unlike Dashboard Container Metrics, Metric Drains allow you to:

  • Review metrics across releases and as far back as you’d like: since the metrics are pushed to you, you are free to define your own retention policies.

  • Alert when metrics cross pre-defined thresholds of your choosing: here again, since we’re pushing metrics to you, you’re free to alert on them however you’d like (alerting is respectively available in Grafana and in Datadog).

  • Correlate metrics with other sources of information: at this time, Metric Drains support pushing metrics to InfluxDB as well as Datadog, where you might already be publishing other metrics (e.g. using an application performance monitoring tool).

To provision a new Metric Drain, navigate to the Environment of your choice in the Dashboard, and open the Metric Drains tab.

Create Metric Drain

PS: To make it easier to get started with Metric Drains, we also added support for InfluxDB as a Database on Enclave. This lets you easily route metrics to a self-hosted InfluxDB database. We also have detailed instructions on deploying Grafana on Enclave to create beautiful Dashboards and set up monitoring using these metrics.

Read more

VPN Tunnels and VPC Peers are now visible in the Dashboard

Thomas Orozco on December 7, 2017

Enclave has historically supported IPSEC VPN Tunnels and VPC Peering, and we’re happy to announce that you can now view the status of these network integrations for a given Stack via the Aptible Dashboard.

To view these, navigate to the VPN Tunnels or VPC Peering tabs for a Stack. Keep in mind that VPN Tunnels and VPC Peering are only available for Dedicated-Tenancy Stacks.

Read more

Databases are now automatically optimized for their container footprint

Thomas Orozco on December 6, 2017

Since the introduction of Self-Service Database Scaling on Enclave, you’ve been able to conveniently resize your database containers to fit the evolution of your workload over time.

As of this week, we’re proud to announce that we’re taking this feature one step further by automatically configuring databases for optimum performance based on their container footprint.

Here’s what we do:

  • PostgreSQL: shared_buffers, work_mem, and a number of other parameters are configured according to pgtune guidelines.
  • MySQL: the InnoDB buffer pool is configured to use about 80% of available memory.
  • MongoDB: wiredTigerCacheSizeGB is configured to about 50% of the container size (more on larger containers).
  • Elasticsearch: the heap size is set to 50% of the container size to reserve 50% for Lucene caches.
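As a rough sketch, here’s what the MySQL, MongoDB, and Elasticsearch rules above work out to for a hypothetical 4096 MB container (the PostgreSQL parameters follow pgtune and don’t reduce to a single ratio):

```shell
# Back-of-the-envelope sizing for a hypothetical 4096 MB container.
container_mb=4096
echo "MySQL InnoDB buffer pool: ~$((container_mb * 80 / 100)) MB"             # ~80% of memory
echo "MongoDB wiredTigerCacheSizeGB: ~$((container_mb * 50 / 100 / 1024)) GB" # ~50% of memory
echo "Elasticsearch heap: ~$((container_mb / 2)) MB"                          # 50%, rest for Lucene
```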

Note that these settings only apply to databases launched after 12:00 UTC on December 4, 2017. For databases you launched before this date, you can use the aptible db:reload command to restart your database using this new configuration (this will cause a few seconds of downtime while your database restarts).

These new parameters are expected to yield better performance for most workloads, and help you better utilize the resources available to your database containers. That said, if you had previously opted to customize the configuration of your database (for PostgreSQL, you might have done so using ALTER SYSTEM), or would like to do so now to further improve performance, your custom parameters will take precedence over Enclave’s optimized configuration.

Read more

The Aptible CLI now supports JSON output

Thomas Orozco on December 6, 2017

We’re proud to announce that the Aptible CLI now supports JSON output. To enable JSON output, set the APTIBLE_OUTPUT_FORMAT environment variable to json when executing the Aptible CLI.

Over the past few months, and notably after the introduction of Endpoint management commands in the CLI, we have observed an uptick in customers using the Aptible CLI to automate workflows on Enclave. JSON output makes this a lot easier, since you no longer need an ad-hoc parser to process the CLI’s output.

Here’s an example:

$ aptible db:create --type redis my-db | python -m json.tool
{
    "connection_url": "redis://",
    "credentials": [
        {
            "connection_url": "redis://",
            "default": true,
            "type": "redis"
        },
        {
            "connection_url": "rediss://",
            "default": false,
            "type": "redis+ssl"
        }
    ],
    "environment": {
        "handle": "my-sandbox",
        "id": 1
    },
    "handle": "my-db",
    "id": 1,
    "status": "provisioned",
    "type": "redis"
}

We’re planning further work towards enabling scriptability with the Aptible CLI. If you’re using (or would like to use) the CLI in a scripted context, we’d love to talk to you. Please get in touch through support.
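For instance, a script can extract a single field from that JSON without an ad-hoc parser. Here’s a sketch using Python’s stdlib (the same tool used for pretty-printing above); the db.json sample file is hypothetical:

```shell
# Save a sample of the CLI's JSON output (hypothetical), then pull one field out.
cat > db.json <<'EOF'
{"handle": "my-db", "connection_url": "redis://", "status": "provisioned"}
EOF
python3 -c 'import json; print(json.load(open("db.json"))["connection_url"])'
```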

Read more

aptible db:create now supports choosing a version

Thomas Orozco on December 6, 2017

We’re pleased to announce that the aptible db:create command now supports a --version flag, which allows you to select the version for the database you’re provisioning.

To list available database versions, use the aptible db:versions command.

Read more

Introducing Strict Health Checks

Thomas Orozco on October 31, 2017

We’re happy to announce that you can now opt-in to Strict Health Checks for your Apps hosted on Enclave.

If you enable Strict Health Checks, Enclave will expect your app to respond on the /healthcheck route with 200 OK if it’s healthy, and any other status if not.

In contrast, if you do not enable this feature (i.e. just leave things as-is), Enclave simply expects your app to return a valid HTTP response (i.e. a 404 would be acceptable).

Strict Health Checks apply both to Release Health Checks and Runtime Health Checks. Release Health Checks let you cancel the deployment of your app if health checks are failing, and Runtime Health Checks let you route traffic away from unhealthy containers or failover to Enclave’s error page server if all your containers are down.
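The difference between the two modes boils down to which status codes count as healthy. A minimal sketch of the strict rule (the function name is illustrative):

```shell
# Strict Health Checks: only a 200 counts as healthy.
# (Default behavior: any valid HTTP response -- even a 404 -- passes.)
strict_check() {
  if [ "$1" -eq 200 ]; then echo healthy; else echo unhealthy; fi
}
strict_check 200
strict_check 404
```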

Read more

MongoDB Replica Sets are now reconfigured when restoring from backup

Thomas Orozco on October 31, 2017

Enclave now automatically re-configures MongoDB replica sets when restoring from backup. Prior to this change, this reconfiguration step would have had to be performed manually.

Here’s why: previously, when restoring a MongoDB backup, the new MongoDB instance would start with the replica set configuration that was in effect when the backup was created. This would cause the new MongoDB instance to try and join your existing database’s replica set. This, in turn, would fail because the new MongoDB instance was not a member of your existing database’s replica set, and the new MongoDB instance would transition to REMOVED state.

Now, when restoring a backup, you specifically don’t want the new MongoDB instance to become a member of your existing database’s replica set (note, however, that we do support MongoDB clustering when you need it). Indeed, you probably want to use the new MongoDB instance for troubleshooting, development, or reporting, and the last thing you want is for changes you make on the new MongoDB instance to affect your existing database!

The right approach in this case is to reconfigure the new MongoDB instance with its own independent replica set. Until now, this was a manual process, but as of today, Enclave does it automatically for you as part of the backup restoration process.

Read more

Introducing Self-Service Environment Creation

Thomas Orozco on October 24, 2017

We’re proud to announce that Environment creation on Enclave is now fully self-service. You can access this menu by clicking “Create Environment” from the sidebar:

Environment Creation Menu

As an Enclave user, this has two main implications for you:

  • When creating a new Shared-Tenancy Environment, you can now pick from a selection of eligible Shared-Tenancy Stacks. For example, you can now deploy in us-west-1 (N. California) or eu-central-1 (Frankfurt).

  • When creating a new Dedicated-Tenancy Environment, you no longer need to wait for us to activate your Environment after creating it. Instead, your Environment automatically activates, and you can start using it right away.

As part of this change, we’ve also upgraded the Dashboard sidebar to show you not only your Environments, but also the Stacks they’re deployed on, which gives you greater visibility into how your Enclave resources are organized.

Stacks Sidebar

This change is a good opportunity for a quick review of how Stacks and Environments relate. Here’s what you need to know:

  • Stacks are isolated virtual networks (AWS VPCs) consisting of a number of Docker hosts (AWS EC2 instances). Environments are mapped onto Stacks and provide a logical isolation layer.

  • Apps and Databases for a given Environment are deployed on the Docker hosts for the Environment’s Stack. There is no network-level isolation between Apps and Databases belonging to different Environments if they are deployed on the same Stack.

  • Stacks can be single-tenant (Dedicated Tenancy) or multi-tenant (Shared Tenancy). Environments that process PHI must be deployed on Dedicated-Tenancy Stacks as per your BAA with Aptible.

Read more

PostgreSQL 10 is now available on Enclave

Thomas Orozco on October 10, 2017

We’re proud to announce that PostgreSQL 10 is now available on Enclave. You can choose PostgreSQL 10 when creating a new database, and it will soon become the default as well:

Select PostgreSQL 10 from the dropdown when creating a new database

Upgrading to PostgreSQL 10

If you’d like to upgrade an existing database to PostgreSQL 10, you have two options:

  • Provision a new PostgreSQL 10 database, then dump the data from your old PostgreSQL database to the new PostgreSQL 10 database. This is the best approach for development databases and non-critical production databases.

  • Contact support to schedule an in-place upgrade of your database. This is the best approach for critical production databases.

Read more

Create, Modify, and Delete Endpoints using the Aptible Toolbelt

Thomas Orozco on October 10, 2017

We’re happy to announce that the Aptible Toolbelt now exposes commands that let you manage App and Database Endpoints:

Read more

TCP and TLS Endpoints are now Generally Available

Thomas Orozco on October 10, 2017

We’re happy to announce that TCP and TLS Endpoints have left private beta and are now generally available in Enclave!

Compared to Enclave’s other Endpoint type (HTTPS Endpoints), TCP and TLS Endpoints are lower-level primitives that give you more flexibility. For example, you can use TCP or TLS Endpoints to deploy non-HTTP apps on Enclave, or take ownership of TLS termination in your app. One particularly notable use case for healthcare companies is to run a Mirth Connect receiver to ingest HL7 data.

Note that, being lower-level primitives, TCP and TLS Endpoints do not include as many bells and whistles as HTTPS Endpoints. In particular, they do not currently automate zero-downtime deployment (but you can of course leverage them to architect that yourself).

You can create and manage TCP and TLS Endpoints starting today using Aptible Toolbelt commands:

Read-only access is already available in the Dashboard as well. Read-write access will be available in the Dashboard soon!

Read more

Introducing Activity Reports

Thomas Orozco on October 5, 2017

Whether you’re operating in a regulated industry or not, periodically reviewing activity on your resources for unexpected and suspicious changes is unquestionably a best practice.

Historically, Enclave has allowed you to do so via the “Activity” tab for each App and Database in your account, but at scale, this can be a fairly cumbersome approach.

That is why we are introducing Activity Reports. Activity Reports are CSV documents listing all operations that took place in a given Environment; they are posted on a weekly basis in the Aptible Dashboard.

Using Activity Reports, you get a consolidated view of your team’s activity in your Enclave Environment, including ssh access, database tunnel access, deployments, restarts, configuration changes, and more.

We recommend including periodic review of Activity Reports in your information security procedures.

If you’d like to see a report for yourself, head on over to the Aptible Dashboard, and download the latest report under the “Activity Reports” tab:

Download Activity Reports

Read more

Restore Database Backups across Environments

Thomas Orozco on September 29, 2017

Database Backups can now be restored across different Environments on Enclave. This change lets you easily support workflows that involve restoring backups of production data for analytics or investigation into lower-privileged environments.

To use it, add the --environment flag when running aptible backup:restore:

aptible backup:restore "$BACKUP_ID" --environment "$ENVIRONMENT_HANDLE"

To make sure you don’t accidentally transfer sensitive or regulated data to a non-compliant development environment, this feature ships with an important safeguard: while Backups can be restored across Environments, they cannot be restored across Stacks.

For example, this means data that was stored in a production PHI-ready environment can’t accidentally be restored into a non-PHI-ready development environment.

Read more

Capture SSH Session logs with Log Drains

Thomas Orozco on September 21, 2017

Originally, Enclave Log Drains only captured logs from app containers, and we later added support for Database logging. We’re now happy to announce that, as of this week, you can configure Log Drains to receive logs from SSH Sessions as well.

This new feature makes it easy for you to meet compliance requirements mandating that all access to production data be logged, without compromising your ability to perform maintenance tasks or respond to urgent incidents by accessing your production environment via aptible ssh.

How does it work?

SSH Session logging functions similarly to App and Database logging: all the output from ephemeral containers is captured and routed to a Log Drain. This output is pretty much exactly what an end-user would see on their own screen, which means:

  • Your Log Drains will often also receive what users are typing in, since most shells and consoles echo the user’s input back to them.

  • If you’re prompting the user for a password using a safe password prompt that does not write back anything, nothing will be sent to the Log Drain either. That prevents you from leaking your passwords to your logging provider.

However, unlike App and Database logs, SSH Session logs include extra metadata about the user running the SSH session if your Log Drain supports it, including their email and user name. Review the documentation for more information.

How do I use this?

Add a new Log Drain in your environment, and make sure to select the option to drain logs from ephemeral sessions (if you already have other Log Drains set up for Apps and Databases, you’ll probably want to un-select those options to avoid double-logging).

Read more

Maintenance pages are now served immediately for apps scaled to zero

Thomas Orozco on September 11, 2017

Scaling apps to zero containers on Enclave now redirects your traffic to Enclave’s error-page server (Brickwall) before shutting down app containers.

Concretely, this means the failover from your app to your Custom Maintenance Page (if you configured one) will happen smoothly: clients will never see a generic error page.

For comparison, if you scaled down to zero containers before this change, the failover would happen automatically, but only once our monitoring detected your app was down. Often, this resulted in about a minute of latency during which clients would indeed see a generic error page.

Read more

Implicit Services no longer require a CMD when an ENTRYPOINT is present

Thomas Orozco on September 11, 2017

For more information, review our documentation on Implicit Services.

Read more

CPU utilization metrics are now available in the Dashboard

Thomas Orozco on September 11, 2017

The Dashboard now provides CPU utilization metrics for apps and databases. This change gives you more visibility into the resources used by your containers, and can help you make better scaling decisions.

As you review CPU utilization for your apps, keep in mind that:

  • By default, Enclave only enforces CPU Limits on shared stacks (i.e. non-production), but you can opt-in to CPU limits for production stacks via a support request.
  • Containers are allocated ¼ of a CPU thread per GB of RAM. For example, a 1 GB container should use no more than 25% of a CPU thread, while a 4 GB container should use no more than 100%.

For more information, review our documentation on CPU limits.
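The allocation rule above can be computed directly: a container’s expected ceiling, as a percentage of one CPU thread, is 25 times its RAM in GB.

```shell
# 25% of a CPU thread per GB of RAM.
for ram_gb in 1 2 4; do
  echo "${ram_gb} GB container -> $((ram_gb * 25))% of a CPU thread"
done
```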

Read more

A new documentation site for Aptible

Thomas Orozco on July 31, 2017

We’re proud to announce that we launched a new documentation site for Aptible!

Aptible’s documentation site — like many startups’ — originally started as a FAQ, collecting questions that came up frequently in support tickets. Over time, it evolved with the addition of new features and additional support resources, yet retained its original FAQ structure.

But, with Enclave’s expanding feature set and the introduction of Gridiron, that was no longer sufficient.

Our new documentation site still includes the original FAQ resources, but it complements them with comprehensive reference material, so you can know exactly how the features of Enclave and Gridiron work, and how they interact together.

Check out the new docs now!

Read more

Introducing Supercronic - Cron for containers

Thomas Orozco on July 20, 2017

We’re proud to announce our latest open-source project: Supercronic. Supercronic is a cron implementation designed with containers in mind.

Why a new cron for containers?

We’ve helped hundreds of Enclave customers roll out scheduled tasks in containerized environments. Along the way, we identified a number of recurring issues using traditional cron implementations such as Vixie cron or dcron in containers:

  • They purge the environment before running jobs. As a result, jobs fail, because all their configuration was provided in environment variables.

  • They redirect all output to log files, email or /dev/null. As a result, job logs are lost, because the user expected those logs to be routed to stdout / stderr.

  • They don’t log anything when jobs fail (or start). As a result, missing jobs and failures go completely unnoticed.

To be fair, there are very good architectural and security reasons traditional cron implementations behave the way they do. The only problem is: they’re not applicable to containerized environments.

Now, all these problems can be worked around, and historically, that is what we’ve suggested:

  • You can persist environment variables to a file before starting cron, and read them back when running jobs.

  • You can run tail in the background to capture logs from files and route them to stdout.

  • You can wrap jobs with some form of logging to capture exit codes.

But wouldn’t it be better if workarounds simply weren’t necessary? We certainly think so!

Enter Supercronic

Supercronic is a cron implementation designed for the container age.

Unlike traditional cron implementations, it leaves your environment variables alone, and logs everything to stdout / stderr. It’ll also warn you when your jobs fail or take too long to run.

Perhaps just as importantly, Supercronic is designed with compatibility in mind. If you’re currently using “cron + workarounds” in a container, Supercronic should be a drop-in replacement:

$ cat ./my-crontab
*/5 * * * * * * echo "hello from Supercronic"

$ ./supercronic ./my-crontab
INFO[2017-07-10T19:40:44+02:00] read crontab: ./my-crontab
INFO[2017-07-10T19:40:50+02:00] starting                                      iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:50+02:00] hello from Supercronic                        channel=stdout iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:50+02:00] job succeeded                                 iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] starting                                      iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] hello from Supercronic                        channel=stdout iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] job succeeded                                 iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"

What’s next?

If you’re an Enclave customer, we’ve updated our cron jobs tutorial with instructions to use Supercronic. If you’re not using Enclave, then head on over to Supercronic’s GitHub page for installation and usage instructions.

Read more

Introducing Container Recovery

Thomas Orozco on June 22, 2017

We’re proud to announce that as of this week, Enclave automatically restarts application and database containers when they crash.

Thanks to this new feature, you no longer need to use process supervisors or shell while loops in your containers to ensure that they stay up no matter what: Enclave will take care of that for you.

How does it work?

Container Recovery functions similarly to Memory Management: if one of your containers crashes (or, in the case of Memory Management, exceeds its memory allocation), Enclave automatically restores your container to a pristine state, then restarts it.

You don’t have to do anything, it just works.

Why does this matter?

Enclave provides a number of features to ensure high-availability on your apps at the infrastructure level, including:

  • Automatically distributing your app containers across instances located in distinct EC2 availability zones.

  • Implementing health checks to automatically divert traffic away from crashed app containers.

These controls effectively protect you against infrastructure failures, but they can’t help you when your app containers all crash due to a bug affecting your app itself. Here are a few examples of the latter, which we’ve seen affect customer apps deployed on Enclave, and which are now mitigated by Container Recovery:

  • Apps that crash when their database connection is interrupted due to temporary network unavailability, a timeout, or simply downtime (for example, during a database resizing operation).

  • Background processors that crash. For example, all your Sidekiq workers exiting with an irrecoverable error, such as a segfault caused by a faulty native dependency.

If you’d like to learn more about this feature, please find a full overview of Container Recovery in the Enclave documentation.

Read more

Introducing Direct Docker Image Deploy

Thomas Orozco on June 13, 2017

We’re proud to announce that you can now deploy apps on Enclave directly from a Docker image, bypassing Enclave’s traditional git-based deployment process.

With this feature, you can easily use the same images for deployment on Enclave and test / dev via other Docker-based tools such as Docker Compose or Kubernetes. And, if you’re already using Docker for your development workflow but haven’t adopted Enclave yet, it’s now much easier for you to take the platform for a spin.

How does it work?

Direct Docker image deployments on Enclave are done via the CLI. Here’s an example.

To deploy Docker’s official “hello-world” image to an app called “my-hello-world-app” on Enclave, you’d use this command:

aptible deploy --app my-hello-world-app --docker-image hello-world

And if your app follows the 12-factor configuration guidelines and uses the environment for configuration, you can include arbitrary environment variables for your app when running aptible deploy:

aptible deploy --app my-enclave-app --docker-image \

Why use it?

First off, if you’re currently using Enclave’s git-based deployment workflow, you can continue using that: it’s not going away! That being said, there are a few reasons why you might want to look at direct Docker image deploy as an alternative.

First, you might like more control over your Docker image build process. Indeed, when you deploy via git, Enclave follows a fairly opinionated build process:

  • The Docker build context is your git repository.

  • Enclave injects a .aptible.env file in your repository for you to access environment variables.

  • Enclave uses the Dockerfile from the root of your git repository.

This works just fine for a majority of apps, but if that’s not your case, use direct Docker image deploy for complete control over your build process, and make adjustments as needed. For example, you could inject private dependencies in your build context, leverage Docker build arguments, or use a different Dockerfile.

Other reasons for using this feature include:

  • You’re already building Docker images to use with other tools. Use this direct Docker image deploy feature to unify your deployments around a single build.

  • You’re using a public Docker image that’s available on the Docker hub. Use direct Docker image deploy so you don’t have to rebuild it from scratch.

If you’d like to learn more about this new feature, head for the documentation! And, as usual, let us know if you have any feedback.

Note: Astute readers will note that you’ve been able to deploy apps on Enclave directly from a Docker image for some time, but we did rework the feature to make it much easier to use. Specifically, here’s what changed:

  • Procfiles and git repositories are now optional: Enclave will use your Docker image’s CMD if you don’t have a Procfile.

  • You no longer need to run aptible config:set followed by aptible rebuild to deploy. Instead, you can do everything in one operation with aptible deploy.

Read more

Self-Service Database Scaling is now available on Enclave

Thomas Orozco on May 23, 2017

We’re proud to announce that you can now resize both the container and disk footprints of your Enclave databases from the Dashboard or CLI. For new databases, you can also configure the container size from day 1, whereas it previously defaulted to 1GB RAM.

Using this new feature, you can easily scale your database up as your traffic grows or when you’re about to run out of disk space. To that end, check out Container Metrics, which provides a real-time view into your databases’ RAM and disk usage. Aptible Support will also notify you if your disk usage reaches 90%.

How does database scaling work?

There are two ways you can resize your database.

First, you can do so via the Dashboard. Just click modify next to the container or disk size, and proceed.

Database scaling via the Enclave dashboard (also available via CLI).

Second, you can do so via the CLI. For example, to scale a database named “demo-database” to a 2GB RAM container with 30 GB disk, you’d use:

aptible db:restart demo-database --container-size 2048 --disk-size 30

And under the hood?

To provide truly elastic database capacity, Enclave relies on AWS EC2 and EBS to support your database containers and volumes. As an Enclave end-user, this means two things for you.

First, it means resizing your database container may take a little while (on the order of 30 minutes) if we need to provision new EC2 capacity to support it. This will often be the case if you’re scaling to a 4GB or 7GB container, less so if you’re scaling to 512MB, 1GB, or 2GB.

However, the good news is that Enclave automatically minimizes downtime when resizing a database, so even if the resize operation takes 30 minutes to complete because new capacity was required, your database will only be unavailable for a few short minutes.

Second, it means resizing your database disk is consistently fast. Even for very large disks, you can expect a disk resize to complete within minutes.

If you have any questions or comments, please let us know. Thanks!

Read more

Vulnerability Scanning for your Dependencies: Why and How

Thomas Orozco on May 22, 2017

In a world where application dependency graphs are deeper than ever, secure engineering means more than securing your own software: tracking vulnerabilities in your dependencies is just as important.

We’ve greatly simplified this process for Enclave users with a recent feature release: Docker Image Security Scans. This is a good opportunity to take a step back, review motivations and strategies for vulnerability management, and explain how this new feature fits in.

Why Dependency Vulnerability Management?

Popular dependencies are very juicy targets for malicious actors: a single vulnerability in a project like Rails can potentially affect thousands of apps, so attackers are likely to invest their resources in uncovering and automatically exploiting those.

One infamous (albeit old) example of this is CVE-2013-0156: an unauthenticated remote-code-execution (RCE) vulnerability in Rails that’s trivial to automatically scan for and exploit. Among others, Metasploit provides modules to automatically identify and exploit it.

As an attacker, a vulnerability like CVE-2013-0156 is a gold mine. The exploit can be delivered via a simple HTTP request, so all an attacker needed to do to compromise vulnerable Rails applications was send that request to as many public web servers as they could find (finding all of them is much easier than it sounds).

In other words: when it comes to vulnerabilities in third-party code, you’re actively being targeted right now, even if no one has ever heard of you or your business.

Strategies for Dependency Vulnerability Management

Now that we’ve established that vulnerability management matters, the question that remains is: what can you do?

Modern apps pull in dependencies from diverse sources, ranging from OS packages to vendored code. Fundamentally, there’s no one-size-fits-all approach to tracking the vulnerabilities that affect them.

So let’s divide and conquer: from a vulnerability management perspective, there’s a useful dichotomy between two categories of third-party software.

  • On the one hand, there’s third-party software you installed via a package manager.

  • And on the other hand, there’s third-party software you didn’t install via a package manager.

The easiest dependencies to look after are those that you installed via a package manager, so let’s start with them.

Using a package manager? Leverage vulnerability databases

Package managers helpfully maintain a list of the packages you installed, which means you can easily compare the software you installed against a vulnerability database, and get a list of packages you need to update and unfixed vulnerabilities you need to mitigate.

Ideally, you want to automate this process in order to be notified about new vulnerabilities when they come out, as opposed to hearing about them when you remember to check. Indeed, remember that when it comes to vulnerable third party dependencies, you’re actively being targeted right now, so speed is of the essence.

How does this work?

There are a number of open-source projects and commercial products you can use for this type of analysis. A few popular options are Appcanary (which Aptible uses and integrates with), Gemnasium, and Snyk.

They often work like this:

  • You extract the list of packages you installed from your package manager

  • You feed it to the analyzer

  • The analyzer tells you about vulnerabilities (commercial products will also often notify you when new vulnerabilities come up in the future)

That simple!? Almost: you’re probably using multiple package managers in your app, which means you may have to mix and match analyzers to cover everything. Indeed, for most modern apps, you’ll have at least two package managers:

  • A system-level package manager: if you’re using Ubuntu or Debian, this is dpkg, which you access via apt-get. If you’re using CentOS / Fedora, this is rpm, which you access via yum or dnf. If you’re using Alpine, it’s apk. Etc.

  • An app-level package manager: if you’re writing a Ruby app, this is Bundler. If you’re writing a Node app, it’s NPM or Yarn. Etc.

So, what you need to do here is locate the list of installed packages for each of those, and submit it to a compatible vulnerability analyzer.
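As an illustration of that compare-against-a-database step, here is a minimal sketch. The package inventory and advisory feed below are made up for demonstration; real analyzers consume curated feeds (e.g. distribution CVE trackers or the ruby-advisory-db) and do proper version-range matching rather than exact-version comparison:

```python
# Hypothetical inventory, as you might extract it from `dpkg-query -W`
# or by parsing a Gemfile.lock. All names and versions are illustrative.
installed = {
    "openssl": "1.0.1f",
    "bash": "4.3-7",
    "rails": "5.1.1",
}

# Hypothetical advisory feed: package -> (vulnerable version, advisory id).
advisories = {
    "openssl": ("1.0.1f", "CVE-2014-0160"),
    "nginx": ("1.4.0", "CVE-2013-2028"),
}

def scan(installed, advisories):
    """Return (package, advisory) pairs for installed vulnerable versions."""
    findings = []
    for pkg, version in installed.items():
        if pkg in advisories and advisories[pkg][0] == version:
            findings.append((pkg, advisories[pkg][1]))
    return findings

print(scan(installed, advisories))  # -> [('openssl', 'CVE-2014-0160')]
```

In practice you would run one such comparison per package manager in your stack, which is exactly why mixing and matching analyzers is often necessary.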

New Enclave Feature: Docker Image Security Scans

Now’s the right time to tell you about this new Enclave feature I mentioned earlier in this post.

When you deploy your app on Enclave, we have access to its system image. Last week, we shipped a new feature that lets us extract the list of system packages installed in your app, and submit it to Appcanary for a security scan.

This can work in two different ways:

  • You can run a one-shot scan via the Enclave Dashboard. This gives you an idea of what you need to fix right now, but it will not notify you when new vulnerabilities are discovered in packages you use, or if you install a new vulnerable package.

  • You can sign up for Appcanary and connect Enclave and Appcanary accounts. Enclave will keep your Appcanary monitors in sync with your app deploys in Enclave, and in turn Appcanary will notify you whenever there’s a vulnerability you need to know about. This puts you in a great position from a security perspective, and will reassure security auditors.

How to run a vulnerability scan for your dependencies using Enclave.

To summarize: Enclave with Appcanary can now handle vulnerability monitoring for your system packages, and it’s really easy for you to set up!

However, for app-level packages, you still have to do a little bit of legwork to find and integrate a vulnerability monitoring tool that works with your app. Note that Appcanary does support scanning Ruby and PHP dependencies, so you might be able to use them for app-level scanning too.

Is that it?

Not quite: we still have to look at third-party code you didn’t install via a package manager. Here are a few examples: software you compiled from source, binaries you downloaded directly from a vendor or project website, and even vendored dependencies.

For these, there is — unfortunately! — no silver bullet. Here’s what we recommend:

  • When possible, try and minimize the amount of software you install this way.

  • When you absolutely need to install software this way, subscribe to said software’s announcement channels to ensure you’re notified about new vulnerabilities. This may be a mailing list, a blog, or perhaps a GitHub issue tracker. When possible, review how security issues were handled in the past.

This time, that about wraps it up! Or does it? Engineering is turtles all the way down, so even if you covered all your bases in terms of software you installed, there’s still the underlying platform to account for.

That being said, unless you’re hosted on bare metal on your own hardware, this is largely out of your control. At this point, your best strategy is to choose a deployment platform you can trust (if you read this far, hopefully you’ll consider Enclave to be one).

Read more

Introducing Database Endpoints

Thomas Orozco on May 16, 2017

We’re proud to announce that you can now create and manage external database endpoints via the Enclave dashboard. External endpoints are useful to grant third parties access to data stored in Enclave databases for purposes such as analytics and ETL (without an endpoint, your database is firewalled off and inaccessible from the public internet). To set up a new endpoint for one of your databases, simply navigate to the Endpoints tab for this database and follow the instructions. Like their app counterparts, database endpoints support IP filtering, so you can ensure only trusted third parties have access to your database:

IP Filtering with database endpoints

Note that we historically supported database endpoints via support requests. If you were already using a database endpoint before we introduced this feature, it has been automatically migrated and you’ll be able to manage it via the Dashboard going forward!

Read more

Faster Enclave Database Resizing

Thomas Orozco on March 23, 2017

We’re happy to announce that Enclave now leverages AWS “Elastic Volumes” to resize database storage. This feature was released a little over a month ago by AWS, and lets us grow EBS volumes without the need to snapshot.

For Enclave users, this means resizing your database volume is faster than it’s ever been: it now takes just minutes on average, and scales very well to larger volumes.

For comparison, before the introduction of Elastic Volumes, the only way to resize an EBS volume on AWS was to snapshot the volume then recreate it. However, this approach scaled poorly as you stored more data: creating a snapshot might take a few minutes for small volumes, but several hours for active, large 1TB+ volumes!

Now, with Elastic Volume support, resizing always results in less downtime, even if you end up scaling faster than you anticipated.

If you need to resize your database volume, contact Aptible Support and we’ll coordinate a time with you to perform the resize. Our operations team may also reach out to you to do so if our monitoring indicates that you’re about to run out of disk space. We plan to release self-serve resizing sometime down the road, as well.

As usual, let us know if you have any questions or feedback!

Read more

Aptible 2-Factor Authentication Now Supports FIDO U2F Security Keys

Thomas Orozco on March 21, 2017

We’re proud to announce that Aptible now supports hardware Security Keys as a second authentication factor! Specifically, Aptible now supports devices compliant with the FIDO Universal Second Factor (U2F) standard.

U2F Security Keys can be securely used across multiple accounts, and are rapidly gaining adoption: Google, GitHub, Dropbox, and many others now support U2F Security Keys.

Convenience and Security: Pick Both!

There are two main reasons to use a Security Key with your Aptible account: increased convenience and better security.

With a Security Key, you just touch the key to authenticate. No more fumbling for your phone.

But Security Keys also help better protect against phishing, a common and sometimes dangerous attack.

Security Keys protect your Aptible account against phishing

Token-based 2FA does a good job at protecting your account against attackers who only learn your password, but it remains vulnerable to phishing: an attacker can trick you into providing your token and try to use it before it expires. Service providers can’t reliably tell the difference between the attacker’s request and a legitimate one coming from you.

Security keys offer much stronger protection against phishing. Here’s how:

When you try to log in using a Security Key, Aptible provides a unique challenge, and your Security Key responds with a signed authentication response unique to that challenge. But unlike a 6-digit 2FA token, the Security Key’s response includes useful metadata Aptible can leverage to protect your account:

  • The origin your browser was pointed at when you signed this response. If you’re being phished, this will be the attacker’s website, whereas if you’re actually logging in to Aptible, it’ll be Aptible’s own origin.
  • A device-specific response counter that your Security Key is responsible for increasing monotonically when it generates an authentication response. If your Security Key was cloned by an advanced attacker with physical access, inconsistent counter values may reveal their misdeed.

Once your Security Key has sent the response, Aptible verifies it as follows:

  • The response must be signed by a Security Key associated with your account. Naturally, the signature must be valid.
  • The response must have been generated for Aptible’s origin. This protects you against phishing.
  • The response must be for a challenge Aptible issued recently, and that challenge must not have been used before. This protects you against replay attacks.
  • The response must include a counter that’s greater than any counter value we’ve seen before for this Security Key. This protects you — to some extent — against device cloning.
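To make the flow concrete, here is a simplified sketch of those four checks. The field names, the example.com origin, and the stubbed-out signature check are all illustrative; real U2F verification parses the raw response format and verifies an ECDSA signature, and this is not Aptible’s actual implementation:

```python
import time

class VerificationError(Exception):
    pass

def verify_response(response, account, issued_challenges, now=None):
    """response: dict with signature_valid, origin, challenge, key_handle,
    counter. account: dict with origin and keys (key_handle -> last counter).
    issued_challenges: dict of challenge -> issue timestamp."""
    now = time.time() if now is None else now

    # 1. Must be signed by a Security Key registered on the account.
    if response["key_handle"] not in account["keys"] or not response["signature_valid"]:
        raise VerificationError("bad signature")

    # 2. Origin must match the real site, not a phisher's lookalike.
    if response["origin"] != account["origin"]:
        raise VerificationError("origin mismatch (phishing?)")

    # 3. Challenge must be recent and single-use (anti-replay).
    issued_at = issued_challenges.pop(response["challenge"], None)
    if issued_at is None or now - issued_at > 300:
        raise VerificationError("unknown, reused, or expired challenge")

    # 4. Counter must increase monotonically (anti-cloning).
    if response["counter"] <= account["keys"][response["key_handle"]]:
        raise VerificationError("counter did not increase (cloned key?)")
    account["keys"][response["key_handle"]] = response["counter"]
    return True
```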

How do I use U2F with my Aptible account?

First, you’ll need to purchase a FIDO U2F-compliant device from a trusted vendor. The Aptible team uses YubiKeys, but a number of other vendors exist.

You’ll also need to make sure your browser supports U2F Security Keys. Currently, only Chrome and Opera offer such support, but other browser vendors are working on adding support (U2F support is on the Firefox roadmap for Q2 2017).

Once you’re done, navigate to your account settings, make sure 2FA is enabled, click on “Add a new Security Key”, and follow the instructions:

How to add new U2F Security Key.

That’s it! Next time you attempt to log in, you’ll be prompted to touch your Security Key as an alternative to entering a 2FA token.

Can I stop using token-based 2-Factor Authentication altogether?

No: U2F Security Keys can be added as additional second factors on your account, but you can’t disable token-based authentication.

The reason for this is that U2F Security Keys aren’t supported everywhere yet, so you may occasionally need to fall back to a token to log in: not all browsers support them (only Chrome and Opera do at this time), and neither does the Aptible CLI. This may evolve over time, so it’s conceivable that we’ll eventually let you use U2F only.

As usual, let us know if you have any questions or feedback!

Read more

Managed HTTPS Endpoints now support Internal Endpoints

Thomas Orozco on March 14, 2017

We’re happy to announce that Managed HTTPS is now available on Enclave for Internal Endpoints (in addition to External Endpoints, which were supported from day 1).

This means your internal-facing apps can now enjoy the benefits of Managed HTTPS Endpoints:

  • Automated certificate provisioning
  • Automated certificate renewals
  • Monitoring to detect problems with renewals and alert you

Getting Started

When you create a new Managed HTTPS Endpoint, the Aptible Dashboard will indicate which CNAME records you need to create via your DNS provider in order for Enclave to provision and renew certificates on your behalf (you’ll see one record for internal Endpoints, and two for external Endpoints — read on to understand why):

Configure Existing Endpoint for Managed HTTPS

For existing Managed HTTPS Endpoints, the Dashboard lets you review your current DNS configuration, so you can easily review whether everything is configured properly:

Configure New Endpoint for Managed HTTPS

If your Endpoint DNS records are misconfigured and Enclave is unable to automatically renew the certificate, Aptible support staff will contact you.

How it works

Fundamentally, Managed HTTPS relies on Let’s Encrypt to provision and renew certificates for your apps. Let’s Encrypt offers multiple ways to verify control of a domain, but they all boil down to the same process:

  • We notify Let’s Encrypt that we’d like to provision a new certificate for your domain
  • Let’s Encrypt provides us with a set of challenges to try and prove we control the domain
  • We fulfill one of the challenges, and get the certificate

Let’s Encrypt supports a total of three challenge types, and we now use two of them:

HTTP Challenges

For HTTP challenges, Let’s Encrypt provides us with an arbitrary token and a URL under the domain we’re attempting to verify, and expects us to serve the token when it makes a request to that URL.

The token is a random string of data, and the URL looks like this:

http://$YOUR_DOMAIN/.well-known/acme-challenge/$TOKEN
We’ve supported HTTP challenges since day one: when Let’s Encrypt makes its request to your app hosted on Enclave (i.e. assuming you created a CNAME from $YOUR_DOMAIN to your Enclave Endpoint), Enclave intercepts the request, serves the token, and thus validates control of the domain.
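Conceptually, that interception looks something like the following sketch (not Enclave’s actual code): a proxy keeps a table of pending tokens, answers Let’s Encrypt’s request to the challenge path itself, and passes everything else through to your app. The token and key-authorization values are hypothetical:

```python
# Well-known path prefix used by HTTP-based ACME validation.
ACME_PREFIX = "/.well-known/acme-challenge/"

pending_tokens = {}  # token -> proof string to serve back

def handle_request(path):
    """Return a response body for a challenge path, or None to proxy
    the request through to the app as usual."""
    if path.startswith(ACME_PREFIX):
        token = path[len(ACME_PREFIX):]
        return pending_tokens.get(token)  # serve the proof, if pending
    return None  # not a challenge: forward to the app

pending_tokens["abc123"] = "abc123.example-key-authorization"
print(handle_request("/.well-known/acme-challenge/abc123"))
```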

Obviously, this only works if Let’s Encrypt can connect to your domain from the Internet. This becomes a problem for Internal Endpoints or Endpoints with IP Filtering, since Let’s Encrypt can’t connect to them!

That’s why we’ve now added support for DNS Challenges as well.

DNS Challenges

DNS challenges are simpler than HTTP challenges. Here again, Let’s Encrypt provides an arbitrary token, but this time we’re expected to serve that token as a TXT record in DNS under the following name:

_acme-challenge.$YOUR_DOMAIN
Now, there’s one little hiccup here: we don’t control _acme-challenge.$YOUR_DOMAIN: you do! To make this work, you need to tell Let’s Encrypt that you trust us to provision and renew certificates on your behalf.

To do so, you simply need to create a CNAME via your DNS provider from that record Let’s Encrypt is interested in to another record controlled by Enclave. To make this easy for you, the Dashboard will instruct you to do so, and give you the exact record to create.
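As a sketch of that delegation, the record name is fixed by the challenge scheme, while the CNAME target below is hypothetical (the Dashboard gives you the real one to use):

```python
def challenge_record(domain):
    """The TXT record name Let's Encrypt queries for a DNS challenge."""
    return "_acme-challenge." + domain

def delegation_cname(domain, enclave_target):
    """The CNAME you create so the challenge record resolves to a
    record Enclave controls. enclave_target is illustrative only."""
    return (challenge_record(domain), "CNAME", enclave_target)

print(delegation_cname("www.example.com", "acme.example-enclave-target.net"))
```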

And of course, the upside of using a DNS challenge is that unlike a HTTP challenge, it works for Internal Endpoints and Endpoints with IP Filtering!

Note that DNS challenges work for both External and Internal Endpoints, which is why the Dashboard will always prompt you to create the corresponding record (whereas it’ll only prompt you to create the record required for HTTP verification for External Endpoints).

As usual, let us know if you have any questions or feedback!

Read more

IP Filtering Made Easy With Enclave Endpoints

Thomas Orozco on February 22, 2017

We’re proud to announce that as of this week, Enclave Endpoints support IP filtering. Using this new feature, you can restrict access to apps hosted on Enclave to a set of whitelisted IP addresses or networks and block all other incoming traffic.

Use Cases

While IP filtering is no substitute for strong authentication, this feature is useful to:

  • Further lock down access to sensitive apps and interfaces, such as admin dashboards or third party apps you’re hosting on Aptible for internal use only (e.g. Kibana, Sentry).

  • Restrict access to your apps and APIs to a set of trusted customers or data partners.

And if you’re hosting development apps on Aptible, IP filtering can also help you make sure no one outside your company can view your latest and greatest before you’re ready to release it to the world.

Note that IP filtering only applies to Endpoints (i.e. traffic directed to your app), not to aptible ssh, aptible logs, and other backend access functionality provided by the Aptible CLI (this access is secured by strong mutual authentication, as we covered in our Q1 2017 webinar).

Getting Started with IP Filtering

IP filtering is configured via the Aptible Dashboard on a per-Endpoint basis.

2017-02-22 Blog Post IP Filtering

You can enable it when creating a new Endpoint, or after the fact for an existing Endpoint by editing it.

Enjoy! As usual, let us know if you have any feedback or questions!

Read more

Database Encryption now defaults to AES-256

Thomas Orozco on February 14, 2017

Until recently, Aptible has used AES-192 for disk encryption, but as of last week, Aptible databases (and their backups) now default to AES-256 instead.

While there is no security concern whatsoever regarding AES-192 as an encryption standard, it has become increasingly common for Aptible customers to have their own partners request 256-bit encryption everywhere from a compliance perspective, which is why we’re making this change.

If you’re curious to know which encryption algorithm is used for a given database, you can find that information on the Dashboard page for the database in question (along with the disk size and database credentials).

Read more

Logentries and Sumo Logic setup now a breeze

Thomas Orozco on February 14, 2017

We’re happy to announce that Aptible Log Drains now provide more flexible configuration, making it much easier to forward your Aptible logs to two logging providers that are becoming increasingly popular with Aptible customers, in large part because they sign BAAs: Logentries and Sumo Logic.

For Logentries, you can now use token-based logging. This makes configuration much, much easier than before: create a new Token TCP Log in Logentries, then copy the Logging Token you’re provided with into Aptible, and you’re done!

Log Drain to Logentries

For Sumo Logic, we now support full HTTPS URLs. Here again, setup is greatly simplified: all you need to do is create a new Hosted HTTP Collector in Sumo Logic, then copy the URL you’re provided with into Aptible.

2017-02-14 Log Drain to Sumo Logic

Enjoy! As usual, if you have any questions or feedback, feel free to contact us.

Read more

ALB Endpoints now support SSL_PROTOCOLS_OVERRIDE
Thomas Orozco on February 14, 2017

As of last week, ALB Endpoints respect the SSL_PROTOCOLS_OVERRIDE app configuration variable, which was — until now — only applicable to ELB Endpoints.

In a nutshell, setting SSL_PROTOCOLS_OVERRIDE lets you customize the protocols exposed by your Endpoint for encrypted traffic.

For example, if you have a regulatory requirement to only expose TLSv1.2, you can do so using the following command (via the Aptible CLI):

aptible config:set FORCE_SSL=true "SSL_PROTOCOLS_OVERRIDE=TLSv1.2" --app my-app

Note that by default (i.e. if you don’t set SSL_PROTOCOLS_OVERRIDE), Aptible Endpoints accept connections over TLSv1, TLSv1.1, and TLSv1.2. This configuration will evolve over time as best practices in the industry continue to evolve.

You can learn more about the SSL_PROTOCOLS_OVERRIDE configuration variable (and other variables available) on our support website: How can I modify the way my app handles SSL?

Read more

Redis + SSL

Thomas Orozco on January 20, 2017

We’re proud to announce that as of today, new Redis databases provisioned on Aptible Enclave support SSL/TLS in addition to the regular Redis protocol. Because both AWS and Aptible require that you encrypt HIPAA Protected Health Information in transit, even within a private, dedicated Enclave stack, starting today you can now use Redis to store and process PHI on Enclave.

How does it work?

Redis doesn’t support SSL natively, but the solution the Redis community settled on is to run an SSL termination layer in front of Redis. On Enclave, we use stunnel, an industry standard. This means a good number of Redis clients just work and support it out of the box, including:

  • redis-rb (Ruby)
  • redis-py (Python)
  • Jedis (Java)
  • predis (PHP)
  • node_redis (Node.js)
  • StackExchange.Redis (.NET)

How do I use it?

For new Redis databases, select your Redis database in the Aptible Dashboard, and click “Reveal” under “Credentials” at the top. Aptible will provide two URLs:

  • A regular Redis URL using the redis:// protocol
  • An SSL Redis URL using the rediss:// protocol (note the two “s”!)

Most Redis clients will automatically recognize a rediss:// URL and connect over SSL, but review your client’s documentation if you run into any trouble.
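If your client doesn’t, the scheme check itself is straightforward. Here is a minimal sketch of the decision a client makes (the URLs are illustrative):

```python
from urllib.parse import urlparse

def needs_tls(url):
    """Decide whether to wrap the connection in SSL/TLS based on the
    URL scheme, mirroring what clients like redis-py and redis-rb do."""
    scheme = urlparse(url).scheme
    if scheme == "rediss":
        return True   # connect through the SSL termination layer
    if scheme == "redis":
        return False  # plain Redis protocol
    raise ValueError("not a Redis URL: " + url)

assert needs_tls("rediss://user:secret@db.example.com:6379")
assert not needs_tls("redis://user:secret@db.example.com:6379")
```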

What about existing Redis databases?

For existing Redis databases, Aptible can enable SSL/TLS following a short downtime (about 30 seconds). If you’d like to do that, or have any feedback or questions, just let us know!

Read more

RabbitMQ Management Interface

Thomas Orozco on January 20, 2017

We’re happy to announce that the RabbitMQ management interface is now available for RabbitMQ databases deployed on Aptible Enclave. Until now, only the AMQP port was exposed, so you could push messages to queues, but managing queues was more difficult.

There’s a lot the RabbitMQ management interface can be used for, but for the most part it’s useful to review and manipulate the queues that exist in your RabbitMQ container.

How do I access it?

The RabbitMQ management interface is exposed by default on new RabbitMQ databases provisioned on Enclave. In the Aptible Dashboard, select your database, then select the “Credentials” link at the top. A modal will reveal all connection strings for that database, named by function:

For existing RabbitMQ databases, we can enable the management interface following a short downtime (about 30 seconds). If you’d like to do that, or have any feedback or questions, just let us know!

Read more

Aptible CLI for Windows

Thomas Orozco on January 9, 2017

We’re proud to announce that the Aptible CLI is now supported on Windows!

More than a CLI: a Toolbelt!

We distribute the Aptible CLI as a package called the “Aptible Toolbelt.” The Toolbelt is available for several platforms, including macOS, Ubuntu, Debian, and CentOS. On Windows, it is available as an MSI installer.

On all platforms, the toolbelt includes:

  • The Aptible CLI itself, in the form of the aptible-cli Ruby gem; and

  • System dependencies the CLI needs to function properly. This includes Ruby (which the CLI is written in) and dependencies like OpenSSH (which the CLI uses for functionality like database tunnels).

The toolbelt integrates with your system to ensure that the aptible command lands on your PATH, so that when you type aptible in your command prompt, things just work. On Windows, this is done by modifying your PATH, and on macOS and Linux this is done by placing a symlink in /usr/local/bin.

Supported Platforms

The Windows package targets Windows 8.1 and up on the PC side, and Windows Server 2012 R2 and up on the server side. In other words, it targets Windows NT 6.3 and up, which is why you’ll see that number in the installer name.

Download and Installation

To get the Aptible CLI on Windows, download it directly from the Aptible website, then run the installer.

You might receive a SmartScreen prompt indicating that the publisher (that’s us!) isn’t known. Because this is the first time we’ve shipped software for Windows, we don’t have a reputation with Microsoft yet. The installer is properly signed, so to proceed, click through “More Info” and verify that the reported publisher is Aptible, Inc.

Enjoy! Since this is still early days for the Windows version of the CLI, make sure to let us know if you hit any snags!

Read more

Cancel Running Deployments

Thomas Orozco on December 13, 2016

We’re happy to announce that as of this week, you can now cancel running deployments on Aptible Enclave!

When is cancelling a deployment useful?

1. Your app is failing the HTTP health check, and you know why

As described in this support article, Enclave performs an automatic health check on any app service with an endpoint attached to it. During this health check, the platform makes an HTTP request to the port exposed by your Docker container, and waits for an HTTP response (though not necessarily a successful HTTP status code).

When your app is failing the HTTP health check, Enclave waits for 10 minutes before giving up and cancelling the deployment.

But, if you know the health check is never going to succeed, that’s wasted time! In this case, just cancel the deployment, and the health check will stop immediately.

2. You need to stop your pre-release commands immediately

Running database migrations in a pre-release command is convenient, but it can sometimes backfire if you end up running a migration that’s unexpectedly expensive and impacts your live app.

In this case, you often want to just stop the pre-release command dead in its tracks. Cancelling the deployment will do that.

However, do note that Enclave cannot rollback whatever your pre-release command did before you cancelled it, so use this capability wisely!

How does it work?

When deploying an app on Enclave, you’ll be presented with an informational banner explaining how you might cancel that deployment if needed:

$ git push aptible master
Counting objects: 15, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (10/10), done.
Writing objects: 100% (15/15), 1.20 KiB | 0 bytes/s, done.
Total 15 (delta 5), reused 0 (delta 0)
remote: (8ミ | INFO: Authorizing...
remote: (8ミ | INFO: Initiating deploy...
remote: (8ミ | INFO: Deploying 5e173381...
remote: (8ミ | INFO: Pressing CTRL + C now will NOT interrupt this deploy
remote: (8ミ | INFO: (it will continue in the background)
remote: (8ミ | INFO: However, you can cancel this deploy using the Aptible CLI with:
remote: (8ミ | INFO:     aptible operation:cancel 15489
remote: (8ミ | INFO: (you might need to update your Aptible CLI)

At this point, running aptible operation:cancel .... in another terminal window will advise Enclave that you’d like to cancel this deployment.

Note that you’ll need version 0.8.0 of the Aptible CLI or greater to use this command. If you haven’t installed the CLI, or have an older version, then download the latest here. You can check your version from the CLI using aptible version.

Is it safe to cancel a deployment?

Yes! Under the hood, cancelling an Enclave operation initiates a rollback at the next safe point in your deployment. This ensures your app isn’t left in an inconsistent state when you cancel.

There are two considerations to keep in mind:

  1. You cannot cancel a deployment between safe points. Notably, this means you can’t cancel the deployment during the Docker build step, which is still one big step with no safe points. (We would like to change this in the future.)

  2. Cancelling your deployment may not take effect immediately, or at all. For example, if your deployment is already being rolled back, asking to cancel won’t do anything.


Read more

Database Logs

Thomas Orozco on November 30, 2016

We’re proud to announce that as of this week, you can now route database logs to a Log Drain, just like you’d do with app logs! This option is available when you create a new Log Drain; you can opt to send either app or database logs, or both:

If you already have a Log Drain set up for apps (you should!), you can opt to either recreate it to capture both app and database logs, or simply create a new one that only captures database logs.

Why Capture Database Logs?

Aptible customers have asked for database logs for two main use cases: compliance and operations.

From a compliance perspective, you can use database logs to facilitate audit logging, for example by logging sensitive queries made to your database (or all queries for that matter, if that’s a realistic option for you).

From an operations standpoint, you can use them to identify new performance problems (e.g. by logging slow queries made to your database), or to better understand problems you’ve already identified (e.g. by correlating database log entries with issues you’ve experienced).

What Does My Database Log?

Your database may not log what you care about out of the box. For example, Postgres is pretty quiet by default. You can usually modify logging parameters by connecting to your database and issuing re-configuration statements.

For example, to enable slow query logging in Postgres >= 9.4, you’d create a database tunnel and run the following commands:

-- Log any statement that runs for longer than 200 ms
ALTER SYSTEM SET log_min_duration_statement = 200;
ALTER SYSTEM SET log_min_messages = 'INFO';
-- Apply the new settings without restarting the database
SELECT pg_reload_conf();

Refer to your database’s documentation for more information, or contact support and we’ll be happy to help.

How Do I Differentiate Database Logs From App Logs?

For Elasticsearch and HTTPS Log Drains, log entries sent to your Log Drain now include a “layer” field that indicates whether the log came from an app or a database.

Here’s an example comparing app and database logs using Kibana. Most of the logs here came from the app (respectively from a web and a background service), but we also have a slow query logged by Postgres:

For Syslog Log Drains, the database handle and type are included as the source program (that’s the service field you can see reported in Kibana above).
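If you post-process drained logs yourself, the “layer” field makes it easy to split database logs from app logs. Here’s a minimal sketch; the “layer” and service names come from the post above, but the exact shape of each JSON entry (the “service” and “log” keys, and the sample values) is assumed for illustration:

```python
import json

# Hypothetical Log Drain payloads; real entries carry more fields, but
# the "layer" field ("app" or "database") is what we filter on here.
entries = [
    json.dumps({"layer": "app", "service": "web", "log": "GET /health 200"}),
    json.dumps({"layer": "database", "service": "pg-prod", "log": "duration: 412 ms"}),
    json.dumps({"layer": "app", "service": "background", "log": "job done"}),
]

def database_logs(raw_entries):
    """Keep only the entries whose 'layer' marks them as database logs."""
    return [e for e in map(json.loads, raw_entries) if e.get("layer") == "database"]

for entry in database_logs(entries):
    print(entry["service"], "-", entry["log"])  # → pg-prod - duration: 412 ms
```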

CLI Support and Aptible Legacy Infrastructure

At this time, database logs are not available via the CLI, and are not available on Aptible legacy infrastructure. We’re working on adding support in the CLI, so this will be available very soon!

Update: aptible logs now supports databases! Download the latest CLI and use aptible logs --database HANDLE.

If you are still running on Aptible legacy infrastructure (as indicated in the Aptible Dashboard when you provision a Log Drain), we encourage you to contact Aptible Support to coordinate a migration. This will give you access to database logs, as well as a growing number of other new features (such as ALB Endpoints, support for deploying directly from a Docker image and on-demand database restores).


Read more

Managed HTTPS Endpoints

Thomas Orozco on August 4, 2016

Earlier this week, we released Managed HTTPS Endpoints. These endpoints have a few key benefits:

  1. Your SSL/TLS certificate is free (!)
  2. Aptible handles generating the initial certificate
  3. Aptible handles renewing the certificate

All you need to get started with a Managed HTTPS Endpoint is a domain name! No more ops headaches trying to generate CSRs, keep private keys and certs straight, or deal with inconveniently-timed renewals.

Under the hood, Managed HTTPS uses Let’s Encrypt to automatically provision certificates for you. Aptible customers requested this feature, and we are proud to contribute to the global movement towards 100% HTTPS.

How it works

'Create a New Endpoint' view

Setting up a Managed HTTPS Endpoint is a 3-step process:

  1. Add an Endpoint to your app, and choose Managed HTTPS as the endpoint type. You will need to provide the domain name you intend to use with your app; Aptible will use that name to provision a certificate via Let’s Encrypt.

  2. When you create the endpoint, Aptible will provide you with an endpoint address. Use your DNS provider to create a CNAME from your domain to this endpoint address.

  3. Back in the Aptible Dashboard, confirm that you created the CNAME. Aptible will automatically provision your certificate, and you’re in business!

Note that between steps 2 and 3, your app won’t be available because you need to set up the CNAME before Aptible can provision the certificate. This isn’t ideal if you are migrating an app from somewhere else. Fortunately, you can just provide a transitional certificate that Aptible will use until your new Let’s Encrypt certificate is available. If you need to add a new certificate for this, just select the “Certificates” tab under your main environment view.

Once your endpoint is up and running, we recommend you review our instructions for customizing SSL, in order to redirect end-users to HTTPS and disable the use of weaker cipher suites, which will earn the much-coveted A+ grade on Qualys’ SSL Test!

Qualys SSL Test results

Why use Managed HTTPS?

Above all else, Managed HTTPS brings you simplicity and peace of mind:

  • Setup is greatly simplified: all you need is a domain name. No need to generate your own certificate signing request, deal with a CA, or upload your certificate and key to Aptible.
  • Maintenance is essentially eliminated: you won’t need to remember to renew a certificate ever again.
  • Oh, and did we mention it’s free?

Enjoy! As usual, let us know if you have any feedback.

Read more

On-Demand Database Backups

Thomas Orozco on June 21, 2016

Contingency planning and disaster recovery are critical parts of any developer’s HIPAA compliance program. The Aptible platform automates many aspects of secure data management, including long-term retention, encryption at rest, taking automatic daily backups of your databases, and distributing those backups across geographically separate regions. These benefits require no setup and no maintenance on your part: Aptible simply takes care of them.

That said, recovering a database from a backup has required a support request. While we take pride in providing timely and effective support, it’s nice to be able to do things at your own pace, without the need to wait on someone else.

That’s why we’re proud to announce that for all v2 stacks, you can view and restore backups directly in the Aptible dashboard and CLI! (For customers on v1 stacks, you can view, but not self-restore.)

How does it work?

In the dashboard, locate any database, then select the “Backups” tab. Find the backup you would like to restore from, and select the “Restore” action. From the CLI, first update to the newest version (gem update aptible-cli), then run aptible backup:list $HANDLE to view backups for a database, or aptible backup:restore $ID to restore a backup.

Restoring from a backup creates a new database - it never replaces or overwrites your existing database. You can use this feature to test your disaster recovery plans, test or review new database migrations before you run them against production, roll back to a prior backup, or simply review old data. When you are done using the restored database, you can deprovision it or promote it to be used by your apps.

But wait, there’s more!

Introducing On-Demand Backups

In addition to displaying automatic daily backups, you can now trigger a new backup on demand from the dashboard or CLI. In the dashboard, simply select the large green “Create New Backup” button. From the CLI, make sure you are running the latest version (gem update aptible-cli) then use aptible db:backup $HANDLE to trigger a new backup.

Now, before you do something scary with your database (like a big migration), you have an extra safety net. On-demand backups are easier than filing a support request and safer than using a tunnel to dump to a local environment, because you will never have to remember to purge data from your machine.

We hope you find both of these features useful! That’s it for today. As usual, if you have questions or feedback about this feature, just get in touch.

Read more

2-Factor Authentication!

Thomas Orozco on May 19, 2016

We’re happy to announce that two-factor authentication (2FA) is available for all users and account types in the Aptible dashboard and CLI! Multifactor authentication is a best practice that adds an additional layer of security on top of the normal username and password you use to verify your identity. You can enable it in your Aptible user settings.

How does it work?

Aptible 2-factor authentication implements the Time-based One-time Password (TOTP) algorithm specified in RFC 6238. We currently support the virtual token form factor - Google Authenticator is an excellent, free app you can use. We do not currently support SMS or hardware tokens.
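For the curious, the TOTP algorithm is simple enough to sketch in a few lines of standard-library Python. This is an illustrative implementation of RFC 6238 (with RFC 4226’s dynamic truncation), not Aptible’s server-side code; the final line checks it against a published RFC test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code (HMAC-SHA1, as used by Google Authenticator)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // period                 # number of 30-second steps since the epoch
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation: low nibble picks an offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 seconds
print(totp(b"12345678901234567890", timestamp=59))  # → 287082
```

Because both sides derive the code from the shared secret and the current 30-second window, the server can verify a code without any round-trip to your device.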

When enabled, 2FA protects access to your Aptible account via the dashboard, CLI, and API. 2FA does not restrict Git pushes - these are still authenticated by your SSH key. In some cases, you may not push code with your own user credentials, for example if you deploy with a CI service such as Travis or Circle and perform all deploys via a robot user. If so, we encourage you to remove SSH keys from your Aptible user account.

What if I’m locked out?

When you enable 2FA, you get emergency backup codes, in case your device is lost, stolen, or temporarily unavailable. Keep these in a safe place. If you don’t have your device and are unable to access a backup code, please contact us.

As usual, we’d love to hear your feedback! If you have any questions or comments, please let us know!

Read more

Aptible Logs: v2

Thomas Orozco on May 16, 2016

If you are on an Aptible “v2” stack, which automatically scales your app containers across AWS Availability Zones, you have probably noticed that the aptible logs CLI command has been deprecated. As an alternative, you’ve been able to use Log Drains to collect app logs.

A Log Drain’s ability to persist logs (not just stream them) makes it a robust option; however, each drain requires some setup. aptible logs is built in to the Aptible CLI, requires no additional setup, and makes it easy to see what is happening in your app right now.

We’re happy to announce that aptible logs is available on Aptible v2 stacks!

How Can I Use It?

If you already have the Aptible CLI installed, then you don’t need to do anything: using aptible logs from the CLI works on all stacks as of today. There is a deprecation notice for aptible logs in older versions of the CLI - you can make it go away by updating the CLI.

If you don’t have the CLI installed, follow the installation instructions first.

Technical Details

aptible logs on v2 stacks is implemented as a Log Drain that doesn’t drain: instead, it buffers logs received from log forwarders and allows clients to stream the buffer.

As a result, the first time you use aptible logs on a v2 stack, we’ll take a few minutes to automatically provision a special new “tail” Log Drain, if you don’t already have one. Once you have a tail Log Drain, subsequent aptible logs calls are fast.
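The buffering idea can be illustrated with a toy sketch. This is not Aptible’s actual implementation; the TailBuffer class, its method names, and the size limit are all invented for illustration. The key property is that a bounded buffer keeps only recent lines, so a client can always stream a live tail:

```python
from collections import deque

class TailBuffer:
    """Toy sketch of a 'drain that doesn't drain': hold only the most
    recent log lines in memory so a client can stream a live tail."""

    def __init__(self, max_lines=1000):
        self._buf = deque(maxlen=max_lines)   # oldest lines fall off automatically

    def receive(self, line):
        self._buf.append(line)                # called once per forwarded log line

    def tail(self, n=10):
        return list(self._buf)[-n:]           # what a tailing client would stream

buf = TailBuffer(max_lines=3)
for i in range(5):
    buf.receive(f"line {i}")
print(buf.tail())  # → ['line 2', 'line 3', 'line 4']
```

A bounded deque keeps memory usage constant no matter how chatty the app is, which is why this design needs no external storage.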

If you have any questions or feedback about this new feature, please let us know!

Read more

Introducing Container Metrics

Thomas Orozco on April 7, 2016

Aptible customers have been asking how they could view performance metrics such as RAM and CPU usage for their containers. We’re happy to announce that the wait is coming to an end!

Last week, we started rolling out the first iteration of our new Container Metrics feature. You can access them via the “View Metrics” buttons on an App’s service list, or the “Metrics” tab for a Database. As an Aptible user, this lets you visualize performance metrics for your app and database containers directly from your dashboard. In turn, you can use this information to identify performance bottlenecks and make informed scaling decisions.

Metrics are available for apps and databases. In both cases, you can visualize:

  • Memory usage, including a breakdown in terms of RSS vs. caches / buffers. We’ll soon be including your memory limits in the graph as well, so you can compare your actual usage to your memory allocation.

  • Load average, which reflects the overall activity of your container in terms of CPU and I/O.

Both of these metrics are “bog-standard” Linux metrics, meaning there is a ton of information about them on the Internet. That being said, you can also hover over the little “?” icon in the UI for a quick reminder:

image alt text
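Since load average is the same number Linux itself reports, you can also read it directly inside any Linux host or container. A quick sketch (the 20-task threshold simply mirrors the worked example below; it isn’t a hard limit):

```python
# /proc/loadavg starts with the standard 1-, 5-, and 15-minute load averages.
with open("/proc/loadavg") as f:
    one, five, fifteen = (float(x) for x in f.read().split()[:3])

print(f"load averages: {one} (1m) {five} (5m) {fifteen} (15m)")
if one > 20:
    print("more than ~20 tasks waiting on CPU or I/O - expect delays")
```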

Using Container Metrics to Debug Performance

Let’s work through an example of how you can use these charts to understand performance issues and make scaling decisions. In this example, we’re running pgbench against a Postgres database (initially provisioned on a 1GB container), and we’ll explore easy ways to get better performance out of it.

First, take a look at the graphs:

image alt text

  1. It looks like database traffic surged at 6:24 PM UTC, lasting until 6:44 PM UTC. That’s our pgbench run.

  2. Our container quickly consumed 100% of its 1 GB of available memory. Most of the memory was allocated for kernel page caches, which Linux uses to minimize expensive I/O requests.

  3. With a load average consistently over 20 (i.e. > 20 tasks blocked waiting on CPU or I/O), our database operations are going to be very delayed. If our app was experiencing slowdowns around the same time, our database would be a likely suspect.

Armed with that knowledge, what can we do? A high load average can be caused by a bottleneck in terms of I/O or CPU, or both. Detailed CPU and I/O metrics are coming soon. In the meantime, upgrading to a bigger container might help with both our problems:

  • Our CPU allocation would be bigger, which essentially means we’d run CPU tasks faster.

  • Our memory allocation would be bigger, which means more memory for caches and buffers, which means faster disk reads (disk writes on the other hand would probably not be faster, since it’s important that they actually hit the disk for durability, rather than sit in a buffer).

Using Container Metrics to Evaluate Scaling

After upgrading our container, let’s run the benchmark again:

image alt text

Clearly, the kernel is making good use of that extra memory we allocated for the container!

This time around, the benchmark completed faster, finishing in 12 minutes instead of 20, and with a load average that hung around 10, not 20. If we had an app connecting to our database and running actual queries, we’d be experiencing shorter delays when hitting the database.

Now, there’s still room for improvement. In a real-world scenario, you’d have several options to explore next:

  • Throw even more resources at the problem, e.g., an 8GB container, or bigger. Perhaps more unexpectedly, using a larger database volume would probably help as well: Aptible stores data on AWS EBS volumes, and larger EBS volumes are allocated more I/O bandwidth.

  • Optimize the queries you’re making against your database. Using an APM tool like New Relic can help you find which ones are draining your performance the most.

  • Investigate database-level parameter tuning (e.g. work_mem on Postgres).
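Parameter tuning follows the same ALTER SYSTEM pattern used for the logging settings in the Database Logs post. For example, on Postgres you might raise work_mem so sorts and hash joins spill to disk less often (the 64MB value here is purely illustrative, not a recommendation; measure before and after):

ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();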
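To make the volume-size point concrete: assuming 2016-era AWS gp2 volumes, baseline performance scales at roughly 3 IOPS per GiB, floored at 100 IOPS and capped at 10,000 (these figures are an assumption from AWS’s published gp2 behavior at the time; check current AWS documentation for exact numbers):

```python
def gp2_baseline_iops(size_gib):
    """Rough baseline IOPS for an AWS gp2 EBS volume (assumed 2016-era
    behavior): 3 IOPS per GiB, floored at 100, capped at 10,000."""
    return min(max(3 * size_gib, 100), 10_000)

print(gp2_baseline_iops(100))   # → 300
print(gp2_baseline_iops(1000))  # → 3000
```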

I hope this example gives you an idea of how you can use Container Metrics to keep tabs on your application and database performance, and make informed scaling decisions. If you have any feedback or questions regarding this new feature, please do get in touch with Aptible support!

Read more