We’re proud to announce our latest open-source project: Supercronic. Supercronic is a cron implementation designed with containers in mind.
Why a new cron for containers?
We’ve helped hundreds of Enclave customers roll out scheduled tasks in containerized environments. Along the way, we identified a number of recurring issues using traditional cron implementations such as Vixie cron or dcron in containers:
They purge the environment before running jobs. As a result, jobs fail, because all their configuration was provided in environment variables.
They redirect all output to log files, email or /dev/null. As a result, job logs are lost, because the user expected those logs to be routed to stdout / stderr.
They don’t log anything when jobs fail (or start). As a result, missing jobs and failures go completely unnoticed.
To be fair, there are very good architectural and security reasons traditional cron implementations behave the way they do. The only problem is: they’re not applicable to containerized environments.
Now, all these problems can be worked around, and historically, that is what we’ve suggested:
You can persist environment variables to a file before starting cron, and read them back when running jobs.
You can run tail in the background to capture logs from files and route them to stdout.
You can wrap jobs with some form of logging to capture exit codes.
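As an illustration, the environment-persistence workaround often looks something like this (the file path and APP_ variable prefix are illustrative, not a prescribed convention):

```shell
# Suppose the platform provided configuration via the environment:
export APP_DATABASE_URL="postgresql://db.example.com/app"

# At container startup, before launching cron, persist selected environment
# variables to a file. (Values containing spaces or quotes would need proper
# escaping; this is a minimal sketch.)
printenv | grep '^APP_' | sed 's/^/export /' > /tmp/container-env.sh

# Each crontab entry then re-reads that file before running the real job:
# */5 * * * * . /tmp/container-env.sh && /usr/local/bin/my-job
```

Jobs started by cron see a purged environment, but sourcing the saved file restores the configuration they need.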
But wouldn’t it be better if workarounds simply weren’t necessary? We certainly think so!
Supercronic is a cron implementation designed for the container age.
Unlike traditional cron implementations, it leaves your environment variables alone, and logs everything to stdout / stderr. It’ll also warn you when your jobs fail or take too long to run.
Perhaps just as importantly, Supercronic is designed with compatibility in mind. If you’re currently using “cron + workarounds” in a container, Supercronic should be a drop-in replacement:
$ cat ./my-crontab
*/5 * * * * * * echo "hello from Supercronic"

$ ./supercronic ./my-crontab
INFO[2017-07-10T19:40:44+02:00] read crontab: ./my-crontab
INFO[2017-07-10T19:40:50+02:00] starting iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:50+02:00] hello from Supercronic channel=stdout iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:50+02:00] job succeeded iteration=0 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] starting iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] hello from Supercronic channel=stdout iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
INFO[2017-07-10T19:40:55+02:00] job succeeded iteration=1 job.command="echo "hello from Supercronic"" job.position=0 job.schedule="*/5 * * * * * *"
If you’re an Enclave customer, we’ve updated our cron jobs tutorial with instructions to use Supercronic. If you’re not using Enclave, then head on over to Supercronic’s GitHub page for installation and usage instructions.
We’re proud to announce that as of this week, Enclave automatically restarts application and database containers when they crash.
Thanks to this new feature, you no longer need to use process supervisors or shell while loops in your containers to ensure that they stay up no matter what: Enclave will take care of that for you.
How does it work?
Container Recovery functions similarly to Memory Management: if one of your containers crashes (or, in the case of Memory Management, exceeds its memory allocation), Enclave automatically restores your container to a pristine state, then restarts it.
You don’t have to do anything, it just works.
Why does this matter?
Enclave provides a number of features to ensure high availability for your apps at the infrastructure level, including:
Automatically distributing your app containers across instances located in distinct EC2 availability zones.
Implementing health checks to automatically divert traffic away from crashed app containers.
These controls effectively protect you against infrastructure failures, but they can’t help you when your app containers all crash due to a bug affecting your app itself. Here are a few examples of the latter, which we’ve seen affect customer apps deployed on Enclave, and which are now mitigated by Container Recovery:
Apps that crash when their database connection is interrupted due to temporary network unavailability, a timeout, or simply downtime (for example, during a database resizing operation).
Background processors that crash. For example, all your Sidekiq workers exiting with an irrecoverable error, such as a segfault caused by a faulty native dependency.
If you’d like to learn more about this feature, please find a full overview of Container Recovery in the Enclave documentation.
We’re proud to announce that you can now deploy apps on Enclave directly from a Docker image, bypassing Enclave’s traditional git-based deployment process.
With this feature, you can easily use the same images for deployment on Enclave and test / dev via other Docker-based tools such as Docker Compose or Kubernetes. And, if you’re already using Docker for your development workflow but haven’t adopted Enclave yet, it’s now much easier for you to take the platform for a spin.
How does it work?
Direct Docker image deployments on Enclave are done via the CLI. Here’s an example.
To deploy Docker’s official “hello-world” image to an app called “my-hello-world-app” on Enclave, you’d use this command:
aptible deploy --app my-hello-world-app --docker-image hello-world
And if your app follows the 12-factor configuration guidelines and uses the environment for configuration, you can include arbitrary environment variables for your app when running aptible deploy:

aptible deploy --app my-enclave-app --docker-image quay.io/my-org/my-app \
  DATABASE_URL=postgresql://...
Why use it?
First off, if you’re currently using Enclave’s git-based deployment workflow, you can continue using that: it’s not going away! That being said, there are a few reasons why you might want to look at direct Docker image deploy as an alternative.
First, you might like more control over your Docker image build process. Indeed, when you deploy via git, Enclave follows a fairly opinionated build process:
The Docker build context is your git repository.
Enclave injects a .aptible.env file in your repository for you to access environment variables.
Enclave uses the Dockerfile from the root of your git repository.
This works just fine for a majority of apps, but if that’s not your case, use direct Docker image deploy for complete control over your build process, and make adjustments as needed. For example, you could inject private dependencies in your build context, leverage Docker build arguments, or use a different Dockerfile.
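As a sketch, a custom build-and-deploy flow might look like the following (the image name, Dockerfile name, and build argument here are all hypothetical; adapt them to your own registry and app):

```
# Build with full control over the process: a custom Dockerfile,
# build arguments, and whatever build context you choose.
docker build -f Dockerfile.production --build-arg GIT_SHA="$(git rev-parse HEAD)" \
  -t quay.io/my-org/my-app:latest .

# Push the image to a registry Enclave can pull from.
docker push quay.io/my-org/my-app:latest

# Deploy that exact image to Enclave.
aptible deploy --app my-enclave-app --docker-image quay.io/my-org/my-app:latest
```

Because Enclave deploys the image you pushed, the build itself can happen anywhere: your laptop, a CI server, or a dedicated build pipeline.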
Other reasons for using this feature include:
You’re already building Docker images to use with other tools. Use this direct Docker image deploy feature to unify your deployments around a single build.
You’re using a public Docker image that’s available on Docker Hub. Use direct Docker image deploy so you don’t have to rebuild it from scratch.
If you’d like to learn more about this new feature, head for the documentation! And, as usual, let us know if you have any feedback.
Note: Astute readers will note that you’ve been able to deploy apps on Enclave directly from a Docker image for some time, but we did rework the feature to make it much easier to use. Specifically, here’s what changed:
Procfiles and git repositories are now optional: Enclave will use your Docker image’s CMD if you don’t have a Procfile.
You no longer need to run aptible config:set followed by aptible rebuild to deploy. Instead, you can do everything in one operation with aptible deploy.
We’re proud to announce that you can now resize both the container and disk footprints of your Enclave databases from the Dashboard or CLI. For new databases, you can also configure the container size from day 1, whereas it previously defaulted to 1GB RAM.
Using this new feature, you can easily scale your database up as your traffic grows or when you’re about to run out of disk space. To that end, check out Container Metrics, which provides a real-time view into your databases’ RAM and disk usage. Aptible Support will also notify you if your disk usage reaches 90%.
How does database scaling work?
There are two ways you can resize your database.
First, you can do so via the Dashboard. Just click modify next to the container or disk size, and proceed.
Second, you can do so via the CLI. For example, to scale a database named “demo-database” to a 2GB RAM container with 30 GB disk, you’d use:
aptible db:restart demo-database --container-size 2048 --disk-size 30
And under the hood?
To provide truly elastic database capacity, Enclave relies on AWS EC2 and EBS to support your database containers and volumes. As an Enclave end-user, this means two things for you.
First, it means resizing your database container may take a little while (on the order of 30 minutes) if we need to provision new EC2 capacity to support it. This will often be the case if you’re scaling to a 4GB or 7GB container, less so if you’re scaling to 512MB, 1GB, or 2GB.
However, the good news is that Enclave automatically minimizes downtime when resizing a database, so even if the resize operation takes 30 minutes to complete because new capacity was required, your database will only be unavailable for a few short minutes.
If you have any questions or comments, please let us know at firstname.lastname@example.org. Thanks!
In a world where application dependency graphs are deeper than ever, secure engineering means more than securing your own software: tracking vulnerabilities in your dependencies is just as important.
We’ve greatly simplified this process for Enclave users with a recent feature release: Docker Image Security Scans. This is a good opportunity to take a step back, review motivations and strategies for vulnerability management, and explain how this new feature fits in.
Why Dependency Vulnerability Management?
Popular dependencies are very juicy targets for malicious actors: a single vulnerability in a project like Rails can potentially affect thousands of apps, so attackers are likely to invest their resources in uncovering and automatically exploiting those.
One infamous (albeit old) example of this is CVE-2013-0156: an unauthenticated remote-code-execution (RCE) vulnerability in Rails that’s trivial to automatically scan for and exploit. Among others, Metasploit provides modules to automatically identify and exploit it.
For an attacker, a vulnerability like CVE-2013-0156 is a gold mine. The exploit can be delivered via a simple HTTP request, so all an attacker needed to do to compromise vulnerable Rails applications was send that request to as many public web servers as they could find (finding all of them is much easier than it sounds).
In other words: when it comes to vulnerabilities in third-party code, you’re actively being targeted right now, even if no one has ever heard of you or your business.
Strategies for Dependency Vulnerability Management
Now that we’ve established that vulnerability management matters, the question that remains is: what can you do?
Modern apps pull in dependencies from diverse sources, ranging from OS packages to vendored code. Fundamentally, there’s no one-size-fits-all approach to tracking the vulnerabilities that affect them.
So let’s divide and conquer: from a vulnerability management perspective, there’s a useful dichotomy between two categories of third-party software.
On the one hand, there’s third-party software you installed via a package manager.
And on the other hand, there’s third-party software you didn’t install via a package manager.
The easiest dependencies to look after are those that you installed via a package manager, so let’s start with them.
Using a package manager? Leverage vulnerability databases
Package managers helpfully maintain a list of the packages you installed, which means you can easily compare the software you installed against a vulnerability database, and get a list of packages you need to update and unfixed vulnerabilities you need to mitigate.
Ideally, you want to automate this process in order to be notified about new vulnerabilities when they come out, as opposed to hearing about them when you remember to check. Indeed, remember that when it comes to vulnerable third party dependencies, you’re actively being targeted right now, so speed is of the essence.
How does this work?
There’s a number of open-source projects and commercial products you can use for this type of analysis. A few popular options are Appcanary (which Aptible uses and integrates with), Gemnasium, and Snyk.
They often work like this:
You extract the list of packages you installed from your package manager
You feed it to the analyzer
The analyzer tells you about vulnerabilities (commercial products will also often notify you when new vulnerabilities come up in the future)
That simple? Almost: you’re probably using multiple package managers in your app, which means you may have to mix and match analyzers to cover everything. Indeed, for most modern apps, you’ll have at least two package managers:
A system-level package manager: if you’re using Ubuntu or Debian, this is dpkg, which you access via apt-get. If you’re using CentOS / Fedora, this is rpm, which you access via yum or dnf. If you’re using Alpine, it’s apk. Etc.
An app-level package manager: if you’re writing a Ruby app, this is Bundler. If you’re writing a Node app, it’s NPM or Yarn. Etc.
So, what you need to do here is locate the list of installed packages for each of those, and submit it to a compatible vulnerability analyzer.
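As a concrete sketch (assuming a Debian-based image and a Ruby app; the commands differ for other distros and languages), collecting those two lists might look like this:

```shell
# A tiny sample Gemfile.lock (real ones are generated by Bundler).
cat > Gemfile.lock <<'EOF'
GEM
  remote: https://rubygems.org/
  specs:
    nokogiri (1.8.0)
      mini_portile2 (~> 2.2.0)
    rails (5.1.1)
EOF

# App-level packages (Ruby/Bundler): top-level specs in Gemfile.lock are
# indented with exactly four spaces, one "name (version)" per line; transitive
# dependencies (six spaces) are skipped here for brevity.
awk '/^    [^ ]/ { gsub(/[()]/, "", $2); print $1, $2 }' Gemfile.lock

# System-level packages (Debian/Ubuntu) would come from the package manager:
#   dpkg-query -W -f '${Package} ${Version}\n'
```

Each resulting "name version" list can then be submitted to a vulnerability analyzer that understands that package ecosystem.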
New Enclave Feature: Docker Image Security Scans
Now’s the right time to tell you about this new Enclave feature I mentioned earlier in this post.
When you deploy your app on Enclave, we have access to its system image. Last week, we shipped a new feature that lets us extract the list of system packages installed in your app, and submit it to Appcanary for a security scan.
This can work in two different ways:
You can run a one-shot scan via the Enclave Dashboard. This gives you an idea of what you need to fix right now, but it will not notify you when new vulnerabilities are discovered in packages you use, or if you install a new vulnerable package.
You can sign up for Appcanary and connect Enclave and Appcanary accounts. Enclave will keep your Appcanary monitors in sync with your app deploys in Enclave, and in turn Appcanary will notify you whenever there’s a vulnerability you need to know about. This puts you in a great position from a security perspective, and will reassure security auditors.
To summarize: Enclave with Appcanary can now handle vulnerability monitoring for your system packages, and it’s really easy for you to set up!
However, for app-level packages, you still have to do a little bit of legwork to find and integrate a vulnerability monitoring tool that works with your app. Note that Appcanary does support scanning Ruby and PHP dependencies, so you might be able to use them for app-level scanning too.
Is that it?
Not quite: we still have to look at third-party code you didn’t install via a package manager. Here are a few examples: software you compiled from source, binaries you downloaded directly from a vendor or project website, and even vendored dependencies.
For these, there is — unfortunately! — no silver bullet. Here’s what we recommend:
When possible, try and minimize the amount of software you install this way.
When you absolutely need to install software this way, subscribe to said software’s announcement channels to ensure you’re notified about new vulnerabilities. This may be a mailing list, a blog, or perhaps a GitHub issue tracker. When possible, review how security issues were handled in the past.
This time, that about wraps it up! Or does it? Engineering is turtles all the way down, so even if you covered all your bases in terms of software you installed, there’s still the underlying platform to account for.
That being said, unless you’re hosted on bare metal on your own hardware, this is largely out of your control. At this point, your best strategy is to choose a deployment platform you can trust (if you read this far, hopefully you’ll consider Enclave to be one).
We’re proud to announce that you can now create and manage external database endpoints via the Enclave dashboard. External endpoints are useful to grant third parties access to data stored in Enclave databases for purposes such as analytics and ETL (without an endpoint, your database is firewalled off and inaccessible from the public internet). To set up a new endpoint for one of your databases, simply navigate to the Endpoints tab for this database and follow the instructions. Like their app counterparts, database endpoints support IP filtering, so you can ensure only trusted third parties have access to your database:
Note that we’ve historically supported database endpoints via support requests before introducing this feature. If you had been using a database endpoint before we introduced this feature, it was automatically migrated and you’ll be able to manage it via the Dashboard going forward!
We’re happy to announce that Enclave now leverages AWS “Elastic Volumes” to resize database storage. This feature was released a little over a month ago by AWS, and lets us grow EBS volumes without the need to snapshot.
For Enclave users, this means resizing your database volume is faster than it’s ever been: it now takes just minutes on average, and scales very well to larger volumes.
For comparison, before the introduction of Elastic Volumes, the only way to resize an EBS volume on AWS was to snapshot the volume then recreate it. However, this approach scaled poorly as you stored more data: creating a snapshot might take a few minutes for small volumes, but several hours for active, large 1TB+ volumes!
Now, with Elastic Volume support, resizing always results in less downtime, even if you end up scaling faster than you anticipated.
If you need to resize your database volume, contact Aptible Support and we’ll coordinate a time with you to perform the resize. Our operations team may also reach out to you to do so if our monitoring indicates that you’re about to run out of disk space. We plan to release self-serve resizing down the road as well.
We’re proud to announce that Aptible now supports hardware Security Keys as a second authentication factor! Specifically, Aptible now supports devices compliant with the FIDO Universal Second Factor (U2F) standard.
U2F Security Keys can be securely used across multiple accounts, and are rapidly gaining adoption: Google, GitHub, Dropbox, and many others now support U2F Security Keys.
Convenience and Security: Pick Both!
There are two main reasons to use a Security Key with your Aptible account: increased convenience and better security.
With a Security Key, you just touch the key to authenticate. No more fumbling for your phone.
But Security Keys also help better protect against phishing, a common and sometimes dangerous attack.
Security Keys protect your Aptible account against phishing
Token-based 2FA does a good job at protecting your account against attackers who only learn your password, but it remains vulnerable to phishing: an attacker can trick you into providing your token and try to use it before it expires. Service providers can’t reliably tell the difference between the attacker’s request and a legitimate one coming from you.
Security keys offer much stronger protection against phishing. Here’s how:
When you try to log in using a Security Key, Aptible provides a unique challenge, and your Security Key responds with a signed authentication response unique to that challenge. But unlike a 6-digit 2FA token, the Security Key’s response includes useful metadata Aptible can leverage to protect your account:
- The origin your browser was pointed at when you signed this response. If you’re being phished, this will be the attacker’s website, whereas if you’re actually logging in to Aptible, it’ll be dashboard.aptible.com.
- A device-specific response counter that your Security Key is responsible for increasing monotonically when it generates an authentication response. If your Security Key was cloned by an advanced attacker with physical access, inconsistent counter values may reveal their misdeed.
Once your Security Key has sent the response, Aptible verifies it as follows:
- The response must be signed by a Security Key associated with your account. Naturally, the signature must be valid.
- The response must have been generated for dashboard.aptible.com. This protects you against phishing.
- The response must be for a challenge Aptible issued recently, and that challenge must not have been used before. This protects you against replay attacks.
- The response must include a counter that’s greater than any value we’ve seen before for this Security Key. This protects you — to some extent — against device cloning.
How do I use U2F with my Aptible account?
First, you’ll need to purchase a FIDO U2F-compliant device from a trusted vendor. The Aptible team uses Yubikeys, but a number of other vendors exist.
You’ll also need to make sure your browser supports U2F Security Keys. Currently, only Chrome and Opera offer such support, but other browser vendors are working on adding support (U2F support is on the Firefox roadmap for Q2 2017).
Once you’re done, navigate to your account settings, make sure 2FA is enabled, click on “Add a new Security Key”, and follow the instructions:
That’s it! Next time you attempt to log in, you’ll be prompted to touch your Security Key as an alternative to entering a 2FA token.
Can I stop using token-based 2-Factor Authentication altogether?
No: U2F Security Keys can be added as additional second factors on your account, but you can’t disable token-based authentication.
The reason for this is that U2F Security Keys aren’t supported everywhere yet, so you may occasionally need to fall back to a token to log in: not all browsers support them (only Chrome and Opera do at this time), and neither does the Aptible CLI. This may evolve over time, so it’s conceivable that we’ll eventually let you use U2F only.
We’re happy to announce that Managed HTTPS is now available on Enclave for Internal Endpoints (in addition to External Endpoints, which were supported from day 1).
This means your internal-facing apps can now enjoy the benefits of Managed HTTPS Endpoints:
- Automated certificate provisioning
- Automated certificate renewals
- Monitoring to detect problems with renewals and alert you
When you create a new Managed HTTPS Endpoint, the Aptible Dashboard will indicate which CNAME records you need to create via your DNS provider in order for Enclave to provision and renew certificates on your behalf (you’ll see one record for internal Endpoints, and two for external Endpoints — read on to understand why):
For existing Managed HTTPS Endpoints, the Dashboard lets you review your current DNS configuration, so you can easily check whether everything is set up properly:
If your Endpoint DNS records are misconfigured and Enclave is unable to automatically renew the certificate, Aptible support staff will contact you.
How it works
Fundamentally, Managed HTTPS relies on Let’s Encrypt to provision and renew certificates for your apps. Let’s Encrypt offers multiple ways to verify control of a domain, but they all boil down to the same process:
- We notify Let’s Encrypt that we’d like to provision a new certificate for your domain
- Let’s Encrypt provides us with a set of challenges to try and prove we control the domain
- We fulfill one of the challenges, and get the certificate
There are a total of three challenge types in Let’s Encrypt, and we now use two of them:
For HTTP challenges, Let’s Encrypt provides us with an arbitrary token and a URL under the domain we’re attempting to verify, and expects us to serve the token when it makes a request to that URL.
The token is a random string of data, and the URL is a well-known path under the domain being verified.
We’ve supported HTTP challenges since day one: when Let’s Encrypt makes its request to your app hosted on Enclave (i.e. assuming you created a CNAME from $YOUR_DOMAIN to your Enclave Endpoint), Enclave intercepts the request, serves the token, and thus validates control of the domain.
Obviously, this only works if Let’s Encrypt can connect to your domain from the Internet. This becomes a problem for Internal Endpoints or Endpoints with IP Filtering, since Let’s Encrypt can’t connect to them!
That’s why we’ve now added support for DNS Challenges as well.
DNS challenges are simpler than HTTP challenges. Here again, Let’s Encrypt provides an arbitrary token, but this time we’re expected to serve it as a TXT record in DNS under the name _acme-challenge.$YOUR_DOMAIN.
Now, there’s one little hiccup here: we don’t control _acme-challenge.$YOUR_DOMAIN: you do! To make this work, you need to tell Let’s Encrypt that you trust us to provision and renew certificates on your behalf.
To do so, you simply need to create a CNAME via your DNS provider from that record Let’s Encrypt is interested in to another record controlled by Enclave. To make this easy for you, the Dashboard will instruct you to do so, and give you the exact record to create.
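For illustration, assuming your domain is app.example.com and (hypothetically) the Dashboard tells you the Enclave-controlled validation record is acme.elb-123.aptible.in, the record you’d create looks like this in zone-file notation:

```
; You create this record with your DNS provider:
_acme-challenge.app.example.com.  CNAME  acme.elb-123.aptible.in.
; Enclave then serves the Let's Encrypt TXT token at the target of that CNAME.
```

When Let’s Encrypt looks up the TXT record for _acme-challenge.app.example.com, the CNAME leads it to a name Enclave controls, where the challenge token is published.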
And of course, the upside of using a DNS challenge is that unlike an HTTP challenge, it works for Internal Endpoints and Endpoints with IP Filtering!
Note that DNS challenges work for both External and Internal Endpoints, which is why the Dashboard will always prompt you to create the corresponding record (whereas it’ll only prompt you to create the record required for HTTP verification for External Endpoints).
As usual, let us know if you have any questions or feedback!
We’re proud to announce that as of this week, Enclave Endpoints support IP filtering. Using this new feature, you can restrict access to apps hosted on Enclave to a set of whitelisted IP addresses or networks, and block all other incoming traffic.
While IP filtering is no substitute for strong authentication, this feature is useful to:
Further lock down access to sensitive apps and interfaces, such as admin dashboards or third party apps you’re hosting on Aptible for internal use only (e.g. Kibana, Sentry).
Restrict access to your apps and APIs to a set of trusted customers or data partners.
And if you’re hosting development apps on Aptible, IP filtering can also help you make sure no one outside your company can view your latest and greatest before you’re ready to release it to the world.
Note that IP filtering only applies to Endpoints (i.e. traffic directed to your app), not to aptible logs and other backend access functionality provided by the Aptible CLI (this access is secured by strong mutual authentication, as we covered in our Q1 2017 webinar).
Getting Started with IP Filtering
IP filtering is configured via the Aptible Dashboard on a per-Endpoint basis.
You can enable it when creating a new Endpoint, or after the fact for an existing Endpoint by editing it.
Enjoy! As usual, let us know if you have any feedback or questions!
Until recently, Aptible has used AES-192 for disk encryption, but as of last week, Aptible databases (and their backups) now default to AES-256 instead.
While there is no security concern whatsoever regarding AES-192 as an encryption standard, it has become increasingly common for Aptible customers to have their own partners request 256-bit encryption everywhere from a compliance perspective, which is why we’re making this change.
If you’re curious to know which encryption algorithm is used for a given database, you can find that information on the Dashboard page for the database in question (along with the disk size and database credentials).
As of last week, ALB Endpoints respect the SSL_PROTOCOLS_OVERRIDE app configuration variable, which was previously only applicable to ELB Endpoints.
In a nutshell, setting SSL_PROTOCOLS_OVERRIDE lets you customize the protocols exposed by your Endpoint for encrypted traffic.
For example, if you have a regulatory requirement to only expose TLSv1.2, you can do so using the following command (via the Aptible CLI):
aptible config:set FORCE_SSL=true "SSL_PROTOCOLS_OVERRIDE=TLSv1.2" --app my-app
Note that by default (i.e. if you don’t set SSL_PROTOCOLS_OVERRIDE), Aptible Endpoints accept connections over TLSv1, TLSv1.1, and TLSv1.2. This configuration will evolve over time as best practices in the industry continue to evolve.
You can learn more about the SSL_PROTOCOLS_OVERRIDE configuration variable (and other available variables) on our support website: How can I modify the way my app handles SSL?
We’re happy to announce that Aptible Log Drains now provide more flexible configuration, making it much easier to forward your Aptible logs to two logging providers that are becoming increasingly popular with Aptible customers (in large part because they sign BAAs):
For Logentries, you can now use token-based logging. This makes configuration much, much easier than before: create a new Token TCP Log in Logentries then copy the Logging Token you’re provided with in Aptible, and you’re done!
For Sumo Logic, we now support full HTTPS URLs. Here again, this means setup is greatly simplified: all you need to do is create a new Hosted HTTP Collector in Sumo Logic, then copy the URL you’re provided with in Aptible.
Enjoy! As usual, if you have any questions or feedback, feel free to contact us.
We’re proud to announce that as of today, new Redis databases provisioned on Aptible Enclave support SSL/TLS in addition to the regular Redis protocol. Both AWS and Aptible require that you encrypt HIPAA Protected Health Information in transit, even within a private, dedicated Enclave stack, so starting today, you can use Redis to store and process PHI on Enclave.
How does it work?
Redis doesn’t support SSL natively, but the Redis community has settled on a solution: run an SSL termination layer in front of Redis. On Enclave, we use stunnel, an industry standard. This means a good number of Redis clients just work out of the box, including:
- redis-rb (Ruby)
- redis-py (Python)
- Jedis (Java)
- predis (PHP)
- node_redis (Node.js)
- StackExchange.Redis (.NET)
How do I use it?
For new Redis databases, select your Redis database in the Aptible Dashboard, and click “Reveal” under “Credentials” at the top. Aptible will provide two URLs:
- A regular Redis URL using the redis:// protocol
- An SSL Redis URL using the rediss:// protocol (note the two “s”!)
Most Redis clients will automatically recognize a rediss:// URL and connect over SSL, but review your client’s documentation if you run into any trouble.
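The only difference between the two URLs is the scheme, and that scheme is what clients inspect to decide whether to negotiate TLS. A minimal sketch of that dispatch (the hostname and credentials below are placeholders):

```shell
# Clients dispatch on the URL scheme to choose TLS vs. plaintext; the
# host, user, and password here are made up for illustration.
url="rediss://aptible:secret@db-example.aptible.in:6379"

case "$url" in
  rediss://*) echo "connect with TLS (stunnel endpoint)" ;;
  redis://*)  echo "connect in plaintext" ;;
esac
```

Using the rediss:// URL means traffic to the database traverses the stunnel layer and is encrypted in transit.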
What about existing Redis databases?
For existing Redis databases, Aptible can enable SSL/TLS following a short downtime (about 30 seconds). If you’d like to do that, or have any feedback or questions, just let us know!
We’re happy to announce that the RabbitMQ management interface is now available for RabbitMQ databases deployed on Aptible Enclave. Until now, only the AMQP port was exposed, so you could push messages to queues, but managing queues was more difficult.
There’s a lot the RabbitMQ management interface can be used for, but for the most part it’s useful to review and manipulate the queues that exist in your RabbitMQ container.
How do I access it?
The RabbitMQ management interface is exposed by default on new RabbitMQ databases provisioned on Enclave. In the Aptible Dashboard, select your database, then select the “Credentials” link at the top. A modal will reveal all connection strings for that database, named by function:
For existing RabbitMQ databases, we can enable the management interface following a short downtime (about 30 seconds). If you’d like to do that, or have any feedback or questions, just let us know!
We’re proud to announce that the Aptible CLI is now supported on Windows!
More than a CLI: a Toolbelt!
We distribute the Aptible CLI as a package called the “Aptible Toolbelt.” The Toolbelt is available for several platforms, including macOS, Ubuntu, Debian, and CentOS. On Windows, it is available as an MSI installer.
On all platforms, the toolbelt includes:
- The Aptible CLI itself, in the form of the aptible-cli Ruby gem; and
- System dependencies the CLI needs to function properly. This includes Ruby (which the CLI is written in) and dependencies like OpenSSH (which the CLI uses for functionality like database tunnels).
The toolbelt integrates with your system to ensure that the aptible command lands on your PATH, so that when you type aptible in your command prompt, things just work. On Windows, this is done by modifying your PATH; on macOS and Linux, this is done by placing a symlink in a directory that is already on your PATH.
The Windows package targets Windows 8.1 and up on the PC side, and Windows Server 2012r2 and up on the server side. In other words, it targets Windows NT 6.3 and up, which is why you’ll see that number in the installer name.
Download and Installation
To get the Aptible CLI on Windows, download it directly from the Aptible website, then run the installer.
You might receive a SmartScreen prompt indicating that the publisher (that’s us!) isn’t known. Because this is the first time we’ve shipped software for Windows, we don’t have a reputation with Microsoft yet. The installer is properly signed, so to proceed, click through “More Info” and verify that the reported publisher is Aptible, Inc.
Enjoy! Since this is still early days for the Windows version of the CLI, make sure to let us know if you hit any snags!
We’re happy to announce that as of this week, you can now cancel running deployments on Aptible Enclave!
When is cancelling a deployment useful?
1. Your app is failing the HTTP health check, and you know why
As described in this support article, Enclave performs an automatic health check on any app service with an endpoint attached to it. During this health check, the platform makes an HTTP request to the port exposed by your Docker container, and waits for an HTTP response (though not necessarily a successful HTTP status code).
When your app is failing the HTTP health check, Enclave waits for 10 minutes before giving up and cancelling the deployment.
But, if you know the health check is never going to succeed, that’s wasted time! In this case, just cancel the deployment, and the health check will stop immediately.
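To make the health check concrete: any app that answers HTTP on its exposed port passes, regardless of status code. Here's a minimal, self-contained sketch using Python's standard library (Enclave's actual probe is internal to the platform; this just illustrates the contract):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any HTTP response counts as "healthy"; the status code
        # does not have to be a 2xx.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging for the example

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the platform's probe: any response passes the check.
port = server.server_address[1]
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
body = resp.read()
server.shutdown()
```

If your container never answers on the exposed port (say, it listens on the wrong one), no amount of waiting will help, which is exactly when cancelling saves you ten minutes.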
2. You need to stop your pre-release commands immediately
Running database migrations in a pre-release command is convenient, but it can sometimes backfire if you end up running a migration that’s unexpectedly expensive and impacts your live app.
In this case, you often want to just stop the pre-release command dead in its tracks. Cancelling the deployment will do that.
However, do note that Enclave cannot rollback whatever your pre-release command did before you cancelled it, so use this capability wisely!
How does it work?
When deploying an app on Enclave, you’ll be presented with an informational banner explaining how you might cancel that deployment if needed:
```
$ git push aptible master
Counting objects: 15, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (10/10), done.
Writing objects: 100% (15/15), 1.20 KiB | 0 bytes/s, done.
Total 15 (delta 5), reused 0 (delta 0)
remote: (8ミ | INFO: Authorizing...
remote: (8ミ | INFO: Initiating deploy...
remote: (8ミ | INFO: Deploying 5e173381...
remote:
remote: (8ミ | INFO: Pressing CTRL + C now will NOT interrupt this deploy
remote: (8ミ | INFO: (it will continue in the background)
remote:
remote: (8ミ | INFO: However, you can cancel this deploy using the Aptible CLI with:
remote: (8ミ | INFO:     aptible operation:cancel 15489
remote: (8ミ | INFO: (you might need to update your Aptible CLI)
```
At this point, running aptible operation:cancel .... in another terminal window will advise Enclave that you’d like to cancel this deployment.
Note that you’ll need version 0.8.0 of the Aptible CLI or greater to use this command. If you haven’t installed the CLI, or have an older version, then download the latest here. You can check your version from the CLI by running aptible version.
Is it safe to cancel a deployment?
Yes! Under the hood, cancelling an Enclave operation initiates a rollback at the next safe point in your deployment. This ensures your app isn’t left in an inconsistent state when you cancel.
There are two considerations to keep in mind:
- You cannot cancel a deployment between safe points. Notably, this means you can’t cancel the deployment during the Docker build step, which is still one big step with no safe points. (We would like to change this in the future.)
- Cancelling your deployment may not take effect immediately, or at all. For example, if your deployment is already being rolled back, asking to cancel won’t do anything.
We’re proud to announce that as of this week, you can now route database logs to a Log Drain, just like you’d do with app logs! This option is available when you create a new Log Drain; you can opt to send either app or database logs, or both:
If you already have a Log Drain set up for apps (you should!), you can opt to either recreate it to capture both app and database logs, or simply create a new one that only captures database logs.
Why Capture Database Logs?
Aptible customers have asked for database logs for two main use cases: compliance and operations.
From a compliance perspective, you can use database logs to facilitate audit logging, for example by logging sensitive queries made to your database (or all queries for that matter, if that’s a realistic option for you).
From an operations standpoint, you can use them to identify new performance problems (e.g. by logging slow queries made to your database), or to better understand problems you’ve already identified (e.g. by correlating database log entries with issues you’ve experienced).
What Does My Database Log?
Your database may not log what you care about out of the box. For example, Postgres is pretty quiet by default. You can usually modify logging parameters by connecting to your database and issuing re-configuration statements.
For example, to enable slow query logging in Postgres >= 9.4, you’d create a database tunnel and run the following commands:
```sql
ALTER SYSTEM SET log_min_duration_statement = 200;
ALTER SYSTEM SET log_min_messages = 'INFO';
SELECT pg_reload_conf();
```
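With slow-query logging on, Postgres emits entries like the one below, and pulling structure out of them is straightforward. A small sketch (the line follows Postgres's default format; your log_line_prefix setting may prepend timestamps and PIDs):

```python
import re

# A slow-query entry in Postgres's default log format.
line = "LOG:  duration: 450.123 ms  statement: SELECT * FROM users"

match = re.search(r"duration: ([\d.]+) ms", line)
duration_ms = float(match.group(1)) if match else None
```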
Refer to your database’s documentation for more information, or contact support and we’ll be happy to help.
How Do I Differentiate Database Logs From App Logs?
For Elasticsearch and HTTPS Log Drains, log entries sent to your Log Drain now include a “layer” field that indicates whether the log came from an app or a database.
Here’s an example comparing app and database logs using Kibana. Most of the logs here came from the app (respectively from a web and a background service), but we also have a slow query logged by Postgres:
For Syslog Log Drains, the database handle and type are included as the source program (that’s the service field you can see reported in Kibana above).
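If you post-process entries delivered to an HTTPS Log Drain yourself, splitting them by layer is a one-liner. In this sketch, only the "layer" field reflects the feature described above; the other field names are illustrative:

```python
import json

# Example entries in the rough shape a drain might deliver.
raw_lines = [
    '{"layer": "app", "service": "web", "log": "GET /health 200"}',
    '{"layer": "database", "service": "pg-example", "log": "duration: 450.123 ms"}',
]

entries = [json.loads(line) for line in raw_lines]
db_logs = [e for e in entries if e.get("layer") == "database"]
app_logs = [e for e in entries if e.get("layer") == "app"]
```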
CLI Support and Aptible Legacy Infrastructure
Database logs initially shipped without CLI support, but that gap is now closed: aptible logs supports databases! Download the latest CLI and use aptible logs --database HANDLE.
Database logs remain unavailable on Aptible legacy infrastructure.
If you are still running on Aptible legacy infrastructure (as indicated in the Aptible Dashboard when you provision a Log Drain), we encourage you to contact Aptible Support to coordinate a migration. This will give you access to database logs, as well as a growing number of other new features (such as ALB Endpoints, support for deploying directly from a Docker image and on-demand database restores).
Earlier this week, we released Managed HTTPS Endpoints. These endpoints have a few key benefits:
- Your SSL/TLS certificate is free (!)
- Aptible handles generating the initial certificate
- Aptible handles renewing the certificate
All you need to get started with a Managed HTTPS Endpoint is a domain name! No more ops headaches trying to generate CSRs, keep private keys and certs straight, or deal with inconveniently-timed renewals.
Under the hood, Managed HTTPS uses Let’s Encrypt to automatically provision certificates for you. Aptible customers requested this feature, and we are proud to contribute to the global movement towards 100% HTTPS.
How it works
Setting up a Managed HTTPS Endpoint is a 3-step process:
1. Add an Endpoint to your app, and choose Managed HTTPS as the endpoint type. You will need to provide the domain name you intend to use with your app (e.g. www.myapp.com). Aptible will use that name to provision a certificate via Let’s Encrypt.
2. When you create the endpoint, Aptible will provide you with an endpoint address. Use your DNS provider to create a CNAME from your domain (www.myapp.com) to this endpoint address (something like elb-1234.aptible.in).
3. Back in the Aptible Dashboard, confirm that you created the CNAME. Aptible will automatically provision your certificate, and you’re in business!
Note that between steps 2 and 3, your app won’t be available because you need to set up the CNAME before Aptible can provision the certificate. This isn’t ideal if you are migrating an app from somewhere else. Fortunately, you can just provide a transitional certificate that Aptible will use until your new Let’s Encrypt certificate is available. If you need to add a new certificate for this, just select the “Certificates” tab under your main environment view.
Once your endpoint is up and running, we recommend you review our instructions for customizing SSL, in order to redirect end-users to HTTPS and disable the use of weaker cipher suites, which will earn the much-coveted A+ grade on Qualys’ SSL Test!
Why use Managed HTTPS?
Above all else, Managed HTTPS brings you simplicity and peace of mind:
- Setup is greatly simplified: all you need is a domain name. No need to generate your own certificate signing request, deal with a CA, or upload your certificate and key to Aptible.
- Maintenance is essentially eliminated: you won’t need to remember to renew a certificate ever again.
- Oh, and did we mention it’s free?
Enjoy! As usual, let us know if you have any feedback.
Contingency planning and disaster recovery are critical parts of any developer’s HIPAA compliance program. The Aptible platform automates many aspects of secure data management, including long-term retention, encryption at rest, taking automatic daily backups of your databases, and distributing those backups across geographically separate regions. These benefits require no setup and no maintenance on your part: Aptible simply takes care of them.
That said, recovering a database from a backup has required a support request. While we take pride in providing timely and effective support, it’s nice to be able to do things at your own pace, without the need to wait on someone else.
That’s why we’re proud to announce that for all v2 stacks, you can view and restore backups directly in the Aptible dashboard and CLI! (For customers on v1 stacks, you can view, but not self-restore.)
How does it work?
In the dashboard, locate any database, then select the “Backups” tab. Find the backup you would like to restore from, and select the “Restore” action. From the CLI, first update to the newest version (gem update aptible-cli), then run aptible backup:list $HANDLE to view backups for a database, or aptible backup:restore $ID to restore a backup.
Restoring from a backup creates a new database: it never replaces or overwrites your existing database. You can use this feature to test your disaster recovery plans, test or review new database migrations before you run them against production, roll back to a prior backup, or simply review old data. When you are done using the restored database, you can deprovision it or promote it to be used by your apps.
But wait, there’s more!
Introducing On-Demand Backups
In addition to displaying automatic daily backups, you can now trigger a new backup on demand from the dashboard or CLI. In the dashboard, simply select the large green “Create New Backup” button. From the CLI, make sure you are running the latest version (gem update aptible-cli), then use aptible db:backup $HANDLE to trigger a new backup.
Now, before you do something scary with your database (like a big migration), you have an extra safety net. On-demand backups are easier than filing a support request and safer than using a tunnel to dump to a local environment, because you will never have to remember to purge data from your machine.
We hope you find both of these features useful! That’s it for today. As usual, if you have questions or feedback about this feature, just get in touch.
We’re happy to announce that two-factor authentication (2FA) is available for all users and account types in the Aptible dashboard and CLI! Multifactor authentication is a best practice that adds an additional layer of security on top of the normal username and password you use to verify your identity. You can enable it in your Aptible user settings.
How does it work?
Aptible 2-factor authentication implements the Time-based One-time Password (TOTP) algorithm specified in RFC 6238. We currently support the virtual token form factor - Google Authenticator is an excellent, free app you can use. We do not currently support SMS or hardware tokens.
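For the curious, the TOTP algorithm is short enough to sketch in full. This is a standard-library implementation of the RFC 6238 HMAC-SHA1 variant (the published algorithm, not Aptible's code), checked against the RFC's own test vectors:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Time-based One-Time Password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = timestamp // step                   # elapsed 30-second time steps
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T = 59 s
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

Both your authenticator app and the server run this same computation over a shared secret and the current time, which is why the codes match without any network round-trip.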
When enabled, 2FA protects access to your Aptible account via the dashboard, CLI, and API. 2FA does not restrict Git pushes; these are still authenticated by your SSH key. In some cases, you may not push code with your own user credentials, for example if you deploy with a CI service such as Travis or Circle and perform all deploys via a robot user. If so, we encourage you to remove SSH keys from your Aptible user account.
What if I’m locked out?
When you enable 2FA, you get emergency backup codes, in case your device is lost, stolen, or temporarily unavailable. Keep these in a safe place. If you don’t have your device and are unable to access a backup code, please contact us.
As usual, we’d love to hear your feedback! If you have any questions or comments, please let us know!
If you are on an Aptible “v2” stack, which automatically scales your app containers across AWS Availability Zones, you have probably noticed that the aptible logs CLI command has been deprecated. As an alternative, you’ve been able to use Log Drains to collect app logs.
A Log Drain’s ability to persist logs (not just stream them) makes it a robust option; however, each drain requires some setup. aptible logs is built in to the Aptible CLI, requires no additional setup, and makes it easy to see what is happening in your app right now.
We’re happy to announce that aptible logs is available on Aptible v2 stacks!
How Can I Use It?
If you already have the Aptible CLI installed, then you don’t need to do anything: using aptible logs from the CLI works on all stacks as of today. Older versions of the CLI display a deprecation notice for aptible logs; you can make it go away by updating the CLI.
If you don’t have the CLI installed, follow the installation instructions first.
How Does It Work?
aptible logs on v2 stacks is implemented as a Log Drain that doesn’t drain: instead, it buffers logs received from log forwarders and allows clients to stream the buffer.
As a result, the first time you use aptible logs on a v2 stack, we’ll take a few minutes to automatically provision a special new “tail” Log Drain, if you don’t already have one. Once you have a tail Log Drain, subsequent aptible logs calls are fast.
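The "buffers and streams" behavior can be pictured as a bounded buffer that evicts old lines once it fills up. This is an illustration of the idea, not Enclave's actual implementation:

```python
from collections import deque

# A bounded buffer: once maxlen is reached, appending evicts the oldest
# line, so readers always see the most recent output.
log_buffer = deque(maxlen=3)
for line in ["line 1", "line 2", "line 3", "line 4"]:
    log_buffer.append(line)

recent = list(log_buffer)  # the three most recent lines
```

A bounded buffer is what makes this suitable for tailing: memory use stays constant no matter how chatty your app is, at the cost of not persisting history (which is what regular Log Drains are for).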
If you have any questions or feedback about this new feature, please let us know!
Aptible customers have been asking how they could view performance metrics such as RAM and CPU usage for their containers. We’re happy to announce that the wait is coming to an end!
Last week, we started rolling out the first iteration of our new Container Metrics feature. You can access them via the “View Metrics” buttons on an App’s service list, or the “Metrics” tab for a Database. As an Aptible user, this lets you visualize performance metrics for your app and database containers directly from your dashboard. In turn, you can use this information to identify performance bottlenecks and make informed scaling decisions.
Metrics are available for apps and databases. In both cases, you can visualize:
- Memory usage, including a breakdown in terms of RSS vs. caches / buffers. We’ll soon be including your memory limits in the graph as well, so you can compare your actual usage to your memory allocation.
- Load average, which reflects the overall activity of your container in terms of CPU and I/O.
Both of these metrics are “bog-standard” Linux metrics, meaning there is a ton of information about them on the Internet. That being said, you can also hover over the little “?” icon in the UI for a quick reminder:
Using Container Metrics to Debug Performance
Let’s work through an example of how you can use these charts to understand performance issues and make scaling decisions. In this example, we’re running pgbench against a Postgres database (initially provisioned on a 1GB container), and we’ll explore easy ways to get better performance out of it.
First, take a look at the graphs:
It looks like database traffic surged at 6:24 PM UTC, lasting until 6:44 PM UTC. That’s our pgbench run.
Our container quickly consumed 100% of its 1 GB of available memory. Most of the memory was allocated for kernel page caches, which Linux uses to minimize expensive I/O requests.
With a load average consistently over 20 (i.e. > 20 tasks blocked waiting on CPU or I/O), our database operations are going to be very delayed. If our app was experiencing slowdowns around the same time, our database would be a likely suspect.
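The same rule of thumb applies on any Linux or macOS host, and you can check it yourself with the standard library (this reads whatever machine the snippet runs on, not an Enclave container):

```python
import os

# 1-, 5- and 15-minute load averages, as reported by the kernel.
load1, load5, load15 = os.getloadavg()

# Rough heuristic: sustained load above the CPU count means tasks are
# queuing for CPU or blocked on I/O rather than running immediately.
cpus = os.cpu_count() or 1
saturated = load1 > cpus
```

In the example above, a load average over 20 on a small container puts us far past that threshold, which is why queries were so delayed.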
Armed with that knowledge, what can we do? A high load average can be caused by a bottleneck in terms of I/O or CPU, or both. Detailed CPU and I/O metrics are coming soon. In the meantime, upgrading to a bigger container might help with both our problems:
- Our CPU allocation would be bigger, which essentially means we’d run CPU tasks faster.
- Our memory allocation would be bigger, which means more memory for caches and buffers, which means faster disk reads (disk writes, on the other hand, would probably not be faster, since it’s important that they actually hit the disk for durability, rather than sit in a buffer).
Using Container Metrics to Evaluate Scaling
After upgrading our container, let’s run the benchmark again:
Clearly, the kernel is making good use of that extra memory we allocated for the container!
This time around, the benchmark completed faster, finishing in 12 minutes instead of 20, and with a load average that hung around 10, not 20. If we had an app connecting to our database and running actual queries, we’d be experiencing shorter delays when hitting the database.
Now, there’s still room for improvement. In a real-world scenario, you’d have several options to explore next:
- Throw even more resources at the problem, e.g., an 8GB container, or bigger. Perhaps more unexpectedly, using a larger database volume would probably help as well: Aptible stores data on AWS EBS volumes, and larger EBS volumes are allocated more I/O bandwidth.
- Optimize the queries you’re making against your database. Using an APM tool like New Relic can help you find which ones are draining your performance the most.
- Investigate database-level parameter tuning.
I hope this example gives you an idea of how you can use Container Metrics to keep tabs on your application and database performance, and make informed scaling decisions. If you have any feedback or questions regarding this new feature, please do get in touch with Aptible support!