Logs from HTTP(S) Endpoints can be routed to Log Drains (select this option when creating the Log Drain). These logs include every request your Endpoint receives, as well as most errors resulting from your App's failure to serve a response.

❗️ The Log Drain that receives these logs cannot be pointed at an HTTPS endpoint in the same account. This would cause an infinite loop of logging, eventually crashing your Log Drain. Instead, you can host the target of the Log Drain in another account or use an external service.


Logs are generated by Nginx in the following format; see the Nginx documentation for definitions of the individual fields:

$remote_addr:$remote_port $ssl_protocol/$ssl_cipher $host $remote_user [$time_local] "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" "$http_x_amzn_trace_id" "$http_x_forwarded_for";
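The format above translates directly into a regular expression if you need to consume these lines programmatically. The following is an illustrative Python sketch, not an official parser; group names mirror the Nginx variables, and absent values appear as "-" in the log:

```python
import re

# Regex mirroring the Endpoint log format shown above.
LOG_PATTERN = re.compile(
    r'^(?P<remote_addr>\S+):(?P<remote_port>\d+) '
    r'(?P<ssl_protocol>\S+)/(?P<ssl_cipher>\S+) '
    r'(?P<host>\S+) (?P<remote_user>\S+) '
    r'\[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<body_bytes_sent>\d+) (?P<request_time>\S+) '
    r'"(?P<http_referer>[^"]*)" "(?P<http_user_agent>[^"]*)" '
    r'"(?P<http_x_amzn_trace_id>[^"]*)" "(?P<http_x_forwarded_for>[^"]*)";$'
)

def parse_endpoint_log(line: str) -> dict:
    """Return the log fields as a dict, or raise ValueError if unparseable."""
    match = LOG_PATTERN.match(line)
    if match is None:
        raise ValueError(f"unrecognized log line: {line!r}")
    return match.groupdict()
```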

❗️ The $remote_addr field is not the client’s real IP: it is the private network address associated with your Endpoint. To identify the IP address of the client that connected to your App, refer to the X-Forwarded-For header. See HTTP Request Headers for more information.
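A small helper for pulling an address out of that header. This is an illustrative sketch, not an Aptible-provided function, and the trusted_proxies parameter is an assumption for Apps fronted by additional proxies of your own:

```python
def client_ip(x_forwarded_for: str, trusted_proxies: int = 0) -> str:
    """Pick the client address out of an X-Forwarded-For header.

    Each proxy appends the address it received the connection from, so the
    right-most entry was added by the last (most trusted) hop. With no
    intermediate proxies of your own, that entry is the address the
    Endpoint observed; earlier entries are client-supplied and spoofable.
    """
    hops = [h.strip() for h in x_forwarded_for.split(",")]
    return hops[-1 - trusted_proxies]
```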

📘 You should log the X-Amzn-Trace-Id header in your App, especially if you are proxying this request to another destination. This header will allow you to track requests as they are passed between services.
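A sketch of that propagation in Python. The function name is hypothetical, and real HTTP headers are case-insensitive, which this simplified dict-based version ignores:

```python
TRACE_HEADER = "X-Amzn-Trace-Id"

def proxied_headers(incoming: dict) -> dict:
    """Build headers for an outbound (proxied) request, carrying over the
    X-Amzn-Trace-Id so the request can be correlated across services.

    Simplification: a production implementation should match the header
    name case-insensitively.
    """
    outbound = {}
    trace_id = incoming.get(TRACE_HEADER)
    if trace_id:
        outbound[TRACE_HEADER] = trace_id
    return outbound
```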


For Log Drains that support embedding metadata in the payload (HTTPS Log Drains and Self-Hosted Elasticsearch Log Drains), the following keys are included:

  • endpoint_hostname: The hostname of the specific Endpoint that serviced this request (e.g., elb-shared-us-east-1-doit-123456.aptible.in)
  • endpoint_id: The unique Endpoint ID

Configuration Options

Aptible offers a few ways to customize which events are logged in your Endpoint Logs. These are set as Configuration variables, so they apply to all Endpoints for the given App.


Endpoint Logs will always emit an error if your App container fails Runtime Health Checks, but by default, they do not log the health check request itself. These are not user requests, are typically very noisy, and are almost never useful since any errors for such requests are logged. See Common Errors for further information about identifying Runtime Health Check errors.

Setting the corresponding Configuration variable to any value will cause these requests to be logged.

Common Errors

When your App does not respond to a request, the Endpoint will return an error response to the client. The client will be served a page that says This application crashed, and you will find a log of the corresponding request and error message in your Endpoint Logs. In these errors, “upstream” means your App.

📘 If you have a Custom Maintenance Page then you will see your maintenance page instead of This application crashed.


502

This response code is generally returned when your App generates a partial or otherwise incomplete response. The specific error logged is usually one of the following messages:

(104: Connection reset by peer) while reading response header from upstream
upstream prematurely closed connection while reading response header from upstream

These errors can be attributed to several possible causes:

  • Your Container exceeded the Memory Limit for your Service while serving this request. You can tell if your Container has been restarted after exceeding its Memory Limit by looking for the message container exceeded its memory allocation in your Log Drains.
  • Your Container exited unexpectedly for some reason other than a deploy, restart, or exceeding its Memory Limit. This is typically caused by a bug in your App or one of its dependencies. If your Container unexpectedly exits, you will see container has exited in your logs.
  • A timeout was reached in your App that is shorter than the Endpoint Timeout.
  • Your App encountered an unhandled exception.
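As an example of the timeout cause above, many application servers enforce their own request timeout that can fire before the Endpoint’s. The sketch below assumes a gunicorn-based Python App (gunicorn’s default worker timeout is 30 seconds): a worker killed mid-request closes the connection, which the Endpoint logs as upstream prematurely closed connection. Raising the worker timeout while keeping it below the Endpoint Timeout avoids this particular failure mode:

```
# Procfile sketch (assumes a gunicorn-based Python App; the 55-second value
# is illustrative and should stay below your Endpoint's idle timeout).
web: gunicorn --timeout 55 app:app
```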


504

This response code is generally returned when your App accepts a connection but does not respond at all, or does not respond in a timely manner.

The following error message is logged along with the 504 response if the request reaches the idle timeout. See Endpoint Timeouts for more information.

(110: Operation timed out) while reading response header from upstream

The following error message is logged along with the 504 response if the Endpoint cannot establish a connection to your Container at all:

(111: Connection refused) while connecting to upstream

A connection refused error can be attributed to several possible causes related to the service being unreachable:

  • Your Container is in the middle of restarting due to exceeding the Memory Limit for your Service or because it exited unexpectedly for some reason other than a deploy, restart, or exceeding its Memory Limit.
  • The process inside your Container cannot accept any more connections.
  • The process inside your Container has stopped responding, or has stopped running entirely.
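The difference between a refused connection and a timeout can be reproduced with a plain TCP connection. This is a minimal, Aptible-agnostic Python sketch of what the Endpoint sees when nothing is listening on your Container’s port:

```python
import socket

def try_connect(host: str, port: int, timeout: float = 1.0) -> str:
    """Attempt a TCP connection the way the Endpoint does, returning a
    short description of the outcome."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "connected"
    except ConnectionRefusedError:
        # Nothing is listening on the port: the condition the Endpoint
        # logs as "(111: Connection refused)".
        return "refused"
    except TimeoutError:
        # The host did not answer at all before the timeout elapsed,
        # analogous to "(110: Operation timed out)".
        return "timed out"
```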

Runtime Health Check Errors

Runtime Health Check Errors will be denoted by an error message like the ones documented above and will reference a request path of /healthcheck.

See Runtime Health Checks for more details about how these checks are performed.

Uncommon Errors


499

A 499 is not a response code returned to the client; rather, a 499 “response” in the Endpoint log indicates that the client closed the connection before the response was returned. This could be because the user closed their browser or otherwise did not wait for a response.

If you have any other proxy in front of this Endpoint, it may mean that this request reached the other proxy’s idle timeout.

“worker_connections are not enough”

This error occurs when there are more concurrent requests than the Endpoint can handle. This can be caused by an increase in the number of users accessing your system, or indirectly by a performance bottleneck causing connections to remain open much longer than usual.

The total number of concurrent requests that can be served can be increased by Scaling your App horizontally to add more Containers. However, if the root cause is poor performance in a dependency such as a Database, scaling may only exacerbate the underlying issue.

If scaling your resources appropriately to the load does not resolve this issue, please contact Aptible Support.