Console access to ECS containers

AWS Elastic Container Service (ECS) is a fully managed container orchestration platform. It features deep integration with other AWS services and provides all the functionality to cover the container hosting needs for most organisations. It’s a compelling option for businesses that want to focus on their business rather than building, maintaining and upgrading their container platform.

But it’s not perfect! One fly in the ointment is shell access to containers. This has always been possible, but historically required some effort on your part. I’ll briefly cover those older approaches first before introducing ECS Exec, which now offers a built-in way to interactively debug a task running on ECS.

A quick ECS primer

AWS ECS utilises multiple backends for running containers.

At launch, ECS only supported the EC2 backend, which uses EC2 instances as worker nodes on which ECS runs your containers. This brings the additional overhead of maintaining that fleet of instances, and the additional cost of operating that infrastructure.

In 2017, AWS announced a new serverless backend called Fargate. This removed the need to run any instances yourself, with AWS entirely managing the backend fleet of instances that run your tasks. At launch, Fargate came with a significant price tag which deterred many potential users, but a roughly 50% reduction in costs at the start of 2019 made it a far more attractive option.

The key thing to remember is that ECS can use EC2 instances (managed by yourself) or Fargate (fully managed by AWS) as the backend. For most use cases, Fargate is the preferred backend.


When running on the ECS/EC2 backend it was possible to log in to the EC2 instance hosting the container you were interested in, and then use the standard docker commands (docker ps, docker exec) to start an interactive shell in the container.

With the ECS/Fargate backend there is no instance to log in to, so a different approach was required; this second approach works for both ECS/EC2 and ECS/Fargate. For both backends the underlying infrastructure is essentially the same, consisting of standard AWS VPC networking and compute primitives, so it’s entirely possible to run SSH inside your container and expose that service either privately or publicly to get direct SSH access to the container’s console. The downside is the extra overhead of running SSH, plus the need for a jump host / bastion if the container only has private addressing. Logging and auditing can also become problematic.

ECS Exec

Irrespective of which backend you use, ECS Exec now provides native support for opening a shell in your running containers, with no need for access to the underlying instances, SSH, jump/bastion hosts or public addressing.

ECS Exec builds upon Session Manager, a feature of AWS Systems Manager that provides the same functionality for EC2 instances. We’ve blogged about that previously - Should You Use AWS EC2 Instance Connect to SSH Into Your Instances?

With all the background out of the way, how can we use this new ECS Exec functionality?

If you’re using ECS/EC2 then your EC2 instances need to be running an ECS optimised AMI released after January 20th 2021, with ECS agent version 1.50.2 or higher. For Fargate, you simply need to be on platform version 1.4.0 or higher. On your local machine you’ll need a relatively modern version of the AWS CLI, and the Session Manager plugin.
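As a quick local sanity check, something like the following confirms both tools are on your PATH. This is just a sketch, not an official checker:

```shell
# Quick sanity check that the AWS CLI and the Session Manager plugin are
# installed locally. Builds up a status string with one entry per tool.
status=""
for tool in aws session-manager-plugin; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="$status $tool=found"
  else
    status="$status $tool=missing"
  fi
done
echo "$status"
```

If either tool reports missing, install it before going any further.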

Under the hood ECS Exec uses the same mechanism as Session Manager, so it’s not surprising that you need to expand the permissions for your task IAM role to allow the container to access those APIs:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ssmmessages:CreateControlChannel",
                    "ssmmessages:CreateDataChannel",
                    "ssmmessages:OpenControlChannel",
                    "ssmmessages:OpenDataChannel"
                ],
                "Resource": "*"
            }
        ]
    }
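One way to attach a policy like the one above is as an inline policy on the task role via the CLI. In this sketch the role and policy names are placeholders; the snippet writes the policy to a file and validates that it parses as JSON, with the live put-role-policy call left commented out:

```shell
# Write the ECS Exec permissions to a file and check it parses as JSON.
# Role and policy names below are placeholders, not from this post.
cat > ecs-exec-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }
    ]
}
EOF
python3 -m json.tool ecs-exec-policy.json >/dev/null && echo "policy JSON is valid"

# Attach as an inline policy on the task role (uncomment to run for real):
# aws iam put-role-policy --role-name my-task-role \
#   --policy-name ecs-exec --policy-document file://ecs-exec-policy.json
```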

In your task definition you may optionally enable an init process for your containers by adding this to each container definition:

    {
        "containerDefinitions": [
            {
                ...
                "linuxParameters": {
                    "initProcessEnabled": true
                }
            }
        ]
    }

This runs an init process inside the container that forwards signals and reaps zombie processes left over from your interactive sessions.
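Putting that fragment into a (deliberately minimal, illustrative) task definition might look like the snippet below; the family, container name and image are placeholders. It validates the JSON and checks the flag locally, with the register call commented out:

```shell
# Minimal illustrative task definition with the init process enabled.
# All names and the image are placeholders.
cat > taskdef.json <<'EOF'
{
    "family": "my-app",
    "containerDefinitions": [
        {
            "name": "app",
            "image": "public.ecr.aws/docker/library/busybox:latest",
            "linuxParameters": {
                "initProcessEnabled": true
            }
        }
    ]
}
EOF
# Confirm the flag is actually set in the file we just wrote.
python3 -c 'import json; td = json.load(open("taskdef.json")); \
assert td["containerDefinitions"][0]["linuxParameters"]["initProcessEnabled"]; \
print("initProcessEnabled is set")'

# Register the revision (uncomment to run for real):
# aws ecs register-task-definition --cli-input-json file://taskdef.json
```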

All that remains is to enable ECS Exec support on your service and/or tasks. ECS Exec is only enabled for future tasks; it can’t be retrospectively enabled on running ones. At the time of writing it’s not possible to enable ECS Exec from the AWS Console, so we must use the CLI (or make the appropriate API calls), although you can still create a new service through the console and then use the CLI to enable ECS Exec afterwards. Here’s an example command for enabling ECS Exec on an existing service:

aws ecs update-service --service ${service-name} --cluster ${cluster-name} --enable-execute-command

Once updated, all future tasks will start with the functionality enabled.
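Since only new tasks pick up the setting, you may want to force a deployment and then confirm a fresh task has it. The describe-tasks response exposes an enableExecuteCommand field; in the sketch below the live aws calls are commented out (cluster/service names are placeholders) and the JSON handling is demonstrated on an illustrative sample response:

```shell
# Replace running tasks with exec-enabled ones (uncomment to run for real):
# aws ecs update-service --cluster my-cluster --service my-service \
#   --force-new-deployment
#
# Then check a fresh task (uncomment to run for real):
# aws ecs describe-tasks --cluster my-cluster --tasks <task-id> \
#   --query 'tasks[0].enableExecuteCommand'
#
# The sample response below is illustrative only.
sample='{"tasks": [{"enableExecuteCommand": true}]}'
enabled=$(echo "$sample" | python3 -c 'import json, sys; print(json.load(sys.stdin)["tasks"][0]["enableExecuteCommand"])')
echo "exec enabled: $enabled"
```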

Console access!

With the prep work done, we can now access the console on one of our tasks using the AWS CLI:

aws ecs execute-command --cluster ${cluster-name} --task ${task-id} --container ${container-name} --interactive --command /bin/sh
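If you don’t want to look up task IDs by hand, a hypothetical convenience wrapper could grab a service’s first running task and shell into it. The aws calls are commented out here (cluster, service and container names are placeholders); the ARN-to-ID parsing runs on an illustrative ARN:

```shell
# Fetch the first task ARN for a service (uncomment to run for real):
# task_arn=$(aws ecs list-tasks --cluster my-cluster --service-name my-service \
#   --query 'taskArns[0]' --output text)
#
# Illustrative ARN standing in for the real lookup:
task_arn="arn:aws:ecs:eu-west-1:123456789012:task/my-cluster/0123456789abcdef0"
# The task ID is the final path segment of the ARN.
task_id="${task_arn##*/}"
echo "task id: $task_id"

# Open a shell in it (uncomment to run for real):
# aws ecs execute-command --cluster my-cluster --task "$task_id" \
#   --container app --interactive --command /bin/sh
```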

In addition to interactive console access, this functionality can also be used to trigger one-off commands inside existing containers that would otherwise require starting a new temporary container.

Further reading

This quick run through of ECS Exec shows how to get up and running. Before deploying it in production I would strongly recommend reading the official documentation, particularly around the limitations, auditing and security aspects of the feature.

I’m a consultant at The Scale Factory where we empower technology teams to deliver and secure workloads on the AWS cloud, through consultancy, engineering, support, and training. If you’d like to find out how we can support you and your team to run your workload in the cloud using containers, get in touch.