Deploy Fargate Capacity Provider Strategy

Enable Fargate capacity provider on existing cluster

First, we will update our ECS cluster to enable the Fargate capacity providers. Because the cluster already exists, we have to do this via the CLI; at present, it can't be done via the console on existing clusters.

Using the AWS CLI, run the following command:

aws ecs put-cluster-capacity-providers \
--cluster container-demo \
--capacity-providers FARGATE FARGATE_SPOT \
--default-capacity-provider-strategy \
capacityProvider=FARGATE,weight=1,base=1 \
capacityProvider=FARGATE_SPOT,weight=4

With this command, we’re adding the Fargate and Fargate Spot capacity providers to our ECS Cluster. Let’s break it down by each parameter:

  • --cluster: we’re simply passing in our cluster name that we want to update the capacity provider strategy for.
  • --capacity-providers: this is where we pass in the capacity providers that we want enabled on the cluster. Since we do not use EC2-backed ECS tasks, we don’t need to create a cluster capacity provider prior to this. Note that FARGATE and FARGATE_SPOT are the only two options when using Fargate.
  • --default-capacity-provider-strategy: this is setting a default strategy on the cluster; meaning, if a task or service gets deployed to the cluster without a strategy and launch type set, it will default to this. Let’s break the base/weight down to get a better understanding.

The base value designates how many tasks, at a minimum, to run on the specified capacity provider. Only one capacity provider in a capacity provider strategy can have a base defined.

The weight value designates the relative percentage of the total number of launched tasks that should use the specified capacity provider. For example, if you have a strategy that contains two capacity providers, and both have a weight of 1, then when the base is satisfied, the tasks will be split evenly across the two capacity providers. Using that same logic, if you specify a weight of 1 for capacityProviderA and a weight of 4 for capacityProviderB, then for every one task that is run using capacityProviderA, four tasks would use capacityProviderB.

In the command we ran, we are stating that we want a minimum of one Fargate task as our base; after that, for every one task using the Fargate capacity provider, four tasks will use Fargate Spot.
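As a back-of-the-envelope check, the base/weight split can be sketched with shell arithmetic. This is only an integer approximation of how the scheduler divides tasks, not ECS code, and the desired count of 11 is an arbitrary example:

```shell
# Rough sketch of how base/weight divides tasks (integer approximation).
# Assumes 11 desired tasks, base=1 on FARGATE, weights 1 (FARGATE) : 4 (FARGATE_SPOT).
total=11
base=1
w_fargate=1
w_spot=4
remainder=$((total - base))
fargate=$((base + remainder * w_fargate / (w_fargate + w_spot)))
spot=$((remainder * w_spot / (w_fargate + w_spot)))
echo "FARGATE=$fargate FARGATE_SPOT=$spot"   # FARGATE=3 FARGATE_SPOT=8
```

So with 11 tasks: one base task lands on Fargate, and the remaining ten split 2:8 by weight, giving 3 on Fargate and 8 on Fargate Spot.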

Next, let’s navigate to the service repo and to the fargate directory. This is where we’ll do the rest of the work.

cd ~/environment/ecsdemo-capacityproviders/fargate

Meet the application

The application we are deploying is a simple API that returns the ARNs of the tasks running in the cluster along with the capacity provider each one is using, as well as the ARN and provider of the specific container that served the request. It’s a simple application that lets us see the strategy in action in real time.

Here is what we should see when we hit the load balancer URL after we deploy the application:


Like our previous services, we are using the CDK to deploy. Let’s go ahead and deploy it, and then dive into and review the code!


Review what changes are being proposed:

cdk diff

Deploy the service

cdk deploy --require-approval never
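If you want to confirm the rollout finished before testing, the AWS CLI can block until the service reaches a steady state. The service name below is a placeholder; substitute the name the CDK stack actually created:

```shell
# Optionally wait for the new service to stabilize before hitting it.
# Find the real service name with: aws ecs list-services --cluster container-demo
aws ecs wait services-stable \
  --cluster container-demo \
  --services <your-service-name>
```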

Code Review

Let's dive in

Post deploy

Once the deployment is finished, copy the load balancer URL, and paste it into your browser. The output should look something like this:


You can go directly to the URL in the browser to see the JSON response. Or, if you want to see it on the command line, you can curl the load balancer.

Here is what to run to see the output from the command line:

curl -s <paste-load-balancer-url-here> | jq

The command line output should look something like this:


Whether you are in the browser or using the command line, go ahead and refresh a few times. You should see that you are routed to different containers via the load balancer on each new request. The containers responding will be running on either the Fargate or Fargate Spot capacity provider.
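Refreshing by hand works, but a short loop makes the distribution easier to see. The `LB_URL` value and the `Provider` JSON field below are assumptions; adjust them to your load balancer URL and the field name in the API's actual response:

```shell
# Hit the load balancer 20 times and tally which capacity provider
# served each response. Field name ".Provider" is an assumption --
# check one raw response first to confirm the actual key.
LB_URL="http://<paste-load-balancer-url-here>"
for i in $(seq 1 20); do
  curl -s "$LB_URL" | jq -r '.Provider'
done | sort | uniq -c
```

With the 1:4 weighting, the FARGATE_SPOT count should dominate once the base task is accounted for.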

From a cluster administrator point of view, you can also easily check how your tasks are spread across capacity providers with the following CLI command:

aws ecs describe-tasks --cluster container-demo \
                       --tasks \
                         $(aws ecs list-tasks --cluster container-demo --query 'taskArns[]' --output text) \
                       --query 'sort_by(tasks,&capacityProviderName)[].{ 
                                          Id: taskArn, 
                                          AZ: availabilityZone, 
                                          CapacityProvider: capacityProviderName, 
                                          LastStatus: lastStatus, 
                                          DesiredStatus: desiredStatus}' \
                        --output table

The output will be similar to the following:

|                                                                  DescribeTasks                                                                  |
|     AZ     | CapacityProvider  | DesiredStatus  |                                      Id                                        | LastStatus   |
|------------|-------------------|----------------|--------------------------------------------------------------------------------|--------------|
|  us-west-2a|  FARGATE          |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/00fd41c9-6b7b-41a3-8b37-fb2404b58cb8  |  RUNNING     |
|  us-west-2b|  FARGATE          |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/56e5e043-be66-4d18-ac52-7156c2eadd6c  |  RUNNING     |
|  us-west-2c|  FARGATE          |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/9dad79c0-fd66-4785-a11e-a6b4c586157b  |  RUNNING     |
|  us-west-2b|  FARGATE_SPOT     |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/36a51210-a869-4933-b028-c8ee9b3243dd  |  RUNNING     |
|  us-west-2c|  FARGATE_SPOT     |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/3a864ba5-fe60-42f8-83b2-4de8838a99ac  |  RUNNING     |
|  us-west-2a|  FARGATE_SPOT     |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/665ecef8-c0a5-45db-8fcf-f06c02cfe16b  |  RUNNING     |
|  us-west-2c|  FARGATE_SPOT     |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/69361c15-f76d-4765-b1c9-236f1124c28f  |  RUNNING     |
|  us-west-2a|  FARGATE_SPOT     |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/dd0aa6dc-3a5e-4692-9e10-f5d7a169c4d6  |  RUNNING     |
|  us-west-2c|  FARGATE_SPOT     |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/e60a98b4-adca-4f38-98ab-bb44c4890455  |  RUNNING     |
|  us-west-2a|  FARGATE_SPOT     |  RUNNING       |  arn:aws:ecs:us-west-2:012345678910:task/f3e4db9d-effb-41c6-bf9e-527a5cc58603  |  RUNNING     |

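If you only care about the per-provider totals rather than the full table, the same describe call can be reduced to a tally. This is a sketch built from the same commands as above and requires AWS credentials for the demo account:

```shell
# Count running tasks per capacity provider in the demo cluster.
aws ecs describe-tasks --cluster container-demo \
  --tasks $(aws ecs list-tasks --cluster container-demo --query 'taskArns[]' --output text) \
  --query 'tasks[].capacityProviderName' --output text \
  | tr '\t' '\n' | sort | uniq -c
```

For the table shown above, this would report 3 FARGATE and 8 FARGATE_SPOT tasks.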
To learn all about Fargate Spot, check out this blog post


Here’s what we accomplished in this section of the workshop:

  • We updated our ECS Cluster’s default capacity provider strategy, which ensures that if no launch type or capacity provider strategy is set, services will get deployed using the default mix of Fargate and Fargate Spot.
  • We deployed a service with multiple tasks, and saw the capacity provider strategy choose what type of Fargate task to launch (Fargate vs Fargate Spot).
  • While this was just an example, it translates to many real-world use cases. By simply setting the base and weights between Fargate and Fargate Spot, we can take advantage of the cost savings of Fargate Spot in our everyday workloads. Of course, it’s important to understand that Spot tasks can be terminated at any time when capacity requirements change (for more information, check out the official AWS documentation). That is why we set the default strategy to a mix of Fargate and Fargate Spot: if Spot tasks are terminated, we still have our minimum desired number of tasks running on Fargate.
  • The setting we chose was a mix of the two providers (Fargate and Fargate Spot). You can also stick to a single provider (Fargate or Fargate Spot), defined either when you deploy your service or as the default for the cluster.
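A service-level strategy overrides the cluster default at deploy time via the --capacity-provider-strategy flag on create-service. The sketch below pins a service to Fargate Spot only; the service and task definition names are placeholders, and a real awsvpc deployment would also need a --network-configuration argument:

```shell
# Sketch: deploy a service that runs exclusively on Fargate Spot,
# overriding the cluster's default strategy. Names are placeholders.
aws ecs create-service \
  --cluster container-demo \
  --service-name my-spot-only-service \
  --task-definition my-task-def \
  --desired-count 3 \
  --capacity-provider-strategy capacityProvider=FARGATE_SPOT,weight=1
```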


Run the cdk command to delete the service (and dependent components) that we deployed.

cdk destroy -f

Next, go back to the ECS Cluster in the console. In the top right, select Update Cluster.


Under Default capacity provider strategy, click the x next to all of the strategies until there are no more left to remove. Once you’ve done that, click Update.


Up Next

In the next section, we’re going to:

  • Add EC2 instances to our cluster
  • Change the strategy to use EC2 as the default Capacity Provider
  • Enable Cluster Auto Scaling
  • Deploy a service, trigger load to the service so desired count exceeds current capacity, and watch as the cluster autoscaling takes action.