In this section, it is important to match the names as described for the tutorial to work, so that they match the format of the manifest in the tutorial repository:
cd ~/environment/secretecs
copilot init
APPNEW=ecsworkshop$(tr -dc a-z0-9 </dev/urandom | head -c 4 ; echo '') # create a short random string to give the application name a unique value.
copilot init --app $APPNEW --name todo-app --type 'Load Balanced Web Service' --dockerfile './Dockerfile' --port 4000 --deploy
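The suffix logic in the `APPNEW` line can be sketched on its own; this simply re-runs the same coreutils pipeline to show what it produces (the actual suffix is random on every run):

```shell
# Re-create the random-suffix logic from the APPNEW line above.
suffix=$(tr -dc a-z0-9 </dev/urandom | head -c 4)
APPNEW="ecsworkshop${suffix}"

# The suffix is always 4 lowercase alphanumeric characters, which is
# usually enough to keep the app name unique within one AWS account.
echo "$APPNEW"
```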
The values used are:
- Application name: ecsworkshop (note: this should be unique in your AWS account)
- Workload type: Load Balanced Web Service
- Service name: todo-app (this must be left 'as-is' for demo purposes)
- Dockerfile: ./Dockerfile
After a brief moment, you will be prompted to create a local environment.
yes
During this stage of the process, copilot does the initial infrastructure setup and prepares to create a new environment, including creating an ECR repository to store the container build images.
✔ Proposing infrastructure changes for the ecsworkshop environment.
- Creating the infrastructure for the ecsworkshop environment.  [create complete]  [82.1s]
  - An IAM Role for AWS CloudFormation to manage resources  [create complete]  [19.5s]
  - An ECS cluster to group your services  [create complete]  [12.7s]
  - An IAM Role to describe resources in your environment  [create complete]  [17.9s]
  - A security group to allow your containers to talk to each other  [create complete]  [4.9s]
  - An Internet Gateway to connect to the public internet  [create complete]  [16.6s]
  - Private subnet 1 for resources with no internet access  [create complete]  [16.4s]
  - Private subnet 2 for resources with no internet access  [create complete]  [16.4s]
  - Public subnet 1 for resources that can access the internet  [create complete]  [16.4s]
  - Public subnet 2 for resources that can access the internet  [create complete]  [16.4s]
  - A Virtual Private Cloud to control networking of your AWS resources  [create complete]  [16.6s]
Linking account XXXXXXX and region us-west-2 to application ecsworkshop.
Next, copilot builds the container image from the Dockerfile, pushes it to the ECR repository created earlier, and deploys the application, along with addon resources such as the Aurora Serverless database, to the newly created ECS cluster.
Deployment of the app via copilot goes through the following stages:
- Creating the infrastructure for stack ecsworkshop-test-todo-app [create in progress]
  - An Addons CloudFormation Stack for your additional AWS resources  [review in progress]
  - Service discovery for your services to communicate within the VPC  [create complete]
  - Update your environment's shared resources  [update in progress]
    - A security group for your load balancer allowing HTTP and HTTPS traffic  [create in progress]
  - An IAM Role for the Fargate agent to make AWS API calls on your behalf  [create complete]
  - A CloudWatch log group to hold your service logs  [create complete]
  - An ECS service to run and maintain your tasks in the environment cluster  [not started]
    - A target group to connect the load balancer to your service  [create complete]
  - An ECS task definition to group your containers and run them on ECS  [not started]
  - An IAM role to control permissions for the containers in your tasks  [create complete]
This step in the process takes a few minutes, so let’s dive into what is going on behind the scenes.
Copilot creates a new environment, by default called test, which is used throughout the rest of the tutorial. The manifest file created in the project defines everything needed for a load balanced web application. Read the full specification for the "Load Balanced Web Service" type at
https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/
Now let's review the manifest file itself:
# The manifest for the "todo-app" service.
# Your service name will be used in naming your resources like log groups, ECS services, etc.
name: todo-app
# The "architecture" of the service you're running.
type: Load Balanced Web Service

image:
  # Docker build arguments.
  # For additional overrides: https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/#image-build
  build: ./Dockerfile
  # Port exposed through your container to route traffic to it.
  port: 4000

http:
  # Requests to this path will be forwarded to your service.
  # To match all requests you can use the "/" path.
  path: "/"
  # You can specify a custom health check path. The default is "/".
  # For additional configuration: https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/#http-healthcheck
  # healthcheck: '/'
  # You can enable sticky sessions.
  # stickiness: true

# Number of CPU units for the task.
cpu: 256
# Amount of memory in MiB used by the task.
memory: 512
# Number of tasks that should be running in your service.
count: 1

# Optional fields for more advanced use-cases.
#
variables:  # Pass environment variables as key value pairs.
  LOG_LEVEL: info
#
# You can override any of the values defined above by environment.
# environments:
#   test:
#     count: 2  # Number of tasks to run for the "test" environment.
Copilot utilizes CloudFormation templates to provision infrastructure behind the scenes. The above manifest is generated when copilot init is run - but in the case of this tutorial, as long as you use the same service name and values, the process will use the file in the repository.
The main values here specify the service name and type, the Dockerfile to build, the container port to route traffic to, and the CPU, memory, and task count for the service.
Next, we create an Aurora Serverless Postgres Database Cluster via the addons
functionality of copilot.
Any additional AWS resource can be specified here by adding a CloudFormation template to the copilot/service-name/addons directory. This option is also available through the copilot CLI via the copilot storage init command.
This template also creates the secret to use with the database and enables credential rotation via a Lambda function. It also adds some missing networking configuration that allows the credential rotation lambda to communicate with Secrets Manager. It outputs the secret as an environment variable for our application to read.
The template enables parameters to be passed in from copilot, namely App, Env, and Name.
---
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  App:
    Type: String
    Description: Your application's name.
  Env:
    Type: String
    Description: The environment name your service, job, or workflow is being deployed to.
  Name:
    Type: String
    Description: The name of the service, job, or workflow being deployed.
Next, we add some missing networking components that allow the private subnets to communicate with Secrets Manager via a NAT Gateway.
Resources:
  EipA:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGatewayA:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt EipA.AllocationId
      SubnetId:
        !Select [
          0,
          !Split [
            ",",
            { "Fn::ImportValue": !Sub "${App}-${Env}-PublicSubnets" },
          ],
        ]
  RouteTableA:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: { "Fn::ImportValue": !Sub "${App}-${Env}-VpcId" }
  RouteTableAssociationA:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId:
        Ref: RouteTableA
      SubnetId:
        !Select [
          0,
          !Split [
            ",",
            { "Fn::ImportValue": !Sub "${App}-${Env}-PrivateSubnets" },
          ],
        ]
  DefaultRouteA:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId:
        Ref: RouteTableA
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NatGatewayA
  EipB:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGatewayB:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt EipB.AllocationId
      SubnetId:
        !Select [
          1,
          !Split [
            ",",
            { "Fn::ImportValue": !Sub "${App}-${Env}-PublicSubnets" },
          ],
        ]
  RouteTableB:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: { "Fn::ImportValue": !Sub "${App}-${Env}-VpcId" }
  RouteTableAssociationB:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId:
        Ref: RouteTableB
      SubnetId:
        !Select [
          1,
          !Split [
            ",",
            { "Fn::ImportValue": !Sub "${App}-${Env}-PrivateSubnets" },
          ],
        ]
  DefaultRouteB:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId:
        Ref: RouteTableB
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId:
        Ref: NatGatewayB
Next, we add the constructs needed for database cluster creation, including the secret for the database stored in AWS Secrets Manager, the appropriate DB subnets, and security groups. Finally, we attach the secret to the database cluster so that the database knows to pull credentials from Secrets Manager.
  SecurityGroupfromRDSStackdbCredentialsRotationSecurityGroup:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      IpProtocol: tcp
      Description: !Ref 'AWS::StackName'
      FromPort:
        Fn::GetAtt:
          - AuroraDBCluster
          - Endpoint.Port
      GroupId:
        Fn::GetAtt:
          - ClusterSecurityGroup
          - GroupId
      SourceSecurityGroupId:
        Fn::GetAtt:
          - RotationSecurityGroup
          - GroupId
      ToPort:
        Fn::GetAtt:
          - AuroraDBCluster
          - Endpoint.Port
  RotationSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      VpcId: { 'Fn::ImportValue': !Sub '${App}-${Env}-VpcId' }
      GroupDescription: !Ref 'AWS::StackName'
      SecurityGroupEgress:
        - IpProtocol: '-1'
          Description: Allow all outbound traffic by default
          CidrIp: 0.0.0.0/0
  AuroraSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: !Join ['/', [!Ref App, !Ref Env, !Ref Name, 'aurora-pg']]
      Description: !Join ['', ['Aurora PostgreSQL Main User Secret ', 'for CloudFormation Stack ', !Ref 'AWS::StackName']]
      GenerateSecretString:
        SecretStringTemplate: '{"username": "postgres"}'
        GenerateStringKey: "password"
        ExcludePunctuation: true
        IncludeSpace: false
        PasswordLength: 16
  SecretCredentialPolicy:
    Type: 'AWS::SecretsManager::ResourcePolicy'
    Properties:
      SecretId: !Ref AuroraSecret
      ResourcePolicy:
        Version: 2012-10-17
        Statement:
          - Action: 'secretsmanager:DeleteSecret'
            Resource: '*'
            Effect: Deny
            Principal:
              AWS: !Join
                - ''
                - - 'arn:'
                  - !Ref 'AWS::Partition'
                  - ':iam::'
                  - !Ref 'AWS::AccountId'
                  - ':root'
  DBSubnetGroup:
    Type: 'AWS::RDS::DBSubnetGroup'
    Properties:
      DBSubnetGroupDescription: !Ref 'AWS::StackName'
      SubnetIds: !Split [',', { 'Fn::ImportValue': !Sub '${App}-${Env}-PrivateSubnets' }]
  ClusterSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      SecurityGroupIngress:
        - ToPort: 5432
          FromPort: 5432
          IpProtocol: tcp
          Description: 'from 0.0.0.0/0:5432'
          CidrIp: 0.0.0.0/0
      VpcId: { 'Fn::ImportValue': !Sub '${App}-${Env}-VpcId' }
      GroupDescription: RDS security group
      SecurityGroupEgress:
        - IpProtocol: '-1'
          Description: Allow all outbound traffic by default
          CidrIp: 0.0.0.0/0
  AuroraDBCluster:
    Type: 'AWS::RDS::DBCluster'
    Properties:
      MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref AuroraSecret, ':SecretString:username}}']]
      MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref AuroraSecret, ':SecretString:password}}']]
      DatabaseName: 'tododb'
      Engine: aurora-postgresql
      EngineVersion: '10.7'
      EngineMode: serverless
      EnableHttpEndpoint: true
      StorageEncrypted: true
      DBClusterParameterGroupName: default.aurora-postgresql10
      DBSubnetGroupName: !Ref DBSubnetGroup
      VpcSecurityGroupIds:
        - !Ref ClusterSecurityGroup
      ScalingConfiguration:
        AutoPause: true
        MinCapacity: 2
        MaxCapacity: 8
        SecondsUntilAutoPause: 1000
    DeletionPolicy: Delete
  SecretAuroraClusterAttachment:
    Type: AWS::SecretsManager::SecretTargetAttachment
    Properties:
      SecretId: !Ref AuroraSecret
      TargetId: !Ref AuroraDBCluster
      TargetType: AWS::RDS::DBCluster
The output shown here becomes the environment variable the todo application needs to communicate with the database.
Outputs:
  PostgresData:  # injected as POSTGRES_DATA environment variable by Copilot.
    Description: "The JSON secret that holds the database username and password. Fields are 'host', 'dbname', 'username', 'password'."
    Value: !Ref AuroraSecret
This output is exposed as a variable called POSTGRES_DATA in the container environment. This environment variable is where the todo application gets its credentials to access the database.
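To see how an application might consume that variable, here is a small shell sketch using jq. The JSON values below are made up for illustration; only the field names (host, dbname, username, password) come from the output description above:

```shell
# Hypothetical example value for POSTGRES_DATA (all values are made up):
POSTGRES_DATA='{"host":"db.cluster-example.us-west-2.rds.amazonaws.com","dbname":"tododb","username":"postgres","password":"example-only"}'

# Pull out the individual connection fields with jq:
host=$(printf '%s' "$POSTGRES_DATA" | jq -r '.host')
dbname=$(printf '%s' "$POSTGRES_DATA" | jq -r '.dbname')
echo "connecting to ${dbname} on ${host}"
```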
Once the copilot process is finished, the last step for this tutorial is to get the load balancer URL from copilot and make a call to the application's migrate endpoint to populate the database.
URL=$(copilot svc show --json | jq -r '.routes[].url')
curl -s $URL/migrate | jq
This will produce JSON output showing a DROP, CREATE, and UPDATE to populate the database app with an initial todo item.
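The jq filter in the URL= line assumes copilot svc show --json returns a routes array; a sketch of the extraction against a made-up payload of that shape (the hostname is not real):

```shell
# Hypothetical sample of `copilot svc show --json` output, trimmed to the field used here:
json='{"routes":[{"environment":"test","url":"http://todo-a-Publi-example.us-west-2.elb.amazonaws.com"}]}'

# Same filter as above: take the url of every route entry.
URL=$(printf '%s' "$json" | jq -r '.routes[].url')
echo "$URL"
```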
To view the app, open a browser and go to the load balancer URL, ECSST-Farga-xxxxxxxxxx.yyyyy.elb.amazonaws.com:
This is a fully functional todo app. Try creating, editing, and deleting todos. Using the information output from deploy along with the secrets stored in Secrets Manager, connect to the Postgres Database using a database client or the psql
command line tool to browse the database.
Since this application uses Aurora Serverless, you can also use the query editor in the AWS Management Console - find more information here. All you need is the secret ARN created by Copilot; fetch it at the terminal and copy/paste it into the query editor dialog box:
aws secretsmanager list-secrets | jq -r '.SecretList[].ARN'
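If the account contains many secrets, the command above prints every ARN. A jq select can narrow it to the workshop secret by name; the sample payload below is a made-up stand-in for the real list-secrets output, keeping only the two fields used:

```shell
# Hypothetical sample of `aws secretsmanager list-secrets` output (account ID and ARN suffix are fake):
sample='{"SecretList":[{"ARN":"arn:aws:secretsmanager:us-west-2:111122223333:secret:ecsworkshop/test/todo-app/aurora-pg-AbCdEf","Name":"ecsworkshop/test/todo-app/aurora-pg"}]}'

# Keep only the secret whose name ends with aurora-pg (the name copilot's addon template builds):
arn=$(printf '%s' "$sample" | jq -r '.SecretList[] | select(.Name | endswith("aurora-pg")) | .ARN')
echo "$arn"
```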
First, let’s test the existing code for any errors.
cd ~/environment/ecsworkshop-secrets-demo
cdk synth
This creates the CloudFormation templates, which are output to a local directory, cdk.out. Successful output will contain the following (ignore any warnings generated):
Successfully synthesized to /home/ec2-user/environment/ecsworkshop-secrets-demo/cdk.out
Supply a stack id (VPCStack, RDSStack, ECSStack) to display its template.
(Note this is not a required step, as cdk deploy will generate the templates again - this is an intermediate step to ensure there are no errors in the stacks before proceeding. If you encounter errors here, stop and address them before deployment.)
Then, to deploy this application and all of its stacks, run:
cdk deploy --all --require-approval never --outputs-file result.json
The process takes approximately 10 minutes. The results of all the actions will be stored in result.json
for later reference.
Let's review what's happening behind the scenes.
The repository contains a sample application that deploys an ECS Fargate service. The service runs a Node.js application that connects to an Amazon Aurora Serverless database cluster. The credentials for this application are stored in AWS Secrets Manager.
First, let’s look at the application context variables:
{
  "app": "npx ts-node --prefer-ts-exts bin/secret-ecs-app.ts",
  "context": {
    "dbName": "tododb",
    "dbUser": "postgres",
    "dbPort": 5432,
    "containerPort": 4000,
    "containerImage": "registry.hub.docker.com/mptaws/secretecs"
  }
}
Custom CDK context variables are added to the JSON for the application to consume:
- dbName - name of the target database for the tutorial
- dbUser - database username
- dbPort - database port
- containerPort - port on which the container in the ECS cluster runs
- containerImage - image that will be deployed (here, a public image on Docker Hub)
These values will be referenced throughout the rest of the application using the function tryGetContext(<context-value>).
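As a rough shell analogue of that lookup (not how the CDK implements it internally), tryGetContext("dbName") behaves like reading the context object out of cdk.json:

```shell
# Write a minimal cdk.json with the same context keys used in this tutorial:
cat > /tmp/cdk-context-demo.json <<'EOF'
{
  "context": {
    "dbName": "tododb",
    "dbUser": "postgres",
    "dbPort": 5432
  }
}
EOF

# tryGetContext("dbName") resolves to the matching key under "context":
dbName=$(jq -r '.context.dbName' /tmp/cdk-context-demo.json)
echo "$dbName"
```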
Next, let's look at the CloudFormation stack constructs. The files in lib each represent a CloudFormation stack containing the component parts of the application infrastructure.
import { Construct } from 'constructs';
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

export interface VpcProps extends StackProps {
  maxAzs: number;
}

export class VPCStack extends Stack {
  readonly vpc: ec2.Vpc;

  constructor(scope: Construct, id: string, props: VpcProps) {
    super(scope, id, props);

    if (props.maxAzs !== undefined && props.maxAzs <= 1) {
      throw new Error('maxAzs must be at least 2.');
    }

    this.vpc = new ec2.Vpc(this, 'ecsWorkshopVPC', {
      ipAddresses: ec2.IpAddresses.cidr('10.0.0.0/16'),
      subnetConfiguration: [
        {
          cidrMask: 24,
          name: 'public',
          subnetType: ec2.SubnetType.PUBLIC,
        },
        {
          cidrMask: 24,
          name: 'private',
          subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS,
        },
      ],
    });
  }
}
The VPC stack creates a new VPC within the AWS account. The CIDR address space for this VPC is 10.0.0.0/16. It sets up 2 public subnets with NAT Gateways and 2 private subnets, with all the appropriate routing configured automatically. An interface is set up to pass in the value for maxAzs, which is set to 2 in the main application.
import { App, StackProps, Stack, Duration, RemovalPolicy, CfnOutput } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as rds from 'aws-cdk-lib/aws-rds';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

export interface RDSStackProps extends StackProps {
  vpc: ec2.Vpc
}

export class RDSStack extends Stack {
  readonly dbSecret: secretsmanager.Secret;
  readonly postgresRDSserverless: rds.ServerlessCluster;

  constructor(scope: App, id: string, props: RDSStackProps) {
    super(scope, id, props);

    const dbUser = this.node.tryGetContext("dbUser");
    const dbName = this.node.tryGetContext("dbName");
    const dbPort = this.node.tryGetContext("dbPort") || 5432;

    this.dbSecret = new secretsmanager.Secret(this, 'dbCredentialsSecret', {
      secretName: "ecsworkshop/test/todo-app/aurora-pg",
      generateSecretString: {
        secretStringTemplate: JSON.stringify({
          username: dbUser,
        }),
        excludePunctuation: true,
        includeSpace: false,
        generateStringKey: 'password'
      }
    });

    this.postgresRDSserverless = new rds.ServerlessCluster(this, 'postgresRdsServerless', {
      engine: rds.DatabaseClusterEngine.AURORA_POSTGRESQL,
      parameterGroup: rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql10'),
      vpc: props.vpc,
      enableDataApi: true,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
      credentials: rds.Credentials.fromSecret(this.dbSecret, dbUser),
      scaling: {
        autoPause: Duration.minutes(10), // default is to pause after 5 minutes of idle time
        minCapacity: rds.AuroraCapacityUnit.ACU_8, // default is 2 Aurora capacity units (ACUs)
        maxCapacity: rds.AuroraCapacityUnit.ACU_32, // default is 16 Aurora capacity units (ACUs)
      },
      defaultDatabaseName: dbName,
      deletionProtection: false,
      removalPolicy: RemovalPolicy.DESTROY
    });

    this.postgresRDSserverless.connections.allowFromAnyIpv4(ec2.Port.tcp(dbPort));

    new secretsmanager.SecretRotation(
      this,
      `ecsworkshop/test/todo-app/aurora-pg`,
      {
        secret: this.dbSecret,
        application: secretsmanager.SecretRotationApplication.POSTGRES_ROTATION_SINGLE_USER,
        vpc: props.vpc,
        vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
        target: this.postgresRDSserverless,
        automaticallyAfter: Duration.days(30),
      }
    );

    new CfnOutput(this, 'SecretName', { value: this.dbSecret.secretName });
  }
}
Every 30 days the secret will be rotated; the SecretRotation construct automatically configures a Lambda function to perform the rotation using the single user method. More information on the rotation Lambdas and methods for credential rotation can be found here.
Finally, the ECS service stack is defined in lib/ecs-fargate-stack.ts
The ECS Fargate cluster application is created here using the ecs-patterns library of the CDK. This automatically creates the service from a given containerImage and sets up a public-facing load balancer connected to the cluster. The key benefit here is not having to manually write all the boilerplate needed to make the application accessible to the world: the CDK simplifies infrastructure creation through abstraction.
The stored credentials created in the RDS stack are read from Secrets Manager and passed to our container task definition via the secrets property. The secret's unique ARN is passed into this stack as the parameter dbSecretArn.
import { App, Stack, StackProps, CfnOutput } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import * as secretsmanager from 'aws-cdk-lib/aws-secretsmanager';

export interface ECSStackProps extends StackProps {
  vpc: ec2.Vpc
  dbSecretArn: string
}

export class ECSStack extends Stack {
  constructor(scope: App, id: string, props: ECSStackProps) {
    super(scope, id, props);

    const containerPort = this.node.tryGetContext("containerPort");
    const containerImage = this.node.tryGetContext("containerImage");

    const creds = secretsmanager.Secret.fromSecretCompleteArn(this, 'postgresCreds', props.dbSecretArn);

    const cluster = new ecs.Cluster(this, 'Cluster', {
      vpc: props.vpc,
      clusterName: 'fargateClusterDemo'
    });

    const fargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, "fargateService", {
      cluster,
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry(containerImage),
        containerPort: containerPort,
        enableLogging: true,
        secrets: {
          POSTGRES_DATA: ecs.Secret.fromSecretsManager(creds)
        }
      },
      desiredCount: 1,
      publicLoadBalancer: true,
      serviceName: 'fargateServiceDemo'
    });

    new CfnOutput(this, 'LoadBalancerDNS', { value: fargateService.loadBalancer.loadBalancerDnsName });
  }
}
Finally, the stacks and the CDK infrastructure application itself are created in bin/secret-ecs-app.ts
, the entry point for the cdk defined in the cdk.json
mentioned earlier.
import { App } from 'aws-cdk-lib';
import { VPCStack } from '../lib/vpc-stack';
import { RDSStack } from '../lib/rds-stack';
import { ECSStack } from '../lib/ecs-fargate-stack';

const app = new App();

const vpcStack = new VPCStack(app, 'VPCStack', {
  maxAzs: 2
});

const rdsStack = new RDSStack(app, 'RDSStack', {
  vpc: vpcStack.vpc,
});
rdsStack.addDependency(vpcStack);

const ecsStack = new ECSStack(app, "ECSStack", {
  vpc: vpcStack.vpc,
  dbSecretArn: rdsStack.dbSecret.secretArn,
});
ecsStack.addDependency(rdsStack);
A new CDK app is created with const app = new App(), and the aforementioned stacks from lib are instantiated. After the VPC stack is created, its VPC object is passed into the RDS and ECS stacks. A dependency is added to ensure the VPC is created before the RDS stack.
When creating the ECS stack, the same VPC object is passed in, along with a reference to the dbSecretArn generated by the RDS stack, so that the ECS stack can look up the appropriate secret. A dependency is added so that the ECS stack is created after the RDS stack.
After deployment finishes, the last step for this tutorial is to get the LoadBalancer URL and run the migration which populates the database.
url=$(jq -r '.ECSStack.LoadBalancerDNS' result.json)
curl -s $url/migrate | jq
(Note that the migration may take a few seconds to connect and run.)
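The jq lookup above relies on the structure cdk deploy writes with --outputs-file: a top-level key per stack, containing that stack's outputs. A sketch against a made-up result.json of that shape (the DNS name is fake):

```shell
# Hypothetical shape of result.json written by `--outputs-file` (DNS name is made up):
cat > /tmp/result-demo.json <<'EOF'
{
  "ECSStack": {
    "LoadBalancerDNS": "ECSST-Farga-example.us-west-2.elb.amazonaws.com"
  }
}
EOF

# Same filter as the tutorial: stack name, then output key.
url=$(jq -r '.ECSStack.LoadBalancerDNS' /tmp/result-demo.json)
echo "$url"
```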
The custom method migrate
creates the database schema and a single row of data for the sample application. It is part of the sample application in this tutorial.
To view the app, open a browser and go to the Load Balancer URL ECSST-Farga-xxxxxxxxxx.yyyyy.elb.amazonaws.com
(the URL is clickable in the Cloud9 interface):
This is a fully functional todo app. Try creating, editing, and deleting todo items. Using the information output from deploy along with the secrets stored in Secrets Manager, connect to the Postgres Database using a database client or the psql
command line tool to browse the database.
As an added benefit of using RDS Aurora Postgres Serverless, you can also use the query editor in the AWS Management Console - find more information here. All you need is the secret ARN created during stack creation. Fetch this value at the Cloud9 terminal and copy/paste into the query editor dialog box. Use the database name tododb
as the target database to connect.
aws secretsmanager list-secrets | jq -r '.SecretList[].ARN'