
AWS Elastic Beanstalk MultiContainer Docker deployment with AWS CodeBuild

In this tutorial I’ll demonstrate how to deploy an AWS Elastic Beanstalk multicontainer Docker environment by using AWS CodeBuild. AWS CodeBuild is a new service announced at re:Invent 2016. It is a fully managed, scalable build service that compiles our source code and produces artifacts.

Today, for our demonstration, I’ll use an application that consists of an nginx proxy server and a Flask application server running as Docker containers. Our sample architecture for the demo and the services I’ll use are listed below:

  • AWS CodeCommit: I’ll use it as the repository for my source code.
  • AWS CodeBuild: I’ll use it for creating the Docker image, pushing it to AWS ECR, and other build steps.
  • AWS ECR: I’ll use it as the repository for my Docker images. Remember that, for configuring AWS Elastic Beanstalk, we need to define our images in our Dockerrun.aws.json file.
  • AWS CodeDeploy: AWS CodePipeline uses it to deploy to AWS Elastic Beanstalk.
  • AWS Elastic Beanstalk: I’ll use it to run my code.
  • Amazon ECS: AWS Elastic Beanstalk uses Amazon ECS to run the Docker service for multicontainer deployments.
  • AWS CodePipeline: I’ll use it to automate my deployment steps.

 

Architecture
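
Roughly sketched, the flow looks like this (the stage names match the pipeline steps explained below):

git push ──> AWS CodeCommit ──> AWS CodePipeline
                                  ├── Source: pull the code, create the first artifact
                                  ├── Build: AWS CodeBuild builds the Docker image and pushes it to AWS ECR
                                  └── Deploy: AWS Elastic Beanstalk (Amazon ECS pulls the image from ECR)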

Before explaining the steps of the pipeline, let’s look at the source code directory structure (you can find the code here).

There are two main folders: one for the Flask app (Python code, requirements, etc.) and one for the nginx proxy (it holds the nginx configuration file). There is a Dockerfile that builds my Flask app, and a _Dockerrun.aws.json file (we’ll rename it later) to configure AWS Elastic Beanstalk. The layout is sketched below.
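
For reference, the layout looks roughly like this (the name of the conf file under proxy/conf.d is an assumption):

.
├── Dockerfile
├── _Dockerrun.aws.json
├── buildspec.yml
├── flask-app/
│   ├── hello.py
│   └── requirements.txt
└── proxy/
    └── conf.d/
        └── default.conf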

Dockerfile:

FROM python

# Create the application directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds
COPY flask-app/requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the source code
COPY . /usr/src/app

CMD ["python", "flask-app/hello.py"]

 

_Dockerrun.aws.json: Here, we create a volume and map it into the container so nginx will use our conf file. We also link the nginx-proxy container to the app container in our configuration.

{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/proxy/conf.d"
      }
    }  
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "image": "111122223333.dkr.ecr.eu-west-1.amazonaws.com/awsome-ecr-repo:latest",
      "essential": true,
      "memory": 128
    },
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "app"
      ],
      "mountPoints": [
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/conf.d",
          "readOnly": true
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        }
      ]
    }
  ]
}
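
Because of the link, nginx can reach the Flask container under the hostname app. A minimal proxy configuration for the mounted proxy/conf.d directory might look like this (a sketch; the file name default.conf and the upstream port 5000 are assumptions):

server {
    listen 80;

    location / {
        # "app" resolves through the container link defined above
        proxy_pass http://app:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}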

There is also a buildspec.yml file. AWS CodeBuild uses this file; it defines the phases of our build.

buildspec.yml: The details are explained later.

version: 0.1

phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
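      # aws ecr get-login prints a docker login command; the $( ) wrapper executes it to authenticate the Docker daemon with ECR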
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...          
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG      
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
      - echo renaming _Dockerrun.aws.json file
      - mv _Dockerrun.aws.json Dockerrun.aws.json
      - echo deleting Dockerfile
      - rm Dockerfile
artifacts:
  files:
    - '**/*'

So what are the steps for the pipeline?

1 – Source Step: As soon as I commit and push my code to AWS CodeCommit, AWS CodePipeline will get the source code, create the first artifact, and then move that artifact to the build step.

2 – Build Step: AWS CodeBuild will get the input artifact and start the build process. This process logs in to AWS ECR and builds the image using the Dockerfile in the source code. Then, in the post_build phase, it renames the _Dockerrun.aws.json file to Dockerrun.aws.json and removes the Dockerfile (AWS Elastic Beanstalk multicontainer Docker deployments need a Dockerrun.aws.json file, and there shouldn’t be a Dockerfile). Finally, it creates the artifact from all the files in the source directory.

3 – Deploy Step: In this step, AWS CodeDeploy will use the input artifact and deploy our code to AWS Elastic Beanstalk. AWS Elastic Beanstalk will read the Dockerrun.aws.json file and process it.

Ok, now it’s time to create our pipeline…

  • I start by creating an AWS Elastic Beanstalk application using a sample multicontainer environment.

  • I create the code repo using AWS CodeCommit and connect to it.

  • I add the files, commit, and push to the repo (a CLI sketch of this and the previous step follows).
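
A hedged CLI sketch of these two steps (the repository name eb-multicontainer-demo and the region are assumptions):

# Create the CodeCommit repo and clone it
aws codecommit create-repository --repository-name eb-multicontainer-demo --region eu-west-1
git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/eb-multicontainer-demo
# Copy the source files into the clone, then:
git add .
git commit -m "Initial commit"
git push origin master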

  • I create the build step by using AWS CodeBuild.

Here I select AWS CodeCommit as the source and pick my repo. I also select the AWS managed Ubuntu image with the Docker runtime. For the build specification, I choose the “use the buildspec.yml in the source code root directory” option, since the file is in my source code.

For the artifact type, I select S3 and choose my CodePipeline bucket. I also select the relevant service role. I pick the zip option for artifact packaging and the smallest compute type (its resources are enough for my build).

As you can see in our buildspec.yml file, we used some variables. Here, we add them under the advanced settings and finally create our AWS CodeBuild project. The values I use are listed below.
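
For reference, the variables used in the buildspec could be set like this (example values matching the Dockerrun.aws.json above):

AWS_DEFAULT_REGION = eu-west-1
AWS_ACCOUNT_ID     = 111122223333
IMAGE_REPO_NAME    = awsome-ecr-repo
IMAGE_TAG          = latest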

 

  • I create the repo for my Docker image using AWS ECR (a minimal CLI sketch follows).
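
A minimal sketch of this step with the AWS CLI (the repository name matches the image reference in Dockerrun.aws.json):

aws ecr create-repository --repository-name awsome-ecr-repo --region eu-west-1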

  • Since all the configurations are ready, I can create the deployment pipeline using AWS CodePipeline.

I select AWS CodeCommit as the source provider, then select my repo and branch.

I select AWS CodeBuild as the build provider and select my project.

I select AWS Elastic Beanstalk as the deployment provider, then select my application and environment.

I select the service role and create the pipeline.

As soon as I create it, it starts by pulling the source code and creating the output artifact.

Then it starts to process the build step. Here, AWS CodeBuild creates our Docker image, pushes it to AWS ECR, and creates the output artifact.

Let’s see the logs of AWS CodeBuild. (You can also see them in Amazon CloudWatch)

AWS CodeBuild also uses ECS to run our build step. (We had selected the AWS managed Ubuntu image and the Docker runtime.)

The build run finished successfully and created the output artifact.

Our Docker image with the latest tag is ready in our AWS ECR repo.

By the way, this is what our sample application looked like before the deployment.

AWS CodePipeline then runs the Beta stage, which deploys our code to AWS Elastic Beanstalk.

We can see it is deploying.

The deployment finished successfully.

We can also see that the pipeline succeeded.

If we check our environment URL, we can see our new application is running.

As you can see, we created a fully AWS-powered continuous delivery pipeline, and it works like a charm. As I develop my code and commit it, the pipeline will run and deploy my code to AWS Elastic Beanstalk. If you want, you can also read my other post about CD with AWS CodePipeline.

If you have any questions or comments, please feel free to write, and don’t forget to share this post.

Onur SALK

AWS Cloud & DevOps Consultant, AWS Certified Solutions Architect, AWS Community Hero


12 thoughts on “AWS Elastic Beanstalk MultiContainer Docker deployment with AWS CodeBuild”

  1. But, is this how people actually implement Docker CI/CD pipelines? I mean, every time a commit is pushed triggering the creation of a new image, the FROM instruction in the Dockerfile has to fetch the base image. And since CodeBuild is stateless, the base image can only live in a remote registry like Docker Hub or ECR. So if the base image is 500MB, that’s 500MB that has to be downloaded on every build.

    How do you manage that?

      1. Alright. I’ve been doing more investigations since and I’ll have to conclude that CodeBuild is optimal for some workflows, but suboptimal for others. Anyway, this was a comprehensive tutorial. Very AWSome blog.

    1. We have a pattern with two CodeBuild projects and two ECR repos: one for the normal build, and one for the one-off, expensive build (occasionally fetched artifacts).

  2. Getting an error in the pre_build phase: Error while executing command: $(aws ecr get-login --region $AWS_DEFAULT_REGION). Reason: exit status 255. Please suggest a solution for this. Also, what kind of permissions do I need to attach to the awscodebuild-ecr service role? Thank you.

    1. Hi Shena,

      Can you try with this:

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Resource": [
                      "arn:aws:logs:eu-west-1:111122223333:log-group:/aws/codebuild/ECR",
                      "arn:aws:logs:eu-west-1:111122223333:log-group:/aws/codebuild/ECR:*"
                  ],
                  "Action": [
                      "logs:CreateLogGroup",
                      "logs:CreateLogStream",
                      "logs:PutLogEvents"
                  ]
              },
              {
                  "Effect": "Allow",
                  "Resource": [
                      "arn:aws:s3:::codepipeline-eu-west-1-*"
                  ],
                  "Action": [
                      "s3:PutObject",
                      "s3:GetObject",
                      "s3:GetObjectVersion"
                  ]
              },
              {
                  "Effect": "Allow",
                  "Resource": [
                      "arn:aws:codecommit:eu-west-1:111122223333:https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/your-repo-name"
                  ],
                  "Action": [
                      "codecommit:GitPull"
                  ]
              },
              {
                  "Effect": "Allow",
                  "Resource": [
                      "arn:aws:s3:::your-bucket-name/*"
                  ],
                  "Action": [
                      "s3:PutObject"
                  ]
              }
          ]
      }
      
      1. Same issue. Getting an error in the pre_build phase: Error while executing command: $(aws ecr get-login --region $AWS_DEFAULT_REGION). Reason: exit status 255.

        I tried changing eu-west-1 to my region and am still facing the same error.

        1. I managed to get it to run by granting ECS and EB permissions to the role that CodeBuild uses. The release pipeline completed successfully; however, I’m unable to access the app via the EB URL. Not sure if there’s something wrong with the container.

  3. Hi! This blog is really awesome and I successfully executed all the steps.
    I have one question: is it possible to deploy different Elastic Beanstalk environments on the same EC2 machine using the multi-container Docker environment?
    E.g. I have two environments, dev-my-first-project and stg-my-first-project, and I want to deploy both on the same EC2 machine but in different containers.
