
Continuous Delivery with AWS CodePipeline

Today, as IT systems grow, automation is mandatory to stay competitive in the market. To achieve this, we need to deliver our software to customers in a fast, secure, and reliable way.

In this blog, we’ll demonstrate how to create a continuous delivery pipeline using AWS in a Dockerized environment. In our scenario, every time we update our code, it will be tested, and if it passes, it will be deployed to a production server running on AWS Elastic Beanstalk. We are going to use the tools listed below:

  • AWS CodeCommit: We will use it as our code repository.
  • AWS CodePipeline: We will use it to automate our workflow.
  • AWS Lambda: We will use it to run our unit test.
  • AWS Elastic Beanstalk: We will use it as our web application infrastructure.

Let’s start…

AWS CodeCommit

First, we create our new repository and name it “Demo-App-Repository”.

create-repo

Next, we need to copy our repository’s SSH URL so we can connect via SSH.

 

clone-repo

 

To connect to our repository via SSH, we need to create an SSH key pair and modify our SSH config file. We create a key pair and save it as “demo-app_rsa”.

 

ssh
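The key-generation step can be sketched as below (the 2048-bit size and empty passphrase are assumptions for a non-interactive run; choose your own in practice):

```shell
# Generate an RSA key pair named as in the post; -N "" skips the
# passphrase prompt (an assumption, not necessarily what the post used).
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 2048 -f ~/.ssh/demo-app_rsa -N ""
```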

 

Before adding a host definition for our repository, we need to add our SSH public key to our IAM user and copy the SSH key ID.

uploadssh

sshkeyid

Now we can edit our SSH config file ( ~/.ssh/config ) and add a host definition for our repository. Here we paste the copied SSH key ID as “User” and define “demo-app_rsa” as our identity file.

sshconfig
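The resulting host entry might look like this (the key ID below is a placeholder; use the SSH key ID copied from IAM, and adjust the region wildcard if you prefer an exact endpoint):

```
Host git-codecommit.*.amazonaws.com
  User APKAEXAMPLEKEYID
  IdentityFile ~/.ssh/demo-app_rsa
```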

 

We are ready to commit and push our code to AWS CodeCommit. We initialize an empty Git repository in our folder and add the remote repository using the copied SSH URL. Finally, we push our code to AWS CodeCommit.
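The commands for this step could look like the following (the region and repository URL are illustrative; use the SSH URL you copied earlier):

```shell
git init
git add .
git commit -m "Initial commit"
# Replace the URL with your repository's copied SSH URL.
git remote add origin ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Demo-App-Repository
git push origin master
```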

As you can see, there is a Flask app (hello.py) and a template for it (index.html). In our Dockerfile, we use CentOS as the base image and install updates and Flask. Then we copy our source code into the container, expose port 5000, and finally run the Flask app.
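A Dockerfile along the lines described could look like this sketch (the package names, paths, and entry point are assumptions, not the post’s exact file):

```dockerfile
FROM centos:7
# Install updates and Flask (pip via the base repos is an assumption).
RUN yum -y update && \
    yum -y install python-pip && \
    pip install flask
# Copy the source code into the container.
COPY . /app
WORKDIR /app
# Flask serves on port 5000.
EXPOSE 5000
CMD ["python", "hello.py"]
```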

We can see that our code was pushed successfully to the master branch.

 

AWS Elastic Beanstalk

It’s time to create our production environment using AWS Elastic Beanstalk.

We create a new application and name it “Demo App”.

eb-create

We select “Web Server Environment”.

webserver

We choose the Docker platform and, since this is a demo, we select “Single instance” as the environment type.

env-type

We start with the sample application.

app-ver

Finally, we select an available environment URL and create our application (we leave the rest of the sections at their defaults for our demo).

env-info

And our application is ready. We can continue with creating our Lambda function.

 

eb-ready

 

AWS Lambda

We start by creating a new Lambda function and skip the blueprints. We use the code below; after zipping it, we upload it to AWS Lambda (we need to import Flask, so we have to build a deployment package and upload it).

lambda

 

Our function (the code is an edited version of the code here) basically downloads the files from S3, then unzips them and runs a unit test on the “hello.py” file. It checks for the “Hello world” assertion and puts a job success back to AWS CodePipeline if everything is fine. Otherwise, it puts a job failure. You can find the code here.

Our last step is creating and configuring our AWS CodePipeline.

AWS CodePipeline

We create a new pipeline.

create-pipelin

We select AWS CodeCommit as our source.

pipeline-source

We select “No Build”, since we won’t use Jenkins or another build server.

pipeline-build

We select AWS Elastic Beanstalk as the deployment provider and choose our previously created application and environment.

pipeline-beta

We create or select our role.

pipeline-role

After we create the pipeline, we need to edit it and add a new stage for our unit-testing function. First we click “Edit”, then we click “Stage” and name our stage. Next we click “Action”.

 

pipeline-unittest

 

We select “Invoke” as the action category and choose our previously created Lambda function.

pipeline-add-action

 

As soon as we create the pipeline, it checks the source and deploys it to our AWS Elastic Beanstalk environment.

Now let’s edit our application and commit the change. You can see the first version of our application below.

 

first-app

We edit our index.html template file, add an image and finally commit our changes.

 

image-added

We push our changes.

changes

Now let’s watch AWS CodePipeline run our workflow.

The Source stage fetches the code from AWS CodeCommit.

source-inprogress

The UnitTest stage tests the code by invoking our AWS Lambda function.

unittest-inprogress

The Beta stage deploys our code to AWS Elastic Beanstalk.

beta-inprogress

Finally, our code is deployed and we can see the result.

deployed

deployed-app

As a last step, let’s break our application, push the change, and see the result.

broke

As we can see, the unit test failed because of the assertion. We can see the status of the stage on the AWS CodePipeline dashboard.

unittest-failed
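For reference, the kind of assertion that fails here is an ordinary unittest check. A minimal stand-in is sketched below; the rendered page content and helper name are invented for illustration, not taken from the post’s code:

```python
import unittest


def index_body():
    # Pretend this is what hello.py renders for "/".
    return "<h1>Hello world</h1>"


class HelloTest(unittest.TestCase):
    def test_contains_greeting(self):
        # Breaking this string in the app makes the assertion fail,
        # which fails the UnitTest stage and halts the pipeline.
        self.assertIn("Hello world", index_body())
```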

 

We can also see the test result in AWS CloudWatch logs.

cloudwatch

So this is how we continuously deliver our code to production using AWS CodePipeline. I hope you find it useful. If you have any questions or comments, please feel free to write, and don’t forget to share.

Onur SALK

AWS Cloud & DevOps Consultant, AWS Certified Solutions Architect, AWS Community Hero
