Today, as IT systems grow, automation is mandatory to stay competitive in the market. To achieve this, we need to deliver our software to customers in a fast, secure, and reliable way.
In this blog, we’ll demonstrate how to create a continuous delivery pipeline on AWS in a Dockerized environment. In our scenario, every time we update our code, it will be tested, and if the tests pass, it will be deployed to a production server running on top of AWS Elastic Beanstalk. We are going to use the tools listed below:
- AWS CodeCommit: We will use it as our code repository.
- AWS CodePipeline: We will use it to automate our workflow.
- AWS Lambda: We will use it to run our unit test.
- AWS Elastic Beanstalk: We will use it as our web application infrastructure.
First, we create a new repository and name it “Demo-App-Repository”.
Next, we need to copy our repository’s SSH URL so we can connect to it.
To connect to our repository via SSH, we need to create an SSH key pair and modify our SSH config file. We create a key pair and save it as “demo-app_rsa”.
Before adding a host definition for our repository, we need to upload our SSH public key to our IAM user and copy the generated SSH key ID.
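The key pair can be generated with ssh-keygen; a minimal example, assuming the key is stored under ~/.ssh (the empty passphrase just keeps the demo non-interactive):

```shell
# Create the ~/.ssh directory if it doesn't exist yet
mkdir -p ~/.ssh
# Generate an RSA key pair and save it as "demo-app_rsa"
# (-N "" sets an empty passphrase; use a real one outside of demos)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/demo-app_rsa
```

The public half, demo-app_rsa.pub, is what we upload to our IAM user in the next step.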
Now we can edit our SSH config file ( ~/.ssh/config ) and add a host definition for our repository. Here we paste the copied SSH key ID as the “User” and define “demo-app_rsa” as our identity file.
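The host entry could look like this (the User value below is a placeholder for the SSH key ID copied from IAM):

```
Host git-codecommit.*.amazonaws.com
  User <Your-SSH-Key-ID>
  IdentityFile ~/.ssh/demo-app_rsa
```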
We are ready to commit and push our code to AWS CodeCommit. We initialise an empty git repository in our folder and add the remote repository using the copied SSH URL. Finally, we push our code to AWS CodeCommit.
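Assuming the repository lives in us-east-1 (adjust the SSH URL to your region), the steps above look like this from inside the project folder:

```shell
# Initialise an empty git repository in the project folder
git init .
# Add CodeCommit as the remote, using the SSH URL copied from the console
git remote add origin ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Demo-App-Repository
# Stage, commit, and push the code
git add .
git commit -m "Initial commit"
git push origin master
```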
As you can see, there is a Flask app (hello.py) and a template for it (index.html). In our Dockerfile, we use CentOS as the base image and install updates and Flask. Then we copy our source code into the container, expose port 5000, and finally run the Flask app.
We can see that our code has been pushed successfully to the master branch.
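A Dockerfile matching those steps might look like the sketch below; the package names and file layout here are assumptions, not the exact file from the repository:

```dockerfile
# CentOS base image with updates, pip, and Flask installed
FROM centos:latest
RUN yum -y update && \
    yum -y install epel-release && \
    yum -y install python-pip && \
    pip install flask
# Copy the source code (hello.py and its templates/ folder) into the container
COPY . /app
WORKDIR /app
# Flask will listen on port 5000
EXPOSE 5000
CMD ["python", "hello.py"]
```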
It’s time to create our production environment using AWS Elastic Beanstalk.
We create a new application and name it “Demo App”.
We select “Web Server Environment”.
We choose the Docker platform, and since this is a demo, we select “Single instance” as the environment type.
We start with the sample application.
Finally, we select an available environment URL and create our application (we leave the rest of the settings at their defaults for our demo).
And our application is ready. We can continue with creating our Lambda function.
We start by creating a new Lambda function, skipping the blueprints. We use the code below, zip it, and upload it to AWS Lambda (we need to import Flask, so we have to build a deployment package and upload that).
Our function (the code is an edited version of the code found here). Basically, it downloads the artifact files from S3, unzips them, and runs a unit test on the “hello.py” file. The test checks for a “Hello world” assertion and puts a job success back to AWS CodePipeline if everything is fine. Otherwise, it puts a job failure. You can find the code here.
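A simplified sketch of such a function is below. It is not the exact code from the post: the real function runs a proper unit test against hello.py, while this version only checks the source for the expected greeting; the bucket/key layout comes from the standard CodePipeline job event.

```python
import tempfile
import zipfile


def check_hello(source):
    """Simplified stand-in for the unit test: the real function runs a
    unittest against hello.py; here we just look for the expected greeting."""
    return "Hello world" in source


def lambda_handler(event, context):
    import boto3  # the AWS SDK is available in the Lambda runtime

    code_pipeline = boto3.client('codepipeline')
    s3 = boto3.client('s3')
    job_id = event['CodePipeline.job']['id']
    try:
        # The input artifact is a zip of the repository, staged in S3
        location = (event['CodePipeline.job']['data']
                    ['inputArtifacts'][0]['location']['s3Location'])
        with tempfile.NamedTemporaryFile(suffix='.zip') as tmp:
            s3.download_file(location['bucketName'],
                             location['objectKey'], tmp.name)
            with zipfile.ZipFile(tmp.name) as archive:
                source = archive.read('hello.py').decode('utf-8')
        if check_hello(source):
            code_pipeline.put_job_success_result(jobId=job_id)
        else:
            code_pipeline.put_job_failure_result(
                jobId=job_id,
                failureDetails={'type': 'JobFailed',
                                'message': '"Hello world" assertion failed'})
    except Exception as exc:
        code_pipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(exc)})
```

Reporting back with put_job_success_result / put_job_failure_result is what lets CodePipeline stop the pipeline when the test fails.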
Our last step is creating and configuring our AWS CodePipeline.
We create a new pipeline.
We select AWS CodeCommit as our source.
We select “No Build”, since we won’t use Jenkins or any other build provider.
We select AWS Elastic Beanstalk as the deployment provider and choose our previously created application and environment.
We create or select our role.
After we create the pipeline, we need to edit it and add a new stage for our unit-testing function. First we click “Edit”, then we click “Stage” and name our stage. Next, we click “Action”.
We select “Invoke” as the action category and select our previously created Lambda function.
As soon as we create the pipeline, it checks the source and deploys it to our AWS Elastic Beanstalk environment.
Now let’s edit our application and commit the change. Here you can see the first version of our application.
We edit our index.html template file, add an image, and finally commit our changes.
We push our changes.
Now let AWS CodePipeline run our workflow.
The Source stage fetches the code from AWS CodeCommit.
The UnitTest stage tests the code by invoking our AWS Lambda function.
The Beta stage deploys our code to AWS Elastic Beanstalk.
Finally, our code is deployed and we can see the result.
As a last step, let’s break our application, then push the change and see the result.
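For example, we can change the greeting string so the Lambda assertion no longer matches, then commit and push; the sed edit below is illustrative:

```shell
# Change the greeting so the "Hello world" assertion fails
sed -i 's/Hello world/Hello wrld/' hello.py
git commit -am "Break the greeting on purpose"
git push origin master
```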
As we can see, the unit test failed because of the assertion. We can check the status of the stage on the AWS CodePipeline dashboard.
We can also see the test output in the AWS CloudWatch logs.
So this is how we can continuously deliver our code to production using AWS CodePipeline. I hope you found it useful. If you have any questions or comments, please feel free to write, and don’t forget to share!