As DevOps engineers, we commonly use tools like the AWS CLI, Docker/ECS, and Ansible to build continuous-deployment solutions, and tools like Jenkins CI to fully automate the deployment of our applications.
Recently I have found that, for varied (and sometimes bizarre) reasons, you cannot always use CI. Removing CI from the picture introduces other issues: you cannot run the deployment scripts as-is on just any computer, because most of the time the machine doesn't have the right software installed to execute them.
Docker containers help here: you spin up a container with the required specs and software pre-installed before running your code. That solves the problem of having the correct tools installed, but leaves us with the increased complexity of running them. We then have to document (and keep updated) every single command that needs to be executed for a deployment.
To solve this, you can use a Makefile that encodes the proper execution sequence for deploying your app, so devs just need to know how to run 'make blah foo=bar'. Of course, they still have to spin up a container beforehand to prepare the platform for execution, and copy the scripts across before running 'make'.
So it's clear that Docker and Make each solve a specific problem. But what if we combine them? Wouldn't it be nice to run just one command that takes care of spinning up the required container, uploading your scripts, and running the necessary commands inside the container automatically? How?
Let's take an example: you have a few Ansible playbooks for configuring a bunch of hosts (Linux and Windows) and deploying some apps onto them.
First step - prepare containers
Let's say we need Ansible version 2.5 to run the play. We will also need WinRM support for managing the Windows targets. So, put it all together in a requirements file.
Contents of ‘requirements.txt’
ansible==2.5.0
bcrypt==3.1.4
certifi==2018.1.18
cryptography==2.2.2
Jinja2==2.10
ntlm-auth==1.1.0
paramiko==2.4.1
pywinrm==0.3.0
PyYAML==3.12
requests-ntlm==1.1.0
urllib3==1.22
pymssql==2.1.3
Contents of ‘Dockerfile’
FROM ubuntu:16.04
RUN mkdir /data
RUN apt-get update -y
RUN apt-get install python python-pip moreutils -y
COPY requirements.txt .
RUN pip install -r requirements.txt
WORKDIR /data
CMD ["/bin/bash"]
Dockerize the Makefile
Contents of ‘Makefile’
docker_run: ## This target will basically spin up a container and run any command you want it to run.
	@docker build -f Dockerfile -t ansible_deploy .
	@docker run -it --network=host -v $(PWD):/data --rm ansible_deploy /bin/bash -c "$(command)"

deploy_my_app: ## This target will call target 'docker_run' and pass the ansible command as a parameter.
	make docker_run command="ansible-playbook -i inventory/ my_app.yml -u username --ask-pass -b -K -e somevar=$(somevar) | tee >(ts '[%d-%m-%Y %H:%M:%S]' >> _log_$(MAKECMDGOALS).log)"
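Two small additions worth considering (a sketch, assuming GNU make): since neither target produces a file of the same name, they should be marked phony, and the '##' comments above lend themselves to a self-documenting 'help' target.

```make
# Neither target creates a file, so tell make not to look for one.
.PHONY: docker_run deploy_my_app

help: ## List available targets and their descriptions.
	@grep -E '^[a-zA-Z_-]+:.*## ' $(MAKEFILE_LIST) | awk -F':.*## ' '{printf "%-15s %s\n", $$1, $$2}'
```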
Run the ‘play’
Open your terminal, cd into the working directory, and run
$ make deploy_my_app somevar=foo
What just happened?
When you ran 'make deploy_my_app', it called another Make target named 'docker_run', passing a parameter 'command' which is in fact the command we actually want to run (i.e. the Ansible playbook). The 'docker_run' target first builds an image as per your specs in the Dockerfile above and boots up a container from it. It also mounts your current working directory into the container as the volume '/data', so the container can see your Ansible plays. Finally, it passes your command to '/bin/bash' for execution. Tada!!
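The variable plumbing is the subtle part: $(somevar) and $(MAKECMDGOALS) are expanded by the *outer* make, inside the 'deploy_my_app' recipe, before the inner make ever runs, which is why the log file ends up named after the top-level goal. A minimal Docker-free sketch of this make-calls-make pattern (the file 'Makefile.demo' and the echo stand-in are illustrative; printf is used so the TAB characters required by make recipes are explicit):

```shell
# Write a two-target demo Makefile; docker_run just echoes the command it
# would have executed inside the container.
printf 'docker_run:\n\t@echo "would run: $(command)"\n\ndeploy_my_app:\n\t@$(MAKE) -f Makefile.demo docker_run command="ansible-playbook -e somevar=$(somevar) >> _log_$(MAKECMDGOALS).log"\n' > Makefile.demo

# Both variables are already expanded by the time docker_run sees 'command'.
make -f Makefile.demo deploy_my_app somevar=foo
```

The echoed line shows 'somevar=foo' and '_log_deploy_my_app.log' already substituted in, exactly as the real 'docker_run' target would receive them.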
You will also notice that the output of the 'ansible-playbook' command is piped to 'tee', which writes it to a log file with a timestamp prepended to each line by 'ts'. This is very helpful, as you can inspect the output even after the container has terminated. If you want to be even more awesome with logs, you can install the AWS CloudWatch agent or Fluentd in the container and stream the logs (the _log*.log files) to your desired destination.
I hope this article was helpful to you.