Drone
Continuous Integration
Before I even started writing any code, I wanted to start off with a CI/CD tool. After doing some research, I landed on Drone. At first, I will only be using Drone for continuous integration while developing, but I will eventually use it for deploying into my “production” system.
IaC All The Things
With this project, I want to start right away by defining all infrastructure as code. Ultimately, I want to be able to spin everything up with a single command. So, to start out, I will be using Terraform and Ansible to create my Drone server.
To start off, I need to spin up an EC2 instance to run the Drone container on. To create the instance itself, I will use Terraform. For the beginning testing phases, I just need a small Linux instance that I can SSH into.
Before creating the Terraform files, I need to set up a security group, which I will do with a new CloudFormation stack. This is what the resource for the security group looks like:
Resources:
  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH From Anywhere
      VpcId:
        Fn::ImportValue: !Sub "${VPCStackName}-VPCID"
      Tags:
        - Key: Name
          Value: Drone-Ingress
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 3000
          ToPort: 3000
          CidrIp: 0.0.0.0/0
As you can see, it’s just a simple group that allows traffic on port 22 for SSH, port 80 for web traffic to the Drone UI, and port 3000 for the Drone agent. (I will be moving the agent to a different machine in the future, but I’m leaving it on the same host for now for simplicity.) The next step after this will be to enable SSL on the Drone server.
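One piece not shown above: the template imports the VPC ID from another stack’s export, so it also needs a VPCStackName parameter to substitute into that import. A minimal sketch (the only requirement is that the parameter name matches the !Sub reference):

```yaml
Parameters:
  VPCStackName:
    Type: String
    Description: Name of the stack that exports the VPC ID
```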
With the stack deployed and the security group in place, it’s time to do some terraforming. I want something simple, so for now I am just going to use a single file. No modules or var files yet.
data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "drone" {
  ami                    = "${data.aws_ami.ubuntu.id}"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["sg-095bc4bd27480a155"]
  subnet_id              = "subnet-018fd2db4fdea3d70"

  tags = {
    Name = "Drone"
  }
}
This grabs the latest Ubuntu AMI and creates the instance in the VPC and subnet created by my stack. For now, this is fairly fragile because it relies on hard-coded security group and subnet IDs. To get things going, I’m okay with this, but I plan to grab output values from the stacks and inject them into var files in the future.
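As a sketch of what that could look like later, Terraform’s aws_cloudformation_stack data source can read the outputs of an existing stack, so the IDs never need to be copied by hand. The stack name and output keys below are hypothetical; the real stack would have to declare matching Outputs:

```hcl
# Read outputs from the (hypothetical) network stack
data "aws_cloudformation_stack" "network" {
  name = "drone-network"
}

resource "aws_instance" "drone" {
  ami                    = "${data.aws_ami.ubuntu.id}"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${data.aws_cloudformation_stack.network.outputs["SecurityGroupID"]}"]
  subnet_id              = "${data.aws_cloudformation_stack.network.outputs["SubnetID"]}"
}
```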
Then, with a simple:
terraform init
terraform apply
and the instance is created!
Configuration as Code (Ansible)
Now that the infrastructure is created, it’s time to configure the instance to host the Drone server. I don’t want to have to manually configure and run commands if I ever have to spin this up again, so Ansible to the rescue! With Ansible, I can write simple YAML files that define the configuration I want on my instance, and it runs commands over SSH to make it so.
In the playbook, the first thing that I want to do is to install the necessary packages to run Docker.
tasks:
  - name: Install required system packages
    apt: name={{ item }} state=latest update_cache=yes
    loop: [ 'apt-transport-https', 'ca-certificates', 'curl', 'gnupg-agent', 'software-properties-common', 'python3-pip' ]

  - name: Add Docker GPG apt Key
    apt_key:
      url: https://download.docker.com/linux/ubuntu/gpg
      state: present

  - name: Add Docker Repository
    apt_repository:
      repo: deb https://download.docker.com/linux/ubuntu bionic stable
      state: present

  - name: Update apt and install docker-ce
    apt: update_cache=yes name=docker-ce state=latest

  - name: Install Docker Module for Python
    pip:
      name: docker
The first four tasks are the exact same setup described in the Docker installation documentation. The last task installs the docker pip package that will be used by Ansible later. Running the following command:
ansible-playbook -i ./hosts playbook.yml
connects to the Drone server and runs the commands necessary to install the packages I’ve defined.
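For reference, the ./hosts inventory referenced in that command can be as small as a single group pointing at the instance. Here is a minimal sketch using the instance’s public DNS name from this setup (ubuntu is the default login user on the official Ubuntu AMIs):

```ini
[drone]
ec2-35-165-44-139.us-west-2.compute.amazonaws.com ansible_user=ubuntu
```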
Now that we have the necessary packages installed, we can start to define the Docker containers that we want to pull and configure. Ansible has a module called docker_container that uses the docker pip package we grabbed above to run the necessary commands to pull and run the containers we define.
- name: run drone server
  docker_container:
    name: drone-server
    image: "drone/drone:1"
    state: started
    restart_policy: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/lib/drone:/data"
This pulls the drone/drone image and runs the container with the defined parameters. With all of this in place and a run of the ansible-playbook command, we now have a configured instance that has pulled and is running the container.
This is all great, but now we need to configure the Drone server itself according to the installation documentation. In that documentation, certain configuration parameters are passed in through environment variables. We can define these variables in a vars_file like so:
DRONE_GITHUB_CLIENT_ID: <client_id>
DRONE_GITHUB_CLIENT_SECRET: <client_secret>
DRONE_RPC_SECRET: <rpc_secret>
DRONE_SERVER_HOST: ec2-35-165-44-139.us-west-2.compute.amazonaws.com
DRONE_SERVER_PROTO: http
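A quick note on DRONE_RPC_SECRET: it is a shared secret that the runner uses to authenticate with the server, and the Drone installation docs suggest generating one with openssl:

```shell
# Generate 16 random bytes, printed as 32 hex characters
openssl rand -hex 16
```

Whatever value this produces goes into the vars file and gets passed to both the server and runner containers.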
Then, we can use this file in the playbook to import the values. Here is the resulting docker task with the previous configuration:
- name: run drone server
  docker_container:
    name: drone-server
    image: "drone/drone:1"
    state: started
    restart_policy: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/lib/drone:/data"
    env:
      DRONE_GITHUB_CLIENT_ID: "{{ DRONE_GITHUB_CLIENT_ID }}"
      DRONE_GITHUB_CLIENT_SECRET: "{{ DRONE_GITHUB_CLIENT_SECRET }}"
      DRONE_RPC_SECRET: "{{ DRONE_RPC_SECRET }}"
      DRONE_SERVER_HOST: "{{ DRONE_SERVER_HOST }}"
      DRONE_SERVER_PROTO: "{{ DRONE_SERVER_PROTO }}"
      DRONE_USER_CREATE: username:justinbushy,admin:true
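For completeness, the play itself loads those values with a vars_files entry. A minimal sketch of the play header, assuming the variables live in a file called vars.yml next to the playbook (the group name drone is a placeholder for whatever the inventory uses):

```yaml
- hosts: drone
  become: true
  vars_files:
    - vars.yml
  tasks:
    # ... tasks from above ...
```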
Now that this is up and configured, we need a “runner” or agent to execute Drone jobs. For now, I am going to put the runner on the same instance as the server. The pattern is very similar to the docker_container task above, just with slightly different parameters.
- name: run drone runner
  docker_container:
    name: drone-runner
    image: "drone/drone-runner-docker:1"
    state: started
    restart_policy: always
    ports:
      - "3000:3000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    env:
      DRONE_RPC_PROTO: "{{ DRONE_SERVER_PROTO }}"
      DRONE_RPC_HOST: "{{ DRONE_SERVER_HOST }}"
      DRONE_RPC_SECRET: "{{ DRONE_RPC_SECRET }}"
      DRONE_RUNNER_CAPACITY: "2"
As you can see, we are able to reuse the same var file, which is really nice. Now we have a fully functioning Drone server and runner that can start taking jobs!
Building The Drone File
With the infrastructure in place and the GitHub project set up, we can start building out the drone file (a YAML file that defines the pipeline) for the project. To start off, I am simply going to run the project’s tests.
I created the .drone.yml file in the root of the project and added the following:
kind: pipeline
name: default

steps:
  - name: test
    image: elixir:latest
    commands:
      - cd ./remote_coach
      - mix local.rebar --force
      - mix local.hex --force
      - mix deps.get
      - mix test
Now, as soon as I push a commit to my branch, the Drone server will get a webhook notification from GitHub and run this pipeline against my branch. On the first run it fails :X I knew this was going to happen, because we need a database for these tests to run.
So, for the next step, we need the Drone pipeline to spin up a Postgres container that the tests can use, very similar to what the docker-compose file does. Drone does this through the use of services. These are basically containers that will be used as services in the pipeline.
The Postgres service is defined like so:
services:
  - name: database
    image: postgres
    ports:
      - 5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: remote_coach_dev
With a commit and push, the pipeline runs again.
hmm.. still failing.
After looking into the output, it seems that the Elixir app is looking for the database on localhost. After looking at the Phoenix configuration, I see that I forgot to update the test environment configuration to use the same environment variables. So, with the following update in test.exs:
# Configure your database
config :remote_coach, RemoteCoach.Repo,
  username: System.get_env("PGUSER"),
  password: System.get_env("PGPASSWORD"),
  database: System.get_env("PGDATABASE"),
  hostname: System.get_env("PGHOST"),
  pool: Ecto.Adapters.SQL.Sandbox
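For those variables to exist inside the pipeline, the test step in .drone.yml also has to supply them, with PGHOST pointing at the service name so the test container can reach the database. A sketch, assuming the service name and credentials from the services block above:

```yaml
steps:
  - name: test
    image: elixir:latest
    environment:
      PGUSER: postgres
      PGPASSWORD: postgres
      PGDATABASE: remote_coach_dev
      PGHOST: database   # matches the service name, not localhost
    commands:
      - cd ./remote_coach
      - mix local.rebar --force
      - mix local.hex --force
      - mix deps.get
      - mix test
```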
And with a commit and push, the pipeline runs and is finally green. With the Drone server in place, it’s now time to start building the application. I plan to start small and simple for now, get some baseline functionality in, and then start making it ready for a “production” environment.
Thanks for reading!