Programming

Web Application using GitHub Actions and Kubernetes on Digital Ocean

The way we develop and deploy web applications has changed significantly in the last ten years. Typically, such a project requires skills in three fields:

  • Knowledge of a programming language, to build the web application.
  • Knowledge of system administration, to manage the platform that will host the application.
  • Knowledge of automation software, to automate the compilation and deployment of the application.

Most people involved in such a project are experts in one of these fields and have, at best, dabbled in the other two. Finding someone skilled in all three is rare, and often costly. The ideal solution is to let the developer focus on writing the software and use services to automate the rest. Managed application platforms such as Kubernetes or Heroku have gained an enormous following, allowing developers to host their applications with little knowledge of the hosting platform. Until recently, however, developers were left to build their own automation around these platforms. Two years ago, GitHub launched GitHub Actions, a solution meant to automate entire software workflows. We decided to try it out.

Please note that this is not an in-depth tutorial on how to get a web application onto the net. The goal of this post is to explore a modern software workflow using Kubernetes and GitHub Actions.

The Case Study

This post explores the workflow of a simple web service that acts as a proxy between Duplicati, a backup tool, and Healthchecks.io, a monitoring and alerting platform. The code for the project can be found on GitHub.

This type of application is also known as a Proxy

The specification of the web service is simple: receive reports from machines completing a Duplicati backup and notify the Healthchecks platform of the success or failure of those backups.
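As a rough sketch of that idea (the field names, function names and placeholder URL below are illustrative assumptions, not the project's actual code), the proxy only has to map a Duplicati report to the right Healthchecks ping:

```python
# Minimal sketch of the proxy logic. Duplicati can POST a JSON report after
# each backup; the proxy forwards a ping to Healthchecks, appending /fail
# when the backup did not complete cleanly.
import urllib.request

HEALTHCHECKS_URL = "https://hc-ping.com/<your-check-uuid>"  # placeholder


def ping_url(report: dict, base_url: str = HEALTHCHECKS_URL) -> str:
    """Return the Healthchecks URL to ping for a given Duplicati report."""
    # Duplicati reports carry a ParsedResult such as Success, Warning or Error;
    # anything other than Success is treated as a failed backup here.
    result = report.get("Data", {}).get("ParsedResult", "Error")
    return base_url if result == "Success" else base_url + "/fail"


def forward(report: dict) -> None:
    """Send the ping (a plain GET is enough for Healthchecks)."""
    urllib.request.urlopen(ping_url(report), timeout=10)
```

The real service adds request handling and configuration around this core, but the mapping from report to ping is the essence of the proxy.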

The Hosting

The application is hosted on DigitalOcean's Kubernetes platform. This isn't necessarily an endorsement of the platform; the choice was motivated by low cost and the availability of command-line tools. People new to Kubernetes should start with the official Kubernetes documentation.

Components of a Kubernetes Cluster (img src: kubernetes.io)

One of the easiest ways to deploy software on Kubernetes is to use Helm. A set of YAML files, cleverly named Helm charts, describes the deployment: for example, version X of the application should be deployed as Y replicas and exposed using Z method of network ingress. A simple example:

Chart.yaml:

apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: healthchecks-proxy
version: 0.1.0

values.yaml:

replicaCount: 1

image:
  repository:  adenau/healthchecks-proxy
  tag: 0.1
  pullPolicy: IfNotPresent

service:
  type: NodePort
  port: 8000
  nodeport: 32080

In this case, version 0.1 of the application, stored in a Docker repository, will be deployed as a single replica using the simplest ingress solution: opening a port on the cluster nodes themselves. The complete Helm charts can be found under the deploy directory of the source repository.
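The values above are consumed by the chart's templates. The project's actual templates live in the deploy directory; as an illustrative sketch (not the project's real file), a deployment template wiring in these values might look like:

```yaml
# templates/deployment.yaml (illustrative sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.port }}
```

With the NodePort service from values.yaml, the application becomes reachable on any cluster node's IP at port 32080.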

The Deployment

Automation is handled by GitHub Actions; the workflows can be found in the .github/workflows directory of the source repository. The first step is to build a Docker image every time code is pushed to the master branch. On each push, a fresh Linux environment is spawned and the workflow is executed. As with Helm, the automation for GitHub Actions is described using YAML files.

push-master.yml:

name: Push to Master

on:
  push:
    branches:
      - master
    tags:
      - '!refs/tags/*'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:

    - name: Checkout the code
      uses: actions/checkout@v1

    - name: Set up Python 3.7
      uses: actions/setup-python@v1
      with:
        python-version: 3.7

    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pipenv
        pipenv install --deploy --ignore-pipfile  
    - name: Test with pytest
      run: |
        pip install pytest
        pipenv run pytest
    - name: Login to DockerHub
      env:
        USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
        PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}
      run: docker login -u $USERNAME -p $PASSWORD

    - name: Build Docker Image
      run: docker build . --file Dockerfile --tag adenau/healthchecks-proxy:unstable

    - name: Push to DockerHub
      run: docker push adenau/healthchecks-proxy:unstable

The first steps check out the application code and set up the build dependencies: Python, pip, pipenv and pytest. Tests are run using pytest to ensure the sanity of the build. The next steps log in to Docker Hub, build the image and upload it. Secrets keep confidential information, such as passwords, out of the source repository. These steps could be sped up by preparing a Docker image with the dependencies pre-installed.
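The Dockerfile referenced by the build step is not shown above. A minimal sketch of what such an image definition might contain (the entrypoint module name here is a guess, not the project's actual file):

```dockerfile
# Illustrative Dockerfile sketch for a pipenv-based Python service
FROM python:3.7-slim
WORKDIR /app
RUN pip install pipenv
COPY Pipfile Pipfile.lock ./
# Install locked dependencies into the system interpreter
RUN pipenv install --deploy --ignore-pipfile --system
COPY . .
EXPOSE 8000
CMD ["python", "proxy.py"]
```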

release.yml:

name: Release

on:
  release:
    types: [published]

jobs:
  release:

    runs-on: ubuntu-latest
    steps:

    - name: Checkout the code
      uses: actions/checkout@v1
 
    - name: Set up Python 3.7
      uses: actions/setup-python@v1
      with:
        python-version: 3.7
 
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install pipenv
        pipenv install --deploy --ignore-pipfile  
    - name: Install Doctl
      run: |
        curl -L https://github.com/digitalocean/doctl/releases/download/v1.23.1/doctl-1.23.1-linux-amd64.tar.gz  | tar xvz
        sudo mv doctl /usr/local/bin
    - name: Install helm
      run: curl -L https://git.io/get_helm.sh | bash
 
    - name: Login to DockerHub
      env:
        USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
        PASSWORD: ${{ secrets.DOCKERHUB_PASSWORD }}
      run: docker login -u $USERNAME -p $PASSWORD
 
    - name: Build Docker Image
      env:
        USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
      run: docker build . --file Dockerfile --tag $USERNAME/healthchecks-proxy:${GITHUB_REF:10}

    - name: Push to DockerHub
      run: docker push adenau/healthchecks-proxy:${GITHUB_REF:10}

    - name: Prep for Kubeconfig
      run: mkdir $HOME/.kube

    - name: Save DigitalOcean kubeconfig
      env:
        DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DO_ACCESSTOKEN }}
      run: |
        doctl auth init --access-token $DIGITALOCEAN_ACCESS_TOKEN
        doctl kubernetes cluster kubeconfig save kaon
    - name: Init helm
      run: helm init --client-only

    - name: Push to Kubernetes cluster
      run: helm upgrade --set image.tag=${GITHUB_REF:10} --set version=${GITHUB_REF:10} healthchecks-proxy $GITHUB_WORKSPACE/deploy/healthchecks-proxy

Automation of the release is more elaborate, as the application is both built and deployed to DigitalOcean. The beginning of the workflow is quite similar to the push workflow: check out the code, install dependencies, build the image and upload it to the Docker repository. To deploy the code, the DigitalOcean command-line tool is installed and used to retrieve the Kubernetes configuration file, which contains both the authentication credentials and the URL of the cluster. Finally, the release is pushed to the cluster using a helm upgrade command.
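One detail worth unpacking is the ${GITHUB_REF:10} expression used to tag the image. GITHUB_REF holds the full Git ref of the release, and the bash substring expansion strips the leading "refs/tags/" prefix, leaving only the tag name (the example tag below is illustrative):

```shell
# GITHUB_REF holds the full ref of the release tag, e.g. "refs/tags/0.2.0".
# "refs/tags/" is 10 characters long, so ${GITHUB_REF:10} drops that prefix
# and leaves just the tag, which becomes the Docker image tag.
GITHUB_REF="refs/tags/0.2.0"
echo "${GITHUB_REF:10}"
```

This keeps the Docker image tag and the Helm release version in lockstep with the Git tag that triggered the workflow.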

Does it work?

Using Kubernetes and GitHub Actions, can developers do all the work of developing, deploying, hosting and managing a web application? Yes and no.

At a small scale, these systems work beautifully. They enable developers to push out rapid prototypes to the market, allowing for an idea to be validated before investing too much in staff.  

However, as a project grows, experts specializing in system administration and automation are needed to deal with scalability issues. Network ingress gets complicated fast in a stateful application, especially with millions of daily users. In addition, none of the systems discussed here offer a scalable approach to persistent data storage, such as databases. Again, with a large number of users, staff dedicated to maintaining those databases will be preferable. Organisations lucky enough to grow past a certain level of maturity will often dedicate system administration staff to security, change management and compliance.

As for automation, more effort will be required as the application matures. Once a certain complexity is reached, an automation expert is the right resource to manage the GitHub Actions workflows.

In Conclusion

What does this mean for the viability of managed hosting platforms like Kubernetes, or automation systems like GitHub Actions? For getting started, they are wonderful. Lone developers (or small teams) should take full advantage of them, as doing so keeps them focused on what is important: writing code. I would not hesitate to use GitHub Actions for my next development project, as it automates so much of the tedious setup work of a proper pipeline.

These tools also provide a path to scale up, which is critically important for growing projects. However, they do not replace the expertise required when operating at a larger scale.


About Alexandre Denault

Veteran software developer who holds a PhD in Computer Science. Avid computer enthusiast (geek) who has been dabbling with technology ever since his Dad brought home an 8086 computer.