Kubernetes + Rails (NGINX & Unicorn) on GCE


For our web products we have deployed to production using Chef and OpsWorks for a while now. Locally we have been using Vagrant for development. It has worked well for the most part, but we’ve had a few issues with the Dev, Staging, and Production environments being just different enough to cause problems. We have tried very hard to make sure our Chef scripts set up dev and staging correctly, but it isn’t perfect.

Lately, we have been experimenting with Docker for development instead. We made the switch from Vagrant to Docker knowing we would transition to Docker for our Staging and Production environments soon. This should help us keep all our environments in sync.

Last week I spent some time setting up a demo project with Docker and used Kubernetes to help manage the cluster. I haven’t figured everything out just yet, but here is where I am with this new setup.

The Kubernetes setup


This basic setup isn’t exactly production ready; however, it covers a few key things:

  • The application layer is load balanced and can be horizontally scaled easily.
  • Static files are served from the application layer’s NGINX server.
  • NGINX and Unicorn are communicating over a Unix Socket, and not HTTP.
  • Managed cluster without much work (thanks to Kubernetes)!
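Putting those pieces together, the core of the architecture is a single Pod running both containers with a shared volume between them. Here is a hypothetical, minimal sketch of that Pod layout (names and image paths are illustrative, not the project’s actual files):

```yaml
# Sketch only: one Pod, two containers, sharing an emptyDir volume
# so NGINX and Unicorn can talk over a Unix socket instead of HTTP.
apiVersion: v1
kind: Pod
metadata:
  name: www
  labels:
    app: www
spec:
  containers:
    - name: web                  # Rails + Unicorn
      image: gcr.io/my-project/rails-image:v1
      volumeMounts:
        - name: tmp-socket
          mountPath: /tmp        # Unicorn writes unicorn.sock here
    - name: nginx
      image: gcr.io/my-project/nginx-image:v1
      ports:
        - containerPort: 80
      volumeMounts:
        - name: tmp-socket
          mountPath: /tmp        # NGINX reads the same socket
  volumes:
    - name: tmp-socket
      emptyDir: {}               # lives as long as the Pod does
```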

Some code

The project files are out on GitHub.


This houses the Rails code. Most of it is just the basic Rails setup. The main files we need to look at are:


Lines 16 – 21 handle the setup to make sure NGINX and Rails play nice in the same pod.

VOLUME ["/tmp"]

This will create a sharable volume that allows NGINX and Unicorn to share the Unix socket.

RUN chmod +x /my_project/init.sh 
RUN chmod +x /my_project/kubernetes-post-start.sh

The first script will be run as part of building the container. The second will be run during the Kubernetes ReplicationController lifecycle’s postStart. We need to make sure they have the proper file permissions.

CMD ["sh", "/my_project/init.sh"]


To make sure that Unicorn and NGINX can communicate, we use the shared volume /tmp in config/unicorn.rb with listen "/tmp/unicorn.sock", :backlog => 64 on line 20.
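A minimal config/unicorn.rb along those lines might look like this. This is a sketch, not the project’s exact file; the worker count and paths are assumptions:

```ruby
# config/unicorn.rb -- illustrative sketch
worker_processes 2
working_directory "/my_project"

# Listen on a Unix socket inside the shared /tmp volume so the NGINX
# container (which mounts the same emptyDir) can reach Unicorn directly.
listen "/tmp/unicorn.sock", :backlog => 64

pid "/tmp/unicorn.pid"

# Log to stdout/stderr so `kubectl logs` picks everything up.
stderr_path "/dev/stderr"
stdout_path "/dev/stdout"
```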


Lastly, to make sure the precompiled assets are accessible in the NGINX container, we need to copy the public dir from the web container during Kubernetes’ postStart lifecycle. This script tries to do three things, but the DB commands aren’t working at this time; it really only copies the public dir.
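The copy step could look roughly like this. The paths here are assumptions; check the actual kubernetes-post-start.sh in the repo for the real ones:

```shell
#!/bin/sh
# kubernetes-post-start.sh -- illustrative sketch of the postStart hook.
set -e

SRC=/my_project/public     # assets precompiled at image build time
DEST=/shared/public        # shared emptyDir volume mounted by NGINX too

# Copy the precompiled assets into the shared volume so the NGINX
# container can serve them directly, recreating `volumes_from`.
mkdir -p "$DEST"
cp -R "$SRC/." "$DEST/"
```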


The NGINX setup is straightforward. The Dockerfile simply copies the nginx.conf into the container. The two main things in the nginx.conf are:

  • using the /tmp dir to listen for the unicorn.sock.
  • using shared volume /my_project to allow NGINX to host public static files.
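Sketching what those two pieces of the nginx.conf might look like (server names and exact paths are assumptions; see the repo for the real file):

```nginx
# Illustrative nginx.conf fragment
upstream unicorn {
  # The socket lives in the /tmp emptyDir shared with the web container.
  server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
  listen 80;

  # Serve precompiled assets straight from the shared volume.
  root /my_project/public;

  # Try static files first, then fall back to the Rails app.
  try_files $uri/index.html $uri @unicorn;

  location @unicorn {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn;
  }
}
```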

Shared volumes

In the docker-compose.yml file we set up a “volumes_from” in the “nginx” container. That is nice and easy. Using kubernetes-post-start.sh and the emptyDir volume, we are able to recreate the “volumes_from” behavior in Kubernetes.
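For comparison, the Compose side might look roughly like this (service names and build paths are illustrative):

```yaml
# docker-compose.yml sketch (Compose v1 syntax, which supports volumes_from)
web:
  build: ./web
  volumes:
    - /tmp            # Unicorn creates unicorn.sock here
nginx:
  build: ./nginx
  ports:
    - "80:80"
  volumes_from:
    - web             # NGINX sees the socket and the public dir
```

Kubernetes has no direct `volumes_from` equivalent, which is why the emptyDir volume plus the postStart copy is needed to get the same effect.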


Once you have the kubectl command-line tool set up, the deploy is pretty straightforward.

Building Docker images

In this example, we are going to build the images and push them to Google Container Registry. You could use Docker Hub or another Docker image registry instead, and it should work fine (given the proper modifications).

Note: pushing rails-image can take a long time and use a lot of bandwidth. It actually times out sometimes; just run it again if it does. According to this GitHub issue, setting a QoS rule on your router should help. I’m not sure what to do if you can’t set QoS.

  1. docker build -t gcr.io/[your GCE project]/rails-image:v1 web/.
  2. gcloud docker push gcr.io/[your GCE project]/rails-image:v1
  3. docker build -t gcr.io/[your GCE project]/nginx-image:v1 nginx/.
  4. gcloud docker push gcr.io/[your GCE project]/nginx-image:v1

All we did was build both rails-image and nginx-image, apply the v1 tag to each image, and push them to GCR.

Deploy to Google Container Engine

Next we want to setup the infrastructure and deploy our site.

  1. gcloud container clusters create kubernetes-rails --num-nodes 2 --machine-type g1-small
  2. kubectl run db --image=postgres --port=5432
  3. kubectl expose rc db
  4. Update kubernetes/web-controller.yaml to use your GCE project name
  5. kubectl create -f kubernetes/web-controller.yaml
  6. Using kubectl get pods wait for the 2 pods to change to Running. This can take a few minutes.
  7. kubectl create -f kubernetes/web-service.json

Step 1 sets up a 2-node cluster using the g1-small machine type. Steps 2 and 3 set up the blank DB and expose it to other Pods in the cluster. Then step 5 sets up the ReplicationController, which controls how the Pods run. The kubernetes/web-controller.yaml also defines the Pod spec (the Pod setup).

Lastly, we expose the web layer using the web-service.json file. This sets up the public network load balancer, exposing ports 80 and 443 and pointing them at the “www” app (our web-layer Pods).
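A web-service.json along those lines might look roughly like this. The label selector and service name are assumptions; they need to match what the ReplicationController’s Pod template actually uses:

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": { "name": "www" },
  "spec": {
    "type": "LoadBalancer",
    "selector": { "app": "www" },
    "ports": [
      { "name": "http",  "port": 80,  "targetPort": 80 },
      { "name": "https", "port": 443, "targetPort": 443 }
    ]
  }
}
```

The `type: LoadBalancer` is what tells GCE to provision the public network load balancer.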


Database Setup

At this point the app will crash since the database isn’t fully set up. We need to connect to our pod and create the database.

  1. kubectl get pods
  2. kubectl exec -it [pod id] -c web bash
  3. run rake db:create and rake db:migrate

Now you have a working (very simple) Dockerized Rails app out in GCE. Try scaling out your Pods:

kubectl scale rc www-v1 --replicas=1

kubectl scale rc www-v1 --replicas=4

There are all sorts of great things that Kubernetes will do for you; go check out the docs. Later I will write a post on how rolling updates work for pushing your code updates.
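As a preview, once you have pushed a new image tag, a rolling update is a one-liner. This is a sketch using the same image naming as above:

```shell
# Build and push a v2 image first, then roll the controller over to it,
# replacing Pods one at a time so the site stays up throughout.
kubectl rolling-update www-v1 www-v2 \
  --image=gcr.io/[your GCE project]/rails-image:v2
```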

Next Steps

This is a great first step; however, there is much to be done for security’s sake. To be more robust and production ready, we want to automate creation and migration of the database. Having to run those commands manually just breaks the whole awesomeness that is Kubernetes. I haven’t attempted it, but this blog post seems to be on the right track.
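One possible direction (an untested sketch, not something from the repo): run the migrations as a Kubernetes Job, so they happen once per deploy instead of by hand:

```yaml
# Hypothetical sketch: a Job that runs the migrations once per deploy.
# Job support requires Kubernetes 1.1+; names and the image are assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    metadata:
      name: db-migrate
    spec:
      restartPolicy: Never        # don't loop if the migration fails
      containers:
        - name: migrate
          image: gcr.io/[your GCE project]/rails-image:v1
          command: ["sh", "-c", "rake db:create && rake db:migrate"]
```

You would create this with kubectl before (or as part of) rolling out a new web-controller version.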



About Joe Kratzat
I'm a passionate software developer with diverse technical abilities such as networking, support and development, using a wide range of development technologies, tools and practices. Basically I love to explore and learn new things.
