Kubernetes on Google Cloud Platform: Nginx-Ingress and TLS from letsencrypt with cert-manager (using helm)

NOTE: a year after writing this, I found it in my “drafts”. I don’t recall why I never published it. Since a year has passed, I decided to publish it AS-IS.

There are plenty of guides, but it still took me forever (i.e., more than a day) to get everything up and working. Here is the list of things I did to make it all work.

The result is that:

  • Nginx is used as the ingress controller (nginx-ingress), with an Ingress resource routing to your service
  • TLS for the connection, with certificates issued by Let’s Encrypt
  • a static IP and a DNS record pointing to your domain

Let’s go step by step.

0. Assumptions:

It’s assumed that you have a Kubernetes cluster with services running, Helm installed and working.

Setup a static IP and DNS

From the Google console create a static IP (give it a name you like) and note it down somewhere.
Create a DNS record that points to that IP; in my case it’s api.k8s.chino.io

Helm Templates

The tricky part is getting the Helm configuration right (so that it generates the Kubernetes resources correctly). In the templates folder, I have this ingress.yaml file

(note that your deployment’s service must use ClusterIP, and the port to use is the targetPort: in my case 8000, not 80)
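A minimal sketch of what such an ingress.yaml can look like, using the API versions of the time; the resource names and values keys (.Values.ingress.host, .Values.service.port) are assumptions to adapt to your chart:

```yaml
# templates/ingress.yaml -- a sketch; adapt names and values to your chart
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - {{ .Values.ingress.host }}
      secretName: {{ .Release.Name }}-tls   # filled by the Certificate below
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ .Release.Name }}-service
              servicePort: {{ .Values.service.port }}   # the targetPort, e.g. 8000
```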

Then I created a certificate.yaml
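A sketch of such a certificate.yaml for the cert-manager of that era (API group certmanager.k8s.io); the names and values keys are assumptions:

```yaml
# templates/certificate.yaml -- a sketch; adapt names to your chart
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: {{ .Release.Name }}-tls
spec:
  secretName: {{ .Release.Name }}-tls   # must match the Ingress tls secretName
  issuerRef:
    name: {{ .Values.ingress.issuer }}  # e.g. letsencrypt-prod
    kind: ClusterIssuer
  commonName: {{ .Values.ingress.host }}
  dnsNames:
    - {{ .Values.ingress.host }}
  acme:
    config:
      - http01:
          ingressClass: nginx
        domains:
          - {{ .Values.ingress.host }}
```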

and added this to values.yaml
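A values.yaml fragment along these lines (the keys are assumptions; the host, port, and issuer name come from the setup described in this post):

```yaml
# values.yaml -- the relevant part
ingress:
  host: api.k8s.chino.io
  issuer: letsencrypt-prod
service:
  port: 8000   # the targetPort of the deployment
```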

Briefly, this creates an Ingress for the service that resolves the URL set in the values, plus a Certificate issued through the Let’s Encrypt production system (you can use staging for a test environment; more on this later).

Install nginx-ingress

First of all, install nginx-ingress using Helm and set it to use your static IP:

helm install --name nginx-ingress --set controller.service.loadBalancerIP=YOURSTATICIP stable/nginx-ingress

Install cert-manager

First, create the issuer using this YAML file

(the value letsencrypt-prod used in values.yaml refers to this one)
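A sketch of the issuer, written as a ClusterIssuer for the cert-manager of that era; the email is a placeholder to replace with your own:

```yaml
# A sketch of the letsencrypt-prod issuer (replace the email)
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # swap this URL for the Let's Encrypt staging server in test environments
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
```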

Then launch cert-manager with Helm

helm install --name cert-manager stable/cert-manager

This will take care of generating the certificate.

Launch your Helm chart

Launch the chart that you updated at the beginning. Everything should be working, with TLS enabled.


Sentry.io Releases + Docker + Fabric + Git

OK, the title is not very descriptive, but it matches quite well on Google.
We (chino.io/consenta.me) switched from the old, now-discontinued OpBeat to sentry.io. Although the functionality is similar, the setup is not as straightforward as it was in OpBeat, and the docs lack useful scripts such as the one for Fabric.
Here I will show you how I created a Fabric script to tag versions in Git (git tag) and use them to create releases in Sentry, also connecting them to the git commits. Everything is integrated with Django and Docker (if you use Docker you can’t rely on the “revision” 'release': raven.fetch_git_sha(os.path.abspath(os.pardir)), since the container will not have the .git folder).

First, in Fabric, I created a function to add a new tag for every release. Version numbers have the format X.Y: every release to test increments Y (0.1, 0.2, 0.3, …), while a release to production increments X and resets Y (0.3 -> 1.0 -> 2.0, etc.).
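The bump logic can be sketched as a plain function (the Fabric task would wrap it and run git tag; the function name is mine):

```python
# A sketch of the version-bump logic behind the Fabric task; the task
# itself would run `git tag <version>` with the result.
def next_version(current: str, production: bool = False) -> str:
    """Bump an X.Y version: test releases bump Y (0.2 -> 0.3),
    production releases bump X and reset Y (0.3 -> 1.0)."""
    major, minor = (int(part) for part in current.split("."))
    if production:
        return f"{major + 1}.0"
    return f"{major}.{minor + 1}"
```

In the Fabric task this would be followed by something like local("git tag " + next_version(latest_tag, production)).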

I then modified the Raven settings to read the version from the environment file, and created a Fabric function to update the value in the .env file.

In this way Sentry automatically gets the version on deploy, and the version is the same as the Git tag.

Last, I added the call that binds the git commits to the release. This is not very clear from the documentation.

You have to change the values according to your setup.

  • YOUR_ORGANIZATION is in the Organization settings, the first value
  • SENTRY_AUTH (which is loaded from the env) is the auth token created from here
  • YOUR_REPOSITORY is under Organization -> Repositories (https://sentry.io/settings/<your_organization>/repos/); copy the title.
  • YOUR_PROJECT is the project name.
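The call can be sketched against Sentry’s release endpoint (POST /api/0/organizations/<org>/releases/, with a refs entry binding the commit); the function and variable names are mine:

```python
import json
from urllib import request

# Sentry's release-creation endpoint
SENTRY_API = "https://sentry.io/api/0/organizations/{org}/releases/"

def build_release_payload(version, repository, commit, project):
    """Payload that creates a release and binds it to a git commit."""
    return {
        "version": version,
        "refs": [{"repository": repository, "commit": commit}],
        "projects": [project],
    }

def create_release(org, auth_token, payload):
    """POST the release to Sentry (needs network access and a valid token)."""
    req = request.Request(
        SENTRY_API.format(org=org),
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {auth_token}",
            "Content-Type": "application/json",
        },
    )
    return request.urlopen(req)
```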

Now, when I release the code I do (among many other things such as committing to git and building docker):

  • tag the deployment
  • update the .env file and the release with the tag from above
  • bind the commits to the release


This took me a couple of hours to figure out. OpBeat used to have a single Fabric script that handled everything and an easier integration with the Git repo. Now I’ve achieved pretty much the same with Sentry.


That’s all folks

Django – Iframe – Internet Explorer: problem SEC7111

I’ve recently been making use of iframes and postMessage for a project. I ran into problems while testing it in Internet Explorer (no news there).

The fact is, IE is pretty bad at stating errors too: the only thing it says is that the form was blocked for security reasons (error SEC7111).


I initially thought of the X-Frame-Options header, and with Django you can fix it by annotating the view with @xframe_options_exempt. This works, but not when you POST to a view within the iframe, because Django uses the CSRF cookie while IE blocks third-party cookies.

The solution is pretty easy: THERE’S NO SOLUTION, as explained in this ticket. The best one seems to be not using an iframe at all, or removing CSRF protection for that specific view.

A thing that took me forever while solving this problem is that Django can’t show you the 403 page, since it is itself protected against iframing (you need to rewrite the 403 handler, maybe the CSRF 403 handler if it exists), so IE just tells you the page can’t be displayed for security reasons, which at first sight makes the cause impossible to grasp.



“Beautiful” Django widget for Multi Selection

Left: django default widget / Right: final result

To be honest, Django is terrific, but in order to be general enough it lacks some look and feel and other stylistic niceties. One of the problems with forms, which generally work great, is multi-selection: you can have an item list or a checkbox list, like in the ’90s. I decided to build a widget to render the multi-selection case in a nicer fashion. It took longer than expected, roughly an afternoon, as I ran into various problems and had to hack the widgets a bit. One of the biggest problems was accessing the model object from the widget, since I want to display more data than just the label. Another problem that stuck me for a while was that with crispy-forms the widget template overriding seems not to work (issue here).

Since I want to write as little code as possible, the ingredients are:

  • use Class Based view
  • use Model Forms

And the final solution I made allowed me to cut ~50% of the code. The less code you write, the fewer bugs you make.

The code, once made, is not complex. However, getting there took some time. Let’s start from the view.

Overriding the get_context function is done to be able to pass a queryset to the form. This gives us two benefits:

  • we can display in the form field only the data we want, not the entire list of items in the database
  • we can pass the queryset into the widget; usually widgets do not have access to the context

To do so, we have to modify the Form a bit

As you can see, we set the queryset on an inner field and also assign it as the widget’s qs value. Note that the field’s widget is linked to the brand-new widget I just made.

For the widget, I had to extend it and override the get_context function in order to load a specific value from the context. This is a bit of a hack, since a widget should not know about the request or context data, but I need it!

Finally, in the templates (which you see referenced in the form variables) I made the Bootstrap panels that display the widget and other information (taken directly from the object).

The first template is pretty standard.

The second one has a piece of code that loads from the qs variable the correct item to display within the widget.

To do so I had to create a template filter that gets the item from the list
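The filter, reduced to plain Python (in a real project it lives in a templatetags module and is registered with @register.filter; the name is mine):

```python
# A sketch of the template filter that looks an object up by pk; in a
# Django project it would be registered with @register.filter.
def get_item(queryset, pk):
    """Return the object in `queryset` whose primary key matches `pk`."""
    for obj in queryset:
        if str(obj.pk) == str(pk):  # template values arrive as strings
            return obj
    return None
```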

I also added some JS to make the whole panel turn green when selected.

For full code write a comment here and I’ll provide it.

A full set of the gists is here


This is the result


  • It works
  • It’s (somewhat) better than the plain one and quite reusable
  • It took longer than expected to implement it

Route53 and email (forwarding)

AWS is great: it has a ton of services to do whatever you have in mind, and even things you may never have thought about.

One of the services they offer is Route53, the DNS manager. I used it to map my domain to my load balancer and moved the DNS from Namecheap to AWS. The problem is that Route53 does not handle email (no forwarding, nothing).

There are several solutions, but the easiest one I found involves Mailgun. It allows you to forward emails to another address of yours (e.g., your Gmail account), for free (right now). It should even be possible to use it as a mail provider, but I never investigated that part.

To setup the email forwarding with mailgun:

  • subscribe to the service
  • create a domain; use the full domain as the name, without the www (don’t use a subdomain as suggested, read their docs for more info)
  • follow the DNS setup as explained by their webpage.
  • Once set up, create a route
    • Expression type: custom
    • raw expression: match_recipient(".*@YOURDOMAIN.COM")
    • actions: forward – YOUR EMAIL
  • Test the route with the tool at the bottom

Note that you need the raw expression because in Mailgun the routes are cross-domain.

Docker Alpine for Django, DRF, uWSGI, Postgres and many more

I’m running a couple of projects using Docker as the container engine. Most of them are Python projects using Django, Django REST Framework, uWSGI, Postgres, packages related to cryptography, and much more.

Using the plain python:3 image takes up a huge amount of space, so I switched to python:3-alpine, which came with some problems in adding packages, since they do not compile or run.

FROM python:3-alpine 
COPY requirements.txt . 
RUN set -e; \
 apk add --no-cache --virtual .build-deps \
 gcc \
 libc-dev \
 linux-headers \
 python3-dev \
 libffi-dev \
 openssl-dev \
 make \
 ; \
 apk add --no-cache postgresql-dev; \
 pip install --no-cache-dir uwsgi; \
 pip install --no-cache-dir -r requirements.txt; \
 apk del .build-deps; 


I found the solution above after wasting hours here and there. Two things to note:

  • postgresql-dev is not installed as virtual, since its runtime libraries (e.g., libpq for psycopg2) must stay in the image.
  • If you want uWSGI to use the internal routing (e.g., to avoid logs on health checks), you have to install (not virtual)
    pcre pcre-dev

    and then use

     pip install --no-cache-dir -I uwsgi;

The RUN step installs the build packages and then deletes them from the image. In the end the image is still pretty large (186 MB), but before it used to be around 7 times bigger (python:3 starts at ~690 MB).


Automated deployment of a docker on ECS 

ECS is nice, but it has plenty of drawbacks (I’ve been using it for a few weeks, and compared to Kubernetes it’s a pain in that place..). Deploying a new release is one of the problems: you have to go to the website, add a new task definition, update the service, and so on and so forth.

Since I’m lazy and I hate wasting time on tasks that can be automated, I created a script:

$(aws ecr get-login --no-include-email --region your_region)
docker build -t your_package .
docker tag your_package your_ecs_repository_url/your_package
docker push your_ecs_repository_url/your_package
taskDef=$(aws ecs register-task-definition --cli-input-json file://ecs/task.json | jq -r '.taskDefinition.taskDefinitionArn')
aws ecs update-service --cluster your_cluster --service your_service --task-definition $taskDef


  • ecs/task.json is the JSON you get in the JSON tab when you create the task; copy that one and remove all the fields that have null as value.
  • it requires jq, or you have to find another way to parse the output
  • the Docker commands, URLs, and tags are on the ECS repository page; you can copy them from there.

A similar approach also works with Fabric, except for the login part (Fabric wraps shells, so commands are executed one by one and the login is not preserved); thus execute the login in the console before running the fab command.