Docker nginx container with GeoIP database

This is a Docker nginx container that includes the MaxMind GeoIP Country database. It adds the X-Origin-Country-Code and X-Origin-Country-Name headers, containing the requester’s country code and country name, to HTTP requests proxied to the backend.

Note: This container needs some more configuration before it actually runs; for example, the backend “backend” needs to exist. But you will most likely copy/paste parts of this code into your own project anyway :)
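For reference, the header injection itself only takes a few lines of nginx configuration. This is a minimal sketch, assuming nginx was built with the ngx_http_geoip_module and assuming the database ends up at /usr/share/GeoIP/GeoIP.dat inside the container (the path and port are assumptions, and the “backend” upstream from the note above still needs to exist):

http {
  # Path is an assumption; point this at wherever your Dockerfile puts the database
  geoip_country /usr/share/GeoIP/GeoIP.dat;

  server {
    listen 80;

    location / {
      # Forward the requester's country to the backend
      proxy_set_header X-Origin-Country-Code $geoip_country_code;
      proxy_set_header X-Origin-Country-Name $geoip_country_name;

      # The "backend" upstream needs to be defined elsewhere in the config
      proxy_pass http://backend;
    }
  }
}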

Life Update 2017, part 2

So it appears I no longer have commitment issues. #nina

nina

(Is this how I’m supposed to do this, Kalle?)

Docker PHP container with New Relic

I created a small example Docker container that runs PHP with the New Relic agent installed. The New Relic agent on the Docker host cannot monitor things inside the containers, so we need to install the agent inside each Docker container.

The Dockerfile looks like this:

# My PHP base container: https://github.com/karelbemelmans/docker-php-base
FROM karelbemelmans/php-base:7.1
MAINTAINER mail@karelbemelmans.com

# You should override this when you run the container!
# It will get appended to the New Relic appname in the entrypoint scripts e.g. my-php-container-local
ENV environment local

# Install New Relic
RUN set -x \
  && export DEBIAN_FRONTEND=noninteractive \
  && wget -O - https://download.newrelic.com/548C16BF.gpg | apt-key add - \
  && echo "deb http://apt.newrelic.com/debian/ newrelic non-free" > /etc/apt/sources.list.d/newrelic.list \
  && apt-get update \
  && apt-get install -y newrelic-php5 \
  && newrelic-install install \
  && rm -rf /var/lib/apt/lists/*

# We need to copy the New Relic config AFTER we installed the PHP extension
# or we get warnings everywhere about the missing PHP extension.
COPY config/newrelic.ini /usr/local/etc/php/conf.d/newrelic.ini

# Generate an example PHP file in the webroot
RUN echo '<?php phpinfo();' > /var/www/html/index.php

# Our entrypoint script that also modifies the New Relic config file
COPY entrypoint.sh /
CMD ["/entrypoint.sh"]

The entrypoint.sh script:

#!/bin/bash -e

# Update the New Relic config for this environment
echo "newrelic.appname=my-php-container-${environment}" >> /usr/local/etc/php/conf.d/newrelic.ini

# Proceed with normal container startup
exec apache2-foreground
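With the entrypoint in place, you can give every environment its own New Relic appname by overriding the environment variable at run time. A quick sketch (the image tag is hypothetical):

docker build -t docker-php-newrelic .

# Results in the appname "my-php-container-prod" in New Relic
docker run -d -p 80:80 -e environment=prod docker-php-newrelic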

Besides that, you also need to create the config/newrelic.ini file with your license key (and probably more options):

; First of all, enable the extension
extension=newrelic.so

; Our license key is required
newrelic.license="REPLACEME"

; Enable this if you configured your account for High Security
;newrelic.high_security=true

Source code on GitHub: https://github.com/karelbemelmans/docker-php-newrelic

Note: This container is built on top of my PHP 7 Base Docker container, which you might also find useful.

Common misconceptions about the public cloud

I’m currently a big fan of public clouds, mostly because of the Infrastructure as Code tools they offer: by uploading a simple JSON or YAML template I can create infrastructure and network services that scale across multiple physical and even geographical locations, and automatically install all the applications that run on top of them.

But when I talk to people at other companies I still hear a lot of misconceptions about how all of this works and what it costs. I will try to debunk the biggest ones in this blog post.

Disclaimer: While most of the things I’m going to address are not unique to a specific public cloud, I will be using AWS as a reference since that’s the public cloud I’m currently most familiar with.

Sidenote: Have a look at my CloudFormation templates on GitHub for some examples to use on Amazon Web Services.

Misconception: You have no idea about costs on a public cloud

False.

1. Set limits

While the public cloud offers you a virtually endless amount of resources, you can, and MUST, set limits on everything. For example, when you create an Auto Scaling Group (a service that creates and destroys instances depending on the resources needed), you always set an upper and a lower limit on the number of instances it can create when it executes a scaling action.
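As a sketch, this is what those limits look like in a CloudFormation template (the resource names and subnet IDs are made up for the example):

Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      # Scaling actions can never go below MinSize or above MaxSize
      MinSize: 2
      MaxSize: 6
      DesiredCapacity: 2
      LaunchConfigurationName: !Ref WebServerLaunchConfig # defined elsewhere in the template
      VPCZoneIdentifier:
        - subnet-aaaa1111 # two subnets in different availability zones
        - subnet-bbbb2222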

2. Set warnings

Pretty trivial to point out, but you can track your costs on a daily basis, with warnings when a certain threshold is reached. It is your job, though, to monitor those costs and act on them if they are not what you expected them to be.

A big aid in this is using tags for your resources. Tags allow you to group resources together easily in the billing overview. Tags could include Environment (e.g. prod, staging, test, …), CostCenter (a commonly used tag for grouping resources per department), Service (e.g. Network, VPC, Webservers) and whatever other tags you want to use. The key really is “more is better” when it comes to tags, since that allows you to break costs down to a very fine-grained level.
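As a sketch, the threshold warning mentioned above could be a CloudWatch alarm on the EstimatedCharges billing metric, here as a CloudFormation resource (the threshold and the SNS topic are made up; note that billing metrics only exist in the us-east-1 region and require billing alerts to be enabled on the account):

Resources:
  BillingWarning:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Warn when estimated monthly charges exceed 500 USD
      Namespace: AWS/Billing
      MetricName: EstimatedCharges
      Dimensions:
        - Name: Currency
          Value: USD
      Statistic: Maximum
      Period: 21600 # billing data is only published a few times a day
      EvaluationPeriods: 1
      Threshold: 500
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref BillingTopic # hypothetical SNS topic that sends the warning e-mail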

3. Simulate your costs well

Before moving to the public cloud it’s perfectly doable to create a cost simulation of what the setup will cost. AWS offers you the Simple Monthly Calculator and the TCO Calculator. It is, however, YOUR job to make the simulation as detailed as possible, taking storage and bandwidth usage into account, so that it becomes a good estimate.

4. Don’t keep things running 24/7 if they don’t need to be

On AWS you pay per hour you use a resource, e.g. a virtual server. On Google Compute Engine you even pay per minute, so destroying resources when you don’t need them is a must to keep costs down.

Using Infrastructure as Code you can create templates that build your infrastructure, networking and application setups, as I stated above. This also allows you to create identical stacks for staging, QA or development environments whenever you need them, which you can destroy again when you are done using them.

A simple example would be a QA environment, identical to the production environment, that only runs during office hours, since nobody will be using it outside of those.

If you provide enough input parameters for your IaC templates you can optimize costs even more: production runs in two physical locations, while QA could run in only one, since it does not require the same level of high availability.
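As a sketch, the office-hours QA environment from the example above could be implemented with two scheduled Auto Scaling actions (the group name and times are made up; Recurrence is a cron expression in UTC):

Resources:
  QaScaleUpMorning:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref QaServerGroup # hypothetical QA Auto Scaling Group
      MinSize: 2
      DesiredCapacity: 2
      Recurrence: "0 7 * * 1-5" # weekday mornings

  QaScaleDownEvening:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref QaServerGroup
      MinSize: 0
      DesiredCapacity: 0
      Recurrence: "0 18 * * 1-5" # weekday evenings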

Misconception: But AWS crashes all the time :(

One of AWS’s slogans is actually “Design for failure”.

Hardware crashes; there is no hosting company that will give you a 100% uptime guarantee on any single piece of hardware. Even with the most redundant setup there are still weak links in the chain that can break. What you need to do on the public cloud is make sure that your application can handle failure.

Examples:

  • run Auto Scaling instances behind a load balancer instead of on a single host (yes, you will need to redesign your application for this)
  • run in multiple datacenters and even in multiple geographical regions
  • get a good understanding of which services the public cloud offers as highly available and what you still have to do yourself

Designing your application for the public cloud will be a challenge, but there are enough example cases out there already that show you how to do this. And with container technology becoming more mature every month, this has become a whole lot easier to achieve.

Misconception: We don’t want vendor lock-in to public cloud provider X

In all the applications I have created or moved to the cloud, there was very little vendor lock-in besides the IaC tool we used (and you can even use something like Terraform to remove that lock-in). Things that all applications use but that are not specific to any public cloud:

  • Linux virtual servers
  • Docker containers
  • MySQL/PostgreSQL databases
  • Memcached and/or Redis
  • NFS storage
  • DNS

The core of the application is still the Docker container, where your actual code runs. On AWS this will run on ECS behind ALBs, but you can just as well run the containers on Google Compute Engine or Microsoft Azure with the equivalents of those systems; it will not require any change in your application code at all.

But… if you want to make the most of public cloud provider X, you will need to develop for it. On AWS you would e.g. make your application run on ECS and use S3, SNS and SQS to glue things together. But once you do this you will realise how powerful and virtually limitless the public cloud is.

I hope you found this blog post useful; feel free to leave a comment below.

Life Update 2017

Time for a little personal update again.

Home

I’ve been decorating my apartment. It turned out to be a lot of fun and I’ve developed quite an interest in home decoration now.

Living Room

Work

I still work with computers, most of them in the cloud. I’m also still in the gambling / sports betting business, though I’m moving more into a leading / managing position: the whole “use the 20 years of experience to guide the team in the right direction” thing.

Co-worker

Gym

I still train 4 days a week, most of them at SATS, a few days each month at Sweden Barbell Club.

Sweden Barbell Club

Life

I’m trying to step up my wardrobe: investing in some expensive clothes that will last a long time if you take good care of them. I’ve never been a big fan of the whole Primark throwaway culture anyway.

Hobbies

I bought the smallest Marshall amp there is, so I can play some guitar again at home. It has a great sound at a really low volume, but can go loud if needed. Perfect for an apartment.

Marshall

Women

Same old, same old.

Commitment by Cyanide and Happiness