Common misconceptions about the public cloud

I’m currently a big fan of public clouds, mostly because of the Infrastructure as Code tools they offer: by uploading a simple JSON or YAML template I can create infrastructure and network services that scale across multiple physical and even geographical locations, and automatically install all the applications that run on top of them.

But when I talk to people at other companies I still hear a lot of misconceptions about how all of this works and what it costs. I’ll try to debunk the biggest ones in this blog post.

Disclaimer: While most of the things I’m going to address are not unique to a specific public cloud, I will be using AWS as a reference since that’s the public cloud I’m currently most familiar with.

Sidenote: Have a look at my CloudFormation templates on GitHub for some examples to use on Amazon Web Services.

Misconception: You have no idea about costs on a public cloud

False.

1. Set limits

While the public cloud offers you a virtually endless amount of resources, you can, and MUST, set limits on everything. E.g. when you create an Auto Scaling Group (a service that creates and destroys instances depending on the resources needed) you always set an upper and a lower limit for the number of instances it can create when it executes a scaling action.
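For example, a minimal CloudFormation sketch of an Auto Scaling Group whose scaling actions can never go below two or above six instances (the launch configuration is assumed to exist elsewhere in the template):

Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"               # lower limit: never scale in below two instances
      MaxSize: "6"               # upper limit: a runaway scaling action can never go above this
      DesiredCapacity: "2"
      AvailabilityZones: !GetAZs ""                          # all Availability Zones in the region
      LaunchConfigurationName: !Ref WebServerLaunchConfig    # assumed to be defined elsewhere in the template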

2. Set warnings

Pretty trivial to point out, but you can track your costs on a daily basis, with warnings when a certain threshold has been reached. It is your job, though, to monitor those costs and act upon them if they are not what you expected them to be.
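A minimal sketch of such a warning as a CloudFormation billing alarm; this assumes you have enabled billing alerts for your account (the AWS/Billing metrics only live in the us-east-1 region) and the e-mail address is a placeholder:

Resources:
  BillingAlertTopic:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Endpoint: you@example.com      # placeholder address
          Protocol: email

  BillingAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Warn when the estimated charges for this month exceed 500 USD
      Namespace: AWS/Billing
      MetricName: EstimatedCharges
      Dimensions:
        - Name: Currency
          Value: USD
      Statistic: Maximum
      Period: 21600                      # billing metrics are only published a few times per day
      EvaluationPeriods: 1
      Threshold: 500
      ComparisonOperator: GreaterThanThreshold
      AlarmActions:
        - !Ref BillingAlertTopic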

A big aid in this is using tags for your resources. Tags allow you to easily group resources together in the billing overview. Tags could include Environment (e.g. prod, staging, test, …), CostCenter (a commonly used tag for grouping resources per department), Service (e.g. Network, VPC, Webservers) and whatever other tags you want to use. The key really is “more is better” when it comes to tags, since that allows you to drill down to a very detailed level.
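In a CloudFormation template, tagging a resource is just a few extra lines; a minimal sketch (the resource name, AMI ID and tag values are placeholders):

Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678        # placeholder AMI ID
      InstanceType: t2.micro
      Tags:
        - Key: Environment
          Value: prod
        - Key: CostCenter
          Value: webshop
        - Key: Service
          Value: Webservers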

3. Simulate your costs well

Before moving to the public cloud it’s perfectly doable to create a cost simulation of what the setup will cost. AWS offers you the Simple Monthly Calculator and the TCO Calculator. It is, however, YOUR job to make this as detailed as possible, taking storage and bandwidth usage into account, to get a good estimate.

4. Don’t keep things running 24/7 if they don’t need to be

On AWS you pay per hour you use a resource, e.g. a virtual server. On Google Compute Engine you even pay per minute, so destroying resources when you don’t need them is a must to keep costs down.

Using Infrastructure as Code you can create templates that will build your infrastructure, networking and application setups, as I’ve stated above already. This also allows you to create identical stacks for a staging, QA or development environment whenever you need them, and destroy them again when you are done.

A simple example would be a QA environment, identical to the production environment, that only runs during office hours, since nobody will be using it outside of those.
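A sketch of how that could look using scheduled Auto Scaling actions; the QA Auto Scaling Group referenced here is hypothetical and the times are in UTC:

Resources:
  QAScaleUpMorning:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref QAWebServerGroup   # hypothetical QA Auto Scaling Group
      MinSize: 1
      MaxSize: 2
      DesiredCapacity: 1
      Recurrence: "0 7 * * 1-5"      # weekdays at 07:00 UTC: bring the QA environment up

  QAScaleDownEvening:
    Type: AWS::AutoScaling::ScheduledAction
    Properties:
      AutoScalingGroupName: !Ref QAWebServerGroup
      MinSize: 0
      MaxSize: 0
      DesiredCapacity: 0
      Recurrence: "0 18 * * 1-5"     # weekdays at 18:00 UTC: scale everything down to zero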

If you provide enough input parameters for your IaC templates you can optimize costs even further: production runs in two physical locations, while QA could run in only one, since it does not require the same level of high availability.
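A sketch of what such a parameter could look like, using a condition to run QA in a single Availability Zone (the subnet IDs and launch configuration are placeholders):

Parameters:
  Environment:
    Type: String
    AllowedValues: [prod, qa]
    Default: qa

Conditions:
  IsProd: !Equals [!Ref Environment, prod]

Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: !If [IsProd, "2", "1"]
      MaxSize: !If [IsProd, "6", "2"]
      LaunchConfigurationName: !Ref WebServerLaunchConfig   # assumed to exist elsewhere in the template
      VPCZoneIdentifier: !If
        - IsProd
        - [subnet-11111111, subnet-22222222]   # two Availability Zones for production
        - [subnet-11111111]                    # a single Availability Zone is enough for QA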

Misconception: But AWS crashes all the time :(

One of AWS’s slogans is actually Design for failure.

Hardware crashes; there is no hosting company that will give you a 100% uptime guarantee on any single piece of hardware. Even with the most redundant setup there are still weak links in the chain that can break. What you need to do on the public cloud is make sure that your application can handle failure.

Examples:

  • run Auto Scaling instances behind a load balancer instead of a single host (yes, you will need to redesign your application for this; see the sketch after this list)
  • run in multiple datacenters and even in multiple geographical regions
  • get a good understanding of which services the public cloud offers as highly available and what you still have to do yourself
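As referenced above, here is a minimal sketch of the first two points combined: a classic load balancer in front of an Auto Scaling Group spread over two Availability Zones (resource names, the launch configuration and the subnet IDs are placeholders):

Resources:
  WebLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      CrossZone: true
      Subnets:                       # one subnet per Availability Zone
        - subnet-11111111
        - subnet-22222222
      Listeners:
        - LoadBalancerPort: "80"
          InstancePort: "80"
          Protocol: HTTP

  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "6"
      LaunchConfigurationName: !Ref WebServerLaunchConfig   # assumed to exist elsewhere in the template
      LoadBalancerNames:
        - !Ref WebLoadBalancer
      VPCZoneIdentifier:             # spread the instances over both Availability Zones
        - subnet-11111111
        - subnet-22222222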

Designing your application for the public cloud will be a challenge, but there are enough example cases out there already that show you how to do this. And with container technology becoming more mature every month, this has become a whole lot easier to achieve.

Misconception: We don’t want vendor lock-in to public cloud provider X

In all the applications I have created or moved to the cloud there was very little vendor lock-in besides the IaC tool we used (and you can even use something like Terraform to remove that lock-in). Things that all applications use but that are not specific to any public cloud:

  • Linux virtual servers
  • Docker containers
  • MySQL/PostgreSQL databases
  • memcache and/or Redis
  • NFS storage
  • DNS

The core of the application is still the Docker container, where your actual code runs. On AWS this will run on ECS with ALBs, but you can just as well run the containers on Google Compute Engine or Microsoft Azure using their equivalents of those services; it will not require any change in your application code at all.
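As a rough sketch of what that AWS-specific wiring looks like, here is a minimal ECS task definition in CloudFormation (image name, memory and port are placeholders); note that none of it touches the application code itself:

Resources:
  AppTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: webapp
          Image: mycompany/webapp:latest   # the same Docker image you would run on any other platform
          Memory: 512
          PortMappings:
            - ContainerPort: 8080          # the ALB target group would point at this port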

But… if you want to make the most of public cloud provider X you will need to develop for it. On AWS you would e.g. make your application run on ECS and use S3, SNS and SQS to glue things together. But once you do this you will realise how powerful and virtually limitless the public cloud is.

I hope you found this blog post useful, feel free to leave a comment below.

Life Update 2017

Time for a little personal update again.

Home

I’ve been decorating my apartment. It turned out to be a lot of fun and I’ve developed quite an interest in home decoration now.

Living Room

Work

I still work with computers, most of them in the cloud. Also still in the gambling / sports betting business. I’m moving more into a leading / managing position though, the whole “use those 20 years of experience to guide the team in the right direction” thing.

Co-worker

Gym

I still train 4 days a week, most of them at SATS, a few days each month at Sweden Barbell Club.

Sweden Barbell Club

Life

I’m trying to step up my wardrobe: investing in some expensive clothes that will last a long time if you take good care of them. I’ve never been a big fan of the whole Primark throwaway culture anyway.

Hobbies

I bought the smallest Marshall amp there is, so I can play some guitar again at home. It has a great sound at a really low volume, but can go loud if needed. Perfect for an apartment.

Marshall

Women

Same old, same old.

Commitment by Cyanide and Happiness

My favorite developer tools for OSX

Macbook Pro

Having the right tools on your computer is the key to working fast and efficiently, without wasting much time on repetitive and boring actions. Here’s a list of the tools I currently use on my MacBook:

iTerm 2

The standard Terminal application in OSX is quite limited so we need a better one:

iTerm 2

iTerm2 is a replacement for Terminal and the successor to iTerm. It works on Macs with macOS 10.8 or newer. iTerm2 brings the terminal into the modern age with features you never knew you always wanted.

iTerm 2 is simply the de facto standard terminal application for OSX. You can split windows with simple commands (cmd-d and cmd-shift-d), easily copy/paste text (just select it, no copy command needed), click on hyperlinks without having to copy the text first (hold down CMD when you hover over the link) and many more handy features.

Website: https://www.iterm2.com/

Homebrew

Homebrew

Homebrew installs the stuff you need that Apple didn’t.

Homebrew is literally the first thing you install on a new Mac after installing iTerm2. It’s a command line package manager, best compared to yum or apt-get, and it has just about every little piece of GNU and open source software available.

Homebrew in action

Some of the other tools in the blog post will also be installed using Homebrew.

Website: http://brew.sh/

ShiftIt

ShiftIt is an application for OSX that allows you to quickly manipulate window position and size using keyboard shortcuts. It intends to become a full featured window organizer for OSX.

I hate having windows that don’t use the full or half width of my screen. ShiftIt offers a few simple keyboard shortcuts to maximize windows or move them to fill the whole or part of my screen, without ever having to touch my mouse.

  • ctrl-alt-cmd-m: maximize current window
  • ctrl-alt-cmd-ARROWKEY: scale windows to take up half the screen, attached to the side of the screen matching the arrow key you press. Then use ctrl-alt-cmd-EQUAL and ctrl-alt-cmd-MINUS to stretch them out a bit.
  • ctrl-alt-cmd-NUMBERKEY: scale windows to a quarter of your screen in a corner; which corner depends on the number you press

Website: https://github.com/fikovnik/ShiftIt

Homebrew install: brew cask install shiftit

1Password

1Password

I never ever use a password for more than one site, and neither should you. But since remembering tons of passwords is nearly impossible, we need a password manager. For me that password manager is 1Password: it stores encrypted passwords in a file that you can sync via iCloud, Dropbox or a network share, or just copy around manually.

1Password has native apps for almost all platforms (Windows, OSX, iOS, Android) and a browser extension for all the popular browsers, making filling in password forms easy. The latest version of 1Password even includes a 2FA system.

1Password is not free software, but it’s worth every cent.

Website: https://1password.com/

One year in Sweden

I moved to Stockholm a little over a year ago now, time to have a quick look back at how that went and what I have learned about Stockholm and Sweden.

Stockholm

Stockholm is high-tech

People call Stockholm “the Silicon Valley of Europe”, and I have to agree with that statement. There are hundreds of startups here, with Spotify probably the most famous of them all, and they attract a lot of highly educated people from all over the world. If you work as a software developer or IT specialist, finding a job in Stockholm really is no problem.

That is only true if you are from inside the EU though. People from outside the EU are currently having a hard time getting accepted to work in Sweden, which is not a bad thing given the housing problem we currently have here (see further down).

Pro-immigrant and pro-equality site The Local writes a lot about the troubles of immigrants (which I am one, too), often with an overly sentimental tone on how hard it is for the poor fellows to integrate into Swedish cities and the job market. It’s just as hard for ordinary Swedes, who pay the same rents or spend 20 years in a rental queue, but they prefer to quietly ignore that fact.

Stockholm is expensive

While it’s pretty easy to find a job in software development or IT here, the opposite is true for finding a place to live. There is a big shortage of housing in all of Sweden, especially in the bigger cities like Stockholm, which is the result of years of bad planning by the housing administration.

What this means in practice is that rent prices are extremely high, if you even manage to find a place at all. Buying a place is actually quite doable, but it requires you to have saved a decent chunk of the purchase price before banks will give you a loan: the official down payment is 15%, but in practice you should aim for 25% to get a decent deal.

Stockholm is fit

I don’t think there are a lot of cities with as many gyms per inhabitant as Stockholm. In the city center there is a gym, a CrossFit box or some other kind of sports facility on every street corner. Staying fit is also well integrated into the company culture, so starting late or leaving early for a gym class is accepted everywhere.

Stockholm is all about equality

Gender, race, income, religion… Sweden makes a big point of treating everyone as equals, and they love it when the world acknowledges this. Every week there’s at least one newspaper or website somewhere in the world writing about Sweden and how equal things are here.

But they sometimes push it a little too far, like the gender-equal snow clearing. It comes down to clearing snow from the streets for cars (mostly used by men) having the same priority as clearing the pavements next to schools (mostly used by women). I think it’s more an issue of setting sane priorities in general, but what do I know?

Stockholm is lonely

Swedes tend to be quiet and reserved people, which makes it hard to make new friends here. There are also no casual chats on the subway, the bus or in the elevator; everybody just keeps to themselves. And I, as an introverted technology lover, like that.

But when it comes to dating there are still bars, gyms and Tinder, like in the rest of the world. Swedish people tend to loosen up when they get drunk; that really is the key to meeting new people here.

Happy Swedes

Sidenote: Alcohol is expensive here and sold in government-controlled stores only, where you have to be at least 20 years old to buy stuff.

Stockholm is home.

Yep. I like it here and I’m going to stay.

Deploying a Hugo website to Amazon S3 using AWS CodeBuild

A month ago I blogged about using Bitbucket Pipelines as a deployment tool to deploy my Hugo website to AWS S3. It was a fully automated setup that deployed a new version of the site every time I pushed a commit to the master branch of the git repo.

Lately I’ve been moving more things to AWS, as having everything on AWS makes it easier to integrate stuff, including my Hugo blog. Let me show you how I set up the build process on AWS.

CodeCommit

Firstly, I moved my git repo from the public, free Bitbucket server to AWS CodeCommit. There really is nothing special to say about that: CodeCommit is simply git on AWS (details on pricing).

The only thing I want to stress, again, is that you should not use your admin user to push code, but create a new IAM user with limited access so it can only push code and nothing more. The CodeCommit page will guide you through that, up to the point of creating SSH keys.

The AWS Managed Policy AWSCodeCommitFullAccess should provide all the access needed; there is no need to write your own policy.

CodeBuild

Secondly, I needed a replacement for Bitbucket Pipelines: AWS CodeBuild. Launched in December 2016, CodeBuild is almost exactly the same kind of build system as Bitbucket Pipelines (and Travis CI, and GitLab CI, and so many other Docker-driven build systems) and there is just one thing you need to create yourself: a build template.

Here’s what I used as buildspec.yml for building and deploying my Hugo blog:

version: 0.1

environment_variables:
  plaintext:
    AWS_DEFAULT_REGION: "YOUR_AWS_REGION_CODE"
    HUGO_VERSION: "0.17"
    HUGO_SHA256: "f1467e204cc469b9ca6f17c0dc4da4a620643b6d9a50cb7dce2508aaf8fbc1ea"

phases:
  install:
    commands:
      - curl -Ls https://github.com/spf13/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.tar.gz -o /tmp/hugo.tar.gz
      - echo "${HUGO_SHA256}  /tmp/hugo.tar.gz" | sha256sum -c -
      - tar xf /tmp/hugo.tar.gz -C /tmp
      - mv /tmp/hugo_${HUGO_VERSION}_linux_amd64/hugo_${HUGO_VERSION}_linux_amd64 /usr/bin/hugo
      - rm -rf /tmp/hugo*
  build:
    commands:
      - hugo
  post_build:
    commands:
      - aws s3 sync --delete public s3://BUCKETNAME --cache-control max-age=3600

The Docker image I used was the standard Ubuntu Linux 14.04 one since I don’t require any custom software during my build plan.

For more complex jobs you can provide your own Docker image to run the build process in. Make sure it includes glibc, otherwise AWS will not be able to run it. Sadly this rules out most Alpine-based images (which use musl instead), but for a build process that shouldn’t be a big issue.

Instead of using an IAM user by providing the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in my build template, I used the CodeBuild IAM role to define my access to the S3 bucket. CodeBuild will generate this role for you when creating a build plan, just add this custom IAM policy to that role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:List*",
                "s3:Put*",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKETNAME",
                "arn:aws:s3:::BUCKETNAME/*"
            ],
            "Effect": "Allow"
        }
    ]
}

Replace BUCKETNAME with the name of your S3 bucket.

Some remarks

Right now deployment is a manual action: I log into the AWS CodeBuild console and push the Run build button. CodeBuild has no easy “Build on new commits” option, but you can of course use AWS Lambda to build that yourself. I will do that soon for my blog, and then I’ll update this post with the Lambda I used.
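As an alternative to a custom Lambda, here is a minimal sketch of a CloudWatch Events rule that starts a build whenever the master branch of the repository changes; this assumes CloudWatch Events can target CodeBuild directly in your account, and the repository, project and role resources referenced here are hypothetical:

Resources:
  BuildOnPush:
    Type: AWS::Events::Rule
    Properties:
      Description: Start a CodeBuild build when the master branch is updated
      EventPattern:
        source:
          - aws.codecommit
        detail-type:
          - CodeCommit Repository State Change
        resources:
          - !GetAtt BlogRepository.Arn           # hypothetical CodeCommit repository resource
        detail:
          referenceType:
            - branch
          referenceName:
            - master
      Targets:
        - Arn: !GetAtt HugoBuildProject.Arn      # hypothetical CodeBuild project resource
          Id: start-hugo-build
          RoleArn: !GetAtt EventsStartBuildRole.Arn   # hypothetical role that allows codebuild:StartBuild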

If you are looking for a complete pipeline system like GoCD, AWS CodePipeline is what you need.