Configure Telenet IPv6 on pfSense 2.2.x

It took me some time to figure this out, but thanks to input from @iworx I managed to get IPv6 up and running on my pfSense router behind my Telenet business modem (the modem-only, non-wifi version).

The pfSense version I used was 2.2.2.

Here are two screenshots of the interface pages. The one change that fixed it was disabling “Block Bogon networks” on the WAN interface page, so do not forget to uncheck it!

Screenshot 1 Screenshot 2

That’s all.

Reverse proxy configuration for Drupal 7 sites

Update 13/7/2015: If you’re doing this for your Drupal 7 site, you should probably also read this blog post about updating your varnish and Apache / nginx configuration to properly log the real IP address of visitors.

A common mistake I often see from developers who have a varnish server (or another type of content cache) in front of their Drupal 7 site is forgetting to add these lines to their settings.php:

// reverse proxy support to make sure the real ip gets logged by Drupal
$conf['reverse_proxy'] = TRUE;
$conf['reverse_proxy_addresses'] = array('127.0.0.1');
$conf['reverse_proxy_header'] = 'HTTP_X_FORWARDED_FOR';

The default.settings.php does contain a very clear comment about why and when to use it:

/**
 * Reverse Proxy Configuration:
 *
 * Reverse proxy servers are often used to enhance the performance
 * of heavily visited sites and may also provide other site caching,
 * security, or encryption benefits. In an environment where Drupal
 * is behind a reverse proxy, the real IP address of the client should
 * be determined such that the correct client IP address is available
 * to Drupal's logging, statistics, and access management systems. In
 * the most simple scenario, the proxy server will add an
 * X-Forwarded-For header to the request that contains the client IP
 * address. However, HTTP headers are vulnerable to spoofing, where a
 * malicious client could bypass restrictions by setting the
 * X-Forwarded-For header directly. Therefore, Drupal's proxy
 * configuration requires the IP addresses of all remote proxies to be
 * specified in $conf['reverse_proxy_addresses'] to work correctly.
 *
 * Enable this setting to get Drupal to determine the client IP from
 * the X-Forwarded-For header (or $conf['reverse_proxy_header'] if set).
 * If you are unsure about this setting, do not have a reverse proxy,
 * or Drupal operates in a shared hosting environment, this setting
 * should remain commented out.
 *
 * In order for this setting to be used you must specify every possible
 * reverse proxy IP address in $conf['reverse_proxy_addresses'].
 * If a complete list of reverse proxies is not available in your
 * environment (for example, if you use a CDN) you may set the
 * $_SERVER['REMOTE_ADDR'] variable directly in settings.php.
 * Be aware, however, that it is likely that this would allow IP
 * address spoofing unless more advanced precautions are taken.
 */
# $conf['reverse_proxy'] = TRUE;

If you don’t configure this when you have varnish, all your Drupal requests will have 127.0.0.1 (= the IP address of the varnish server) as the source IP address. You can easily see this in the webserver and watchdog logs.

This might not seem like a big deal, but Drupal also has something called ‘flood protection’. This protection bans users by IP address if they have made too many failed logins in a period of time (the default is 50 failed logins over 1 hour).

And what do you think happens when all your users come from the same ip and the flood protection gets triggered? Yup, everyone gets banned.

Elasticsearch backup script with snapshot rotation

Edit 2015/10/16: Added the example restore script.

Edit 2015/3/31: It seems there is also a python script called curator that is intended as a housekeeping tool for elasticsearch. While curator is a more complete tool, my script below works just as well and doesn’t need python installed. Use whichever tool you prefer.

Elasticsearch 1.4 has an easy way to make backups of an index: snapshot and restore. If you use the filesystem type, you can just make a snapshot, rsync/scp/NFS-export the files to another host and restore them from those files.

Setup the snapshot repository

Setup the snapshot repository location:

curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup",
    "compress": true
  }
}'
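To confirm the repository was registered, you can read its definition back (assuming, as above, that Elasticsearch answers on localhost:9200):

```shell
# Read back the repository definition we just registered; the response
# should echo the "fs" type and the location configured above.
curl -s -XGET 'http://localhost:9200/_snapshot/my_backup?pretty'
```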

Take snapshots

A backup script you can run on cron would be as simple as this:

#!/bin/bash
SNAPSHOT=`date +%Y%m%d-%H%M%S`
curl -XPUT "localhost:9200/_snapshot/my_backup/$SNAPSHOT?wait_for_completion=true"

While it’s very easy to set up this backup, there is currently no built-in rotation to remove old snapshots. I wrote a small script using the jq program that keeps the last 30 snapshots and deletes anything older:

#!/bin/bash
#
# Clean up script for old elasticsearch snapshots.
# 23/2/2014 karel@narfum.eu
#
# You need the jq binary:
# - yum install jq
# - apt-get install jq
# - or download from http://stedolan.github.io/jq/

# The amount of snapshots we want to keep.
LIMIT=30

# Name of our snapshot repository
REPO=my_backup

# Get a list of snapshots that we want to delete
SNAPSHOTS=`curl -s -XGET "localhost:9200/_snapshot/$REPO/_all" \
  | jq -r ".snapshots[:-${LIMIT}][].snapshot"`

# Loop over the results and delete each snapshot
for SNAPSHOT in $SNAPSHOTS
do
 echo "Deleting snapshot: $SNAPSHOT"
 curl -s -XDELETE "localhost:9200/_snapshot/$REPO/$SNAPSHOT?pretty"
done
echo "Done!"
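The jq slice `.snapshots[:-${LIMIT}]` selects everything except the last LIMIT entries. The same idea in plain bash, with a hypothetical list of snapshot names (oldest first) standing in for the jq output:

```shell
#!/bin/bash
# Local sketch of the "keep the last N" logic from the script above,
# using a made-up list of snapshot names instead of a live cluster.
LIMIT=3
SNAPSHOTS=(20150101 20150102 20150103 20150104 20150105)
COUNT=${#SNAPSHOTS[@]}
if [ "$COUNT" -gt "$LIMIT" ]; then
  # Everything except the last $LIMIT entries is up for deletion
  for SNAPSHOT in "${SNAPSHOTS[@]:0:COUNT-LIMIT}"; do
    echo "Would delete snapshot: $SNAPSHOT"
  done
fi
```

With 5 snapshots and a limit of 3, only the 2 oldest are selected; if there are fewer snapshots than the limit, nothing is deleted.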

Restore snapshots

Get a list of all the snapshots in the snapshot repository:

curl -s -XGET "localhost:9200/_snapshot/my_backup/_all?pretty"

From that list pick the snapshot id you want to restore and then make a script like this:

#!/bin/bash
#
# Restore a snapshot from our repository
SNAPSHOT=123

# We need to close the index first
curl -XPOST "localhost:9200/my_index/_close"

# Restore the snapshot we want
curl -XPOST "http://localhost:9200/_snapshot/my_backup/$SNAPSHOT/_restore" -d '{
 "indices": "my_index"
}'

# Re-open the index
curl -XPOST 'localhost:9200/my_index/_open'
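While the restore is running you can follow its progress per shard; a sketch, assuming the index recovery API that Elasticsearch 1.4 exposes:

```shell
# Follow shard recovery progress of the index being restored
curl -s -XGET 'localhost:9200/my_index/_recovery?pretty'
```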

My home network & server park

Every guy who’s into IT and/or development probably has some kind of way too complex computer setup at home. Why? Because we can.

Here’s my current one as of March 2015.

Servers

I’ve had more server setups at home than in my datacenter, but nowadays things have been simplified a lot, all thanks to virtualisation. There are basically 2 components left now:

  1. A storage server
  2. A virtualisation hypervisor

The Synology storage server, the Dell hypervisor and the HP Procurve 1910-16G switch.

My current storage server is the Synology DS1513+. I’ve been a Synology fan for some years now and this NAS is of the same high quality as the ones I had before: solid hardware, not too expensive and with a very complete interface on top of the Linux OS. It’s running in (software) RAID 5 with 5 Seagate 4TB disks and gets a nice 90MB/sec write speed, which is more than enough for a home network.

The virtualisation hypervisor is a Dell R210 with 16GB RAM running VMware ESXi 5.5. The single-server license for a maximum of 2 CPUs and unlimited RAM is free nowadays, so I didn’t really feel like setting up Xen or KVM when ESXi has such a sweet GUI. And besides that, I have never seen an ESXi hypervisor crash, ever, in more than 6 years of using it.

The hypervisor has no disks installed as ESXi boots from a 1GB USB disk that’s plugged into a USB slot on the inside of the server. It then mounts an NFS share from the Synology and uses that for storage.

Network

I currently have 4 switches in my apartment:

  • An HP Procurve 1810-8G, installed on my desktop table. It’s a fanless gigabit switch, so it’s nice and quiet.
  • An HP Procurve 1910-16G, for the dataroom in my second bedroom. It connects my storage server, the hypervisor and the wifi access point. The Synology has two 2x1Gb trunks: one for the storage network, one for the normal internal LAN. The Synology works out of the box with the dynamic LACP aggregation the HP offers; the ESXi 5.5 server can only do static LACP (as I don’t use a distributed switch).
  • A second HP Procurve 1810-8G, placed near my modem, is the central switch for the apartment. It’s got two 2x1Gb trunks to the previous switches. Why? Because I can, of course.
  • And finally a cheap 10/100Mbit switch for my Samsung smart tv and the Apple TV; they don’t need much bandwidth, so this old one serves just fine.

For my router / firewall I use a pfSense box that’s also virtualised on the ESXi, just like the rest of the servers. pfSense is a user-friendly FreeBSD distribution that has a ton of features.

You can also get it pre-installed on some dedicated hardware devices; I currently have a few of these boxes running at my clients’ offices.

pfSense 2.2.1 GUI dashboard page

Using a separate VLAN I can connect my modem to the pfSense box without it needing to be attached directly. It’s a simple setup with an untagged port on the HP switch near the modem and a virtual switch inside ESXi that has been tagged in the same VLAN.

Virtual servers

Besides the pfSense box, I’m currently running:

  • A Plesk shared hosting server, used as my PHP development box.
  • An Atlassian Bamboo deployment server for my sites and a few customers. It uses the Plesk shared hosting server as a CI environment to run automated tests before deploying. This server also runs Jenkins, but I haven’t really done serious work with it yet.
  • An Asterisk server. Just for fun really, I don’t actually use it.
  • A Puppet server for my homelan. Yup, totally overkill but fun to test with :)
  • A Zabbix / nagios monitoring server that monitors my monitoring servers in the datacenter.
  • A few Linux and BSD boxes, mostly to check out some other distributions.

I used to run a few Windows boxes on the server, mostly for browser testing, but nowadays I just use the modern.ie images in VMware Fusion on my MacBook. I bought the 16GB RAM version and running multiple Windows VMs at the same time is really no problem at all.

I used to run all my VMs on VirtualBox, but I couldn’t get some USB devices to work (e.g. the Belgian eID reader). VMware Fusion had no problem with them.

The best way to run a Drupal 7 cron via drush

Edit 2015/4/8: You’ll also want to read this article from Mattias on preventing cronjobs from overlapping.

Edit 2015/3/18: Check this article on how to install drush via composer, which is currently the preferred installation method.

This is the best way to run your Drupal 7 drush cron command from crontab:

# Run cron every 15 minutes, quiet
*/15 * * * * drush --quiet --root=$HOME/htdocs --uri=http://www.example.org cron

# --quiet  means there is no output, so no mail every time cron runs
# --root   is a nicer way than first do a "cd $dir; drush cron"
# --uri    is needed so certain modules (like xmlsitemap and media)
#          don't generate urls like http://default/ but use the full
#          site url http://www.example.org
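If a cron run can occasionally take longer than 15 minutes, you may also want to stop runs from overlapping. A sketch using flock(1) from util-linux; the lock file path is just an assumption:

```shell
# Skip this run entirely (-n = non-blocking) if the previous
# one still holds the lock
*/15 * * * * flock -n /tmp/drush-cron.lock drush --quiet --root=$HOME/htdocs --uri=http://www.example.org cron
```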

A more extensive cron with a custom SHELL, PATH and MAILTO would be:

# If there is output, mail it to this address
MAILTO=user@example.org

# Add more paths
PATH=$PATH:$HOME/.extra/bin

# Make sure we are using bash as shell (or any other shell)
SHELL=/bin/bash

# Run cron every 15 minutes, quiet
*/15 * * * * drush --quiet --root=$HOME/htdocs --uri=http://www.example.org cron

# Run an import every night, output goes to MAILTO
0 2 * * * drush --root=$HOME/htdocs --uri=http://www.example.org run-import

That’s all.