All posts by Oliver Tappin

Server emails not sending or being received as spam?

Email providers are constantly improving their spam filtering, demanding stronger authentication from sending servers before they will accept mail. This is great for the receiver, who no longer has to worry about spam filling their inbox, but not so great for the server administrator whose genuine email alerts are being marked as spam or, worse, not being received at all.

One solution is to use a transactional mail API such as Mailgun or Amazon Simple Email Service (SES), both of which are free up to a specified limit (and quite cheap beyond that). The alternative is to make your server look more legitimate to receiving mail servers.

Let’s start off by getting a basic benchmark from Mail Tester. This is a great little tool for testing your server’s email capabilities, whether you’re sending alerts, newsletters or password reset links. From the command line, we can send a test email directly to the address Mail Tester has generated for us.

First, let’s create a basic text file which will contain the content of our email. Head to your home directory and create a new file using your favourite text editor:
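For example, using nano (the filename test-email.txt is my choice; use whatever you like):

```shell
cd ~
nano test-email.txt
```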

The content I’ve used is:
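Something simple is fine; as a sketch, you can create the file and its content in one go with a heredoc (the wording here is illustrative, not the original message):

```shell
cat > ~/test-email.txt <<'EOF'
Hi there,

This is a test email sent from my server to check its spam score with Mail Tester.

Thanks,
Oliver
EOF
```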

Now, we can send the email to the test address:
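The address below is a placeholder; use the one Mail Tester generates for you:

```shell
mail -s "Test email from my server" test-xxxxxx@mail-tester.com < ~/test-email.txt
```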

If you don’t have the mail command, you can install it using:
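On Debian/Ubuntu systems, the mail command is provided by the mailutils package:

```shell
sudo apt-get install -y mailutils
```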

Let’s head over to Mail Tester and check our score:

If you scroll down a little more, you can open up the list items to see further details:

Great, now that we have a starting point, we can work through each item and improve our email authentication. Follow the instructions carefully for each item and do as much as you can. There are some very quick and easy wins around DNS records for SPF, DKIM and DMARC. Changes as small as these, which can be made in under an hour, can drastically improve your authentication results.
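For illustration only, SPF and DMARC records look something like the following (the domain, IP and policy values are placeholders; generate your own):

```
; SPF: which servers may send mail for this domain
example.com.         IN TXT  "v=spf1 mx a ip4:203.0.113.10 -all"

; DMARC: what receivers should do with mail that fails SPF/DKIM
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"

; DKIM requires generating a key pair (e.g. with opendkim) and publishing
; the public key at <selector>._domainkey.example.com
```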

And we can see which categories have improved:


In my case, I just needed my server to send emails successfully to Gmail. Gmail’s spam filters are quite strict, so previously no emails were getting through at all. Now, with a score of 5.7, those emails go straight to my inbox. That’s enough for me.

Installing Nagios with Nginx, PHP-FPM and Nagiosgraph on Ubuntu 16.04

Update Your System

Always ensure your system is up-to-date before installing new packages by running:
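On Ubuntu 16.04 that means:

```shell
sudo apt-get update && sudo apt-get upgrade -y
```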

Please note, this guide assumes you already have Nginx and PHP-FPM installed.


For the sake of clarity, the server I’m installing this on has the following versions of Ubuntu, Nginx and PHP-FPM:

Nagios User and Group

We’ll start by creating a new user and group specific to Nagios:
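A sketch of the usual commands (the nagios user and nagcmd group names follow the Nagios documentation's convention):

```shell
sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios
```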

Required Dependencies

Next, we’ll install the dependencies we need:
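The exact package list will vary, but for a source build of Nagios on Ubuntu 16.04 something like the following should cover it (postfix is included here as it's referenced later in this guide):

```shell
sudo apt-get install -y build-essential libgd2-xpm-dev openssl libssl-dev unzip postfix
```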

Install Nagios Core

Then, we want to install Nagios Core. To get the latest version, you’ll have to check the Nagios website to see what the latest stable release is, and change the version number accordingly:

At the time of writing, this is v4.3.2. Once downloaded, we need to extract the archive:

Now this has been extracted, we can delete the archive and change into the extracted directory:
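The download, extract and clean-up steps look like this (the URL follows the pattern Nagios uses for its release archives; check the downloads page for the current link):

```shell
cd ~
wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.3.2.tar.gz
tar -xzf nagios-4.3.2.tar.gz
rm nagios-4.3.2.tar.gz
cd nagios-4.3.2
```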

Before we actually go ahead and install Nagios, we need to configure it. We’re going to install Nagios with postfix, which we’ve already installed. To do this, we run the following command:
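The flags below match the user and group we created earlier; the sendmail path is provided by postfix:

```shell
./configure --with-nagios-group=nagios --with-command-group=nagcmd --with-mail=/usr/sbin/sendmail
```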

If you read the output, which should be rather verbose, it should let you know where everything is and how to continue:

As it says, review the options above, and if they look okay, run the following command to compile the main program and CGIs:
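That command is:

```shell
make all
```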

Installing from source always scares me at this point, as there’s quite a lot going on in the console. Don’t worry, this is normal. If this finishes and ends with ‘Enjoy’, then you’re doing well.

Now, we can get down to actually installing Nagios along with some initial scripts and sample configuration files:
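The standard Nagios Core install targets are:

```shell
sudo make install             # main program, CGIs and HTML files
sudo make install-commandmode # permissions on the external command directory
sudo make install-init        # init script
sudo make install-config      # sample configuration files
```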

In order for us to make external commands via the web interface to Nagios, we’ll need to sort out permissions. To do this, we’ll add the web server user to our nagcmd group:
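On Ubuntu with Nginx, the web server user is www-data:

```shell
sudo usermod -a -G nagcmd www-data
```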

Nagios Plugins

Just like Nagios Core, you’ll need to consult the Nagios website to find the latest version of the Nagios Plugins download:

At the time of writing, this is v2.2.1. Once downloaded, we need to extract the archive:

Now this has been extracted, we can delete the archive and change into the extracted directory:

Before installing Nagios Plugins, we’ll need to configure it:

Now, compile Nagios Plugins:

Then, install it:

To finish off our installation, we’ll need to copy over the check_* files to our Nagios configuration, so we can use them later for monitoring:
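Taken together, the plugin steps above can be sketched as follows (check the downloads page for the current archive URL):

```shell
cd ~
wget https://nagios-plugins.org/download/nagios-plugins-2.2.1.tar.gz
tar -xzf nagios-plugins-2.2.1.tar.gz
rm nagios-plugins-2.2.1.tar.gz && cd nagios-plugins-2.2.1
./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl
make
sudo make install
```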

Install NRPE

Find the latest stable version of NRPE from the NRPE downloads page. If you’re starting to see a trend, you’re right. Please note, you may need to manually download the file from their website, as SourceForge doesn’t seem to provide direct links. Go ahead and change the version number in the following command if necessary:

At the time of writing, this is v3.1.1. Once downloaded, we need to extract the archive:
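If a direct link does work for you, the steps look something like this (the URL is an assumption; grab the real link from the downloads page):

```shell
cd ~
wget https://sourceforge.net/projects/nagios/files/nrpe-3.x/nrpe-3.1.1/nrpe-3.1.1.tar.gz
tar -xzf nrpe-3.1.1.tar.gz
```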

If the above command fails with:

Then you may need to manually upload the tar.gz file after downloading it from the SourceForge website. Once it’s downloaded to your machine, you can use scp or rsync to upload the file to your server:
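From your local machine (the user, host and paths are placeholders):

```shell
scp ~/Downloads/nrpe-3.1.1.tar.gz user@your-server:~/
# or, equivalently:
rsync -avz ~/Downloads/nrpe-3.1.1.tar.gz user@your-server:~/
```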

After transferring the correct file to the server, go ahead and run the tar command again, as described above.

Now this has been successfully extracted, we can delete the archive and change into the extracted directory:
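```shell
rm nrpe-3.1.1.tar.gz && cd nrpe-3.1.1
```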

Before installing NRPE, we’ll need to configure it:
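A typical configure invocation, reusing the user and group we created earlier:

```shell
./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios
```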

Now build and install NRPE:
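```shell
make all
sudo make install
sudo make install-config
sudo make install-init
```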

Enable the NRPE service:
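On Ubuntu 16.04 with systemd:

```shell
sudo systemctl enable nrpe.service
```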

If you have xinetd installed, you’ll need to configure your /etc/xinetd.d/nrpe file to only allow the IP address of the Nagios server. See the NRPE documentation for more information.

Now that Nagios, Nagios Plugins and NRPE are installed, we need to configure Nagios.

Configure Nagios

Open the main Nagios configuration file in your favourite text editor:
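Assuming the default install prefix used earlier:

```shell
sudo vi /usr/local/nagios/etc/nagios.cfg
```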

And remove the following comment:
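The line in question is most likely this one (an assumption based on the servers directory we create below); remove the leading # so it reads:

```
cfg_dir=/usr/local/nagios/etc/servers
```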

Pro tip: to search in vim, use the / command followed by the string you want to search for, then hit enter.

Then save and exit.

Next, we want to create the directory where all our server configuration files will live:
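```shell
sudo mkdir -p /usr/local/nagios/etc/servers
```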

Configure Nagios Contacts

Open the Nagios contacts configuration in your favourite text editor:
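```shell
sudo vi /usr/local/nagios/etc/objects/contacts.cfg
```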

And change the default email address to your email address:
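Inside the nagiosadmin contact definition, the line to change looks something like this (your address in place of the example):

```
define contact {
    contact_name    nagiosadmin
    ...
    email           you@example.com    ; was: nagios@localhost
}
```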

Then save and exit.

Configure check_nrpe Command

Let’s go ahead and add a new command to our Nagios configuration using our favourite text editor:
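```shell
sudo vi /usr/local/nagios/etc/objects/commands.cfg
```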

And add the following to the end of the file:
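A standard check_nrpe command definition looks like this:

```
define command {
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
```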

Then, save and exit. This allows you to use the check_nrpe command in your Nagios service definitions.

Configure Nginx

Up until now, you may have found most installation guides show you how to install Nagios, Nagios Plugins and NRPE. However, very few show you how to set this up on an Nginx and PHP-FPM environment.

We’ll start by installing the fcgiwrap dependency, so we can run the Nagios CGI scripts:
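```shell
sudo apt-get install -y fcgiwrap
```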

Next, you’ll need to create a new virtual host for Nagios. Depending on your current setup, these are usually found in /etc/nginx/sites-available. If that’s the case, create a new configuration file for Nagios using your favourite text editor:
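```shell
sudo vi /etc/nginx/sites-available/nagios
```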

And use the following Nginx virtual host configuration:
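As a sketch (all paths, the PHP-FPM socket and the fcgiwrap socket are assumptions; adjust them to match your install and PHP version):

```nginx
server {
    listen 443 ssl;
    server_name nagioshost.local;

    ssl_certificate     /etc/ssl/certs/nagios.crt;
    ssl_certificate_key /etc/ssl/private/nagios.key;
    ssl_dhparam         /etc/ssl/certs/dhparam.pem;

    root  /usr/local/nagios/share;
    index index.php index.html;

    # Nagios CGIs, served via fcgiwrap
    location ~ \.cgi$ {
        root /usr/local/nagios/sbin;
        rewrite ^/cgi-bin/(.*)$ /$1;
        include fastcgi_params;
        fastcgi_param REMOTE_USER $remote_user;
        fastcgi_param SCRIPT_FILENAME /usr/local/nagios/sbin$fastcgi_script_name;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }

    # PHP pages, served via PHP-FPM
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    # Nagiosgraph CGIs (installed later in this guide)
    location /nagiosgraph {
        alias /usr/local/nagiosgraph/cgi;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/local/nagiosgraph/cgi$fastcgi_script_name;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }
}
```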

Then, save and quit. Once that’s done, go ahead and add the symlink needed for the sites-enabled directory:
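```shell
sudo ln -s /etc/nginx/sites-available/nagios /etc/nginx/sites-enabled/nagios
```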

Please note, the above configuration assumes a few things: that you have a self-signed certificate installed and a dhparam.pem file. If you don’t have these, you can generate them with the following commands:
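The output paths here are my choice; keep them consistent with your virtual host configuration:

```shell
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/nagios.key -out /etc/ssl/certs/nagios.crt
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
```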

You’ll notice this also includes location blocks for nagiosgraph, which we will install later within this guide, so don’t worry about those for now.

Finally, you’ll need to change the nagioshost.local server name to the actual virtual host URL you’ll be hosting this from.

Next, start the Nagios service and restart Nginx to take the configuration files into effect:
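```shell
sudo systemctl start nagios
sudo systemctl restart nginx
```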

If there are no errors, then we can go ahead and add Nagios to our startup commands:
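Since Nagios was installed from source with a SysV init script, either of these should work on 16.04:

```shell
sudo systemctl enable nagios
# or, for the SysV-style approach:
sudo update-rc.d nagios defaults
```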

Now, we can go ahead and visit the public URL we’ve setup for Nagios. At this point, if you see a 404, 502 or other error, check your Nginx and virtual host error logs to see what isn’t happy.

If it’s all looking good, your public URL should present you with the Nagios Core web interface.


Configuring Nagios Hosts

The final part to our configuration is setting up our hosts. Let’s make a new file for our server configuration:
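The filename is my choice; one file per monitored host keeps things tidy:

```shell
sudo vi /usr/local/nagios/etc/servers/yourhost.cfg
```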

And enter the following:
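A minimal host definition looks something like this (host_name, alias and address are placeholders):

```
define host {
    use                     linux-server
    host_name               yourhost
    alias                   My first server
    address                 192.0.2.10
    max_check_attempts      5
    check_period            24x7
    notification_interval   30
    notification_period     24x7
}
```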

Then, save and quit. With the above configuration file, Nagios can work out whether your host is up or down. If that’s enough for you, great. Otherwise, we’ll go ahead and add some more configuration to check additional services such as ping and SSH. Simply add the following at the end of the same file:
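Typical PING and SSH service definitions, using the plugins we installed earlier:

```
define service {
    use                     generic-service
    host_name               yourhost
    service_description     PING
    check_command           check_ping!100.0,20%!500.0,60%
}

define service {
    use                     generic-service
    host_name               yourhost
    service_description     SSH
    check_command           check_ssh
}
```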

Install Nagiosgraph

If you want to go one step further, you can install Nagiosgraph. This lets you view pretty graphs showing the history of your monitoring statistics. To install Nagiosgraph, download the latest version from the SourceForge website:

At the time of writing, this is v1.5.2. Once downloaded, we need to extract the archive:

Now this has been successfully extracted, we can delete the archive and change into the extracted directory:
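As with the previous packages, the steps above can be sketched as follows (the URL is an assumption; grab the real link from the SourceForge project page):

```shell
cd ~
wget https://sourceforge.net/projects/nagiosgraph/files/nagiosgraph/1.5.2/nagiosgraph-1.5.2.tar.gz
tar -xzf nagiosgraph-1.5.2.tar.gz
rm nagiosgraph-1.5.2.tar.gz && cd nagiosgraph-1.5.2
```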

Before installing Nagiosgraph, we can make some pre-flight checks:
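Nagiosgraph ships an install.pl script; the pre-flight check looks like this (flag taken from the Nagiosgraph documentation; verify against your version):

```shell
sudo ./install.pl --check-prereq
```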

Please be aware: because we’re using Nginx, expect to get some errors. If everything else is okay, you should end up with an output such as:

If all looks good, we can go ahead and run the installation:
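The installer will prompt for paths; the defaults are generally fine if you installed Nagios to /usr/local/nagios:

```shell
sudo ./install.pl --install
```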

Tidying up

Now that everything’s up and running, we can go ahead and do some tidying up. Simply remove the directories we downloaded to your home directory:
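```shell
cd ~
rm -rf nagios-4.3.2 nagios-plugins-2.2.1 nrpe-3.1.1 nagiosgraph-1.5.2
```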

And we’re done. Enjoy!

Upgrading your development machine to macOS Sierra

A shiny new operating system? Let’s install it! Hold up. If you’re working on a development machine that you rely on for work, think about the dependencies that might break. Luckily, I’ve already done the hard work for you and figured out everything you need to do to get your development machine back in good working order. There are just a few things to do once you’ve upgraded.


Firstly, we need to go ahead and tell Homebrew that we’ve upgraded to Sierra by running an update and upgrade. This will fetch the latest packages built for the new operating system. Simply run the following (please note, this may take a while):
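```shell
brew update && brew upgrade
```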

After that’s complete, check everything has installed successfully and that there are no errors by looking at the output from above and, if necessary, by running:
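```shell
brew doctor
```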

If you encounter any issues, homebrew should be fairly vocal with how you go about fixing issues. If you’re lucky, it will even give you the command to run.


After the upgrade, you’ll notice all your virtual hosts have disappeared! Not a great start, but luckily macOS keeps the original files (appended with ~orig) and your previous versions, with all your old data in, appended with ~previous. You can go ahead and reinstate those files using the commands below:
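Assuming the stock Apache layout under /etc/apache2:

```shell
sudo cp /etc/apache2/extra/httpd-vhosts.conf~previous /etc/apache2/extra/httpd-vhosts.conf
sudo cp /etc/apache2/extra/httpd-ssl.conf~previous /etc/apache2/extra/httpd-ssl.conf
```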

The above commands only copy over your previous versions of the httpd-vhosts.conf and httpd-ssl.conf files. If you know you’ve changed more than these two files in your setup, go ahead and copy those over as well. You can see a list of these files by running:
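```shell
find /etc/apache2 -name "*~previous"
```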

When you’re ready, go ahead and test the configuration files are set up in the right way by running the Apache configuration test:
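```shell
sudo apachectl configtest
```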

You may come across some issues, but this command should show you the file and line number where each occurred. Since the httpd configuration may have changed since your previous version (depending on what you had before and how out of date the syntax was), you might find it easier to start your httpd.conf file from fresh and make the changes needed for your local setup manually. For more information on how to set up a virtual host in Apache on macOS, check out my other post.

When you finally get the a-ok from the Apache configuration test, go ahead and restart Apache using the following command:
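```shell
sudo apachectl restart
```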


This one took me a while to work out. When trying to access your database from ‘localhost’, it doesn’t work, but when you try ‘127.0.0.1’, it does. It can also show up as a ‘2002 Socket error’.

This is because MySQL places its socket in one location and macOS expects it somewhere else: MySQL puts it in /tmp, while macOS looks for it in /var/mysql. The socket is a special file that allows MySQL client/server communication. To fix the error, we simply create a symbolic link so both parties can be happy:
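```shell
sudo mkdir -p /var/mysql
sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock
```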

And that’s it! If you have any further issues, feel free to comment and I’ll happily suggest a solution and update this post.

Create and apply a patch file with Git

If you’re no stranger to working on multiple projects throughout the day, you’re probably going to be working on multiple tasks within the same project as well. In a fast-paced environment (like your usual digital agency), you’ll probably find yourself in a position where you need to stash your current changes to complete another task which has a higher priority.
Sometimes you’re able to git stash your changes and use git stash apply to put them back when you’re ready. However, sometimes it might be necessary for someone else to continue the work you started. If you’re not working on your own change/feature/hotfix branch (using git flow), it might be easier to send a patch file over to the lucky developer tasked with finishing your work.
In the same way git stash will stash your unstaged files, the contents of your git diff will essentially be your patch file.
To create a patch file from git diff simply run the following command:
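The patch filename is arbitrary:

```shell
git diff > my-changes.patch
```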

And to put those changes back into your unstaged area, use the patch command:
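Since git diff prefixes paths with a/ and b/, strip one path component when applying:

```shell
patch -p1 < my-changes.patch
```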

This is a very quick and dirty way of creating and applying patches. However, Git comes with some useful commands to dry run, check and apply patches for you. Using the native commands, you can do the following:

Create a patch:

See what changes a patch will make:

Check a patch:

Apply and sign off a patch:
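The four steps above can be seen end-to-end in this self-contained demo, built in a throwaway repo (the repo, filenames and commit messages are all invented for illustration):

```shell
# Scratch repository to demonstrate the native patch workflow
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email dev@example.com && git config user.name Dev
echo one > f && git add f && git commit -qm "first"
echo two >> f && git commit -aqm "second"

# Create a patch for the last commit:
git format-patch -1 HEAD

# See what changes the patch will make:
git apply --stat 0001-second.patch

# Check the patch applies cleanly against the older tree:
git checkout -q HEAD~1
git apply --check 0001-second.patch

# Apply and sign off the patch:
git am --signoff < 0001-second.patch
```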

The perfect deployment system

If you’ve worked in an agency that has its processes sorted, you’ll have come across some sort of automated or well-managed deployment system. It’s definitely a step up from manually uploading files via FTP (or even editing them directly on the server via FTP). Whilst this makes my toes curl, we all had to start somewhere.

From manually uploading files, the next step is to start working with version control systems (VCSs). SVN and Git are two well-known VCSs used to save all your changes to a repository. Some use this as their deployment mechanism by simply cloning the repository on the server and pulling (generally from a staging/production branch) directly via the command line. This takes away the annoyance of having to upload all your files, since only changed files are deployed, but it’s still not the right way to go about it.

There are third-party solutions out there such as DeployHQ and DeployBot (formerly known as Dploy), both of which are great tools that let you set up deployments via a hook on a specific branch or manually (either by adding some text to the commit message or pushing the ‘deploy’ button on their website). Personally, I have used both of these systems and found DeployBot has much more to offer as a deployment service. However, there are still a few bottlenecks (one of them described in this post).

All of these systems use SFTP/FTP to upload files to the server. Although there are other types of deployment you can set up yourself (such as SSH deployments), the FTP variants are the only ones that actually upload the files for you.

Something which I think these systems have missed out on is the use of a command called rsync. For more information about this command, take a look at the manual page by typing in man rsync from your command line. Directly from the manual, the part we want to take advantage of is:

The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network connection, using an efficient checksum-search algorithm described in the technical report that accompanies this package.

How great is that?

This means that if we’re deploying a file we already have on the server (chances are we do, since we’re generally updating files as well as uploading new ones), we’re only transferring the difference, not the whole file. If we had a 90MB file and changed a few words, rsync would upload a tiny fraction of those 90MB, which FTP cannot do. Since FTP is pretty slow anyway (other copy commands such as scp are much faster), rsync seems like a great solution for deploying web applications.

Let’s take DeployBot’s setup as it currently is and develop from there to make the perfect deployment system. If you’re not familiar with DeployBot, take a look at their guide on deploying a Laravel app to Digital Ocean and read from ‘Configuring DeployBot’.

If we take the technology from rsync and the setup DeployBot already has, we’re heading in the right direction. One of the biggest drawbacks I found using DeployBot is that it has to re-upload the entire project via FTP. On a large site, that can take quite a while. It does create revision directories and uses a symlink to switch the public directory to the new revision (when it’s been deployed, and when it’s ready).

Rather than re-uploading the entire project, why not just copy the previous revision directory (which will be quite fast on a good server, much faster than uploading everything again) and then use rsync to deploy only the differences for that particular revision?

If we were to deploy something to our server from our local version, we would use rsync like this:
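The user, host and target path below are placeholders; be careful with --delete, which removes remote files that no longer exist locally:

```shell
rsync -avz --delete \
  --exclude '.git' \
  ./ user@your-server:/var/www/releases/current/
```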

With all that in mind, if DeployBot were to copy the previous release, upload changes to that directory via rsync and continue with their use of symlinks and containers, we’d have the perfect deployment system.

Quick rollback via symlink in DeployBot

I’ve been using DeployBot (formerly known as Dploy) to deploy complex websites that require compiling Sass, minifying and uploading assets to Amazon S3 using Gulp.js, installing Composer dependencies, and changing a number of other configuration and cache settings.

DeployBot does quite a good job of managing this infrastructure. Although the build tools are still in beta, you can’t fault them for their support.

As they mention in this blog post:

You could always rollback by triggering a deployment with a previous commit selected. However, when things go wrong you want to react fast. We hope this button will go a long way to help you save time and not make a mistake in a rush.

The problem is, if you have quite a large site (that requires compiling before it even starts uploading the files), rollbacks are painfully slow. There’s nothing worse than a live site in a state that the client (or even yourself) isn’t happy with. Whatever the reason, you’d want rollbacks to be instantaneous.

DeployBot doesn’t currently support ‘quick’ rollbacks via symlink changes alone (since these can be quite complicated). At the moment, rollbacks work just like any other deployment: code gets compiled and uploaded as a new release, and if everything is successful, the $BASE/current symlink is switched to that release. The other issue is code that needs to be executed before/after deployments, such as database migrations. This is where things can get a bit tricky, and you may want to stick with DeployBot’s current rollback solution.

I’ll leave that for you to work out for your own applications, but on a basic level, I’ve come up with a quick shell script that makes this ‘quick’ rollback possible:
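A hypothetical sketch of that script; the $base path is yours to change, and the rollback flag and revision variables are assumptions about what DeployBot exposes to shell deployments, so check their documentation for the exact names:

```shell
#!/bin/bash
# Quick-rollback sketch for a DeployBot shell deployment.
# Assumptions: $IS_ROLLBACK and $REVISION are provided by DeployBot,
# and each release directory contains a .revision file with its commit hash.
base="/var/www/yoursite"

if [ "$IS_ROLLBACK" = "true" ]; then
    for release in "$base"/releases/*; do
        if [ -f "$release/.revision" ] && \
           [ "$(cat "$release/.revision")" = "$REVISION" ]; then
            # Atomically repoint the current symlink at the old release
            ln -sfn "$release" "$base/current"
            echo "Rolled back to $REVISION"
            exit 1
        fi
    done
fi
```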

Be sure to change the $base variable to your correct path.

Please check my GitHub account for the latest version of this code.

This works by first checking whether the current deployment is a rollback. If it is, we loop through all the releases stored on the server, comparing each .revision file with the current commit. If one matches, the symlink is changed.

I’ve added this code as a new server (selecting ‘Shell’ as the type) that runs upon deployment. This alone will quickly check whether the rolled-back release is available, switch the symlink, and end with exit code 1. Your application will have reverted instantly.

Panic over. Go make a cuppa.

Whilst you’re making your tea, since these deployments are running in parallel, the Atomic FTP server is currently deploying the rollback from scratch.

After speaking to DeployBot, one concern they had with this script is that with two deployments running in parallel, you can’t be sure the shell deployment runs first. Generally speaking, your Atomic FTP deployment will take a good while, and you’d want it to carry on deploying anyway; the script is more of a fallback until the actual deployment is finished.

However, if you would prefer to have your deployments running in series, simply set up a second environment. The first being your shell deployment with the quick rollback and the second being your Atomic FTP deployment. Head over to the ‘overview’ of your deployments which should show your environments. Click the settings button on your shell environment and you should be in the ‘Servers & settings’ section. Scroll down to ‘Triggers’ and you’ll see something like this:


Simply check the ‘Deploy another environment’ box and choose your Atomic FTP environment. Be sure to name your shell environment with just the environment name you wish to trigger. In this case, I’ve named my shell environment ‘Staging’ and my Atomic FTP environment ‘Staging (Atomic FTP)’. This way, the manual commit trigger can be picked up by adding [deploy: staging] to your commit message.

And there we have it, instant rollbacks. Neato.

Generating autoload_classmap.php files in Zend Framework 2

Application speed is a critical subject, and it only gets worse as your site attracts more users. Even if you’re running your application on a super-server, it’s going to be much quicker and more cost-effective to optimise your website first.

One way to optimise your website (if you’re running a Zend Framework 2 application) is to use the class map autoloader. The standard autoloader has to resolve each class name to a file path at runtime by manipulating strings based on the namespace. For something as simple as loading a class, there’s a much better way of going about it: the class map autoloader, which looks the path up directly in a pre-generated array.

To automatically generate your class map files, we’re going to use the classmap_generator.php file which comes shipped with Zend Framework 2. I’ve created a great little one-liner for you:

for module in $(ls module); do php vendor/bin/classmap_generator.php -l module/$module; done

which can be run in the main directory of your Zend Framework 2 project.

Within the Module.php file of each module, find the getAutoloaderConfig() method and ensure you have entered the class map autoloader configuration. You’ll find something like this:
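The default usually looks like the following (the exact namespaces depend on your module):

```php
public function getAutoloaderConfig()
{
    return array(
        'Zend\Loader\StandardAutoloader' => array(
            'namespaces' => array(
                __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
            ),
        ),
    );
}
```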

And should replace it with this:
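The class map is consulted first, with the standard autoloader kept as a fallback:

```php
public function getAutoloaderConfig()
{
    return array(
        'Zend\Loader\ClassMapAutoloader' => array(
            __DIR__ . '/autoload_classmap.php',
        ),
        'Zend\Loader\StandardAutoloader' => array(
            'namespaces' => array(
                __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
            ),
        ),
    );
}
```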

And that’s it! You should now be using class map autoloading.

El Capitan upgrade – Virtual hosts not working?

After upgrading to El Capitan, Apple’s new operating system (version 10.11), I found none of my virtual hosts worked anymore!

This is because the upgrade creates a new /etc/apache2/httpd.conf file, overwriting your old one. Luckily, it doesn’t quite remove your previous configuration file; it’s backed up for you as /etc/apache2/httpd.conf~previous.

If you’ve installed El Capitan and are not sure how to fix this, I’ve written a quick one-liner that will get you back on your feet. Simply open Terminal (from a Finder window, go to Applications > Utilities > Terminal) and paste in the following line:
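This chains the three steps explained below into one command:

```shell
sudo rm /etc/apache2/httpd.conf && sudo mv /etc/apache2/httpd.conf~previous /etc/apache2/httpd.conf && sudo apachectl restart
```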

And there you have it. Back up and running in no time at all.

Let’s go into that command a little deeper so you can understand what we’re doing:

sudo rm /etc/apache2/httpd.conf

This line simply removes the new configuration file Apple has created for us. We already have a backup, and we’re going to rename the backed-up file to take its place, so we don’t need the new one anymore. Please be sure you have already upgraded to El Capitan, otherwise you may not have a backup file. To check whether you have one, run ls -la /etc/apache2/httpd.conf~previous and you should get something back like this: -rw-r--r--  1 root  wheel  21002  4 Jul 17:41 /etc/apache2/httpd.conf~previous. If not, do not continue, or you may remove your only configuration file.

The second line:

sudo mv /etc/apache2/httpd.conf~previous /etc/apache2/httpd.conf

simply renames your backed-up configuration file to the one we just deleted.

Lastly we want to restart Apache by running:

sudo apachectl restart

which will reload the configuration files we have just updated.

And there you have it, an easy switch over but something which I found rather annoying during the upgrade. Hopefully this tutorial helps you out if you’ve run into the same problem.

Installing PHP 5.5 and MySQL on Mac OSX

Yosemite comes preinstalled with Apache and PHP, but we want to use Homebrew to install PHP so that we can get the latest updates by running a simple command, brew update.

First off, we need to install Homebrew. I’d recommend following the latest installation instructions on their website, but for the sake of being lazy, I’ve pasted the command below. Please remember this command is correct as of the date of this post:
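At the time, the installer was invoked via Ruby like so (check the Homebrew homepage for the current command, as it changes over time):

```shell
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```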

After we’ve installed Homebrew, we want to go ahead and use it! Simply run these commands to install PHP 5.5:
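At the time, PHP formulae lived in a separate tap (this has since changed; the tap name here reflects the era of this post):

```shell
brew tap homebrew/homebrew-php
brew install php55
```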

That’s the easy part; Homebrew does most of the work for you. If you’ve ever tried to install anything from source, you’ll know what I’m talking about.

Finally, to get you set up, there’s a few configuration tweaks to sort out. As mentioned above, Yosemite comes preinstalled with Apache and PHP but we want to use Homebrew’s version.

There are instructions that are displayed after the installation from Homebrew which you can always get back by running brew info php55 but I’ll spell out the important parts for you.

To get Apache (which is preinstalled) to use Homebrew’s version of PHP, simply edit your httpd.conf file:
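```shell
sudo vi /etc/apache2/httpd.conf
```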

Find and replace the line loading the module:
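The Homebrew path below assumes a default php55 install under /usr/local:

```shell
# Replace:
#   LoadModule php5_module libexec/apache2/libphp5.so
# with Homebrew's module:
#   LoadModule php5_module /usr/local/opt/php55/libexec/apache2/libphp5.so
```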

Always remember to restart Apache when changing any configuration files:
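```shell
sudo apachectl restart
```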

Setting up a LAMP server on AWS EC2

Amazon Web Services has grown rapidly over the last few years, which is why it’s generally the most popular choice for scalable hosting among large companies that need their applications up and running without pitfalls.

Amazon provides something called the ‘free tier’ so you can get started straight away and get stuck in. Go ahead, set up an account and have a play about. It’s free for the first year; either cancel your account before that point or, if you’ve gotten far enough, it’s pretty cheap anyway.

Once you’re logged in via SSH, use the following commands to set up your LAMP server. Please read the code line by line to ensure you have copied and pasted every command correctly, and that you understand what each one does (using the helpful comments).

This code will install a simple LAMP server on AWS installing PHP 5.5, MySQL, Intl, Git, APC and ElasticSearch.
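A hypothetical sketch for Amazon Linux of that era (package names are assumptions; browse Amazon’s repository to confirm what’s available):

```shell
# Bring the system up to date
sudo yum update -y

# Apache, PHP 5.5 and common extensions (Intl, MySQL driver, APC-style cache)
sudo yum install -y httpd24 php55 php55-mysqlnd php55-intl php55-pecl-apcu

# MySQL server and Git
sudo yum install -y mysql55-server git

# Start the services and enable them on boot
sudo service httpd start && sudo chkconfig httpd on
sudo service mysqld start && sudo chkconfig mysqld on

# ElasticSearch needs Java and Elastic's own repository; see their docs
```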

If you’re looking for specific packages to install, take a look at Amazon’s package list: