Posted on

Create a LEMP stack in AWS with EC2 (Linux, Nginx, MariaDB and PHP 7.2)

Hopefully, this quick run-through will help you get a LEMP server up and running in AWS. If you have any questions, feel free to ask in the comments.

Important! Please choose the required region (data center) from the top right-hand corner. Remember to do this for any configuration changes or new instances when logging into the AWS console.

Follow the navigation to start launching a new EC2 instance:
AWS > Compute > EC2 > Instances > Launch Instance

Select the Amazon Linux AMI we want to use:
Amazon Linux 2 LTS Candidate 2 AMI (HVM), SSD Volume Type – ami-921423eb

And the following flavour with our desired specifications:
General Purpose: t2.xlarge

Click on the ‘next’ buttons in the bottom right hand corner until you get to ‘Configure Security Group’. Leave all other configuration options as default.

The SSH rule should be added already, but you will need to open ports 80 and 443 from anywhere (0.0.0.0/0). Name this security group something relevant, with a similar description, so it can be reused for other instances.

Finally, click ‘Review and launch’.

When prompted, create a new key pair and download it to your machine. This will be a .pem file, which you should add to your .ssh directory. For now, this is the quickest way to configure access to the server:

cp ~/Downloads/aws.pem ~/.ssh/
chmod 400 ~/.ssh/aws.pem

Where aws.pem is the name of your key pair. With CI in use, no one else should need access to the server, and less human intervention prevents the configuration from drifting. Cloud setups should be kept extremely automated.
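To save typing the key path and user every time, you can also add a host entry to your SSH configuration. The alias below and the IP placeholder are hypothetical; substitute your own values:

```
Host aws-lemp
    HostName <public-ipv4>
    User ec2-user
    IdentityFile ~/.ssh/aws.pem
```

With that in ~/.ssh/config, `ssh aws-lemp` is all you need.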

If you wish for this to change, consider creating an IAM role such as 'DevOps', then create and add the necessary users, each with their own key pair, AWS login and API keys. Use a separate account per employee, so access is easy to revoke where and when necessary.

Once your key pair is downloaded, you will be able to finally launch your instance. Head back to the ‘Instances’ page and wait for your instance to complete setup. The instance should be in a ‘Running’ state and the status check should show as ‘Initializing’.

When the status checks finalize and show ‘2/2 checks passed’, you will be able to use the public IPv4 address of your instance. Use this to log in via SSH as the ec2-user user:

ssh -i ~/.ssh/aws.pem ec2-user@<public-ipv4>

Where <public-ipv4> is the public IPv4 address of the instance.

Please note, if you are unable to ping or access this IP address, your security group may be misconfigured.

Once logged into the box, run the following commands to configure:

# Update Linux and install essential packages
sudo yum -y update
sudo yum install -y gcc make

# Install the latest epel repository
sudo yum install -y

# Install PHP 7.2 and Nginx 1.12 using amazon-linux-extras
sudo amazon-linux-extras install php7.2 nginx1.12

# Install Nginx and PHP-FPM
sudo yum install -y nginx php-fpm

# Configure Nginx
sudo vi /etc/nginx/nginx.conf

Find the following line:

include /etc/nginx/conf.d/*.conf;

Add this line below it:

include /etc/nginx/sites-enabled/*.conf;

Save and exit the file, then create the sites-available and sites-enabled directories:

sudo mkdir /etc/nginx/{sites-available,sites-enabled}

Now, create a new virtual host entry:

sudo vi /etc/nginx/sites-available/my-site.conf

Paste in the following for a virtual host setup:

server {
  listen 80;
  listen [::]:80;
  server_name my-site.example.com;
  return 301 https://$server_name$request_uri;
}

server {
  listen 443 ssl http2 default_server;
  listen [::]:443 ssl http2 default_server;
  server_name my-site.example.com;
  root /var/www/my-site;

  ssl_certificate "/etc/pki/nginx/server.crt";
  ssl_certificate_key "/etc/pki/nginx/private/server.key";
  ssl_session_cache shared:SSL:1m;
  ssl_session_timeout 10m;
  ssl_ciphers HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers on;

  location / {
    try_files $uri $uri/ /index.php$is_args$args;
  }

  location ~ \.php$ {
    try_files $uri /index.php =404;
    fastcgi_pass unix:/var/run/php-fpm/www.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}

For non-dev environments, the SSL certificate will need to be properly generated via a third party (such as CloudFlare or Let’s Encrypt). This will then need to be uploaded to the server(s) and the location of the key changed in the virtual host entry. After applying any changes, Nginx will need to be restarted for these changes to take effect.

Please be sure to change the server_name directive, and set up the relevant DNS (pointing to the load balancer or Elastic IP). After that, create the virtual host symlink:

sudo ln -s /etc/nginx/sites-available/my-site.conf /etc/nginx/sites-enabled/my-site.conf

Finally, we will need to create a server key for the SSL binding. For the key generation, use an empty passphrase, and leave the challenge password blank:

sudo mkdir -p /etc/pki/nginx/private
cd /etc/pki/nginx/
sudo ssh-keygen -f private/server.key
sudo openssl req -new -key private/server.key -out server.csr
sudo openssl x509 -req -days 365 -in server.csr -signkey private/server.key -out server.crt
sudo openssl rsa -in private/server.key -out private/server.key
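If you'd rather script this step, the key and certificate can also be generated non-interactively in a single command. This is just a sketch using a scratch directory; the subject fields and the my-site.example.com common name are placeholders:

```shell
# Generate a throwaway self-signed key and certificate without any prompts.
# Paths and subject fields are placeholders -- adapt them to your own host.
tmpdir=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout "$tmpdir/server.key" \
  -out "$tmpdir/server.crt" \
  -days 365 \
  -subj "/C=GB/O=Example/CN=my-site.example.com"
# Inspect the subject to confirm what was generated
openssl x509 -noout -subject -in "$tmpdir/server.crt"
```

The -nodes flag skips the passphrase prompt, matching the empty-passphrase advice above.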

# PHP-FPM Configuration
sudo vi /etc/php-fpm.d/www.conf

And change the matching keys to the following values:

listen = /var/run/php-fpm/www.sock

user = nginx
group = nginx

listen.owner = nginx
listen.group = nginx
listen.mode = 0664

# Create directory for new virtual host
sudo mkdir /var/www/my-site

# Add the ec2-user to the nginx group and correct permissions
sudo usermod -a -G nginx ec2-user
sudo chown -R ec2-user:nginx /var/www
sudo chmod 2775 /var/www
find /var/www -type d -exec sudo chmod 2775 {} \;
find /var/www -type f -exec sudo chmod 0664 {} \;
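The 2775 mode sets the setgid bit, so new files and directories under /var/www inherit the nginx group. A quick way to see this in action, using a scratch directory instead of /var/www:

```shell
# The setgid bit (the leading 2 in 2775) makes new entries inherit the
# directory's group -- handy when ec2-user and nginx share a web root.
webroot=$(mktemp -d)
chmod 2775 "$webroot"
# The permission string now shows 's' in the group slot: drwxrwsr-x
ls -ld "$webroot"
stat -c '%a' "$webroot"
```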

# Add Nginx and PHP-FPM to the startup commands, and start the services
sudo systemctl enable nginx.service
sudo systemctl enable php-fpm.service
sudo systemctl start nginx.service
sudo systemctl start php-fpm.service

Feel free to comment out the default virtual host after you have confirmed everything is working as expected, which can be done in:

sudo vi /etc/nginx/nginx.conf

Once complete, Nginx and PHP-FPM should be running. You can install your website at /var/www/my-site to get started.

To install MariaDB, we’ll need to tell yum to use MariaDB’s own repositories:

# Create the MariaDB yum repository for v10.3
sudo vi /etc/yum.repos.d/MariaDB.repo

# And paste in the following:

# MariaDB 10.3 CentOS repository list
[mariadb]
name = MariaDB
baseurl =

For other repositories, you can check their website directly or use their repository generator.

Then, we can go ahead and install MariaDB:

sudo yum install -y MariaDB-server MariaDB-client

We can then go ahead and secure the installation, setting the root password and removing the insecure defaults:

sudo mysql_secure_installation

# Follow the on-screen instructions to add the root password, remove the test database, disallow remote access and reload privileges

# Log in to MariaDB (using your new root password) to run the SQL commands shown below:
mysql -uroot -p

# Add your new database(s)
CREATE DATABASE my_db;
# Add users and necessary privileges
CREATE USER 'dbuser'@'localhost' IDENTIFIED BY 'dbpass';
GRANT ALL PRIVILEGES ON `my_db`.* TO 'dbuser'@'localhost';

# Don't forget to exit when you're done by typing 'exit' or pressing Ctrl+D

To enable MariaDB on startup, use:

sudo systemctl enable mariadb

At a later stage, when the website is fully deployed, an image should be taken to allow us to recreate boxes at the click of a button, as well as being able to set up auto-scaling in the future. This will drastically reduce the time needed to re-create an instance, and could be used in an emergency situation if an instance is unresponsive.

Please note, it is not recommended to take an image when the instance is running, as Amazon is not able to guarantee the integrity of the file system on the created image. When creating images, please bear in mind the instance will be shut down. If taking images on a working production environment, the instance should be taken out of the load balancer before working on it.

If you have any problems or need to troubleshoot, check your Nginx configuration by using:

sudo systemctl status nginx.service

And using the Nginx error log to check for errors when accessing the web server:

sudo tail -f /var/log/nginx/error.log

Once a configuration file has changed, you will need to restart nginx with:

sudo systemctl restart nginx

Server emails not sending or being received as spam?

Email clients are constantly improving their spam filtering, demanding higher authentication from servers to be able to successfully receive emails. This is great for the receiver, as they don’t have to worry about all the spam emails filling their inbox anymore, but not so great for the server administrator whose genuine email alerts from the server are being marked as spam, or worse, not received at all.

One solution is to use a transactional mail API such as Mailgun or Amazon Simple Email Service (SES), both of which provide a free service up to a specified limit (and are quite cheap beyond that). The alternative is to make your server look more genuine to potential receivers.

Let’s start off by getting a basic benchmark from Mail Tester. This is a great little tool to test your server’s email capabilities, whether you’re sending alerts, newsletters or password reset links. From the command line, we can send a test email directly to the email address Mail Tester has generated for us.

First, let’s create a basic text file which will contain the content of our email. Head to your home directory and create a new file using your favourite text editor:

vi ~/message.txt

The content I’ve used is:


This is a test message.

Thank you.

Now, we can send the email to the test address:

mail -s 'Test message' <your-mail-tester-address> < message.txt

If you don’t have the mail command, you can install it using:

sudo apt-get install mailutils
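One easy win before testing: make sure your message carries proper headers, since a missing From or Subject header is penalised by most scoring tools. A sketch of building a complete message file (the addresses are placeholders) that could be handed to sendmail -t instead of mail:

```shell
# Compose a message with explicit headers; the addresses are placeholders.
msg=$(mktemp)
cat > "$msg" <<'EOF'
From: Server Alerts <alerts@example.com>
To: you@example.com
Subject: Test message

This is a test message.

Thank you.
EOF
# With the recipients in the headers, it could be sent via:
#   sendmail -t < "$msg"
cat "$msg"
```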

Let’s head over to Mail Tester and check our score:

If you scroll down a little more, you can open up the list items to see further details:

Great, now we have a starting point, we can go through and start to improve our email authentication. Follow the instructions carefully through each item and do as much as you can. There are some very quick and easy wins you can do around DNS records for SPF, DKIM and DMARC. Something as small as this, which can be done in under an hour can drastically change the results of your authentication.

And we can see what categories we’ve made better:


In my instance, I just needed my server to send emails successfully to Gmail. Since Gmail’s spam filters are quite strict, I wasn’t able to see any emails being received whatsoever. Now with a score of 5.7, those emails get sent straight to my inbox. That’s enough for me.


Installing Nagios with Nginx, PHP-FPM and Nagiosgraph on Ubuntu 16.04

Update Your System

Always ensure your system is up-to-date before installing new packages by running:

sudo apt-get update && sudo apt-get upgrade

Please note, this guide assumes you already have Nginx and PHP-FPM installed.


For the sake of clarity, the server I’m installing this on has the following versions of Ubuntu, Nginx and PHP-FPM:

root@server:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
root@server:~# nginx -v
nginx version: nginx/1.10.0 (Ubuntu)
root@server:~# php -v
PHP 7.0.18-0ubuntu0.16.04.1 (cli) ( NTS )
Copyright (c) 1997-2017 The PHP Group
Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
    with Zend OPcache v7.0.18-0ubuntu0.16.04.1, Copyright (c) 1999-2017, by Zend Technologies

Nagios User and Group

We’ll start by creating a new user and group specific to Nagios:

sudo useradd nagios
sudo groupadd nagcmd
sudo usermod -a -G nagcmd nagios

Required Dependencies

Next, we’ll install the dependencies we need:

sudo apt-get -y install build-essential libgd2-xpm-dev openssl libssl-dev xinetd unzip postfix

Install Nagios Core

Then, we want to install Nagios Core. To get the latest version, you’ll have to check the Nagios website to see what the latest stable release is, and change the version number accordingly:

cd ~
curl -L -O

At the time of writing, this is v4.3.2. Once downloaded, we need to extract the archive:

tar xvf nagios-*.tar.gz

Now this has been extracted, we can delete the archive and change into the extracted directory:

rm -f nagios-*.tar.gz
cd nagios-*

Before we actually go ahead and install Nagios, we need to configure it. We’re going to install Nagios with postfix, which we’ve already installed. To do this, we run the following command:

./configure --with-nagios-group=nagios --with-command-group=nagcmd --with-mail=/usr/sbin/sendmail

If you read the output, which should be rather verbose, it should let you know where everything is and how to continue:

Creating sample config files in sample-config/ ...

*** Configuration summary for nagios 4.3.2 2017-05-09 ***:

General Options:
 Nagios executable: nagios
 Nagios user/group: nagios,nagios
 Command user/group: nagios,nagcmd
 Event Broker: yes
 Install ${prefix}: /usr/local/nagios
 Install ${includedir}: /usr/local/nagios/include/nagios
 Lock file: ${prefix}/var/nagios.lock
 Check result directory: ${prefix}/var/spool/checkresults
 Init directory: /etc/init.d
 Apache conf.d directory: /etc/httpd/conf.d
 Mail program: /usr/sbin/sendmail
 Host OS: linux-gnu
 IOBroker Method: epoll

Web Interface Options:
 HTML URL: http://localhost/nagios/
 CGI URL: http://localhost/nagios/cgi-bin/
 Traceroute (used by WAP):

Review the options above for accuracy. If they look okay,
type 'make all' to compile the main program and CGIs.

As it says, review the options above, and if they look okay, run the following command to compile the main program and CGIs:

sudo make all

Installing from source always scares me at this point, as there’s quite a lot going on in the console. Don’t worry, this is normal. If this finishes and ends with ‘Enjoy’, then you’re doing well.

Now, we can get down to actually installing Nagios along with some initial scripts and sample configuration files:

sudo make install
sudo make install-commandmode
sudo make install-init
sudo make install-config

In order for us to make external commands via the web interface to Nagios, we’ll need to sort out permissions. To do this, we’ll add the web server user to our nagcmd group:

sudo usermod -a -G nagcmd www-data

Nagios Plugins

Just like Nagios Core, you’ll need to consult the Nagios website to find the latest version of the Nagios Plugins download:

cd ~
curl -L -O

At the time of writing, this is v2.2.1. Once downloaded, we need to extract the archive:

tar xvf nagios-plugins-*.tar.gz

Now this has been extracted, we can delete the archive and change into the extracted directory:

rm -f nagios-plugins-*.tar.gz
cd nagios-plugins-*

Before installing Nagios Plugins, we’ll need to configure it:

./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

Now, compile Nagios Plugins:

sudo make

Then, install it:

sudo make install

To finish off our installation, we’ll need to copy over the check_* files to our Nagios configuration, so we can use them later for monitoring:

sudo cp /usr/lib/nagios/plugins/check_* /usr/local/nagios/libexec
sudo chown -R nagios:nagios /usr/local/nagios/libexec

Install NRPE

Find the latest stable version of NRPE from the NRPE downloads page. If you’re starting to see a trend, you’ll be right. Please note, you may need to manually download the file from their website, as SourceForge doesn’t seem to provide direct links. Go ahead and change the version number in the following command if necessary:

cd ~
curl -L -O

At the time of writing, this is v3.1.1. Once downloaded, we need to extract the archive:

tar xvf nrpe-*.tar.gz

If the above command fails with:

tar: This does not look like a tar archive

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now

Then you may need to manually upload the tar.gz file, after downloading it from the SourceForge website. Once downloaded to your machine, you can use scp or rsync to upload the file to your server:

scp ~/Downloads/nrpe-3.1.1.tar.gz user@your-server:~/nrpe-3.1.1.tar.gz

After transferring the correct file to the server, go ahead and run the tar command again, as described above.

Now this has been successfully extracted, we can delete the archive and change into the extracted directory:

rm -f nrpe-*.tar.gz
cd nrpe-*

Before installing NRPE, we’ll need to configure it:

./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

Now build and install NRPE:

sudo make all
sudo make install
sudo make install-init
sudo make install-config

Enable the NRPE service:

sudo systemctl enable nrpe.service

If you have xinetd installed, you’ll need to configure your /etc/xinetd.d/nrpe file to only include the IP address of the Nagios Server. For more information, Google it.

Now that Nagios, Nagios Plugins and NRPE are installed, we need to configure Nagios.

Configure Nagios

Open the main Nagios configuration file in your favourite text editor:

sudo vi /usr/local/nagios/etc/nagios.cfg

And uncomment the following line, so that Nagios picks up configuration files from the servers directory we’ll create shortly:

cfg_dir=/usr/local/nagios/etc/servers
Pro tip: to search in vim, use the / command followed by the string you want to search for, and hit enter.

Then save and exit.

Next, we want to create the directory where all our server configuration files will live in:

sudo mkdir -p /usr/local/nagios/etc/servers

Configure Nagios Contacts

Open the Nagios contacts configuration in your favourite text editor:

sudo vi /usr/local/nagios/etc/objects/contacts.cfg

And change the default email address to your email address:

email   nagios@localhost   ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******

Then save and exit.

Configure check_nrpe Command

Let’s go ahead and add a new command to our Nagios configuration using our favourite text editor:

sudo vi /usr/local/nagios/etc/objects/commands.cfg

And add the following to the end of the file:

define command {
    command_name check_nrpe
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
Then, save and exit. This allows you to use the check_nrpe command in your Nagios service definitions.
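As a quick illustration of how it’s used, a service definition in one of your host files might then call a remote NRPE check like this (the host name and check name here are placeholders):

```
define service {
    use                    generic-service
    host_name              remotehost
    service_description    Current Load
    check_command          check_nrpe!check_load
}
```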

Configure Nginx

Up until now, you may have found most installation guides show you how to install Nagios, Nagios Plugins and NRPE. However, very few show you how to set this up on an Nginx and PHP-FPM environment.

We’ll start by installing the fcgiwrap dependency, so we can run the Nagios CGI scripts:

sudo apt-get install -y fcgiwrap

Next, you’ll need to create a new virtual host for Nagios. Depending on your current setup, these are usually found in /etc/nginx/sites-available. If that’s the case, create a new configuration file for Nagios using your favourite text editor:

sudo vi /etc/nginx/sites-available/nagios.conf

And use the following Nginx virtual host configuration:

server {
    listen 80;
    server_name nagioshost.local;
    return 301 https://nagioshost.local$request_uri;
}

server {
    listen 443 ssl;

    server_name nagioshost.local;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 180m;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_prefer_server_ciphers on;

    ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    add_header Strict-Transport-Security "max-age=31536000";

    access_log /var/log/nginx/nagioshost.local-access.log;
    error_log /var/log/nginx/nagioshost.local-error.log;

    auth_basic "Private";
    auth_basic_user_file /etc/nginx/.htpasswds/nagios;

    root /var/www/vhosts/nagioshost.local;
    index index.php index.html;

    location / {
        try_files $uri $uri/ index.php /nagios;
    }

    location /nagios {
        alias /usr/local/nagios/share;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
        }

        location ~ \.cgi$ {
            root /usr/local/nagios/sbin;
            rewrite ^/nagios/cgi-bin/(.*)\.cgi /$1.cgi break;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_param AUTH_USER $remote_user;
            fastcgi_param REMOTE_USER $remote_user;
            fastcgi_pass unix:/var/run/fcgiwrap.socket;
        }
    }

    location ~ ^/nagiosgraph/cgi-bin/(.*\.cgi)$ {
        alias /usr/local/nagiosgraph/cgi/$1;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param AUTH_USER $remote_user;
        fastcgi_param REMOTE_USER $remote_user;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }

    location /nagiosgraph {
        alias /usr/local/nagiosgraph/share;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}

Then, save and quit. Once that’s done, go ahead and add the symlink needed for the sites-enabled directory:

sudo ln -s /etc/nginx/sites-available/nagios.conf /etc/nginx/sites-enabled/nagios.conf

Please note, the above configuration assumes a few things. First, you have a self-signed certificate installed and a dhparam.pem file. If you don’t have these, you can generate them with the following commands:

sudo mkdir -p /etc/nginx/ssl
sudo openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
sudo openssl genrsa -out /etc/nginx/ssl/nginx.key 4096
sudo openssl req -new -sha256 -key /etc/nginx/ssl/nginx.key -out nginx.csr
sudo openssl x509 -req -days 365 -sha512 -in nginx.csr -signkey /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt

You’ll notice this also includes location blocks for nagiosgraph, which we will install later within this guide, so don’t worry about those for now.

Finally, you’ll need to change the nagioshost.local server name to the actual virtual host URL you’ll be hosting this from.
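The configuration above also assumes the basic-auth file at /etc/nginx/.htpasswds/nagios exists. If you don’t have Apache’s htpasswd utility installed, one way to generate a compatible entry is with openssl. A sketch, with the username, password and output location as placeholders:

```shell
# Create an htpasswd-style entry using openssl's apr1 (htpasswd-compatible) hash.
# Username, password and file location are placeholders.
htfile=$(mktemp)
printf 'nagiosadmin:%s\n' "$(openssl passwd -apr1 'changeme')" > "$htfile"
cat "$htfile"
```

Copy the resulting file into place as /etc/nginx/.htpasswds/nagios.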

Next, start the Nagios service and restart Nginx to take the configuration files into effect:

sudo systemctl start nagios
sudo systemctl restart nginx

If there are no errors, then we can go ahead and add Nagios to our startup commands:

sudo ln -s /etc/init.d/nagios /etc/rcS.d/S99nagios

Now, we can go ahead and visit the public URL we’ve setup for Nagios. At this point, if you see a 404, 502 or other error, check your Nginx and virtual host error logs to see what isn’t happy.

If it’s all looking good, your public URL should show the Nagios Core interface.

Configuring Nagios Hosts

The final part to our configuration is setting up our hosts. Let’s make a new file for our server configuration:

sudo vi /usr/local/nagios/etc/servers/localhost.cfg

And enter the following:

define host {
    use                      linux-server
    host_name                localhost
    alias                    Localhost
    max_check_attempts       5
    check_period             24x7
    notification_interval    30
    notification_period      24x7
}

Then, save and quit. With the above configuration file, Nagios can work out if your host is up or down. If this is enough for you, then great. Otherwise, we’ll go ahead and add some more configuration files to check additional services such as ping and SSH. Simply add the following at the end of the same file:

define service {
    use                    generic-service
    host_name              localhost
    service_description    PING
    check_command          check_ping!100.0,20%!500.0,60%
}

define service {
    use                      generic-service
    host_name                localhost
    service_description      SSH
    check_command            check_ssh
    notifications_enabled    0
}

Install Nagiosgraph

If you want to go one step further, you can install Nagiosgraph. This enables you to view pretty graphs which show the history of your statistics. To install Nagiosgraph, download the latest version from the SourceForge website:

curl -OL

At the time of writing, this is v1.5.2. Once downloaded, we need to extract the archive:

tar xvf nagiosgraph-*.tar.gz

Now this has been successfully extracted, we can delete the archive and change into the extracted directory:

rm -f nagiosgraph-*.tar.gz
cd nagiosgraph-*

Before installing Nagiosgraph, we can make some pre-flight checks:

./install.pl --check-prereq

Please be aware: because we’re using Nginx, expect to get some errors. If everything else is okay, you should end up with an output such as:

checking required PERL modules
  RRDs... ***FAIL***
checking optional PERL modules
  GD... ***FAIL***
  Nagios::Config... ***FAIL***
checking nagios installation
  found nagios exectuable at /usr/local/nagios/bin/nagios
  found nagios init script at /etc/init.d/nagios
checking web server installation
  apache not found in any of:

*** one or more problems were detected!

If all looks good, we can go ahead and run the installation:

./install.pl --layout standalone --prefix /usr/local/nagiosgraph

Tidying up

Now that everything’s up and running, we can go ahead and do some tidying up. Simply remove the directories we downloaded to your home directory:

cd ~
rm -rf nagios-* nagios-plugins-* nagiosgraph-* nrpe-*

And we’re done. Enjoy!


Upgrading your development machine to macOS Sierra

A shiny new operating system? Let’s install it! Hold up. If you’re working on a development machine which you rely on for work, think about the dependencies that might break. Luckily, I’ve already done the hard work for you and figured out all the things you need to do for your development machine to be in good working order. There’s just a few things to do once you’ve upgraded.


Firstly, we need to go ahead and tell homebrew that we’ve upgraded to Sierra by running an update and upgrade. This will check for the latest packages that are built for the new operating system you’re on. Simply run the following (please note, this may take a while):

brew update && brew upgrade

After that’s complete, check everything has installed successfully and that there are no errors by looking at the output from above and, if necessary, by running:

brew doctor

If you encounter any issues, homebrew should be fairly vocal with how you go about fixing issues. If you’re lucky, it will even give you the command to run.


After the upgrade, you’ll notice all your virtual hosts have disappeared! Not a great start, but luckily macOS keeps the original files you appended with ~orig and the previous versions (with all your old data in) appended with ~previous. You can go ahead and reinstate those files using the commands below:

sudo cp /etc/apache2/httpd.conf~previous /etc/apache2/httpd.conf
sudo cp /etc/apache2/extra/httpd-vhosts.conf~previous /etc/apache2/extra/httpd-vhosts.conf
sudo cp /etc/apache2/extra/httpd-ssl.conf~previous /etc/apache2/extra/httpd-ssl.conf

The above commands only copy over your previous versions of the httpd.conf, httpd-vhosts.conf and httpd-ssl.conf files. If you’re aware you’ve changed more than these files for your extra set up, go ahead and copy those over as well. You can see a list of these files by running:

ls -lah /etc/apache2/extra

When you’re ready, go ahead and test the configuration files are set up in the right way by running the Apache configuration test:

sudo apachectl -t

You may come across some issues, but this command should show you the file and line number where each occurred. Since there may be changes in the httpd configuration files from your previous version (depending on what you had before and how out of date the syntax was), you might find it easier to start your httpd.conf file from fresh, and make the necessary changes for your local set up manually. For more information on how to set up a virtual host in Apache on macOS, check out my other post.

When you finally get the a-ok from the Apache configuration test, go ahead and restart Apache using the following command:

sudo apachectl restart


This one took me a while to work out. When trying to access your database from ‘localhost’, it doesn’t work. But when you try ‘’, it does. It can also show up as a ‘2002 Socket error’.

This is because MySQL places the socket in one location while macOS expects it somewhere else: MySQL puts it in /tmp, and macOS looks for it in /var/mysql. The socket is a special file that allows MySQL client/server communication. To fix the error, we simply create a symlink, so both parties can be happy:

sudo mkdir /var/mysql
sudo ln -s /tmp/mysql.sock /var/mysql/mysql.sock

And that’s it! If you have any further issues, feel free to comment and I’ll happily suggest a solution and update this post.


Create and apply a patch file with Git

If you’re no stranger to working on multiple projects throughout the day, you’re probably going to be working on multiple tasks within the same project as well. In a fast-paced environment (like your usual digital agency), you’ll probably find yourself in a position where you need to stash your current changes to complete another task which has a higher priority.
Sometimes you’re able to git stash your changes and use git stash apply to put them back when you’re ready; however, sometimes it might be necessary for someone else to continue the work you started. If you’re not working on your own change/feature/hotfix branch (using git flow), it might be easier to send over a patch file to the lucky developer tasked with finishing your work.
In the same way git stash will stash your unstaged files, the contents of your git diff will essentially be your patch file.
To create a patch file from git diff simply run the following command:

git diff > changes.patch

And to put those changes back into your unstaged area, use the patch command (-p1 strips the a/ and b/ path prefixes that git diff adds):

patch -p1 < changes.patch
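To see the round trip end to end, here’s a sketch that creates a throwaway repository, captures a change as a patch, and re-applies it (everything happens in a temporary directory, so nothing touches your real projects):

```shell
# Round-trip a patch in a throwaway repository.
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo "hello" > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial"
# Make an unstaged change and capture it as a patch
echo "world" >> file.txt
git diff > changes.patch
# Throw the change away, then restore it from the patch
git checkout -- file.txt
git apply changes.patch
```

After git apply, file.txt contains the unstaged change again, exactly as if it had never been discarded.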

This is a very quick and dirty way of creating and applying patches; however, Git comes with some useful commands to dry run, check and apply patches for you. Using the native commands, you can do the following:

Create a patch

git format-patch master --stdout > changes.patch

See what changes a patch will make:

git apply --stat changes.patch

Check a patch:

git apply --check changes.patch

Apply and sign off a patch:

git am --signoff < changes.patch

The perfect deployment system

If you’ve worked in an agency that has its processes sorted, you’ll have come across some sort of automated or well-managed deployment system. It’s definitely a step up from manually uploading files via FTP (or even editing them directly on the server via FTP). Whilst this makes my toes curl, we all had to start somewhere.

From manually uploading files, the next step is to start working with version control systems (VCSs). SVN and Git are two very well known VCSs which are used to save all your changes to a repository. Some use this as their deployment by simply cloning the repository on the server and pulling (generally through a staging/production branch) directly via the command line. This takes away the annoyance of having to upload all your files by only deploying changed files, but it’s still not the right way to go about it.

There are third party solutions out there such as DeployHQ and DeployBot (formerly known as Dploy), both of which are great tools and enable you to set up deployments via a hook on a specific branch or by manually deploying (either by adding some text in the commit or pushing the ‘deploy’ button on their website). Personally, I have used both of these systems and found DeployBot has much more to offer as a deployment service; however, there are still a few bottlenecks (one of them described in this post).

All of these systems use SFTP/FTP to upload files to the server. Although there are other types of deployment you can set up yourself (such as SSH deployments), the FTP versions are the only ones that actually upload the files for you.

Something which I think these systems have missed out on is the use of a command called rsync. For more information about this command, take a look at the manual page by typing in man rsync from your command line. Directly from the manual, the part we want to take advantage of is:

The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network connection, using an efficient checksum-search algorithm described in the technical report that accompanies this package.

How great is that?

This means if we’re deploying a file we already have on the server (chances are we do, since we’re generally updating files as well as uploading new ones), we only transfer the difference, not the whole file. If we had a 90MB file and changed a few words, rsync would upload far fewer bytes than the full 90MB, which FTP cannot do. Since FTP is pretty slow anyway (other copy commands such as SCP will be much faster than FTP), rsync seems like a great solution for deploying web applications.

Let’s take DeployBot’s setup as it currently is and develop from there to make the perfect deployment system. If you’re not familiar with DeployBot, take a look at their guide on deploying a Laravel app to Digital Ocean and read from ‘Configuring DeployBot’.

If we take the technology from rsync and the setup DeployBot already has, we’re heading in the right direction. One of the biggest downfalls I found using DeployBot is the fact it has to re-upload the entire project via FTP. On a large site, that can take quite a while. To its credit, it does create revision directories and uses a symlink to switch the public directory to the new revision (when it’s been deployed, and when it’s ready).

Rather than having to re-upload the entire project, why not just copy the previous revision directory (which will be quite fast on a good server – much faster than uploading everything again) and then use the rsync command to deploy the differences for that particular revision?

If we were to deploy something to our server from our local version, we would use rsync like this:

rsync -avz --delete /var/www/handle/. user@example.com:/var/www/handle/.

With all that in mind, if DeployBot were to copy the previous release, upload changes to that directory via the rsync command and continue with their usage of symlinks and containers, we’d find it to be the perfect deployment system.

Posted on

Quick rollback via system link in DeployBot

I’ve been using DeployBot (formerly known as Dploy) to deploy complex websites which require compiling Sass, minifying and uploading assets to Amazon S3 using Gulp.js, installing Composer dependencies and changing a number of other configuration and cache settings.

DeployBot does quite a good job of managing this infrastructure. Although the build tools are still in beta, you can’t fault them for their support.

As per this blog post they mention:

You could always rollback by triggering a deployment with a previous commit selected. However, when things go wrong you want to react fast. We hope this button will go a long way to help you save time and not make a mistake in a rush.

The problem is, if you have quite a large site (that requires compiling before it even starts uploading the files), rollbacks are painfully slow. There’s nothing worse than a live site in a state that the client (or even yourself) isn’t happy with. Whatever the reason, you’d want rollbacks to be instantaneous.

DeployBot doesn’t currently support ‘quick’ rollbacks via symlink changes alone (since these can be quite complicated). At the moment, rollbacks are done just like any other deployment: code gets compiled and uploaded to a new release, and if that succeeds, the $BASE/current symlink is switched to it. The other issue is code that needs to be executed before/after deployments, such as database migrations. This is where things can get a bit tricky, and you may want to stick with DeployBot’s current rollback solution.

I’ll leave that to you to work out for your own applications, but on a basic level, I’ve come up with a quick shell script that makes this ‘quick’ rollback possible:

if [ 1 -eq "$rollback" ]; then
    for release in $releases/*; do
        revision=$(cat "$release/.revision")
        if [ "$revision" = "$commit" ]; then
            rm -f "$releases/current"
            ln -s "$release" "$releases/current"
            mv -f "$releases/current" "$base/"
            echo "Release found on server, switching symlink."
            exit 0
        fi
    done
    echo "Release not found, continuing with deployment."
else
    echo "Deployment is not a rollback, continuing with deployment."
fi

Be sure to change the $base variable to your correct path.

Please check my GitHub account for the latest version of this code.

This works by first checking whether the current deployment is a rollback. If it is, we loop through all the releases stored on the server, comparing each one’s .revision file with the current commit. If one matches, the symlink is switched.

I’ve added this code as a new server in DeployBot (selecting ‘Shell’ as the type) so it runs upon deployment. This alone will quickly check whether the rolled-back release is already on the server, switch the symlink and exit with code 0. Your application reverts instantly.

Panic over. Go make a cuppa.

Whilst you’re making your tea, since these deployments are running in parallel, the Atomic FTP server is currently deploying the rollback from scratch.

After speaking to DeployBot, one of the concerns they had with this script is that you would have two deployments running in parallel, so you couldn’t be sure the shell deployment ran first. Generally speaking, your Atomic FTP deployment will take a good while to deploy, and you’d want it to carry on deploying anyway; this is more of a fallback until the actual deployment is finished.

However, if you would prefer to have your deployments running in series, simply set up a second environment. The first being your shell deployment with the quick rollback and the second being your Atomic FTP deployment. Head over to the ‘overview’ of your deployments which should show your environments. Click the settings button on your shell environment and you should be in the ‘Servers & settings’ section. Scroll down to ‘Triggers’ and you’ll see something like this:


Simply check the ‘Deploy another environment’ box and choose your Atomic FTP environment. Be sure to name your shell environment after just the environment itself. In this case, I’ve named my shell environment ‘Staging’ and my Atomic FTP environment ‘Staging (Atomic FTP)’. This way, the manual commit trigger can be picked up by adding [deploy: staging] to your commit message.

And there we have it, instant rollbacks. Neato.

Posted on

Generating autoload_classmap.php files in Zend Framework 2

Application speed is a critical subject, and it only gets worse as your site gains more users. Even if you’re running your application on a super-server, it’s going to be much quicker and more cost effective to optimise your website first.

One way to help optimise your website (if you’re running a Zend Framework 2 application) is to use the class map autoloader. The standard autoloader locates each class at runtime by transforming its namespace into a file path with string manipulation; a class map skips that work by pointing each class name straight at its file. For something as frequent as loading a class, there’s a much better way of going about it: the class map autoloader.

To automatically generate your class map files, we’re going to use the classmap_generator.php file which comes shipped with Zend Framework 2. I’ve created a great little one-liner for you:

for module in $(ls module); do php vendor/bin/classmap_generator.php -l "module/$module"; done

which can be run in the main directory of your Zend Framework 2 project.

Within each module’s Module.php file, find the getAutoloaderConfig() method and make sure the class map autoloader is included in its configuration. You’ll likely find something like this:

public function getAutoloaderConfig()
{
    return array(
        'Zend\Loader\StandardAutoloader' => array(
            'namespaces' => array(
                __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
            ),
        ),
    );
}

You should replace it with this:

public function getAutoloaderConfig()
{
    return array(
        'Zend\Loader\ClassMapAutoloader' => array(
            __DIR__ . '/autoload_classmap.php',
        ),
        'Zend\Loader\StandardAutoloader' => array(
            'namespaces' => array(
                __NAMESPACE__ => __DIR__ . '/src/' . __NAMESPACE__,
            ),
        ),
    );
}

And that’s it! You should now be using class map autoloading.

Posted on

El Capitan upgrade – Virtual hosts not working?

After upgrading to El Capitan, Apple’s new operating system (version 10.11), I found all my virtual hosts had stopped working!

This is because the upgrade creates a new /etc/apache2/httpd.conf file, overwriting your old one. Luckily, your previous configuration file isn’t removed entirely; it’s backed up for you under /etc/apache2/httpd.conf~previous.

If you’ve installed El Capitan and are not sure how to fix this, I’ve written a quick one-liner that will get you back on your feet. Simply open Terminal (from a Finder window, go to Applications > Utilities > Terminal) and paste in the following line:

sudo rm -f /etc/apache2/httpd.conf && sudo mv /etc/apache2/httpd.conf~previous /etc/apache2/httpd.conf && sudo apachectl restart

And there you have it. Back up and running in no time at all.

Let’s go into that command a little deeper so you can understand what we’re doing:

sudo rm -f /etc/apache2/httpd.conf

This line simply removes the new configuration file Apple has created for us. We already have a backup of it, which we’re about to rename into its place, so we don’t need the new one anymore. Please be sure you have already upgraded to El Capitan, otherwise you may not have a backup file. To check, run `ls -la /etc/apache2/httpd.conf~previous`; you should get something back like -rw-r--r--  1 root  wheel  21002  4 Jul 17:41 /etc/apache2/httpd.conf~previous. If not, do not continue or you may remove your only configuration file.

The second line:

sudo mv /etc/apache2/httpd.conf~previous /etc/apache2/httpd.conf

simply renames your backed-up configuration file so it replaces the one we just deleted.

Lastly we want to restart Apache by running:

sudo apachectl restart

which will reload the configuration files we have just restored.

And there you have it, an easy switch over but something which I found rather annoying during the upgrade. Hopefully this tutorial helps you out if you’ve run into the same problem.

Posted on

Installing PHP 5.5 and MySQL on Mac OSX

Yosemite comes preinstalled with Apache and PHP, but we want to use Homebrew to install PHP so that we can get the latest updates by running a simple command: brew update.

First off, we need to install Homebrew. You can follow the latest installation instructions on their website, which I would recommend, but for the sake of being lazy, I’ve pasted it below. Please remember this command was correct as of the date of this post:

ruby -e "$(curl -fsSL"

After we’ve installed Homebrew, we want to go ahead and use it! Simply run these commands to install PHP 5.5:

brew tap homebrew/dupes
brew tap josegonzalez/homebrew-php
brew install freetype jpeg libpng gd zlib openssl unixodbc
brew install php55

That’s the easy part; Homebrew does most of the work for you. If you’ve ever tried to install anything from source, you’ll know what I’m talking about.

Finally, to get you set up, there’s a few configuration tweaks to sort out. As mentioned above, Yosemite comes preinstalled with Apache and PHP but we want to use Homebrew’s version.

There are instructions displayed after the installation by Homebrew, which you can always get back by running brew info php55, but I’ll spell out the important parts for you.

To get Apache (which is preinstalled) to use Homebrew’s version of PHP, simply edit your httpd.conf file:

sudo vi /etc/apache2/httpd.conf

Find and replace the line loading the module:

LoadModule php5_module /usr/local/opt/php55/libexec/apache2/

Always remember to restart Apache when changing any configuration files:

sudo apachectl restart