Tuesday 29 March 2016

Python: pip install error: Unable to find vcvarsall.bat

This error means that the necessary compiler is not installed on your computer. To find out which compiler we need, we can run the following from the Python interpreter:
import sys
print(sys.version)
My output was something like:
3.5.1 (v3.5.1:37a07cee5969, Dec  6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)]
We are interested in the MSC version (the version of the compiler that built the Python interpreter) - in this case it's v.1900, which corresponds to Visual C++ 2015.
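We can also pull the tag out mechanically - demonstrated here against the sample banner above, so it runs anywhere:

```shell
# extract the MSC compiler tag from a CPython version banner
banner='3.5.1 (v3.5.1:37a07cee5969, Dec  6 2015, 01:54:25) [MSC v.1900 64 bit (AMD64)]'
echo "$banner" | grep -o 'MSC v\.[0-9]*'
```

On a live interpreter the banner would come from python -c "import sys; print(sys.version)" rather than the hard-coded string.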

So we should download Visual Studio 2015, choose 'Custom Install' and ensure that 'Common Tools for Visual C++ 2015' is selected under the Visual C++ node.

Finally restart the cmd window and attempt to install your package again with pip.

Wednesday 23 March 2016

Installing a minimalistic desktop environment on Fedora

Unfortunately, in RHEL and its derivatives the typical (or documented) way to install a desktop environment is through something like:

yum groupinstall 'Server with GUI'

or

yum groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts"

I find that there is a lot of unneeded bloat bundled into these groups - so this post will identify the core packages needed to run a basic instance of X on a 'minimal' Fedora installation.

We can take a look at packages included in the groups with something like:

yum groupinfo "Fonts"

I will start by installing some important packages:

yum install xorg-x11-server-Xorg // This is the core xorg package.

yum install xorg-x11-server-common

yum install xorg-x11-drv-fbdev // This provides the driver for Linux kernel framebuffer devices that use X.

yum install xorg-x11-xinit

yum install xorg-x11-xauth

yum install dwm // A lightweight window manager with very few dependencies.

yum install lightdm // A lightweight display manager

Once installed we should generate our xorg.conf with:

cd /tmp
Xorg :0 -configure
cp xorg.conf.new /etc/X11/xorg.conf

After attempting to launch X (startx) I got an error message complaining that no screens were found, and it bombed out. After reviewing the logs (/var/log/Xorg*) I noticed the following error:

open /dev/dri/card0: No such file or directory

So let's probe for a video device:

yum install pciutils
lspci | grep VGA

00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter

I later found that the VirtualBox graphics adapter is VESA-compatible - so we install the relevant driver:

yum install xorg-x11-drv-vesa

and change the 'Driver' variable from 'fbdev' to 'vesa' under the device section in /etc/X11/xorg.conf.
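For reference, the edited Device section in /etc/X11/xorg.conf ends up looking something like this (the Identifier will be whatever -configure generated):

```
Section "Device"
        Identifier "Card0"
        Driver     "vesa"
EndSection
```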

I then got another error complaining about a lack of OpenGL support (and again it bombed out):

AIGLX: Screen 0 is not DRI2 capable
AIGLX: reverting to software rendering
AIGLX error: dlopen of /usr/lib64/dri/swrast_dri.so failed

To resolve we install the mesa driver package:

yum install mesa-dri-drivers

I also got a load of messages complaining about references to input devices that were not detected - so I installed the following package, which allowed X to detect devices such as the mouse and keyboard:

yum install xorg-x11-drivers

Although this time I didn't get any errors in the main logfile and there was no log file generated within:


After a lot of messing around trying to get it to work, I finally (and rather annoyingly) realized that the lightdm service had not been started! To fix:

systemctl start lightdm
systemctl enable lightdm

and then finally we install the openbox window manager:

yum install openbox

and launch X:


I noticed that even though I had enabled the lightdm service, upon reboot it was not booting into the GUI - so I checked the default runlevel (target) with:

systemctl get-default

Which came back with multi-user.target. I then checked the available targets and noticed that 'graphical.target' was not currently enabled / available:

systemctl list-units --type=target --all

So to enable we should issue:

systemctl enable graphical.target

and then ensure it is the default target on boot:

systemctl set-default graphical.target

Monitoring disk i/o with CentOS

There are many different tools that can be used to help us measure disk i/o - although for this post I will be looking at two specific tools that both have their merits.

iotop (yum install iotop)

iotop is a real-time command line utility used to measure i/o performance for specific processes and has a similar interface to that of top.

iostat (yum install sysstat)

iostat comes as part of the sysstat package and is a great tool to check i/o against a specific disk for example:

iostat -x 2

The above command displays the disk i/o stats in 2 second intervals.

The %iowait column shows the percentage of time the CPU(s) sat idle while waiting for outstanding disk I/O to complete.

Managing devices with CentOS (procfs, sysfs, udev, lsusb and lspci)

This post will look at some common components that relate to device management under CentOS and other commonly used distros.


procfs (/proc) contains a series of files that describe the kernel's current view of the system (for example cpu information) - allowing applications and users to retrieve information about the system.

Files within the proc directory are not standard text or binary files - rather they are referred to as 'virtual files' (because they are continually updated), and you will also notice that they report a file size of 0 bytes even though they sometimes contain a large amount of information.

The majority of the files are read-only - although some can be written to - for example /proc/sys/net/ipv4/ip_forward, which allows you to turn on IP forwarding. Changes made here will not persist across a reboot, however, and you will need to use something like sysctl to ensure the change is re-applied after reboots.
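As a quick sketch of that workflow (run as root; /etc/sysctl.conf is the conventional location):

```shell
# read the current value - any user can do this
cat /proc/sys/net/ipv4/ip_forward
# enable it for the running kernel only (lost on reboot)
echo 1 > /proc/sys/net/ipv4/ip_forward
# persist the setting so it is re-applied on boot
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p
```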


The sys filesystem (/sys) is a combination of the proc, devfs, and devpty file systems that provides users with a hierarchy enumerating the devices and buses attached to the system. It can make identifying hardware components quicker than simply looking in the devfs.


udev is the component that dynamically creates (and removes) files within the devfs (/dev) at boot and for any hotplugged devices. It also provides the ability to rename devices via rules - for example when you would like to change an interface name.


lsusb is a utility that allows us to quickly get an overview of all USB devices connected to the system - we can also get more detailed information by running:

lsusb -v

or see more information about a specific device (addressed via its device file under /dev/bus/usb) with:

lsusb -D /dev/bus/usb/001/002


lspci outputs a list of PCI devices attached to the system - you can produce a more verbose output with the -v switch e.g.:

lspci -v

dbus in Linux

dbus is a library that provides communication (over a bus) between applications. There are two main busses:

- the system-wide message bus (as the name suggests, this allows system-wide communication) - typically implemented as the 'messagebus' service on most distributions

- the 'user login session' bus, which is private to each logged-in user

It can be debugged with the dbus-monitor utility, e.g.:

dbus-monitor --system

Tuesday 22 March 2016

Understanding the X11 configuration

X / X11 is the de facto display server for the vast majority of Linux distributions.

There is a large selection of desktop environments and window managers that sit on top of X - for example XFCE, GNOME and KDE, to name a few.

There are several main configuration files:

- /etc/X11/xorg.conf.d/* : Contains a group of configuration files (must be .conf) that are parsed as part of / in addition to the xorg.conf file.

- /etc/X11/xorg.conf (sometimes /etc/xorg.conf) : The main configuration file that is read upon starting the X server.

** Note: xorg.conf is not created / needed by default in some distros - e.g. later versions of Ubuntu.

Defaults and other configuration provided by other vendors can be accessed from /usr/share/X11/xorg.conf.d/*.

In order to generate your xorg.conf file you can use the following command:

Xorg :0 -configure

or

X -configure

A typical xorg.conf comprises the following sections:

- Modules: This section defines the modules that should be loaded in order to run X, e.g. glx (the module that provides OpenGL support.)

- Extensions: This section defines extensions, which simply provide additional functionality to the X server, e.g. XEvIE, which allows you to intercept mouse and keyboard events.

- Files: This section defines files - such as fonts that are loaded with the X server.

- ServerFlags: This section allows you to modify the behavior of the X server e.g. Option "AutoAddDevices" "false" (which disables device hotplugging.)

- InputDevice: Defines the drivers and configuration for any input devices - mouse, keyboard etc.

- Monitor: This section defines settings such as the monitor frequency - ** Be careful here as tweaking these settings could damage your monitor - refer to your hardware vendor's manual for the relevant settings!

- Device: This is where settings relating to your graphics card are defined e.g. type, video mode etc.

- Screen: This is where settings such as the screen resolution for your monitor(s) is defined.
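A minimal (illustrative) xorg.conf tying a few of these sections together might read as follows - the identifiers and driver are placeholders:

```
Section "ServerFlags"
        Option "AutoAddDevices" "false"
EndSection

Section "Device"
        Identifier "Card0"
        Driver     "vesa"
EndSection

Section "Monitor"
        Identifier "Monitor0"
EndSection

Section "Screen"
        Identifier "Screen0"
        Device     "Card0"
        Monitor    "Monitor0"
EndSection
```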

** Changes you make to xorg.conf should generally (with most distros) persist! **

Monday 21 March 2016

Using service dependencies with NAGIOS Core

NAGIOS allows you to define service dependencies - these simply let you associate a host (or service) with other hosts (or services) it relies on.

A typical example is monitoring a system that has an application frontend and a database backend clustered across two nodes - you might only want to receive an alarm / notification (for the system as a whole) if BOTH database servers have gone down, not if just one of them has.

Another real-world example (which I will demonstrate below) is monitoring a series of externally hosted websites from your premises: if the internet connection drops you might not want a huge series of alerts to come through - rather a single alert that the internet connection has gone down, with the alerts for the websites suppressed.

We should firstly define the host / service that others will depend on e.g.:

define host{
        use             generic-host
        host_name       Internet Uplink
        alias           InternetUplink
        check_command   check_ping!200.0,70%!400.0,90%
        max_check_attempts 3
        contacts        MyContact
        }
We should then define a service dependency definition:

define servicedependency{
        host_name Internet Uplink
        service_description Check Internet Connection
        dependent_host_name MyExternalWebsite
        dependent_service_description External Web Site
        execution_failure_criteria n
        notification_failure_criteria w,u,c
        }

The two directives execution_failure_criteria and notification_failure_criteria are really important here - they define how notifications are handled and how failures of the parent host affect the dependent one.

The 'execution_failure_criteria' directive lists the parent states in which checks of the dependent service should not be executed - in the snippet above, 'n' (none) means the dependent service is always checked, no matter what the state of the parent node is.

The 'notification_failure_criteria' directive defines the parent states in which notifications for the dependent service are suppressed. In this example, alerts for the dependency are suppressed when the parent node is in the warning (w), unknown (u) or critical (c) state.

Friday 18 March 2016

Working with IPTables on CentOS 7

With the release of CentOS 7, iptables has been dropped as the default and in its place is firewalld - if (like me) you prefer iptables, you can restore it by doing the following:

Stop and mask the firewalld service:

systemctl stop firewalld
systemctl mask firewalld

and install and enable iptables:

yum install iptables-services
systemctl enable iptables
systemctl start iptables

and save the current rule set so it persists across reboots:

service iptables save

To view iptables rules we must use the -L switch along with the -t switch specifying the table name - typically:

sudo iptables -L -t nat
sudo iptables -L -t filter
sudo iptables -L -t mangle
sudo iptables -L -t raw
sudo iptables -L -t security

In practice you will predominantly make use of the filter and nat tables.

iptables is broken down into tables (as seen above, e.g. 'nat', 'filter') and then into chains (e.g. 'INPUT', 'OUTPUT', 'FORWARD').


FILTER - This table is used for the basic input / output traffic out and into the firewall - it is comprised of three chains: INPUT (for ingress traffic to the host), OUTPUT (for egress traffic from the host) and FORWARD (traffic from one NIC to another on the local host)

NAT - This table (as the name suggests) performs NAT on traffic and comprises three chains. PREROUTING performs NAT before the traffic is routed (also known as (D)estination NAT) - a typical example is mapping internet-facing IPs to local IPs on your LAN. POSTROUTING performs NAT after routing (also called (S)ource NAT - the more common method) and is commonly used to give internal LAN users access to the internet. Finally, the OUTPUT chain deals with NAT for traffic generated on the local host.

MANGLE - This table is specifically for packet alteration - for example applying QoS bits to a UDP / TCP header.

RAW - A much less commonly used table it is specifically for configuring exemptions for connection tracking.

The following command appends (-A) a new rule to the 'INPUT' chain of the 'filter' table, matching TCP/80 traffic whose connection state is either 'NEW' or 'ESTABLISHED', and accepts it:
iptables -t filter -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

Or say we want to insert the same rule higher up the chain (above a deny-all statement, for example) - we can do this with the -I switch:

iptables -t filter -I INPUT 1 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

SNAT Example

The -s arguments below use 192.168.1.0/24 as an example LAN subnet - substitute your own:

iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -s 192.168.1.0/24 -o eth0 -j ACCEPT

Port Forwarding Example
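A sketch of destination NAT - the interface, ports and internal address (192.168.1.10) are examples only:

```shell
# forward TCP/8080 arriving on eth0 to an internal web server on port 80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.10:80
# and allow the forwarded traffic through the FORWARD chain
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
```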

OpenVPN Example
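A sketch of the rules typically needed for an OpenVPN server - UDP/1194, tun0 and 10.8.0.0/24 are OpenVPN's usual defaults, assumed here:

```shell
# accept incoming OpenVPN connections on the WAN interface
iptables -A INPUT -i eth0 -p udp --dport 1194 -j ACCEPT
# let traffic from the tunnel through, and NAT it out to the internet
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
```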

Thursday 17 March 2016

Managing Physical and Logical Volumes with LVM in CentOS

Physical Volumes

When creating a physical volume (from disks) the disks MUST have no partition tables (be it MBR or GPT) - see my post about removing GPT/MBR partition tables for more info.

To create the physical volume we can issue:

sudo pvcreate /dev/sda 
sudo pvcreate /dev/sdb

We can also create a physical volume from partitions e.g.

sudo pvcreate /dev/sda1 
sudo pvcreate /dev/sdb1

and then review them with:

sudo pvdisplay

To get an overview of disks that can be used we can issue:

sudo lvmdiskscan

or

sudo pvscan

Logical Volumes

Now that we have created our physical volumes we can add them to a logical volume - but we must first create a volume group with:

vgcreate myVolumeGroup /dev/sda /dev/sdb /dev/sdc

We can now create a (100GB) logical volume:

lvcreate -L 100G myVolumeGroup

(outputs - lvol0)
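If you would rather a meaningful name than the auto-generated lvol0, lvcreate accepts -n (the name 'data' is just an example):

```shell
lvcreate -L 100G -n data myVolumeGroup
# the volume then appears as /dev/myVolumeGroup/data
```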

We can then partition the logical volume like so:

fdisk /dev/myVolumeGroup/lvol0

create a partition and then create the filesystem:

mke2fs -j -t ext4 /dev/myVolumeGroup/lvol0

We can dynamically resize if we wish to increase the disk size - to do this we would first add a new disk to the logical volume group:

vgextend myVolumeGroup /dev/sdp

and then resize the logical disk:

lvextend -L 500G /dev/myVolumeGroup/lvol0

then resize / recreate the partition with fdisk:

fdisk /dev/myVolumeGroup/lvol0

and finally use resize2fs to resize the file system:

resize2fs /dev/myVolumeGroup/lvol0

Erasing / copying MBR and GPT partition tables within Linux


The MBR typically takes up the first 512 bytes of your disk - it is broken down as follows:

446 bytes – Bootstrap.
64 bytes – Partition table.
2 bytes – Signature.

(446 + 64 + 2 = 512 bytes.)

We can make use of the dd command to copy your MBR to another drive:

dd if=/dev/sda of=/dev/sdb bs=512 count=1

or even delete your MBR:

dd if=/dev/zero of=/dev/sdc bs=512 count=1
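The same if/of/bs/count mechanics can be tried safely against ordinary files rather than block devices:

```shell
# create a fake 'disk', copy its first sector to a backup, and read it back
disk=$(mktemp); backup=$(mktemp)
printf 'BOOTSTRAP-AND-TABLE' > "$disk"
dd if="$disk" of="$backup" bs=512 count=1 2>/dev/null
cat "$backup"   # prints BOOTSTRAP-AND-TABLE
```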


GPT addresses a lot of MBR's shortcomings - most notably the maximum partition / disk size (2TiB with 512-byte sectors under MBR) and the limit of four primary partitions.

You can erase a GPT partition table using gdisk:

gdisk /dev/sda

Command (? for help): x 

Expert command (? for help): z
About to wipe out GPT on /dev/sdx. Proceed? (Y/N): y
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Blank out MBR? (Y/N): y

You can copy a GPT partition table to another disk using sgdisk (part of the gdisk package):

sgdisk -R=/dev/sdb /dev/sda
sgdisk -G /dev/sdb

 Note: The first command replicates sda's table onto sdb; the second then randomizes the disk and partition GUIDs on the copy so the two disks do not clash.

Automating stopping and starting of AWS EC2 instances using Amazon Data Pipeline

AWS provides a service called Data Pipeline that can be used to schedule compute resources such as EC2 machines among other things.

This tutorial will demonstrate how to keep a VM running within a specific timeframe - for example the VM should be available from 9am, shut down at 5.30pm, and come back online at 9am the next day.

This methodology can in some scenarios save money - although bear in mind that (according to Amazon) when a command is executed a micro VM instance is launched to process it, and this is equivalent to around 100 minutes of runtime compared to just leaving the other VM turned on (assuming it was a micro instance.) So if you are doing this for cost savings, ensure there is a considerable gap between stopping and starting the VM.

We should start by creating a new role:

Roles >> 'Create New Role'

Role Name: DataPipelineDefaultResourceRole
Role Type: Amazon EC2 Role for Data Pipeline

and then hit 'Create Role'.

We should then create a new policy:

Policy >> Create Policy >> 'Create Your Own Policy'

Policy Name: DataPipelineDefaultResourceRole_EC2_Policy

Policy Document:

{
     "Version": "2012-10-17",
     "Statement": [
          {
               "Effect": "Allow",
               "Action": [ ... ],
               "Resource": [ ... ]
          }
     ]
}

Then click 'Validate Policy' and finally 'Create Policy'.

Then go back to the policies list, find your newly created policy, select it and click 'Policy Actions' >> Attach >> select the role we created above: 'DataPipelineDefaultResourceRole'.

Proceed by going back to 'Roles' >> 'Create New Role':

Role Name: DataPipelineDefaultRole
AWS Service Role: AWS Data Pipeline

and hit 'Create.'

We should now go to the Amazon Data Pipeline console >> Create Pipeline >>

Name: Start <VM Name>
Source: 'Build using a template' >> 'Run AWS CLI command'
AWS CLI Command: aws ec2 start-instances --instance-ids i-04868gk78 --region eu-west-1
Logging: (optional)
IAM Roles >> Custom: Pipeline Role = DataPipelineDefaultRole and EC2 Instance Role: DataPipelineDefaultResourceRole.

And hit 'Activate' !

Repeat the process for a 'Shutdown' event - swapping the CLI command for: aws ec2 stop-instances --instance-ids i-04868gk78 --region eu-west-1

Managing the system with systemd

Systemd was introduced as a replacement for init due to certain shortcomings - such as init's startup times - and offers an improved API and a lower memory footprint (I am not going to get into a pros and cons argument - there are plenty of resources online that do this very well.)

Popular Linux distributions like Debian (8), Fedora and CentOS (7) have now adopted systemd as default.

The run-level concept (although now called 'targets' ) still applies and you are able to view the configuration for each level by running something like:

ls -l /usr/lib/systemd/system/runlevel*

Although these files are actually symbolic links pointing to the newly formatted names:

lrwxrwxrwx. 1 root root 15 Mar  8 15:13 /usr/lib/systemd/system/runlevel0.target -> poweroff.target
lrwxrwxrwx. 1 root root 13 Mar  8 15:13 /usr/lib/systemd/system/runlevel1.target -> rescue.target
lrwxrwxrwx. 1 root root 17 Mar  8 15:13 /usr/lib/systemd/system/runlevel2.target -> multi-user.target
lrwxrwxrwx. 1 root root 17 Mar  8 15:13 /usr/lib/systemd/system/runlevel3.target -> multi-user.target
lrwxrwxrwx. 1 root root 17 Mar  8 15:13 /usr/lib/systemd/system/runlevel4.target -> multi-user.target
lrwxrwxrwx. 1 root root 16 Mar  8 15:13 /usr/lib/systemd/system/runlevel5.target -> graphical.target
lrwxrwxrwx. 1 root root 13 Mar  8 15:13 /usr/lib/systemd/system/runlevel6.target -> reboot.target

You can switch to another runlevel (target) with something like:

systemctl isolate runlevel5.target

and to get your current runlevel (target):

systemctl list-units --type target

and to change your default runlevel (target):

systemctl set-default graphical.target

We can get an overview of all of our services by issuing:

sudo systemctl

or check the status of a service with:

sudo systemctl status sshd
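For completeness, services themselves are described by unit files - a minimal (entirely hypothetical) example, which would live at /etc/systemd/system/myapp.service:

```
[Unit]
Description=My example application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, run systemctl daemon-reload and then enable / start it as with any other service.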

Understanding init run levels within Linux

Run levels effectively provide certain degrees of functionality - the standard levels are described below:

0 - Halt: simply shuts down the machine.
1 - Single user mode: only the console is accessible; the user is NOT authenticated and drops straight into a root shell without logging in.
2 - Multi-user mode, although without NFS support.
3 - Full multi-user mode (the typical level for a server without a GUI.)
4 - Unused.
5 - X11: used when the server has a desktop environment / GUI.
6 - Reboot: simply reboots the machine ('shutdown -r' calls this.)

You can go to a run level by using the init command - for example:

init 1

Will get you into runlevel 1 (single user mode.)

There are several configuration files related to init that I will highlight below.

/etc/inittab: This is where you can define a default init level e.g.:

id:3:initdefault:

and also where you define what to monitor when changing an init level and what is to be performed once completed.

/etc/rc.d: This directory contains a folder for each run level (e.g. 'rc0.d'), each containing a series of scripts - beginning either with 'S' (run when entering the runlevel) or 'K' (run when leaving it.)

/etc/rc.local: This is where you can add your own startup scripts.

Sometimes it is necessary to drop into single user mode even when you cannot get into the OS to configure this in the first place - a typical example is forgetting the root password. Rather than booting a live CD, mounting the filesystem and updating the root password, we can boot straight into single user mode by amending the kernel entry in the grub boot loader.

To do this we need console access - we should hover over the entry in the grub boot loader and hit the 'a' key - this lets you append data to the kernel line - we want to add 'single' right at the end of the kernel line.

A Quick CHMOD reference

CHMOD can be written in one of two ways - either numerically e.g. chmod 777, or symbolically e.g. chmod u=rwx,g=rwx,o=rwx

The above two commands are equal to one another - each numeric digit is the sum of read (4), write (2) and execute (1), so 7 = 4+2+1 = rwx:

+r      add read perms for others, owner and group
+x      add execute perms for others, owner and group
+w      add write perms for others, owner and group

a+rw = add read and write perms for others, group and owner
a+x = add execute perms for others, group and owner.

g-xw = remove execute and write perms for the group
o-rwx = remove read, write and execute from others.
u+r = add read permissions for owner

chmod u=rwx,g=,o= This sets the permissions so the owner can read,
write and execute, while group and others have no permissions to do anything.

chmod 1777 This sets the permissions so that the owner, group and
others can read, write and execute - the 1 at the beginning is the
'sticky bit', which ensures only a file's owner can delete it!

chmod 2755 The 2 at the beginning sets the setgid bit - this means
that any file or folder created within the folder's hierarchy will
automatically inherit the group of the parent folder - not the group
of the user that created it!

When dealing with chmod and directories it is slightly different.

When giving the read permission to, let's say, 'others', they will only be
able to list the directory contents and nothing else.

In order to rename, delete, add and modify files (and also cd into the
directory) we need to set the write permissions AND the execute permissions
(write permissions on their own are useless!)
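The numeric / symbolic equivalence is easy to verify on a scratch file:

```shell
# symbolic u=rwx,g=rx,o= is the same as numeric 750
dir=$(mktemp -d)
touch "$dir/demo.txt"
chmod u=rwx,g=rx,o= "$dir/demo.txt"
stat -c '%a' "$dir/demo.txt"   # prints 750
```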

Wednesday 16 March 2016

Building a custom / newer kernel with CentOS 7

In this tutorial I will be installing the latest stable version of the Linux kernel (4.4.5 as of 15/03/2016).

There is the easy method (using a pre-compiled RPM from a source like ELRepo) or the more daunting task of manually compiling it from source! I will cover both.

Compiling from source

This generally involves the following:

- Download the kernel source and compile it

- Add a grub boot entry

Please refer to: https://www.howtoforge.com/kernel_compilation_centos

From a repository

Ensure you have added the ELRepo:

cd /tmp
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum update

We also need to enable the 'ELRepo.org Community Enterprise Linux Kernel Repository - el7', as it is disabled by default in /etc/yum.repos.d/elrepo.repo - although we can do this as a one-off with:

yum --enablerepo=elrepo-kernel install kernel-ml

This will place the new kernel and its initramfs into /boot.


The grub bootloader will automatically be updated / re-generated with a new entry for the newer kernel e.g. (taken  from grub.cfg):

menuentry 'CentOS Linux (4.5.0-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-327.el7.x86_6$
        set gfxpayload=keep
           insmod gzio
        insmod part_msdos
           insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  93af92d3-387a-4ab8-8d5f-83f8c36785b1
          search --no-floppy --fs-uuid --set=root 93af92d3-387a-4ab8-8d5f-83f8c36785b1
        linux16 /vmlinuz-4.5.0-1.el7.elrepo.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_GB.UTF-8
        initrd16 /initramfs-4.5.0-1.el7.elrepo.x86_64.img

And once booted into the new kernel - confirm version with:

uname -r

You might (like myself) also want to install the corresponding kernel headers - we can do this with: 

yum --enablerepo=elrepo-kernel install kernel-ml-devel.x86_64 kernel-ml-headers.x86_64 kernel-ml-tools.x86_64 kernel-ml-tools-libs.x86_64 kernel-ml-tools-libs-devel.x86_64

* Note: You will have to remove your existing kernel headers - as it will cause a conflict otherwise! *

Managing grub2 with CentOS

By default CentOS uses the grub2 bootloader.

There are several main configuration files:

/etc/default/grub # Sets the general characteristics of the bootloader e.g. timeouts, menu options and so on.

/etc/grub.d/ # This directory contains various configuration files:

00_header: Used to detect and set various settings like timeouts, load modules into the kernel and so on.

00_tuned: Used to apply any settings with Tuned (tuned-adm) - this allows you to tweak the kernel for better performance for specific server roles.

01_users: Script used to check if any users for grub have been defined.

10_linux: Used to create the linux entries in the bootloader.

20_linux_xen: Used to create the linux (running under Xen) entries in the bootloader.

20_ppc_terminfo: Script to check the terminal size when running the bootloader.

30_os-prober:  This script is used to detect operating systems on hard drives.

40_custom: Used to add your own entries to grub - e.g. a custom kernel.

/boot/grub2/grub.cfg # This is the main configuration file that is generated and read by the grub bootloader. In CentOS 7 this file should not be manually edited, as changes will be overwritten!


grub2-mkconfig - This command compiles information from the above configuration files and generates the grub.cfg file e.g.:

grub2-mkconfig -o /boot/grub2/grub.cfg

grub2-install - Allows you to install grub to the MBR or a partition e.g.:

grub2-install /dev/sda

Entry Breakdown

Below I have extracted an entry from the grub bootloader on my local system that I will break down:

menuentry 'CentOS Linux (4.5.0-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-327.el7.x86_6$
        set gfxpayload=keep
           insmod gzio
        insmod part_msdos
           insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  93af92d3-387a-4ab8-8d5f-83f8c36785b1
          search --no-floppy --fs-uuid --set=root 93af92d3-387a-4ab8-8d5f-83f8c36785b1
        linux16 /vmlinuz-4.5.0-1.el7.elrepo.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_GB.UTF-8
        initrd16 /initramfs-4.5.0-1.el7.elrepo.x86_64.img

--class: This variable is purely used for theming (e.g. displaying an image / logo of the OS when you hover over the boot entry) and can safely be removed.

--unrestricted: This option simply allows all users to boot the menu item - and is needed even if there are no users defined!

load_video: This command loads the video drivers.

set gfxpayload: This defines the resolution - although if set to 'keep' it will preserve the value set by gfxmode. If it is set to 'text' it will simply boot into normal text mode.

insmod gzio: Load the gzip module

insmod part_msdos: The msdos module is required to read the msdos (MBR) table.

insmod xfs: The xfs module is needed to read our filesystem during boot.

set root: The first instance defines where the MBR is stored.

linux16 /vmlinuz-4.5.0-1.el7.elrepo.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root: The first part - linux16 - instructs grub to load the kernel using the 16-bit boot protocol (rather than grub's default of 32-bit); this is usually done because some BIOS functions do not work with the 32-bit boot protocol. The rest of the line specifies the kernel image, the root device of the OS (i.e. '/') and the memory reservation for the crash kernel (crashkernel=auto).

rd.lvm.lv=centos/swap rhgb quiet LANG=en_GB.UTF-8 initrd16 /initramfs-4.5.0-1.el7.elrepo.x86_64.img: This portion defines the initramfs image that will be loaded - a stripped-down environment loaded into RAM first, containing the drivers and tools needed to mount the real root filesystem and hand control over to the OS.

* Note: LVM is used by default in CentOS 7 - this is why we see references to '/dev/mapper/xxxxx' rather than something like Debian or Ubuntu that might simply refer to the partitions as /dev/sda1 etc. *

Tuesday 15 March 2016

Setting up NTOPNG with the Cisco ASA on CentOS 7

Firstly refer to the installation instructions provided below (I would recommend installing from the repository):
Add a new repo:

sudo vi /etc/yum.repos.d/ntop.repo

and add:

name=ntop packages

and then run a yum update:

yum update

and install the relevant packages:

yum install ntopng ntopng-data hiredis-devel nprobe

and start the redis service:

service redis start

We should first set up nprobe to start collecting the flows from our ASA - so we should run something like:

/usr/local/bin/nprobe --zmq tcp://*:5556 -i none -n none --collector-port 2055  

We can also run ntopng directly initially to test it:

/usr/bin/ntopng -i "tcp://" --local-networks="" --http-port=3000 -G /var/tmp/ntopng.pid --disable-login --dns-mode=1 -U ntopng -w 3050 -W 3051

* Note: '--zmq' defines the endpoint that nprobe exports flows on - ntopng attaches to the same endpoint (via its -i switch) to collect the flow data. *

Remember to add an exception in the firewall (with firewalld) e.g.

sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --reload 

We should then create a configuration file for NTOP:

sudo vi /etc/ntopng/ntopng.conf

and enter something like:

--packet-filter="ip and not proto ipv6 and not ether host ff:ff:ff:ff:ff:ff and not net ( or and not host"
-c 9hoAtewwpC2tXRMJBfifrY24B

* Note: Refer to the stdout for any warnings! *

And then proceed by running ntopng:

sudo service ntopng start

And login with the default credentials - admin/admin.

Point your netflow device at the NTOP server (UDP/2055 by default.)
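On a Cisco ASA this means enabling NetFlow Secure Event Logging (NSEL) - roughly along the following lines (the collector IP 192.0.2.10 and the interface name 'inside' are placeholders for your own environment):

```
flow-export destination inside 192.0.2.10 2055
policy-map global_policy
 class class-default
  flow-export event-type all destination 192.0.2.10
```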

To enable the services to start on boot we can issue:

systemctl enable redis.service
systemctl enable ntopng.service

systemctl enable nprobe.service 

Monday 14 March 2016

Monitoring Linux Hosts with NAGIOS and NRPE

We should firstly install the NRPE agent on the server we wish to monitor:

wget http://assets.nagios.com/downloads/nagiosxi/agents/linux-nrpe-agent.tar.gz
tar xzf linux-nrpe*
cd linux-nrpe*
sudo ./fullinstall

** Ensure that your firewall on the client allows port 5666 (TCP/NRPE) and 5667(NSCA) inbound connections from your NAGIOS server **

We will then be prompted to enter the IP address of your NAGIOS monitoring server(s).

Once installed you will find a set of pre-defined commands in /usr/local/nagios/etc/nrpe.cfg - which (of course) we can add additional checks to if required.
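For example, a hypothetical additional entry in nrpe.cfg might look like this (the command name and thresholds are purely illustrative):

```
# Alert when the root filesystem drops below 20% (warning) / 10% (critical) free
command[check_root_disk]=/usr/local/nagios/libexec/check_disk -w 20% -c 10% -p /
```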

Now the next step is to install the NRPE plugin on the NAGIOS server:

wget http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz
tar zxvf nrpe-2.15.tar.gz
cd nrpe-2.15
make all
make install-daemon
make install-plugin

We should also ensure that the relevant checks we wish to perform are uncommented in the following config on the client:


We can now do a quick test of the plugin with:

/usr/local/nagios/libexec/check_nrpe -H

We should proceed by creating the relevant definitions for the new Linux host and its services e.g.:

Add the following lines in nagios.conf

sudo vi /usr/local/nagios/etc/nagios.cfg

# Definitions for monitoring Linux hosts
cfg_file=/usr/local/nagios/etc/objects/linux.cfg

and then create the linux.cfg file and add our host and service definitions:

sudo vi /usr/local/nagios/etc/objects/linux.cfg

define host{
        use             linux-server  ; Inherit default values from a template
        host_name       mylinuxserver.domain.com    ; The name we're giving to this host
        alias           My Linux Server    ; A longer name associated with the host
        max_check_attempts 3
        contacts        NAGIOS Alerting
        address         mylinuxserver.domain.com       ; IP address of the host
        }

define service{
            use                             generic-service
            host_name                       mylinuxserver.domain.com
            service_description             CPU Load
            check_command                   check_nrpe!check_load
            max_check_attempts              3
            check_interval                  3
            retry_interval                  1
            check_period                    24x7
            notification_interval           30
            }

Friday 11 March 2016

How to install python modules on CentOS 7

Like many other languages Python is lucky enough to have its own package manager (PIP) that will allow you to quickly (and, most of the time, hassle-free) install third party modules.

To install PIP we should firstly ensure that we have the EPEL repository installed (as I am using CentOS for this tutorial):

sudo yum install epel-release

and then install the PIP package using yum:

yum install python-pip

and finally install the relevant packages via PIP e.g.

pip install requests

Creating a custom NAGIOS plugin to check your bespoke applications

Full credit / source: https://www.digitalocean.com/community/tutorials/how-to-create-nagios-plugins-with-bash-on-ubuntu-12-10

In my opinion writing addons for NAGIOS couldn't be easier - there are two main requirements:

- The check returns the relevant exit code (0 = OK, 1 = Warning, 2 = Critical, 3 = Unknown)
- Ideally push some useful information out to STDOUT.

Since the requirements are so simple you can pretty much write the checks in anything you want - BASH scripting, C, python etc.
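Since the requirements are so simple, a minimal (purely hypothetical) BASH check is enough to illustrate the exit-code convention - the thresholds and messages below are illustrative only:

```shell
#!/bin/bash
# Hypothetical example plugin: alert on filesystem usage.
# Takes the used percentage as $1, or reads it from 'df' when omitted.
check_disk_usage() {
    local usage=${1:-$(df -P / | awk 'NR==2 {gsub(/%/,""); print $5}')}
    if [ "$usage" -ge 95 ]; then
        echo "CRITICAL - ${usage}% of disk space used."
        return 2
    elif [ "$usage" -ge 85 ]; then
        echo "WARNING - ${usage}% of disk space used."
        return 1
    else
        echo "OK - ${usage}% of disk space used."
        return 0
    fi
}

check_disk_usage "$@"
```

Run it with no arguments to check the live value, or pass a percentage to exercise each branch.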

So on the client host we create the script:

sudo vi /usr/lib/nagios/plugins/vendorid.sh

#!/bin/bash
vendor_id=`grep vendor_id /proc/cpuinfo | head -1 | awk '{print $3}'`
case $vendor_id in
GenuineIntel)
        echo "OK - CPU vendor is $vendor_id"
        exit 0
        ;;
AuthenticAMD)
        echo "WARNING - CPU vendor is $vendor_id"
        exit 1
        ;;
"")
        echo "CRITICAL - could not read the CPU vendor"
        exit 2
        ;;
*)
        echo "UNKNOWN - unrecognised CPU vendor: $vendor_id"
        exit 3
        ;;
esac

and ensure it can be executed:

chmod +x /usr/lib/nagios/plugins/vendorid.sh

Now on the client host install the NRPE plugin and overwrite /etc/nagios/nrpe.cfg

sudo rm /etc/nagios/nrpe.cfg
sudo vi /etc/nagios/nrpe.cfg

and add:
command[vendorid_bash]=/usr/lib/nagios/plugins/vendorid.sh



and restart the NRPE service:

service nagios-nrpe-server restart

Now on the NAGIOS server we edit the commands.cfg file:

sudo vi /etc/nagios/objects/commands.cfg

and add:

define command{
        command_name    vendorid_bash
        command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c vendorid_bash
        }

and then add a service definition e.g.

define service {
        use                             generic-service
        host_name                       myhost
        service_description             Custom Check
        check_command                   vendorid_bash
        }

and finally restart nagios / check the config:

sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
sudo service nagios restart

NAGIOS: check_http: Invalid option - SSL is not available

When attempting to monitor an SSL / HTTPS site with the check_http command within NAGIOS I encountered the following error:

check_http: Invalid option - SSL is not available

This is because the openssl development libraries were not available during the initial compilation of the NAGIOS plugins - to resolve this we should install the relevant packages and recompile them:

yum install openssl-devel

wget http://www.nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz
tar zxvf nagios-plugins-2.1.1.tar.gz
cd nagios-plugins-2*
./configure --with-openssl --with-nagios-user=nagios --with-nagios-group=nagios
make all
make install

Thursday 10 March 2016

Using MRTG to monitor bandwidth on your network (SNMP)

MRTG is a tool that logs bandwidth information via SNMP from a network device (such as a switch) and archives it, providing a historic overview of bandwidth utilization.

We should firstly install the necessary packages:

yum install mrtg net-snmp net-snmp-utils

and create a configuration for your SNMP enabled device with:

cfgmaker --global 'WorkDir: /var/www/mrtg' --output /etc/mrtg/mrtg.cfg public@router

(where 'public' is your community string.)

We can generate the HTML index page manually with:

/usr/bin/indexmaker --output=/var/www/mrtg/index.html /etc/mrtg/mrtg.cfg

But we will probably want to setup a cron job to run periodically for us!
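Note that the graphs themselves are only updated when mrtg polls the device - the mrtg package normally ships a cron entry along these lines (paths assumed) to do that every five minutes:

```
*/5 * * * * root env LANG=C /usr/bin/mrtg /etc/mrtg/mrtg.cfg
```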

So we create the following:


and add something like:

0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/bin/indexmaker --output=/var/www/mrtg/index.html /etc/mrtg/mrtg.cfg

Also ensure the cron daemon is running / will start on boot:

sudo chkconfig crond on
sudo service crond start

Error: Could not stat() command file '/usr/local/nagios/var/rw/nagios.cmd'!

I encountered this error after a fresh install of NAGIOS and attempting to disable a check from the web GUI.

There are two areas to check:

- Firstly if you are using something like CentOS - ensure that SELinux is not interfering - you can do this by issuing:

cat /etc/sysconfig/selinux

and ensure that 'SELINUX' is set to 'permissive' e.g.


OR better yet create an exception for httpd:

sudo vi /etc/selinux/targeted/booleans

and add / append:
httpd_disable_trans=1


Then apply the boolean and restart the httpd server:

sudo setsebool httpd_disable_trans 1
sudo service httpd restart

The second check is to ensure that the permissions on the folder are correct:

chown nagios.nagcmd /usr/local/nagios/var/rw
chmod g+rwx /usr/local/nagios/var/rw
chmod g+s /usr/local/nagios/var/rw

and restart httpd:

sudo service httpd restart

Wednesday 9 March 2016

Setting up email notifications for NAGIOS Core with Exchange

For this demonstration I will be configuring NAGIOS to route its notifications through an Exchange server.

We should firstly define a contact definition - this is simply a set of information that tells NAGIOS when and where to send notifications - once we have defined a contact we can then associate it with hosts, hostgroups, service groups and so on.

So we should edit the contacts definitions:

sudo vi /usr/local/nagios/etc/objects/contacts.cfg

define contact {
        contact_name                            Administrator1
        alias                                   Administrator 1
        email                                   administrator@domain.com
        service_notification_period             24x7
        service_notification_options            w,u,c,r,f,s
        service_notification_commands           notify-service-by-email
        host_notification_period                24x7
        host_notification_options               d,u,r,f,s
        host_notification_commands              notify-host-by-email
        }

We can also create a group as well to house multiple recipients:

define contactgroup{
        contactgroup_name                   MyContactGroup
        alias                               GroupAlias
        members                             Administrator1,Administrator2,Administrator3
        }

We can then define this contact within - let's say - a specific host:

define host{
   name                         mailserver
   use                          linux-server
   notifications_enabled        1
   notification_period          24x7
   notification_interval        120
   notification_options         d,u,r,f,s
   register                     0
   contact_groups               MyContactGroup
   contacts                     Administrator1
   }
Our next task is to set up an MTA so we can send our email to the contacts specified - as I was using the 'Core' edition of CentOS I had to install the 'mail' command:

yum install mailx mutt

The mail command that is issued by NAGIOS is defined within:

sudo vi /usr/local/nagios/etc/objects/commands.cfg

We are looking specifically at the following lines:

command_line  /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time:  $LONGDATETIME$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$

command_line  /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$

Although the command being invoked e.g.:

/bin/mail -s "testing subject" user@domain.com < message.txt

does not route the mail through the smarthost and will attempt to deliver it directly - so we must define the '-S' switch to manually specify the smarthost (Exchange). You should also ensure that a receive connector is set up and configured correctly in your Exchange environment.

To test your SMTP config we can issue something like:

echo Test | mailx -v -s "Test Subject" -S smtp=mail.domain.com:25 user@domain.com

I ended up changing the 'command_line' variables (as above) to:

command_line  /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time:  $LONGDATETIME$\n" | /bin/mail -S smtp=mail.myhost.com:25 -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$

command_line  /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /bin/mail -S smtp=mail.myhost.com:25 -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$

Finally restart nagios and trigger an alert:

sudo service nagios restart

Managing Outlook clients mailbox rules on Exchange

To view current rules of a mailbox we can issue:

Get-InboxRule -Mailbox Mailbox01

And in order to look at a specific rule we can issue something like:

Get-InboxRule -Mailbox Mailbox01 -Identity "My Example Rule"

To delete a rule we can issue:

Remove-InboxRule -Mailbox Mailbox01 -Identity "My Example Rule"

And using the Set-InboxRule and New-InboxRule cmdlets we can modify and create new rules - although to be honest it's quite often easier doing this on the client-side / Outlook.

Configure a time source / NTP on CentOS

We should firstly install the necessary packages:

yum install ntp ntpdate ntp-doc

And ensure the service is enabled:

systemctl enable ntpd

Configure the relevant NTP servers in /etc/ntp.conf:

interface ignore wildcard
interface listen
interface listen ::1
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

And finally start the service:

service ntpd start

Setup a Windows server with NAGIOS Core

Setting up the Windows agent for NAGIOS allows us to monitor components such as memory, CPU and services that would otherwise not be possible.

We should firstly ensure that the 'check_nt' plugin is installed:

ls -l /usr/local/nagios/libexec | grep check_nt

There are a few prerequisites - firstly enabling the Windows object definitions:

sudo vi /usr/local/nagios/etc/nagios.cfg

and uncomment:
cfg_file=/usr/local/nagios/etc/objects/windows.cfg


Download the latest NSClient++ client from:


Install it on the Windows server - follow the wizard and ensure that you add the IP address of the NAGIOS server in the 'Allowed Hosts' section and also ensure the following are ticked:

'Enable common check plugins'
'Enable nsclient server (check_nt)'
'Enable check_nrpe server'

Now go to 'Services' and ensure the NSClient++ service has started. If we need to make any further changes to the Windows client we can simply edit 'nsclient.ini' within the NSClient++ installation directory and restart the service.

Note: If you would like to pass arguments with the NRPE client on the NAGIOS server you should add the following to nsclient.ini on the host with NSClient++ installed:

allow arguments=1

and then restart the NSClient++ service.

We should now create the object definition on the NAGIOS server:

sudo vi /usr/local/nagios/etc/objects/windows.cfg

We can define a host as follows:

define host{
use windows-server ; Inherit default values from a Windows server template (make sure you keep this line!)
host_name winserver
alias My Windows Server
}

and then add the relevent service definitions:

define service{
use generic-service
host_name winserver
service_description NSClient++ Version
check_command check_nt!CLIENTVERSION
}

define service{
use generic-service
host_name winserver
service_description Uptime
check_command check_nt!UPTIME
}
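Other check_nt variables can be added in the same way - for example CPU load and memory usage, as in the stock windows.cfg sample (the threshold values here are only illustrative):

```
define service{
use generic-service
host_name winserver
service_description CPU Load
check_command check_nt!CPULOAD!-l 5,80,90
}

define service{
use generic-service
host_name winserver
service_description Memory Usage
check_command check_nt!MEMUSE!-w 80 -c 90
}
```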

Check your config with:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Finally restart the nagios server with:

sudo service nagios restart
sudo service httpd restart

Tuesday 8 March 2016

Configuring a static / dynamic IP, default route and DNS on CentOS 7

We will firstly set up our interface - to do so we should issue (ifconfig is not included in the core edition):

ip addr

This will return all available interfaces - in our case we are interested in one called:
eno858544


So in order to statically configure the interface we should create (or modify) its script within the network-scripts directory:

sudo vi /etc/sysconfig/network-scripts/ifcfg-eno858544

and add the following (for minimum config):

DEVICE=eno858544 # Points to the block device
ONBOOT=yes # Start interface at boot time? (or after network services restart)
BOOTPROTO=none # No DHCP - we are assigning the address statically
IPADDR=x.x.x.x # The static IP address
PREFIX=24 # The subnet prefix length

and these extras (optional):

DEFROUTE=yes # Is this interface the default route?
IPV6INIT=yes # Do you want to enable IPv6?
IPV6_AUTOCONF=yes # Should usually be left as yes if you are using IPv6
IPV4_FAILURE_FATAL=no # Should the interface be disabled if the link goes down?
IPV6_FAILURE_FATAL=no # Should the interface be disabled if the link goes down?

For setting up DHCP we can do something like:

DEVICE=eno858544 # Points to the block device
ONBOOT=yes # Start interface at boot time? (or after network services restart)
BOOTPROTO=dhcp # Obtain the address via DHCP
PEERDNS=yes # Should DNS servers be obtained from the DHCP server? (set to 'no' if you define DNS1=x.x.x.x manually here)
PEERROUTES=yes # Should routing information be obtained from the DHCP server?

and add the following (optionally):

DEFROUTE=yes # Is this interface the default route?
IPV6INIT=yes # Do you want to enable IPv6?
IPV6_AUTOCONF=yes # Should usually be left as yes if you are using IPv6
IPV4_FAILURE_FATAL=no # Should the interface be disabled if the link goes down?
IPV6_FAILURE_FATAL=no # Should the interface be disabled if the link goes down?

We should now proceed by setting up our routing - by defining a default gateway:

sudo vi /etc/sysconfig/network

and add something like:

GATEWAY=x.x.x.x

or set a default route temporarily at runtime with:

ip route add default via x.x.x.x dev eno858544

In order to set up DNS and domain options we can either add an entry globally in /etc/resolv.conf (like most other distros):

sudo vi /etc/resolv.conf

search mydomain.internal

or we can add an entry in our interface init script:

sudo vi /etc/sysconfig/network-scripts/ifcfg-eno858544


Monday 7 March 2016

Setting up PAT with IPTables on Debian

For this tutorial I will outline two common PAT configurations - the first one is where we have a host with a single NIC that will forward traffic from a specific subnet / its own local subnet:

We should firstly ensure IP forwarding is turned on in the kernel:

echo 1 > /proc/sys/net/ipv4/ip_forward

Edit the sysctl.conf file:

sudo vi /etc/sysctl.conf

and add:

net.ipv4.ip_forward = 1

For security we should also disable ICMP redirects by setting:

net.ipv4.conf.eth0.send_redirects = 0

and then run the following to apply the changes:

sudo sysctl -p /etc/sysctl.conf

We should proceed by setting up masquerading and NAT with iptables:

iptables -t nat -A POSTROUTING -o eth0 -s -j MASQUERADE

* The above command appends a new rule to the POSTROUTING chain of the NAT table that allows egress packets on eth0 matching the given source to 'masquerade' (take the IP address of the router's outgoing interface).

We can review our rules with:

sudo iptables -vL -t nat

We should then ensure our rules persist a reboot by issuing:

iptables-save > /etc/iptables.up.rules
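To have the saved rules re-applied when the interface comes up, a common Debian approach is a small hook script (the path in the comment is the conventional location):

```
#!/bin/sh
# Save as /etc/network/if-pre-up.d/iptables and make it executable (chmod +x)
/sbin/iptables-restore < /etc/iptables.up.rules
```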

The second scenario is where we have a host with two NICs - one hosting an internal client range and another acting as the outside network - we would like all egress traffic from the internal subnet to be NAT'd out of the outside interface's address:

iptables -t nat -A POSTROUTING -o eth0 -s -j MASQUERADE
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth1 -s -o eth0 -j ACCEPT

* Where eth0 is on our EXTERNAL subnet and eth1 is on our INTERNAL network. *

Thursday 3 March 2016