Tuesday 31 January 2017

Troubleshooting file permissions with RHEL / CentOS 7

Let's say we have a file that we are unable to access as the 'apache' user.

The basics - checking the basic permissions:

ls -la /etc/httpd/secret

-rw-r-----.  1 apache apache  128 Jan 31 14:16 encKey

Here we can see that the 'apache' group has read permission on the file and the 'apache' user (the owner) has read and write permissions.

So far everything looks OK - although we are still unable to access the file - so let's ensure there are no UID / GID mismatches - to do this we can perform:

ls -lan /etc/apache/secret

-rw-r-----.  1 48 48  128 Jan 31 14:16 encKey

And then contrast the UID / GID with:

id apache

uid=48(apache) gid=48(apache) groups=48(apache)

or alternatively:

cat /etc/passwd | grep apache

Again all looks good - so let's check the permissions on the parent directory:

ls -lan /etc | grep httpd
drwxr-xr-x.   5  0   0   4096 Jan 10 10:49 httpd

Again this looks OK - although the directory is owned by root, the 'other' permission bits grant the apache user read and execute access.

Note: If you have recently added yourself to a group that has permissions on the folder you will need to log out and back in again for the change to take effect.

The last thing to check is that any parent directories also have the appropriate permissions.
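
To check every directory in the path in one go we can also use the 'namei' utility (part of util-linux), which lists the permissions of each component leading up to the file - a quick example, assuming the full path used above:

namei -l /etc/httpd/secret/encKey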


Friday 27 January 2017

Creating and applying patch files in CentOS 7

Patch files are often used when compiling applications from source and typically provide fixes for security and stability issues in cases where the versions maintained in the repositories are not kept up to date / patched quickly.

For this tutorial we will start off by creating a simple patch file that captures the changes between two versions of a simple Python program:

vi /tmp/main.py

list=[1,2,3,4,5,6,7,8,9]

for i in list:
    print('The number is: ' + str(i))

vi /tmp/main_new.py

list=['this','is','a','test']

for i in list:
    print('The word is: ' + i)

Now create the diff file:

diff -u main.py main_new.py > main.patch

and inspect the contents with:

cat main.patch

--- main.py 2017-01-27 14:34:49.077788860 +0000
+++ main_new.py 2017-01-27 14:34:36.044762764 +0000
@@ -1,4 +1,5 @@
-list=[1,2,3,4,5,6,7,8,9]
+list=['this', 'is', 'a', 'test']
 
 for i in list:
-    print('The number is: ' + str(i))
+    print('The word is: ' + i)
+

Now we can patch the original file with the 'patch' command:

patch < main.patch
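
If we want to preview what a patch would change without actually modifying anything, GNU patch also supports a dry run:

patch --dry-run < main.patch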

In the real world, however, you are more likely to encounter much larger patches - in some cases a patch might span several files across an entire source tree.

To demonstrate we will download two different versions of freeradius:

cd /tmp

wget ftp://ftp.freeradius.org/pub/freeradius/freeradius-server-3.0.12.tar.bz2
wget https://ftp.yz.yamagata-u.ac.jp/pub/network/freeradius/old/freeradius-server-3.0.8.tar.bz2

tar xvf freeradius-server-3.0.12.tar.bz2 && tar xvf freeradius-server-3.0.8.tar.bz2

Create the patch file:

diff -Naur /tmp/freeradius-server-3.0.8 /tmp/freeradius-server-3.0.12 > freerad.patch

and then apply the patch from within the source tree we want to update - noting that patch reads the patch file from standard input:

cd /tmp/freeradius-server-3.0.8
patch -p3 < ../freerad.patch
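
Should the changes need to be backed out again later, the same patch can (assuming it applied cleanly) be reversed from the same directory with the '-R' switch:

patch -R -p3 < ../freerad.patch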


Obtaining source from yum repo in CentOS 7

Firstly ensure that you have the relevant source repositories configured in your yum.repos.d folder.

Let's firstly obtain the latest freeradius sources from the repositories (note that the yumdownloader utility is provided by the yum-utils package):

yumdownloader --source freeradius

We can use the rpm2cpio utility to convert the rpm package into a cpio archive, piping it into cpio to list the contents of the package:

rpm2cpio freeradius-3.0.4-7.el7_3.src.rpm | cpio -it

freeradius-Don-t-overwrite-ip_hton-af-prefix-in-fr_pton4-6.patch
freeradius-Rename-lt_-symbols-to-fr_.patch
freeradius-Resolve-to-all-families-on-ip_hton-fallback.patch
freeradius-access-union-consistently.patch
freeradius-add-P-option-to-radtest-synopsis.patch
freeradius-server-3.0.4.tar.bz2
...

From the output we can see a bunch of patches and the server source itself.

To extract them to the current directory we issue:

rpm2cpio freeradius-3.0.4-7.el7_3.src.rpm | cpio -idmv
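
Alternatively - and this is just a sketch of the more conventional route, assuming the rpm-build package and the build dependencies are installed - the source RPM can be installed with rpm (which, with the default macros, places the tarball and patches under ~/rpmbuild/SOURCES and the spec under ~/rpmbuild/SPECS) and then 'rpmbuild -bp' will unpack the source and apply all of the patches for us:

rpm -ivh freeradius-3.0.4-7.el7_3.src.rpm
rpmbuild -bp ~/rpmbuild/SPECS/freeradius.spec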




Wednesday 25 January 2017

How to put a version lock on a package in CentOS 7

At times it might be necessary to prevent specific packages from updating - for example where software has been installed or patched manually.

Fortunately yum has a plugin that can ensure that a specific package remains at its current version:

yum install yum-plugin-versionlock

yum versionlock curl
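
The same plugin also provides 'list', 'delete' and 'clear' sub-commands so the locks can be reviewed or removed again later - for example:

yum versionlock list
yum versionlock clear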

Tuesday 24 January 2017

Setup MariaDB (MySQL) Master/Master Replication on CentOS 7

In this tutorial we will be setting up an active/active (or master/master) MySQL cluster.

This will allow us to make changes on either server (Server A or Server B) and ensure that the changes are replicated to the other.

For high availability we will be ensuring that each node is hosted on a different subnet / availability zone.

Server A: Availability Zone 1 / 10.1.0.200
Server B: Availability Zone 2 / 10.2.0.200

Let's firstly install the relevant packages on both servers:

yum install mariadb-server mariadb
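
The MariaDB service (named 'mariadb' on CentOS 7) will also need to be enabled and started on each node before continuing:

systemctl enable mariadb
systemctl start mariadb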

Once installed run the following utility (on each server) to ensure the db server security is hardened:

Server1> mysql_secure_installation

mysql -u root -p

create user 'replicator'@'%' identified by 'yourpassword';
grant replication slave on *.* to 'replicator'@'<server-2-ip>';

FLUSH PRIVILEGES;

quit;

Server2> mysql_secure_installation

mysql -u root -p

create user 'replicator'@'%' identified by 'yourpassword';
grant replication slave on *.* to 'replicator'@'<server-1-ip>';

FLUSH PRIVILEGES;

quit;

Note: Ensure that MariaDB is listening externally and not just bound to localhost (check the 'bind-address' setting) - since the servers will need to communicate with each other!

Now lets setup the replication:

Server1> vi /etc/my.cnf.d/server.cnf

# replication settings
server-id = 1
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 1
replicate-do-db = LinOTP2
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = LinOTP2
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index

Server2> vi /etc/my.cnf.d/server.cnf

# replication settings
server-id = 2
replicate-same-server-id = 0
auto-increment-increment = 2
auto-increment-offset = 2
replicate-do-db = LinOTP2
log-bin = /var/log/mysql/mysql-bin.log
binlog-do-db = LinOTP2
relay-log = /var/lib/mysql/slave-relay.log
relay-log-index = /var/lib/mysql/slave-relay-log.index

Restart MariaDB on both servers with:

sudo systemctl restart mariadb

Obtain the master log position for each server:

Server 1> mysql -uroot -p

SHOW MASTER STATUS;

* Note * Copy down the master log file name and position from this output - we will need them when running the 'change master' command on Server 2 *


Server 2>
SHOW MASTER STATUS;

* Note * Copy down the master log file name and position from this output - we will need them when running the 'change master' command on Server 1 *

and now configure replication:

Server 1> 

UNLOCK TABLES;

stop slave;

change master to master_host='SERVER2', master_user='replicator', master_password='yourpassword', master_log_file='mysql-bin.000001', master_log_pos=XXX;

start slave;

// replacing 'XXX' with the log position recorded from the other server.

Server 2> 

FLUSH TABLES WITH READ LOCK;

UNLOCK TABLES;

stop slave;

change master to master_host='SERVER1', master_user='replicator', master_password='yourpassword', master_log_file='mysql-bin.000001', master_log_pos=XXX;

start slave;

and finally review the slave status on each node with:

show slave status;
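
On a healthy node both the 'Slave_IO_Running' and 'Slave_SQL_Running' fields should read 'Yes' - using the '\G' terminator makes the output considerably easier to read:

show slave status\G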

Create a test table on each server e.g.:

mysql -u root -p

create database TestDB;
use TestDB;

CREATE TABLE IF NOT EXISTS test (
test_id int(5) NOT NULL AUTO_INCREMENT,
PRIMARY KEY(test_id)
    );

and verify that the changes are replicated to the other node.
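
For example, a quick check on the other node (using the database created above):

show databases;
use TestDB;
show tables;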

Allow a single user to login with a plain password over SSH - while ensuring everyone else uses keys

If for whatever reason you are forced to use plain-text (password) authentication over SSH, you can create a rule within the SSH daemon configuration that allows only a single user to do so - while ensuring all other users continue to use keys.

vi /etc/ssh/sshd_config

ensure that 'PasswordAuthentication no' is set globally (so that everyone else is forced to use keys) and then append the following:

Match User <username>
      PasswordAuthentication yes

Reload sshd and attempt to re-authenticate.

sudo systemctl reload sshd
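
As a quick test we can force the ssh client to offer only password authentication - this should succeed for the matched user and be refused for everyone else:

ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no <username>@<server>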

Friday 20 January 2017

How to keep sub processes on a terminal running after the terminal has been closed

At times I find myself running commands such as:

tail -f /var/log/messages &

or

tcpdump -i enp0s25 port 25 -w out.pcap &

These allow me to carry on debugging problems while still being able to see any stdout on my terminal.

These commands, however, spawn sub-processes that remain attached to the (parent) terminal process - so they are terminated when the terminal is closed.

The other day I came across a pretty cool command that allows me to leave these commands running - effectively releasing them from the parent process - so I can terminate the shell:

disown -a && exit
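
Putting it together - a minimal example using the tcpdump command from above:

tcpdump -i enp0s25 port 25 -w out.pcap &
disown -a && exit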


Tuesday 17 January 2017

Setting up a DNS server with bind on CentOS 7

For this tutorial we simply want to set up a simple zone with a few records for our local domain - yourdomain.com.

Let's firstly install the bind package - along with some helper tools:

sudo yum install bind bind-utils

Ensure it starts on boot and then start the service:

systemctl enable named
systemctl start named

Since we do not wish to serve the general public - that is, provide an open public DNS service - we will restrict queries by creating an ACL that defines exactly which hosts are able to perform DNS queries against the server.

The main configuration can be found in /etc/named.conf

vi /etc/named.conf

acl "trusted" {
        10.1.0.200;    # ns1.yourdomain.com (this host)
        10.2.0.200;    # ns2.yourdomain.com
};

In the 'options' section there are two directives we are interested in: 'allow-transfer', which (as the name suggests) allows zone transfers to the secondary DNS server, and 'allow-query', which defines exactly who can query the server (here the members of our 'trusted' acl block):

 allow-transfer { 10.2.0.200; };      # allow zone transfer for secondary dns server
 allow-query { trusted; };  # allow queries from the members defined in our trusted acl

If we also wish to disable recursive queries (i.e. refuse lookups for zones we are not authoritative for) we can set the following under options:

recursion no;

We will create the named.conf.local file where we will define the zones we are hosting:

vi /etc/named/named.conf.local

and add the following:

zone "yourdomain.com" {
    type master;
    file "/etc/named/zones/yourdomain.com"; # zone file path
};

Make sure 'named.conf.local' is included in your main bind config:

echo 'include "/etc/named/named.conf.local";' >> /etc/named.conf

And then create the zone file for 'yourdomain.com':

mkdir -p /etc/named/zones
vi /etc/named/zones/yourdomain.com

; BIND db file for yourdomain.com

$TTL 86400

@       IN      SOA     ns1.yourdomain.com.      you.yourdomain.com. (
                        2017011701 ; serial number YYYYMMDDNN
                        28800           ; Refresh
                        7200            ; Retry
                        864000          ; Expire
                        86400           ; Min TTL
)

                NS      ns1.yourdomain.com.
                NS      ns2.yourdomain.com.

ns1.yourdomain.com.           IN      A       10.1.0.200
ns2.yourdomain.com.           IN      A       10.2.0.200
yourdomain.com.      IN      A       8.8.1.1

$ORIGIN yourdomain.com.

We can check our configuration with the 'named-checkconf' command:

named-checkconf /etc/named.conf
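
We can also validate the zone file itself with the 'named-checkzone' utility - for example:

named-checkzone yourdomain.com /etc/named/zones/yourdomain.com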

Finally start bind (or reload it if already running) with:

sudo systemctl reload named

and check syslog for any errors:

tail /var/log/messages | grep named

and then use nslookup or dig to verify the zone records:

dig yourdomain.com @10.1.0.200


Monday 16 January 2017

Working with LVM in CentOS 7: Resizing volumes and creating snapshots


Note: The following is not intended in any way as a practical example - rather it exists purely to demonstrate the creation of logical volumes, resizing them and creating snapshots.

Let's firstly set up the disks / partitions appropriately - I am going to use MBR partition tables for the disks, ensuring that when creating the partitions they are set up as the 'Linux LVM' (8e) type:

fdisk /dev/sdb
o # Initialize the disk with an MBR partition table
n # Create new primary partition
[50% of disk size]
t 8e # Set the partition type to 'Linux LVM'
n # Create new primary partition
[50% of disk size]
t 8e # Set the partition type to 'Linux LVM'
w # Write changes

So we end up with two partitions /dev/sdb1 and /dev/sdb2.

We will also create a partition on /dev/sdc - which consumes all available sectors:

fdisk /dev/sdc
o # Initialize the disk with an MBR partition table
n # Create new primary partition
[accept defaults]
t 8e # Set the partition type to 'Linux LVM'
w # Write changes

So we end up with /dev/sdc1.

We will now need to create our physical volumes - these are simply the 'physical' disks or disk partitions (e.g. /dev/sdb1) that LVM will build upon:

pvcreate /dev/sdb1 /dev/sdb2 /dev/sdc1

Although I received something like:

  Device /dev/sdb2 not found (or ignored by filtering).
  Device /dev/sdc1 not found (or ignored by filtering).
  Physical volume "/dev/sdb1" successfully created.

In order to resolve this we need to set the disk label on each of the affected partitions to 'loop' - this simply makes each partition look like a plain, unpartitioned disk.

parted /dev/sdb2
mklabel loop

and

parted /dev/sdc1
mklabel loop

and attempt to re-add:

pvcreate /dev/sdb1 /dev/sdb2 /dev/sdc1

Confirm them with:

pvdisplay

I actually don't want to include /dev/sdb2 - so I'll remove it with pvremove:

pvremove /dev/sdb2

And now create a volume group with our new physical volumes - for example:

vgcreate myvolumegroup /dev/sdb1 /dev/sdc1

And confirm with:

vgdisplay

  --- Volume group ---
  VG Name               myvolumegroup
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               8.79 GiB
  PE Size               4.00 MiB
  Total PE              2251
  Alloc PE / Size       0 / 0
  Free  PE / Size       2251 / 8.79 GiB
  VG UUID               222222-bkQV-1111-xAzI-bbPr-AP7O-333333

We can now create our logical volume - using the '-L' parameter to specify a size of 1GB, '-n' to specify the name 'testlv' and the target volume group 'myvolumegroup':

lvcreate -L 1G -n testlv myvolumegroup

and confirm with:

lvdisplay

We can also get the block device path from the above output - and create a file system on it with:

mkfs.xfs /dev/myvolumegroup/testlv
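
To actually use the volume we can then mount it - a quick sketch, assuming /mnt/testlv as the mount point:

mkdir -p /mnt/testlv
mount /dev/myvolumegroup/testlv /mnt/testlv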

Extending the logical volume

Extend the actual logical volume by 50MB:

lvextend -L +50M /dev/myvolumegroup/testlv

and then ensure the file-system is grown as well (xfs_growfs operates on the mounted file system, so we pass it the mount point):

xfs_growfs /mnt/testlv

If you chose ext3/4 you will need to use the 'resize2fs' utility instead (growing can be done online, although shrinking requires the file-system to be unmounted).

Now let's say that we do not have enough free disk space in our volume group - in that case we will need to add an additional disk / partition - we can do this with the vgextend command:

pvcreate /dev/sdb2
vgextend myvolumegroup /dev/sdb2

Creating a snapshot of a logical volume

We can take snapshots of logical volumes with the 'lvcreate' command - although when snapshotting a logical volume it's important to note that the snapshot is not a 'like for like' copy in terms of blocks - instead it only stores copies of blocks that have changed on the original logical volume since the snapshot was taken!

lvcreate -L 50M -s -n mysnapshot /dev/myvolumegroup/testlv

and to remove the snapshot we can issue:

lvremove /dev/myvolumegroup/mysnapshot
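
Alternatively, rather than simply deleting it, a snapshot can be merged back into its origin with lvconvert - effectively rolling the logical volume back to the point at which the snapshot was taken (the merge only starts once the origin is no longer in use, or on its next activation):

lvconvert --merge /dev/myvolumegroup/mysnapshot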

Setting up software RAID in Linux with mdadm

Although software RAID has seen a sizeable decrease in use now that virtualisation is common and the underlying storage is typically RAID'ed on a SAN - there are still some use cases for setting up software RAID.

This article will discuss each step from start to finish on how to setup software RAID on Linux with the help of mdadm.

We should firstly note that software RAID with mdadm is handled by the md driver in the kernel.

A device file (/dev/mdX) will be created automatically for the RAID array when using mdadm - but we can manually create one if desired using the mknod command, with the appropriate major number - 9 in this case:

cat /proc/devices | grep md

mknod /dev/mdc b 9 0

Before using the disks for RAID - we must firstly create a new partition and ensure that auto-raid detection will pick up the disk - this can be performed with fdisk by making sure the partition type is set to 'fd':

fdisk /dev/sdc
n # new partition
p # primary partition
<accept defaults>
t # change partition type
fd # set partition type to 'Linux raid autodetect'
w # write changes

and repeat for the other disk.

We can now create the RAID set (using RAID 1 for this example):

mdadm -C /dev/md0 -l raid1 -n 2 /dev/sdb1 /dev/sdc1

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Create a new file-system for it with:

mkfs.btrfs /dev/md0

Generate the mdadm configuration for persistence:

mdadm --detail --scan --verbose > /etc/mdadm.conf

Ensure that the new array is mounted at boot via fstab:

vi /etc/fstab
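
For example an entry along the following lines could be added (the /data mount point here is just an assumption):

/dev/md0    /data    btrfs    defaults    0 0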

And ensure that the RAID set is initialised on boot by adding the following:

echo "mdadm --assemble -s" >> /etc/rc.d/rc.local

We can verify the configuration of the RAID set with:

mdadm --detail /dev/md0

and also check the array / resync status with:

cat /proc/mdstat



Re-scan the SCSI bus for new hard disks in CentOS 7

When adding additional disks (especially in virtualized environments) the Linux kernel will not always pick up the new disks straight away - so we have to manually re-scan the SCSI bus.

We should firstly identify which host controller the SCSI disks are attached to:

cat /sys/class/scsi_host/host?/proc_name

mptspi
ata_piix
ata_piix

* ata_piix and mptspi are the kernel modules for the controllers. See lsmod | grep ata_piix

We can see there are also two SATA controllers present - since we are after the SCSI one - we select:

/sys/class/scsi_host/host0/proc_name

So let's invoke the re-scan manually - we do this with:

echo "- - -" > /sys/class/scsi_host/host0/scan

* The dash symbols are wildcards for the channel number, SCSI target ID, and LUN values respectively.

Now check the kernel messages with dmesg:

dmesg | grep Attached
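
If you are unsure which host to rescan, it is harmless to simply rescan all of them - a small sketch:

for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done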


Thursday 12 January 2017

What does /dev/null 2>&1 mean?

Quite often you will see commands with '> /dev/null 2>&1' appended onto them, e.g.:

make > /dev/null 2>&1

The obvious observation is that the output is being redirected into /dev/null i.e. into nothingness!

The '2>&1' that follows looks a little more cryptic though.

To help visualise - 0,1,2 all mean something - they are in fact file descriptors:

0 = STDIN (Standard Input)
1 = STDOUT (Standard Output)
2 = STDERR (Standard Error)

So the command - as well as redirecting standard output into /dev/null - also redirects (with the inclusion of the & symbol) standard error to wherever file descriptor 1 (stdout) currently points - which here is /dev/null - so both stdout and stderr are silently discarded.
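
Note that the order of the redirections matters - '2>&1' points stderr at whatever stdout refers to at that moment:

make > /dev/null 2>&1   # stdout goes to /dev/null first, then stderr follows it - everything is discarded
make 2>&1 > /dev/null   # stderr is pointed at the terminal (stdout's target at that point) - only stdout is discarded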

Encrypting a disk / volume in Fedora 25 with dm-crypt / luks

We should firstly enable the AES kernel module with:

modprobe aes

Although I encountered the following error on Fedora 25 - which, given I was running a modern CPU, left me slightly confused:

modprobe: ERROR: could not insert 'padlock_aes': No such device

It turns out that you need to use the following module name instead:

modprobe aes_generic
modprobe dm_mod
modprobe dm_crypt

and to ensure they are loaded at boot:

echo aes_generic >> /etc/modules-load.d/crypt.conf
echo dm_mod >> /etc/modules-load.d/crypt.conf
echo dm_crypt >> /etc/modules-load.d/crypt.conf

Identify the disk, re-create the partition table and create a new primary partition which we will use for our encrypted volume. (Do not create a file system on it yet!)

We can benchmark the different encryption algorithms to find the fastest available with:

cryptsetup benchmark

For this example I am sticking with AES.

Proceed by creating the dm-crypt device mapping:

cryptsetup -y -c aes -s 256 -h sha256 luksFormat /dev/sdb1

We can then open the locked device (entering your password) with:

cryptsetup open /dev/sdb1 mycryptdevice

The unlocked (decrypted) device should now be available at:

/dev/mapper/mycryptdevice
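
We can (optionally) confirm the mapping and inspect the LUKS header with:

cryptsetup status mycryptdevice
cryptsetup luksDump /dev/sdb1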

We can then create a new filesystem on it with:

mkfs.ext4 /dev/mapper/mycryptdevice

and mount it:

mount -t ext4 /dev/mapper/mycryptdevice /mnt

and finally close the mapping with:

cryptsetup close mycryptdevice

If you have persistent naming of the block device set up (important since we are dealing with a USB device here) - we can also instruct the encrypted device to be unlocked and mounted at boot:

echo "mycryptdevice /dev/sdb1 none none" >> /etc/crypttab

* The /etc/crypttab file defines which encrypted devices should be unlocked at boot.

echo "/dev/mapper/mycryptdevice /crypt ext4 defaults 0 1" >> /etc/fstab

Upon reboot you should be prompted to enter the password for the encrypted partition.


Permanently mapping a USB storage device in Fedora 25

Quite often - when dealing with scripts, or simply to ensure consistency - you will want USB storage devices in Linux to always map to the same device / mount point.

We should firstly identify something distinguishable about the USB disk - we can run either of the following to find a suitable identifier:

lsblk -f

or

blkid

/dev/sda1: UUID="11114956-82ab-4fb4-83eb-c1daa224842d" TYPE="ext4" PARTUUID="83020ff5-01"
/dev/sda2: UUID="111J-1gg3-wzKf-MGSB-IoM9-0vmR-SZhckf" TYPE="LVM2_member" PARTUUID="83020ff5-02"
/dev/sr0: UUID="0B09161C4f637420" LABEL="Oct 17 2016" TYPE="udf"
/dev/mapper/fedora-root: UUID="11162f0-0b95-4da8-80e0-32a4c522f78a" TYPE="ext4"
/dev/mapper/fedora-swap: UUID="11171de-33d0-49ec-baa1-edb0d7ef7a7a" TYPE="swap"
/dev/mapper/fedora-home: UUID="111180c0-f7c2-499d-a4e9-f282407a0e5c" TYPE="ext4"
/dev/sdc1: UUID="111c93c-6a96-4b52-8818-dfb9d17a1798" TYPE="ext4" PARTUUID="2568a1a2-01"

This gives us the UUID - a universally unique identifier we can use in fstab to ensure the correct partition is mapped to the appropriate mount point.

Then in fstab we can reference the UUID rather than the device name, e.g.:

UUID=111c93c-6a96-4b52-8818-dfb9d17a1798 /mnt/test ext4 defaults 0 1

Tuesday 10 January 2017

Easily replacing characters with the tr command in Linux

Just like in many programming languages you can replace a character or characters within a string - this is also possible on Unix/Linux using the tr command - for example let's say we have a long list of emails, one per line, e.g.:

user1@domain.com
user2@domain.com
user3@domain.com
user4@domain.com
user5@domain.com
user6@domain.com

We can use tr to replace each line break with a semi-colon - so that the list can easily be imported into an email client:

cat file.txt | tr '\n' ';'

Which provides us with the following output:

user1@domain.com;user2@domain.com;user3@domain.com;user4@domain.com;user5@domain.com;user6@domain.com

We can also perform mundane tasks such as converting a text file to upper case - for example:

cat file.txt
this is a test

cat file.txt | tr a-z A-Z

would produce:

THIS IS A TEST
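
tr can also delete or squeeze characters with the '-d' and '-s' switches - for example:

cat file.txt | tr -d '\r'    # strip carriage returns (e.g. from a Windows file)
cat file.txt | tr -s ' '     # squeeze runs of spaces down to a single space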

Excluding strings based on two files with grep

I had a simple scenario where a notification needed to be sent to everyone within a company - although there was a distribution group encompassing every employee, there were specific distribution groups that needed to be excluded - and given the sheer size of the groups, doing this manually was not an option.

To summarize, we can get the data for the 'global distribution group' from the Exchange shell, e.g.:

Get-DistributionGroupMember everyone@companya.com | Where-Object {$_.RecipientType -eq 'UserMailbox'} | Select-Object Name > companyemails.txt

and the same kind of thing for the other distribution groups we wish to exclude.

But basically we end up with two files - one holding all of the employee emails (companyemails.txt) and the other the exclusions (exclusions.txt).

Now in our Linux shell - we will use grep (with inverted matching) to return all of the emails with the exclusions removed:

grep -vf exclusions.txt companyemails.txt

I got 'Binary file companyemails.txt matches' - this is apparently because grep has detected a 'NUL' character and as a result considers it a binary file.

We should also ensure that Windows line endings are removed, otherwise this will cause problems with grep:

dos2unix /tmp/companyemails.txt
dos2unix /tmp/exclusions.txt

and any trailing spaces or tabs:

sed -i 's/[[:blank:]]*$//' /tmp/companyemails.txt
sed -i 's/[[:blank:]]*$//' /tmp/exclusions.txt

Re-running the command with the -a switch (--text) resolved the problem:

grep -vaf exclusions.txt companyemails.txt

-v inverts the match (returning only the non-matching lines) and -f reads the patterns from a file.
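
As a slightly stricter variant (assuming both files simply contain one address per line) the '-F' and '-x' switches treat each exclusion as a fixed string that must match a whole line - which avoids surprises with regex metacharacters such as dots:

grep -Fvxf exclusions.txt companyemails.txt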

If you are dealing with large numbers of emails it might be worth running wc as well to sanity check figures:

cat /tmp/companyemails.txt | wc -l

grep -vaf exclusions.txt companyemails.txt | wc -l

Save the output to file:

grep -vaf exclusions.txt companyemails.txt > final.txt

and finally to get all of the emails into a presentable form using 'tr':

cat final.txt | tr '\n' ';'

(This simply replaces the new line character with a semi-colon.)

Creating, mounting and burning ISOs with Linux

This tutorial simply demonstrates how you can quickly create, mount and burn ISOs with Linux.

ISOs can be created using the 'mkisofs' command - as follows (assuming that /tmp/test/ is the data we would like on the ISO file system):

mkisofs -r -J -o cd_image.iso /tmp/test

The '-r' switch ensures that all files and folders are readable to the client - i.e. the entity that mounts the ISO.

We can also use the '-J' (MS Joliet) switch to ensure that compatibility with Windows clients is maximized.

Before burning the ISO to our media let's perform a test run by mounting it on the local system:

mount -t iso9660 -o ro,loop cd_image.iso /mnt

On my Fedora 25 box I got the following error:

mount: /tmp/cd_image.iso: failed to setup loop device: Invalid argument

This is likely due to the fact that the 'loop' kernel module is not loaded - to check whether it's present we can run:

lsmod | grep loop

and to enable we can run:

modprobe loop

and to enable permanently (on Debian based distros):

echo loop >> /etc/modules

or on RHEL based distros:

echo loop > /etc/modules-load.d/loop.conf

and retry mounting:

mount -t iso9660 -o ro,loop cd_image.iso /mnt

Once you are happy with the contents - unmount the ISO so that we can move on to burning it to disc:

umount /mnt

To look for available CD / DVD writers we should issue:

wodim -scanbus

I got a 'wodim: No such file or directory. Cannot open SCSI driver!' error when issuing this - it appears that wodim expects a SCSI device and to support anything else you need a compatibility driver!

Initially I thought that the sg module was not enabled - but it turns out it was. After running:

wodim dev=/dev/sr0 -checkdrive

It successfully found the drive.

It turns out that in Fedora 25 you need to manually define the block device within the wodim.conf file:

echo 'cdrom= /dev/sr0' >> /etc/wodim.conf

and run the scan again:

wodim -scanbus

Now we can write the image with:

cdrecord -v speed=16 dev=1,0,0 -data cd_image.iso
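
As a final sanity check we can mount the freshly burnt disc and compare its contents against the source directory:

mount -t iso9660 -o ro /dev/sr0 /mnt
diff -r /mnt /tmp/test
umount /mnt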

Debugging with strace

Strace is a brilliant tool for helping debug problems when you don't have access to the source code and hence is usually aimed at administrators, engineers and the generally inquisitive.

The utility itself monitors system calls (including return codes) for a specific process - for example:

strace cat /etc/sysctl.conf

Sample output (documented line by line):

# Execute the cat command - returns successful.
execve("/bin/cat", ["cat", "/etc/sysctl.conf"], [/* 33 vars */]) = 0
# Calling brk with an invalid value (e.g. null) simply returns the pointer location in the process memory address space.
brk(NULL)                               = 0x562ce17f4000
# Allocate 8kb of memory and instruct the kernel to choose the address at which the mapping is created. The memory can be either read or written to and uses anonymous memory.
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f6c2692b000
# Check the file '/etc/ld.so.preload' exists with read-only mode - returns failure.
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
# Open '/etc/ld.so.cache' in read-only mode and set file descriptor of 3.
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
# The above file has a file descriptor of 3, its permissions are 0644 and it is 112.9kb in size.
fstat(3, {st_mode=S_IFREG|0644, st_size=115629, ...}) = 0
# Map 112.9kb of the file behind descriptor 3 (/etc/ld.so.cache) as a read-only, private mapping - the address is chosen by the kernel.
mmap(NULL, 115629, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f6c2690e000
# Close file descriptor 3
close(3)                                = 0
# Open file '/lib64/libc.so.6' with file descriptor of 3, read only.
open("/lib64/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
# Read 832 bytes from file descriptor 3 and keep in the buffer below.
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 \5\2\0\0\0\0\0"..., 832) = 832
# The above file has the permissions of 0755 and is 2.02mb in size.
fstat(3, {st_mode=S_IFREG|0755, st_size=2115832, ...}) = 0
# Map 3.77mb of the library (file descriptor 3) as a private, read/execute mapping - the kernel chose the address 0x7f6c26342000 - no offset.
mmap(NULL, 3955040, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f6c26342000
# Set the protection of roughly 2mb of memory at 0x7f6c264ff000 to PROT_NONE i.e. no access at all (a guard region between the library's segments).
mprotect(0x7f6c264ff000, 2093056, PROT_NONE) = 0
# Allocate 24kb of memory at address 0x7f6c266fe000 for file descriptor 3 that allows read, write and is private memory.
mmap(0x7f6c266fe000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bc000) = 0x7f6c266fe000
# Allocate 14.3kb of memory at address 0x7f6c26704000 that allows read, write, is anonymous private memory and has no file descriptor (-1).
mmap(0x7f6c26704000, 14688, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f6c26704000
# Close file descriptor 3.
close(3)                                = 0
# Allocate (chosen by the kernel) 8kb of memory that can be read and written to, no file descriptor and no offset.
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f6c2690c000
# Set the 64-bit base for the FS register at 0x7f6c2690c700
arch_prctl(ARCH_SET_FS, 0x7f6c2690c700) = 0
# Set memory protection for 0x7f6c266fe000 (16kb) to read only.
mprotect(0x7f6c266fe000, 16384, PROT_READ) = 0
# Set memory protection for 0x562cdfa74000 (4kb) to read only.
mprotect(0x562cdfa74000, 4096, PROT_READ) = 0
# Set memory protection for 0x7f6c2692d000 (4kb) to read only.
mprotect(0x7f6c2692d000, 4096, PROT_READ) = 0
# Unmap the 112.9kb mapping of /etc/ld.so.cache at address 0x7f6c2690e000
munmap(0x7f6c2690e000, 115629)          = 0
# Determine current location of the pointer that is allocated to the data segment of the process.
brk(NULL)                               = 0x562ce17f4000
# Change the location of the program break (which defines the end of the process's data segment).
brk(0x562ce1815000)                     = 0x562ce1815000
# Determine current location of the pointer that is allocated to the data segment of the process.
brk(NULL)                               = 0x562ce1815000
# Open '/usr/lib/locale/locale-archive' in read only with the file descriptor of 3.
open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
# The file descriptor has permissions of 0644 and is 107.6mb in size.
fstat(3, {st_mode=S_IFREG|0644, st_size=112823232, ...}) = 0
# Map 107.6mb of read-only, private memory to file descriptor 3 at 0x7f6c1f7a9000.
mmap(NULL, 112823232, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f6c1f7a9000
# Close file descriptor 3.
close(3)                                = 0
# File descriptor 1 (FD1) is a character device with permissions of 0620.
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
# Open '/etc/sysctl.conf' in read-only mode as file descriptor 3.
open("/etc/sysctl.conf", O_RDONLY)      = 3
# File descriptor 3 is 449 bytes in size and has the permissions 0644.
fstat(3, {st_mode=S_IFREG|0644, st_size=449, ...}) = 0
# Announce to the kernel that file descriptor 3 will be read sequentially, so it can read ahead.
fadvise64(3, 0, 0, POSIX_FADV_SEQUENTIAL) = 0
# Map memory (chosen by kernel) of 136kb - allow read and write access - anonymous private memory - no file descriptor or offset.
mmap(NULL, 139264, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f6c268ea000
# Read the first '131072' bytes of file descriptor 3.
read(3, "# sysctl settings are defined th"..., 131072) = 449
# Write the following buffer to stdout (FD 1)
write(1, "# sysctl settings are defined th"..., 449# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
) = 449
# Read another 131072 bytes - nothing left.
read(3, "", 131072)                     = 0
# Unmap the 136kb of memory at address 0x7f6c268ea000
munmap(0x7f6c268ea000, 139264)          = 0
# Close file descriptors.
close(3)                                = 0
close(1)                                = 0
close(2)                                = 0
# Exit all threads in the process.
exit_group(0)                           = ?
+++ exited with 0 +++

Strace can also be attached to an existing process with the '-p' switch, for example:

strace -p 12345
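
A few other standard strace switches worth knowing: '-f' follows child processes, '-e trace=...' filters the system calls shown and '-o' writes the output to a file - for example:

strace -f -e trace=open,read,write -o /tmp/strace.out -p 12345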

Sources:
https://aboutthebird.wordpress.com/2013/03/02/interpreting-the-output-of-strace-line-by-line/
http://unix.stackexchange.com/questions/75638/why-is-brk0-called
http://man7.org/linux/man-pages/man2/mmap.2.html
https://linux.die.net/man/2/read

Friday 6 January 2017

Preventing or changing filesystem checking on a specific disk

In some situations you won't want periodic file system checks on specific file systems - a huge disk, for example, might take hours to check - imagine attempting a routine reboot and getting that lumped on you unexpectedly!

Fortunately we can modify this behaviour with the 'tune2fs' utility.

Let's firstly get some information for the root file system:

tune2fs -l /dev/mapper/fedora-root

This will provide us with a whole host of information - to name a few:

- The last time the file system was mounted.
- Whether the file system state is clean or not.
- The file system UUID.
- The file system check frequency.

By default in Fedora 25 I noticed that the 'Maximum mount count' is set to -1, which disables mount-count-based checking - a file system check would then typically only be performed if the file system is marked as dirty or a user manually invokes a check with fsck.

cat /etc/mke2fs.conf | grep enable

By changing 'enable_periodic_fsck' to 1 in this file we can re-enable periodic checks by default for newly created file systems (existing file systems can be adjusted with tune2fs as below).

When the 'Mount count' hits the threshold ('Maximum mount count'), or the 'Check interval' has elapsed, a file system check is performed at the next boot. We can however manipulate the 'Mount count' value ourselves and hence force a file system check to take place on the next boot.

For example to set the max-mount-counts and mount-count values we can issue:

tune2fs -c 5 -C 5 /dev/mapper/fedora-root

(This would force a file system check on next boot.)

Where -c = max-mount-counts and -C = mount-count

To disable checks on a specific file system we would issue:

tune2fs -c 0 -C 0 /dev/mapper/fedora-root
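
To review the current values at any point we can simply filter the tune2fs output - for example:

tune2fs -l /dev/mapper/fedora-root | grep -i 'mount count'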




Thursday 5 January 2017

Using sar to monitor activity counters with Fedora 25

sar allows you to view various different activity counters on your system, such as read and write speeds and memory information.

It is also a useful tool when troubleshooting potentially abnormal system performance since it provides you with a historical overview of average performance - unlike real-time utilities such as iotop and htop.

On Fedora 25 you will first need to install the sysstat package:

sudo dnf install sysstat

On initial launch I received the following error message:

Cannot open /var/log/sa/sa05: No such file or directory
Please check if data collecting is enabled

It turns out that sysstat is not automatically enabled upon installation:

sudo systemctl enable sysstat
sudo systemctl start sysstat

Digging a little deeper we can find out when sysstat is invoked:

cat /etc/systemd/system/sysstat.service.wants/sysstat-collect.timer

# (C) 2014 Tomasz Torcz <tomek@pipebreaker.pl>
#
# sysstat-11.3.5 systemd unit file:
#        Activates activity collector every 10 minutes

[Unit]
Description=Run system activity accounting tool every 10 minutes

[Timer]
OnCalendar=*:00/10

[Install]
WantedBy=sysstat.service

We can see from the output that sysstat collects data from the activity counters every 10 minutes by default.

Now to retrieve disk statistics we can issue:

sar -d

Unfortunately sar does not output the devices in an easily identifiable fashion (e.g. using block device names such as /dev/sda) - instead we get something like:

                 DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
Average:      dev8-0    354.10      1.31   3504.20      9.90      1.46      4.12      2.78     98.29

To map the sar output to our disks / lvm volumes we issue:

ls -lL /dev/mapper /dev/sd*

brw-rw----. 1 root disk 8,  0 Jan  5 13:53 /dev/sda

The '-l' switch gives us a long listing (file owner, group, modification time and so on), while '-L' dereferences symlinks such as the /dev/mapper entries.

We want to pay particular attention to the column where the file size would normally appear - for block devices this instead shows the major and minor device numbers, '8, 0' in this case - which corresponds to 'dev8-0' in the sar output, i.e. the /dev/sda block device.

sar -b will provide you with an overview of overall I/O rates:

15:00:00          tps      rtps      wtps   bread/s   bwrtn/s
15:10:00       231.64     21.99    209.65    218.40   3436.20

To get an overview of CPU performance we can issue:

sar -u

15:00:00        CPU     %user     %nice   %system   %iowait    %steal     %idle
15:10:00        all      2.47      0.01      0.61     13.69      0.00     83.21

and memory with:

sar -r

15:00:00   kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
15:10:00      9642072   6701572     41.00    247968   2134388  11042116     44.91   3313348   2980900      2632

sar can also be used to provide real-time statistics as follows (where a report is generated every 1 second, 5 times):

sar -d 1 5
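
Historic data can also be read back from a specific day's file with the '-f' switch (the sa05 file from the earlier error message corresponds to the 5th day of the month):

sar -d -f /var/log/sa/sa05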

Wednesday 4 January 2017

Troubleshooting packet loss and performance issues with the Cisco ASA

We should firstly identify whether there are any problems with the CPU or memory with:

show cpu

Typically if you are hitting an average of 80% or above you should be concerned.

and

show memory

You should also identify any interface errors with:

show interface | i errors

and reissue it after 30 seconds or so to identify whether the error counts are increasing or not.

We should also get a rough idea of the traffic throughput while we are experiencing the problem by firstly clearing the existing traffic information with:

clear traffic

and then after 5 minutes or so running the following command:

show traffic

We should also check connection counts / limits with:

show conn count

Pay attention to the limit vs the current connection count.

For a more granular overview of activity such as NAT translations, TCP connections etc. we can use the 'show perfmon' command to see the rates per second:

show perfmon

We can issue the 'show blocks' command to check whether the ASA is running short of memory blocks:

show blocks

The 'CNT' column tells us how many blocks of each size are currently available to the device - if any are at zero, or frequently hit zero, packets requiring those blocks will be dropped.

Tuesday 3 January 2017

Troubleshooting calendar sharing on Exchange / Office 365 hybrid

Firstly create a test user in our Exchange environment e.g.:

cd "C:\Program Files\Microsoft\Exchange Server\V15\scripts"

or for Exchange 2010:

cd "C:\Program Files\Microsoft\Exchange Server\V14\scripts"

./New-TestCasConnectivityUser.ps1

and test calendar connectivity with one of your client access servers with:

Test-CalendarConnectivity | FL

and also check it is enabled on the OWA virtual directory with:

Get-OWAVirtualDirectory | Select Name, Server, Calendar*

Check WSSecurityAuthentication is set to true on both sides:

Get-AutodiscoverVirtualDirectory | fl name, server, *url, ExternalAuth*, WSSecurityAuthentication

Ensure that the organizational relationship is set up correctly:

Get-OrganizationRelationship | Select Identity, Name

Test-OrganizationRelationship -UserIdentity user@domain.com -Identity "Office 365 to on-premise" -Confirm -Verbose