Tuesday 28 February 2017

Setting up a Cisco 2960X Switch Stack

This tutorial will demonstrate how to get a pair of Cisco 2960-X switches up and running as a stack using FlexStack.

We'll be using the following for this lab:

2x WS-C2960X-24TD-L
2x 2960-X FlexStack stacking modules
2x Bladeswitch stacking cables

I will be creating two stack connections between the pair (the 2960-X supports up to 8 stack members), since this provides redundancy if one of the stacking cables fails and also higher overall throughput:



A single FlexStack connection on the 2960-X equates to 20Gbps (full duplex) - with the additional stack connection we get 40Gbps - and this can go up to 80Gbps with additional stack members.

There are compatibility considerations to take into account when building switch stacks - most notably the stack protocol version should be the same across all stack members. This does not necessarily mean the IOS versions have to be identical (although personally I'd strongly recommend it!)

The second issue relates to hardware - more specifically the model. For example, the C2960-XR will not stack with a C2960-S - however stacking a C2960-X with a C2960-S is perfectly feasible. (1)

Once the stack cables are connected and the switches have booted, we'll console into them:

screen /dev/ttyUSB0 9600

Skip the initial configuration wizard and run the following to identify whether the stack has been formed:

show switch

Switch/Stack Mac Address : fa01.c222.3333
                                           H/W   Current
Switch#  Role   Mac Address     Priority Version  State
----------------------------------------------------------
 1       Member f811.5555.6666     1      4       Ready
*2       Master fa01.c222.3333     1      4       Ready

The switch that you are currently consoled into is marked with an asterisk - so in my case I am on the master switch.

There are a number of rules (for example: an existing master of the stack, hardware / software priorities) that determine how the stack master election is performed - although in our case the master was elected simply because one of the switches had a greater up-time than the other. Please refer to the sources / further reading section for a more detailed explanation. (2)

To confirm that both stack connections are in place, we should issue:

show switch stack-ports
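Each member should report both of its stack ports as 'Ok' - illustrative output (values will obviously vary) looks something like:

  Switch #    Port 1       Port 2
  --------    ------       ------
    1           Ok           Ok
    2           Ok           Ok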

Configuring a backup-master switch

It's never a bad idea to configure a backup master for the stack - by default a new switch joining the stack is assigned a priority of 1, so in my case, since I've not modified these values, both switches are currently set to 1.

The priority level can be between 1 and 15, with the greater value taking precedence - for example I will set my master switch to 14 and the secondary to 13. This is not strictly necessary in a two-node stack, but it would come in handy if we add an additional switch to the stack at a later date.

To set the switch priority we can issue:

switch 2 priority 14
switch 1 priority 13

We can also renumber the switches with:

switch 2 renumber 1
switch 1 renumber 2

Note that the new switch numbers only take effect after the relevant switch has been reloaded.

Adding additional switches to the stack

With the 2960-X switches you can easily add an additional switch to the stack, even while the stack is already powered on - although once connected, the new switch will automatically reboot itself so that it joins with a clean configuration.

Firmware Upgrades

One of the pretty cool features is the ability to perform a firmware upgrade of all switches via one command on the master node:

archive download-sw tftp://192.168.1.1/c2960x-universalk9-mz.152-2.E6

* Note: to perform the above, the image must be a tar archive (not the standalone .bin)!

or to upgrade individually we can do the standard:

copy tftp://10.0.0.1/c2960x-universalk9-mz.152-2.E6.bin flash:/c2960x-universalk9-mz.152-2.E6.bin

boot system flash:/c2960x-universalk9-mz.152-2.E6.bin

do wri mem

reload

During this process the master node is tasked with ensuring that every other switch in the stack is upgraded successfully.

Command Execution

By default all commands are executed on the master switch. In order to run commands on a specific stack member, issue the 'session' command along with the switch number, e.g.:

session 1

and then simply type exit to go back to the master switch again.

Cross-stack etherchannel 

Now - the stack cables provide redundant connectivity between the two switches in the stack - however we also want to ensure that the uplinks have redundant connectivity to the switch stack. This involves creating a 'cross-stack' EtherChannel, which means an uplink (SFP+ / 10G port) on each stack member will hook up to the upstream switch (you'll have to forgive the basic diagram as I didn't have anything decent to draw this on right now):
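As a rough sketch of what the port-channel side of this might look like - assuming the uplinks used are Te1/0/1 on member 1 and Te2/0/1 on member 2, and that LACP is used towards the upstream switch (both assumptions for illustration only):

! on the stack - bundle one uplink from each stack member into a single port-channel
interface range TenGigabitEthernet1/0/1, TenGigabitEthernet2/0/1
 description Uplink to upstream switch
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk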


To be continued...



Sources

(1) Cisco Catalyst 2960-S, 2960-X, and 2960-XR Stacking with FlexStack: http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-2960-x-series-switches/white_paper_c11-728327.html

(2) Stack Master Election:
http://www.cisco.com/c/en/us/support/docs/switches/catalyst-3750-series-switches/71925-cat3750-create-switch-stks.html#anc11

(3) Catalyst 2960-X Switch Layer 2 Configuration Guide, Cisco IOS Release 15.0(2)EX
http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst2960x/software/15-0_2_EX/layer2/configuration_guide/b_lay2_152ex_2960-x_cg/b_lay2_152ex_2960-x_cg_chapter_010.html

Monday 27 February 2017

Fixed: Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" for object "datastoreSystem-xxxx" on vCenter Server "VC FQDN" failed.

The other day when attempting to create a new VMFS datastore on an ESXi (6.0) host I encountered the following message:

Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" for object "datastoreSystem-xxxx" on vCenter Server "VC FQDN" failed.

On further investigation, it turns out that this usually happens when the partition table is not GPT / msdos or is invalid in some way. This is quite common when re-using old disks that were previously used by other systems.

Note: ESXi 5.0+ will only work with either GPT or msdos partition tables.

However, the disk I was working with was brand new and was in fact a RAID volume created by the HP Storage Administrator utility - I'm not sure whether the HP tool created a valid GPT partition table or not, but either way ESXi did not like it.

So instead we have to re-create a GPT partition table ourselves (GPT rather than msdos, since the disk I was working with was ~4TB and msdos/MBR tops out at 2TB.)

Firstly, let's enable SSH on the ESXi host - once in, we can issue the following command to list the disks connected to the host, so we can identify the relevant disk UID:

esxcli storage core device list

You are looking for the devfs path e.g.:

/vmfs/devices/disks/naa.220118b1100111p04kf39111111

We can then write a GPT partition table to the disk with:

partedUtil mklabel /vmfs/devices/disks/naa.220118b1100111p04kf39111111 gpt

or for an msdos partition table:

partedUtil mklabel /vmfs/devices/disks/naa.220118b1100111p04kf39111111 msdos
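To verify the new partition table has been written we can read it back with:

partedUtil getptbl /vmfs/devices/disks/naa.220118b1100111p04kf39111111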

Finally attempt to re-create the datastore - but remember to re-scan the disks!

Sources:
Identifying disks when working with VMware ESXi/ESX (1014953): https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014953

Troubleshooting the Add Storage Wizard error: Unable to read partition information from this disk (1008886): https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008886

Wednesday 22 February 2017

Setting up SNAT with IPTables / CentOS 7 (NAT)

This tutorial will demonstrate how SNAT can be set up in a common configuration where we have an internal subnet / interface (eno1) and an external / internet interface (wlp2s0), and we want to forward traffic from clients on the internal subnet out to the internet - while ensuring traffic is NAT'd as it leaves the egress (internet) interface.

Let's firstly enable ip forwarding:

echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf

sudo sysctl -p

Next, reset the IPTables rules. First, set the default policy for the filter table chains to ACCEPT:

sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

Flush all tables:
sudo iptables -F -t filter
sudo iptables -F -t nat
sudo iptables -F -t mangle
sudo iptables -F -t raw

Ensure traffic from eno1 is masqueraded as it leaves wlp2s0 - so that return traffic can find its way back:
iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE

Allow eno1 to forward traffic to wlp2s0:
iptables -t filter -A FORWARD -i eno1 -o wlp2s0 -j ACCEPT

and the return traffic from wlp2s0 to eno1:
iptables -t filter -A FORWARD -i wlp2s0 -o eno1 -j ACCEPT

and block any other forwarding traffic:
iptables -t filter -A FORWARD -j DROP

Now try and ping a remote host from the internal device - if all goes to plan you should get a response back. If you encounter problems you might want to set up IPTables to log dropped packets to help you diagnose where exactly things are going wrong.
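For example, one (hypothetical) way of doing this is to insert a LOG rule just above the final DROP - anything that reaches it will then show up in /var/log/messages:

iptables -t filter -I FORWARD 3 -j LOG --log-prefix "FORWARD-DROP: "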

It goes without saying - but the final task is to tighten up the IPTables rules e.g. the INPUT/OUTPUT chains in the filter table.

Setting up wireless bridge with CentOS 7, IPTables and dhcpd

This is useful in situations where you need to connect hardware appliances that do not have any means of connecting to a wireless network.

Yes - you can buy 'off the shelf' devices to do this - although I refuse to buy any such device since it's actually pretty easy to perform on a normal computer (in my case an Intel NUC.)

I only had a single port on my NIC - so I ended up using a wireless USB dongle.

So - let's firstly install CentOS 7 (minimal) onto the hardware we're going to use - I'll do this via USB boot, so to write the image to the USB stick we can do something like:

sudo dd if=CentOS-7-x86_64-Minimal-1611.iso of=/dev/sdc; sync
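Note that /dev/sdc above is just an example - before running dd, double check which device node the USB stick has been given with something like:

lsblk -o NAME,SIZE,MODEL,TRAN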

Once we have installed the base OS we'll configure the WiFi - we'll also ensure the kernel has picked up the WiFi adapter:

dmesg | grep -i usb

If detected - we should see it with:

nmcli device show

To check the device radio / check for available wireless networks we can issue:

nmcli device wifi list

Let's say that our SSID is 'WirelessNetwork' - in order to connect to it we will use the 'connection add' switch:

nmcli connection add ifname wlp2s0 type wifi ssid WirelessNetwork

to delete a connection - find out what it's been named with:

nmcli connection show

and delete with:

nmcli connection del wifi-wlp2s0

You can also use the 'con-name' switch if you wish to have connections to different wireless networks, e.g.:

nmcli connection add con-name WirelessNetwork ifname wlp2s0 type wifi ssid WirelessNetwork

We can then add authentication information (in our case we're using a pre-shared key):

nmcli con modify wifi-wlp2s0 wifi-sec.key-mgmt wpa-psk
nmcli con modify wifi-wlp2s0 wifi-sec.psk <password>

Ensure wifi is turned on with:

nmcli radio wifi

and if needed turn it on with:

nmcli radio wifi on

To review our wifi connection in more detail we can issue:

nmcli connection show wifi-wlp2s0

Finally to activate our configuration we should issue:

nmcli connection up wifi-wlp2s0

Running 'nmcli connection' you will notice that it's now green - indicating that you have successfully connected to the network.

Now let's ensure IPTables is installed - refer to my post here for that:

http://blog.manton.im/2016/03/working-with-iptables-on-centos-7.html

Now - for this tutorial we'll also assume the client device that needs access to the wireless network can't be manually configured with a static IP and hence will require DHCP.

So - we'll configure a DHCP server to run on our ethernet interface (eno1):

sudo ip addr add 10.11.12.1/24 dev eno1
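Bear in mind that an address added with 'ip addr add' won't survive a reboot - to make it persistent you could put something along these lines into /etc/sysconfig/network-scripts/ifcfg-eno1 (the values simply mirror the lab addressing above):

DEVICE=eno1
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.11.12.1
PREFIX=24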

sudo yum install dhcp

and add something like the following to dhcpd.conf:

vi /etc/dhcp/dhcpd.conf

# name server(s)
option domain-name-servers 8.8.8.8;

# default lease time
default-lease-time 600;

# max lease time
max-lease-time 7200;

# this DHCP server to be declared valid
authoritative;

# specify network address and subnet mask
subnet 10.11.12.0 netmask 255.255.255.0 {
    # specify the range of lease IP address
    range dynamic-bootp 10.11.12.10 10.11.12.254;
    # specify broadcast address
    option broadcast-address 10.11.12.255;
    # specify default gateway
    option routers 10.11.12.1;
}

Enable and start dhcpd with:

sudo systemctl enable dhcpd
sudo systemctl start dhcpd

and check the log to ensure that it is up and running / bound to the correct interface (it should automatically pick up which interface to listen on based on the subnets defined in dhcpd.conf):

sudo tail -n 30 /var/log/messages | grep dhcpd

Now plug eno1 into a switch and the device in question into a port on the same VLAN - hopefully the device should now pick up an IP from the new scope we defined on the DHCP server. If it hasn't, tcpdump is a very useful tool for diagnosing DHCP / BOOTP related problems.
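For example, a quick capture of the DHCP exchange on the internal interface can be taken with:

sudo tcpdump -i eno1 -n port 67 or port 68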

The last step is to setup NAT'ing rules - so lets firstly enable ip forwarding:

echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf

sudo sysctl -p

Next, reset the IPTables rules. First, set the default policy for the filter table chains to ACCEPT:

sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

Flush all tables:
sudo iptables -F -t filter
sudo iptables -F -t nat
sudo iptables -F -t mangle
sudo iptables -F -t raw

Ensure traffic from eno1 is masqueraded as it leaves wlp2s0 - so that return traffic can find its way back:
iptables -t nat -A POSTROUTING -o wlp2s0 -j MASQUERADE

Allow eno1 to forward traffic to wlp2s0:
iptables -t filter -A FORWARD -i eno1 -o wlp2s0 -j ACCEPT

and the return traffic from wlp2s0 to eno1:
iptables -t filter -A FORWARD -i wlp2s0 -o eno1 -j ACCEPT

and block any other forwarding traffic:
iptables -t filter -A FORWARD -j DROP

Now try and ping a remote host from the internal device - if all goes to plan you should get a response back. If you encounter problems you might want to set up IPTables to log dropped packets to help you diagnose where exactly things are going wrong.

It goes without saying - but the final task is to tighten up the IPTables rules e.g. the INPUT/OUTPUT chains in the filter table.

Connecting to wireless networks with nmcli in CentOS 7

For this tutorial I'll be using a USB wifi dongle - so we'll firstly check the kernel, ensuring that it's picked it up OK:

dmesg | grep -i usb

If detected - we should see it with:

nmcli device show

To check the device radio / check for available wireless networks we can issue:

nmcli device wifi list

Let's say that our SSID is 'WirelessNetwork' - in order to connect to it we will use the 'connection add' switch:

nmcli connection add ifname wlp2s0 type wifi ssid WirelessNetwork

to delete a connection - find out what it's been named with:

nmcli connection show

and delete with:

nmcli connection del wifi-wlp2s0

You can also use the 'con-name' switch if you wish to have connections to different wireless networks, e.g.:

nmcli connection add con-name WirelessNetwork ifname wlp2s0 type wifi ssid WirelessNetwork

We can then add authentication information (in our case we're using a pre-shared key):

nmcli con modify wifi-wlp2s0 wifi-sec.key-mgmt wpa-psk
nmcli con modify wifi-wlp2s0 wifi-sec.psk <password>

Ensure wifi is turned on with:

nmcli radio wifi

and if needed turn it on with:

nmcli radio wifi on

To review our wifi connection in more detail we can issue:

nmcli connection show wifi-wlp2s0

Finally to activate our configuration we should issue:

nmcli connection up wifi-wlp2s0

Running 'nmcli connection' you will notice that it's now green - indicating that you have successfully connected to the network.

Friday 17 February 2017

Understanding the SCSI sg and sd driver in Linux

The sg driver allows users to send arbitrary SCSI commands to SCSI-aware devices - for example to scan for disks. Once the scan has completed, the SCSI disk driver (sd) can be initialised, providing block level access to the media - typically only performing 'SCSI READ' and 'SCSI WRITE' commands.

In order to retrieve a list of SCSI devices we can use the 'lsscsi' command (I prefer this to the usual lsblk since it conveniently provides additional information such as the SCSI bus, channel and LUN numbers etc.)

lsscsi

[0:0:0:0]    disk    ATA      WDC WD1111111-111 1A01  /dev/sda
[1:0:0:0]    cd/dvd  HL-DT-ST DVD-RAM GHC0N    MA01  /dev/sr0
[4:0:0:0]    disk    Generic- Multi-Card       1.00  /dev/sdb

The sg driver is commonly used to interact with devices such as scanners and CD-ROM drives.

Interestingly you might have noticed that almost all disk types use the sd driver - even though quite often these disks are ATA or SATA drives. The sd driver abstracts over the many different disk types - so in fact it is compatible with a whole host of different underlying transports.
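If you want to see which sg device node corresponds to each sd / sr device, the '-g' switch on lsscsi appends the generic (sg) device to each row:

lsscsi -g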

Quickly creating a (block) full backup of a USB mass storage device with dd

Are you ever running around looking for a spare USB drive? Ever wanted to quickly borrow a USB drive that already has data / partitions on it for a few hours? Try creating a quick block copy of the device with dd:

dd if=/dev/sdd of=usb_backup.img; sync

We use the 'sync' command to ensure that any buffers are flushed out to disk.

and to restore the image back onto the USB drive after use:

dd if=usb_backup.img of=/dev/sdd; sync

Thursday 16 February 2017

Working with extended attributes in Linux

Filesystem attributes on Linux file systems such as ext3, ext4 and xfs allow us to provide enhanced security to our files.

In order for extended attributes to work properly the filesystem (and kernel) must support them - you can easily check whether the filesystem supports them by checking the mount options with:

sudo tune2fs -l /dev/mapper/fedora-home  | grep xattr

Default mount options:    user_xattr acl

If it is not enabled you can easily add the 'user_xattr' option to the appropriate mount in the fstab.

Below I will describe some of the more common attributes:

chattr +i /etc/importantfile.conf

The 'i' attribute stands for immutable and prevents the file from being modified, renamed or deleted.

chattr +u /etc/importantfile.conf

The 'u' option stands for undelete and allows the user to recover the file after deletion.

chattr +c /var/log/mybiglog.log

The 'c' option stands for compression and the kernel will compress the file before writing any changes to disk.

In the same way attributes can easily be removed from a file with:

chattr -i /etc/importantfile.conf

We also have extended attributes that allow you (or rather programs) to create custom attributes. There are four namespaces these extended attributes are divided up into:

- User
- System
- Security
- Trusted

By simply running getfattr with the '-d' switch we can view all of the (user namespace) extended attributes on a file:

getfattr -d /etc/passwd

You will usually not see a lot - although I noticed that files sent to you on Skype for Linux have the following user attribute added to them:

user.xdg.origin.url="https://weu1-api.asm.skype.com/v1/objects/<removed>/views/original"

Which appears to document where it was downloaded from on Skype's servers.

We can also check other namespaces with the '-m' switch - for example to check the 'security' namespace:

getfattr -d -m security /etc/passwd

In this instance it returns an extended attribute that appears to be used by SELinux:

security.selinux="system_u:object_r:passwd_file_t:s0"

We can also set a custom attribute manually with:

setfattr -n user.example -v example /tmp/testfile
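and read it back to verify with the '-n' switch:

getfattr -n user.example /tmp/testfile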

and remove it with:

setfattr -x user.example /tmp/testfile

Wednesday 15 February 2017

Automatically running a script on system startup with RHEL / CentOS 7

CentOS 7 has adopted systemd, replacing the need for SysV init. This begs the question of how we can easily add a script to run on startup - traditionally this was pretty easy to do with the rc.local file:

/etc/rc.local (Debian based distros - and present as a symlink on RHEL based distros)

or

/etc/rc.d/rc.local (the actual file on RHEL based distros)

In fact systemd actually maintains backward compatibility for old SysV init scripts (/etc/init.d/) by using the systemd-sysv-generator utility. We can manually invoke this with:

/usr/lib/systemd/system-generators/systemd-sysv-generator

The init scripts get called just after the basic.target is executed.

By default the rc.local file is not present - so let's create it and ensure that it's executable:

sudo touch /etc/rc.local

Since the file is treated like a script we should add the relevant interpreter at the beginning of the file for example:

#!/bin/bash
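So a very minimal rc.local might end up looking something like this (the 'logger' line is just a placeholder for whatever you actually need to run at boot):

#!/bin/bash
# anything below runs once towards the end of the boot process
logger "rc.local: running custom startup tasks"
exit 0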

Set the execution bits:

sudo chmod +x /etc/rc.local

and finally enable the rc.local service with:

sudo systemctl enable rc-local.service

Although this failed with the following message:

The unit files have no installation config (WantedBy, RequiredBy, Also, Alias
settings in the [Install] section, and DefaultInstance for template units).
This means they are not meant to be enabled using systemctl.

This is in fact because the service is static - i.e. static services have no [Install] section and are typically used as dependencies for other services.

So after some digging it turns out that the service does not need to be enabled (after inspecting the service file):

cat /usr/lib/systemd/system/rc-local.service

....
# This unit gets pulled automatically into multi-user.target by
# systemd-rc-local-generator if /etc/rc.d/rc.local is executable.
...

So it turns out that simply setting the executable bit on rc.local is enough for it to be pulled into the multi-user target.

  

Setting up replication with GlusterFS on CentOS 7

GlusterFS is a relatively new (but promising) file system aimed at providing a scalable network file system for typically bandwidth-intensive tasks - such as media streaming, file sharing and so on. There are other alternatives I could have used instead - such as GPFS - although unless you have a pretty substantial budget not many businesses will be able to adopt that.

Let's firstly set up our GlusterFS volume - on node A:

yum install centos-release-gluster
yum install glusterfs-server

echo GLUSTER01 > /etc/hostname
echo 10.0.0.2 GLUSTER02 >> /etc/hosts
echo 10.0.0.1 GLUSTER01 >> /etc/hosts

sudo systemctl enable glusterd
sudo systemctl start glusterd

We'll also need to permit access to the following ports for GlusterFS - i.e. from node A to B and back:

111 / tcp
24007 / tcp - GlusterFS daemon
24008 / tcp - GlusterFS management
38465 to 38467 / tcp - GlusterFS NFS service
49152 / tcp and upwards - one port per brick

which translates to:

iptables -t filter -I INPUT 3 -p tcp --dport 111 -j ACCEPT
iptables -t filter -I INPUT 3 -p tcp --dport 24007:24008 -j ACCEPT
iptables -t filter -I INPUT 3 -p tcp --dport 38465:38467 -j ACCEPT
iptables -t filter -I INPUT 3 -p tcp --dport 49152 -j ACCEPT
sudo iptables-save > /etc/sysconfig/iptables

otherwise you might get:

peer probe: failed: Probe returned with unknown errno 107

Rinse and repeat for the second server.

Now let's check connectivity with each gluster peer:

GLUSTER01> gluster peer probe GLUSTER02
GLUSTER02> gluster peer probe GLUSTER01

We can then check the peer(s) status with:

gluster peer status
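On GLUSTER01 this should return something along the lines of (the UUID will obviously vary):

Number of Peers: 1

Hostname: GLUSTER02
Uuid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
State: Peer in Cluster (Connected)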

Let's now add an additional drive to each host - let's say udev names it /dev/sdb:

parted /dev/sdb
mktable msdos
mkpart pri xfs 0% 100%

mkfs.xfs -L SHARED /dev/sdb1

Then add the new filesystem to fstab - replacing the UUID below with the one reported by 'blkid /dev/sdb1':

echo 'UUID=yyyyyy-uuuu-444b-xxxx-a6a31a93dd2d /mnt/shared xfs defaults 0 0' >> /etc/fstab
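and then create the mount point and mount it:

mkdir -p /mnt/shared
mount /mnt/shared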

Rinse and repeat for the second host.

Create the volume with:

gluster volume create datastore replica 2 transport tcp GLUSTER01:/mnt/shared/datastore GLUSTER02:/mnt/shared/datastore

and then start it with:

gluster volume start datastore

To check the status of the volume use (on any node):

gluster volume info

We should also set the access permissions - for example, to allow the 172.30.0.0/16 network:

gluster volume set datastore auth.allow "172.30.*"

We can now mount the glusterfs filesystem on the client as follows:

sudo modprobe fuse
sudo yum install fuse fuse-libs openib libibverbs -y
sudo yum install  glusterfs-client -y

mkdir /mount/gluster
mount -t glusterfs GLUSTER01:datastore /mount/gluster

and on GLUSTER02:

sudo modprobe fuse
sudo yum install fuse fuse-libs openib libibverbs -y
sudo yum install  glusterfs-client -y

mkdir /mount/gluster
mount -t glusterfs GLUSTER02:datastore /mount/gluster

Then we can test the replication from GLUSTER01 e.g.

touch  /mount/gluster/test.txt

If it's present on GLUSTER02 we have success!

If things don't quite work the first time you can tail the log files to help troubleshoot:

tail -f /var/log/glusterfs/<name>.log

Tuesday 14 February 2017

Warning: The resulting partition is not properly aligned for best performance.

When creating a new partition on an old disk via parted I received the following warning:

mkpart pri 0 -1

Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel?

This message appears because the partition does not start on a boundary that lines up with the drive's physical sectors - as a result a single logical sector can end up straddling two physical sectors, so reading or writing it means touching both physical sectors and performance suffers.

A sector is the smallest unit on a disk and will vary in size from drive to drive - although most modern drives now use 4kb / 4096 byte sectors rather than 512 byte sectors, to save physical space on the platters. This is because each sector carries a small amount of error correction data - so instead of eight 512 byte sectors each with their own error correction data, you have a single 4096 byte sector with one lot of error correction data.

Blocks, in contrast, are typically made up of multiple sectors and are effectively a way of abstracting the physical sectors.

We can get the sector size from the sys filesystem:

cat /sys/block/sda/queue/hw_sector_size
512
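The physical sector size can be read from the same place (an 'advanced format' drive will typically report 4096 here):

cat /sys/block/sda/queue/physical_block_size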

We can then try specifying the start sector explicitly:

parted /dev/sdb
mkpart pri 512s 100%

However this still failed!

Instead I found that using percentages, rather than explicit start / end sectors, does the trick and lets parted work out the alignment itself - for example:

parted /dev/sdb
mkpart pri xfs 0% 100%

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  10.7GB  10.7GB  primary

Tuesday 7 February 2017

A quick note on the /var/run directory

You learn something new every day - after manually creating a directory under /var/run I noticed that the directory did not persist across a reboot.

Applications' startup scripts are expected to create any folders they need under /run at boot.

Now normally the /run file system is mounted as a tmpfs - you can easily verify this with:

df -h | grep /run

Although on closer inspection /var/run is actually a symbolic link to /run!

So in order to ensure that the directory for our service (haproxy in this case) exists at boot, it needs to be created by the service's startup script.
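On systemd based distros the cleaner option is a tmpfiles.d snippet - a minimal sketch (the path and ownership here are assumptions based on a typical haproxy install) would be to drop the following into /etc/tmpfiles.d/haproxy.conf and run 'systemd-tmpfiles --create':

# create /run/haproxy at boot, owned by the haproxy user
d /run/haproxy 0755 haproxy haproxy -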

Or in my case - I simply changed the socket file path to use the parent directory /run instead of /run/haproxy e.g.:

stats socket /run/haproxy_admin.sock mode 660 level admin

sudo service haproxy restart && sudo service haproxy status


Getting your logs into AWS CloudWatch on CentOS 7

This tutorial will demonstrate how you can securely get your logs from your applications into the AWS CloudWatch service.

For this tutorial we will be forwarding specific syslog messages to CloudWatch (I would like to capture RADIUS AAA information.)


Firstly, and most importantly, let's set up a secure IAM policy to ensure that we grant minimal access permissions to the host machine:

IAM >> Add User >> Let's call it 'remoteaccess' - we'll untick 'AWS Management Console access' as this won't be necessary for our needs.

We'll create a new group called 'Logging' and then finish the user creation.

Now click on the 'Groups' tab in the left-hand navigation pane and open the newly created 'Logging' group. Hit the permissions tab and expand the 'Inline Policies' >> Create >> Custom Policy and name it 'CloudWatchAccess' and add:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}

The next step is to install the 'CloudWatch Logs' service - as we are on CentOS 7 we will need to install it manually:

cd /tmp

wget https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py

sudo python ./awslogs-agent-setup.py --region <region-name>

Entering in your access key, secret key and path to the logs - which in my case will be:

/var/log/aaa

If you wish to manually change the access key etc. at a later date we can simply issue:

aws configure

to modify the logging settings we can modify:

/var/awslogs/etc/awslogs.conf
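The file is INI-style - as a rough illustration (the section name, group and stream names below are just examples for the /var/log/aaa file used above):

[general]
state_file = /var/awslogs/state/agent-state

[/var/log/aaa]
file = /var/log/aaa
log_group_name = /var/log/aaa
log_stream_name = {instance_id}
datetime_format = %b %d %H:%M:%S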

and to help debug any problems we can tail:

/var/log/awslogs.log

and start the service with:

sudo service awslogs start

Source: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html

Friday 3 February 2017

Error: libsemanage.semanage_direct_remove_key: Unable to remove module radius_custom at priority 400. (No such file or directory).

The other day I received the following error message when attempting to remove a custom SELinux policy module with 'semodule -r radius_custom':

libsemanage.semanage_direct_remove_key: Unable to remove module radius_custom at priority 400. (No such file or directory).
semodule:  Failed!

Clearly it was looking in the wrong place for the module (in hindsight I should have used strace to work out where it was trying to access) - although I ended up having to manually move the module:

mv /etc/selinux/targeted/active/modules/400/radius_custom /etc/selinux/targeted/active/modules/disabled

and then reload the policies with:

semodule -R

Thursday 2 February 2017

Setting up LinOTP on CentOS 7 with FreeRADIUS (Version 3)

Currently the LinOTP documentation does not explain exactly how to get FreeRADIUS 3 up and running with its perl module.

Some notes:

There is no need to populate the 'users' file (/etc/raddb/users)

Instead refer below for sample configuration that will work with FreeRADIUS 3:

https://groups.google.com/forum/#!topic/privacyidea/O2wdnmxIFNw

You will also need to install some additional dependencies for the LinOTP perl module:

sudo cpan LWP::Protocol::https

sudo yum install perl-Crypt-SSLeay perl-Net-SSLeay

I had to make extensive use of FreeRADIUS debug mode and the httpd error log:

radiusd -XXX

tail -f /var/log/httpd/httpd_error

Also if you have SELinux enabled you should keep in mind that access to the LinOTP server via the script will likely fail - to review:

ausearch -m avc -ts today | audit2allow

Another problem I encountered was issues with different versions of the Perl CARP module:

Thu Feb  2 13:57:46 2017 : Error: rlm_perl: perl_embed:: module = /etc/raddb/mods-config/perl/privacyidea.pm , func = authenticate exit status= Undefined subroutine &Carp::authenticate called at /usr/share/perl5/vendor_perl/Carp.pm line 100.  

Fortunately 

Wednesday 1 February 2017

Using fatrace to monitor calls to specific directories / files on CentOS 7

Currently fatrace is not available within the EPEL repos for CentOS 7 - so we must instead add the Fedora COPR repo:

cd /tmp

curl https://copr.fedorainfracloud.org/coprs/ifas/fatrace/repo/epel-7/ifas-fatrace-epel-7.repo > /etc/yum.repos.d/ifas.repo

yum install fatrace

and then to monitor the current mount for open handles we can issue:

sudo fatrace -f O -c

The '-f' parameter specifies that we only wish to see 'open' events - the available event types are as follows:

C = Create file
R = Read file
O = Open file
W = Write to file
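Output is one line per event in the form 'process(pid): event path' - so you can expect to see something along the lines of:

bash(2537): O /etc/hostname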