By default, when ISAKMP / IPSec is enabled on an interface, the ASA permits access to the service (UDP 500, UDP 4500 and ESP) from everyone. However, in circumstances where you can reliably predict the source of VPN initiators, you should ideally lock access down. Unfortunately this can't be performed by applying an ACL to the interface and instead needs to be performed via the control plane.
We'll firstly need to obtain a list of the IPs in the tunnel groups and add them to an ACL e.g.:
access-list outside-control-plane extended permit udp host <REMOTE PEER #1> host <ASA-VPN-ENABLED-INTERFACE> eq 500
access-list outside-control-plane extended permit udp host <REMOTE PEER #2> host <ASA-VPN-ENABLED-INTERFACE> eq 500
access-list outside-control-plane extended deny udp any any eq 500
access-list outside-control-plane extended permit udp host <REMOTE PEER #1> host <ASA-VPN-ENABLED-INTERFACE> eq 4500
access-list outside-control-plane extended permit udp host <REMOTE PEER #2> host <ASA-VPN-ENABLED-INTERFACE> eq 4500
access-list outside-control-plane extended deny udp any any eq 4500
access-list outside-control-plane extended permit esp host <REMOTE PEER #1> host <ASA-VPN-ENABLED-INTERFACE>
access-list outside-control-plane extended permit esp host <REMOTE PEER #2> host <ASA-VPN-ENABLED-INTERFACE>
access-list outside-control-plane extended deny esp any any
access-group outside-control-plane in interface outside-pri control-plane
Note: The above examples presume you do NOT have any IPSec VPN servers behind the firewall.
We can also perform the same for SSL VPNs:
access-list outside-control-plane extended permit tcp host <REMOTE PEER #1> host <ASA-VPN-ENABLED-INTERFACE> eq 443
access-list outside-control-plane extended permit tcp host <REMOTE PEER #2> host <ASA-VPN-ENABLED-INTERFACE> eq 443
access-list outside-control-plane extended deny tcp any host <ASA-VPN-ENABLED-INTERFACE> eq 443
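Once applied you can confirm the entries are matching traffic by reviewing the ACL hit counts:
show access-list outside-control-plane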
Thursday, 8 August 2019
Setting up bonding with LACP using the ip command in Linux
This can be accomplished quite quickly with the ip command if you only need it temporarily:
ip link add bond0 type bond
ip link set bond0 down
ip link set bond0 type bond mode 802.3ad
ip link set enp1s0 down
ip link set enp1s0 master bond0
ip link set enp2s0 down
ip link set enp2s0 master bond0
ip link set bond0 up
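If you want to check the result, the bonding driver exposes its state (LACP partner details, slave status and so on) under /proc:
cat /proc/net/bonding/bond0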
and to remove the bonding we can issue:
ip link del bond0
ip link set enp1s0 up
ip link set enp2s0 up
Quickstart: Installing Arch Linux 2019.X
Firstly download the latest iso image from one of the mirrors below:
https://www.archlinux.org/download
wget https://www.mirrorservice.org/sites/ftp.archlinux.org/iso/2019.08.01/archlinux-xxxx.xx.xx-x86_64.iso
and then write it to your preferred media:
dd bs=8M if=archlinux-xxxx.xx.xx-x86_64.iso of=/dev/sdX && sync
Upon booting the image select the default selection to boot Arch.
This will get you into the system under the root user.
The setup portion is a Gentoo-style approach of effectively 'assembling' the system yourself.
From here we'll firstly partition the disks:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdX 8:0 0 1000G 0 disk
In this example we'll create three partitions - one for the root fs, another for our home fs and finally one for swap.
parted -a optimal /dev/sdX
mktable gpt
mkpart ESP fat32 0% 500MB
mkpart root ext4 500MB 250GB
mkpart home ext4 250GB 750GB
mkpart swap linux-swap 750GB 800GB
set 1 boot on
Create the filesystems with:
mkfs.fat -F32 /dev/sdX1
mkfs.ext4 /dev/sdX2
mkfs.ext4 /dev/sdX3
mkswap /dev/sdX4
swapon /dev/sdX4
Proceed by mounting the file systems:
mount -t auto /dev/sdX2 /mnt
mkdir -p /mnt/boot/EFI && mount -t auto /dev/sdX1 /mnt/boot/EFI
mkdir /mnt/home && mount -t auto /dev/sdX3 /mnt/home
We'll need the network up at this point so we can access the Arch repos:
dhclient
and then pull down all the necessary components for the root fs:
pacstrap /mnt base base-devel
Once complete we'll need to generate the fstab for the new system:
genfstab -U /mnt >> /mnt/etc/fstab
and then change our root password and set the hostname by chrooting into the new system:
arch-chroot /mnt
echo arch-box > /etc/hostname
passwd
We'll also configure regional and time settings with:
ln -sf /usr/share/zoneinfo/<region>/<city> /etc/localtime
hwclock --systohc
sed -i 's/^#en_GB.UTF-8/en_GB.UTF-8/' /etc/locale.gen
locale-gen
printf "LANG=en_GB.UTF-8" > /etc/locale.conf
export LANG=en_GB.UTF-8
I'm going to use KDE Plasma for the desktop environment:
pacman -S xorg xorg-server xorg-xinit plasma-meta sddm
Finally we will configure grub:
pacman -S grub efibootmgr dosfstools os-prober mtools
grub-install --target=x86_64-efi --efi-directory=/boot/EFI --bootloader-id=grub_uefi --recheck
grub-mkconfig -o /boot/grub/grub.cfg
Exit the jail:
exit
and restart:
shutdown -r now
Once booted into the new OS we'll set up the network configuration - for this example I'll be setting up DHCP.
With Arch we have a few options for network configuration - either netctl or systemd-networkd (a newer component). Here we'll use netctl:
vi /etc/netctl/enp2s0
Description=LAN interface
Interface=enp2s0
Connection=ethernet
IP=dhcp
Ensure the interface will come up on boot by issuing:
netctl enable enp2s0
Enable and start the DHCP service with:
systemctl enable dhcpcd
systemctl start dhcpcd
and then attempt to start the interface with:
netctl start enp2s0
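Assuming dhcpcd obtains a lease, the address should now be visible on the interface (the interface name will vary by machine):
ip addr show enp2s0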
Friday, 12 July 2019
Using Juniper SRX devices as routers
The SRX series is part of Juniper's security line of products and provides firewalling along with a host of other security features such as IDS and IPS.
However you can effectively use the SRX range as a traditional router by changing the forwarding mode from flow-based (stateful inspection) to packet-based (stateless per-packet inspection).
You can verify the forwarding mode by issuing:
show security flow status
We should firstly ensure we remove any existing security configuration from the device with:
delete security
and then ensure the forwarding mode is set to 'packet based':
set security forwarding-options family mpls mode packet-based
commit it and then reboot:
commit
run request system reboot
Upon restart check the forwarding mode again with:
show security flow status
Thursday, 4 July 2019
Configuring an Etherchannel (with LACP) between JunOS and Cisco IOS
Juniper JunOS
Firstly create the aggregated interface:
edit chassis
set aggregated-devices ethernet device-count 2
top
edit interfaces
set ae0 aggregated-ether-options lacp active
and define the LACP interval period (i.e. the rate at which the device will send / receive LACP protocol messages):
set ae0 aggregated-ether-options lacp periodic fast
The 'lacp periodic fast' setting configures the transmission rate to 1 second.
Note: It's important that the rate is matched on the other end as well.
We'll now associate our interfaces with the aggregated link we've just configured:
edit interfaces
edit ge-0/0/1
set ether-options 802.3ad ae0
exit
edit ge-0/0/2
set ether-options 802.3ad ae0
Cisco IOS
conf t
int range gi1/0/1-2
channel-protocol lacp
channel-group 1 mode active
lacp rate fast
no shut
int po1
description etherchannel
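Once both sides are configured, the bundle state can be verified with the usual commands on each platform:
show lacp interfaces (JunOS)
show etherchannel summary (Cisco IOS)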
Tuesday, 2 July 2019
Base Junos Configuration
The following template will get the fundamental features setup in Junos and act as a base for building more advanced configurations:
# Enter configuration mode
cli
configure exclusive
# Configure root user key / password
set system root-authentication load-key-file
[OR]
set system root-authentication plain-text-password
# Enable remote management
edit system services
activate ssh
activate web-management https
set web-management https port 443
set web-management https system-generated-certificate
set web-management https interface fxp0.0
# Disable insecure services
deactivate telnet
deactivate web-management http
# Setup hostname
top
set system host-name "host01"
# Setup time / date / ntp
set system time-zone Europe/London
exit
set date ntp 1.uk.pool.ntp.org
set cli idle-timeout 10
# Setup new user and assign login class
edit
edit system login
edit user jbloggs
set authentication plain-text-password
set full-name "Joe Bloggs"
set class operator | read-only | super-user
# Create custom login class
set system login class test-class permissions [interface interface-control]
set system login class test-class idle-timeout 10
[OR]
# Configure RADIUS
set system radius-server 10.11.12.254 source-address 10.11.12.1
edit system radius-server 10.11.12.254
set secret <pass-phrase>
set port 1812
# Ensure radius requests originate from the mgmt interface
set routing-instance mgmt_junos
exit
set system authentication-order [radius password]
# Assign a default class for remote users
set system login user remote class super-user
### Setup Layer 3 Interface
# Change physical properties
edit interfaces ge-0/0/1
set speed 10m
set link-mode full-duplex
### Create VLAN
set vlans testvlan vlan-id 123
set vlans testvlan2 vlan-id 456
# Change logical properties
edit interfaces ge-0/0/1 unit 0
set vlan-id 50
edit family inet
set address 1.2.3.254/24
### Setup Access Port
# Change logical properties
edit interfaces ge-0/0/2 unit 0
set family ethernet-switching interface-mode access
set family ethernet-switching vlan members 123
### Setup Trunk Port
edit interfaces ge-0/0/3 unit 0
set family ethernet-switching port-mode trunk vlan members [testvlan testvlan2]
### Syslog Forwarding
* This is performed via the local syslog server rather than the Juniper CLI (messages found in /var/log/messages)
* To edit the configuration from the CLI use 'edit system syslog'.
### Commit changes
commit
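For safety, 'commit check' validates the candidate configuration without applying it, and 'commit confirmed <minutes>' rolls back automatically unless re-confirmed - both standard Junos operations worth using with a template like this:
commit check
commit confirmed 5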
Wednesday, 29 May 2019
Configuring Auto QoS on Cisco Switches
Auto QoS is a great feature included with the majority of switches running at least the LAN Base feature set. It will likely require some further tweaking after it's set up, however it's a great base for applying QoS.
Cisco provides support for its own telephony devices (surprise, surprise!) through CDP broadcasts. However in my case I am working with a different vendor, and since not all switches will provide classification of packets I'm relying on the tagging being performed by the downstream devices.
It's very simple to set up - simply apply the following to the switch ports in scope (i.e. the ones connected to the telephony devices):
conf t
int range gi1/0/1-10
auto qos trust dscp
end
This will instruct the switch ports in scope to trust DSCP markings applied by the downstream devices (as I'm sure you're aware, DSCP markings are typically stripped by default.)
The 'auto qos trust dscp' command also enables QoS globally for us and applies a few other directives on the interface - so a lot of the setup is performed for you - however it's still crucial that you understand what each directive means!
do show run int gi1/0/1
interface GigabitEthernet1/0/1
switchport access vlan 2000
switchport mode access
speed auto
srr-queue bandwidth share 1 30 35 5
priority-queue out
mls qos trust dscp
auto qos trust dscp
spanning-tree portfast
spanning-tree bpduguard enable
To verify QoS is turned on globally we can review:
show mls qos
and to review interface specific QoS information:
show mls qos interface gi1/0/1
We can also test QoS is successfully prioritising packets with iperf (tagging the traffic with a non zero DSCP value) e.g.:
iperf -c 10.11.12.13 -i 1 -S 0xB8 -t 0
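Note this assumes an iperf server is already listening on the far end (10.11.12.13 above), started in the usual way:
iperf -s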
'0xB8' is the hexadecimal equivalent of TOS 184 - which equates to DSCP 'ef' / 46. According to the man page (at least in mine) the value must be given in hexadecimal TOS form. Lists of the full DSCP-to-TOS mappings are readily available online.
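The conversion itself is simple: the six DSCP bits occupy the top of the TOS byte, so TOS = DSCP x 4. For EF that's 46 x 4 = 184, i.e. 0xB8.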
We can then review the QoS counters for the interface with:
show mls qos interface gi1/0/1 stat
Thursday, 23 May 2019
Cross compile packages for OpenWRT / LEDE
For this tutorial I'll be using Fedora 29 for the build host.
We'll install the necessary dependencies firstly:
sudo dnf install asciidoc binutils bzip2 flex git gawk intltool zlib gmake ncurses openssl-devel patchutils p5-extutils-makemaker unzip wget gettext libxslt zlib-devel boost-jam perl-XML-Parser libusb-devel dev86 sharutils java-1.7.0-openjdk-devel b43-fwcutter zip
The next step is to obtain the OpenWRT SDK which will allow us to cross-compile packages that we require on OpenWRT.
I'll be using a BT Home Hub 5A for this exercise - so I browse the releases:
https://downloads.openwrt.org/releases/17.01.4/targets/lantiq/xrx200/
Under the supplementary section you should find the SDK e.g.
lede-sdk-<version-number>-<vendor>-<model>_gcc-<version number>_musl-<version number>.Linux-<architecure>.tar.xz
We'll proceed by downloading and extracting it:
wget https://downloads.openwrt.org/releases/17.01.4/targets/lantiq/xrx200/lede-sdk-17.01.4-lantiq-xrx200_gcc-5.4.0_musl-1.1.16.Linux-x86_64.tar.xz
tar xvf lede-sdk-17.01.4-lantiq-xrx200_gcc-5.4.0_musl-1.1.16.Linux-x86_64.tar.xz && cd lede-sdk-17.01.4-lantiq-xrx200_gcc-5.4.0_musl-1.1.16.Linux-x86_64
The default feeds will be targeted at 17.01.4 and hence be missing fping - however the current master branch has fping available - so we'll add the following line to feeds.conf.default to ensure it's indexed / available:
src-git fping https://github.com/openwrt/packages.git
Update the feeds (as defined in feeds.conf.default):
./scripts/feeds update -a
and grab fping with:
./scripts/feeds install fping
We'll generate our config file:
make menuconfig
Select 'Network' and ensure the fping package is marked with an 'M' and then save the changes to '.config'
Also make sure that cryptographic signing is disabled (otherwise the build process will fail): 'Global build settings' > Untick 'Cryptographically sign package lists' and hit Save.
We'll now attempt to compile fping:
make -j1 V=s
The binary is created in the following directory:
bin/packages/mips_24kc/fping/
Finally upload the package via SFTP/SCP to the router and install it with opkg:
opkg install fping_4.2-1_mips_24kc.ipk
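For example, assuming the router answers on 192.168.1.1 (substitute your own management address):
scp bin/packages/mips_24kc/fping/fping_4.2-1_mips_24kc.ipk root@192.168.1.1:/tmp/
ssh root@192.168.1.1 opkg install /tmp/fping_4.2-1_mips_24kc.ipk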
Wednesday, 8 May 2019
Linux: Backup Options
There are countless ways to back up disks easily with Linux - however I'm going to demonstrate some of the more commonly used methods.
Forenote: Always ensure the disks are not in use / are unmounted while performing the below operations, otherwise it is likely that new / changed files will be corrupted and you will run into problems with the file system.
Backing up a disk with dd
sudo dd if=/dev/xvda of=/mnt/usbdrive/backup.img && sync
or better yet we can use a sane block size (dd uses 512 bytes by default):
sudo dd bs=16M if=/dev/xvda of=/mnt/usbdrive/backup.img && sync
Backing up a disk with dd over ssh
Utilising SSH provides us with encryption - ideal for remote backups e.g. over public networks:
sudo ssh user@remote "dd if=/dev/xvda1" | dd of=backup.img
However it does introduce an overhead due to the encryption - so we can pipe it into gzip in order to speed things up:
sudo ssh user@remote "dd if=/dev/xvda1 | gzip -1 -" | dd of=backup.gz
Backing up a mounted system with rsync
If the system is currently mounted we can use rsync to perform a backup (ensuring we exclude certain directories such as /dev, /mnt etc):
sudo rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt
In the above command we employ 'archive' mode that ensures symbolic links, devices, permissions, ownerships, modification times, ACLs, and extended attributes are preserved.
and rsync over SSH:
sudo rsync -aAXve ssh user@remote:/ --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt
There are of course many other ways to skin a cat e.g. using netcat (which is significantly faster than dd over SSH - however lacks encryption.)
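As a rough netcat sketch (port 1234 is an arbitrary choice, and the listen syntax varies between netcat flavours):
nc -l -p 1234 | dd of=backup.img # on the receiving host
sudo dd if=/dev/xvda1 | nc <remote-host> 1234 # on the source host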
Thursday, 18 April 2019
Uploading ISOs to storage domains in oVirt
oVirt currently doesn't allow you to upload ISOs over its web interface - you'll need to use the CLI to do this.
If you wish to upload an ISO to an ISO storage domain you should issue:
engine-iso-uploader --iso-domain <storage-domain-name> upload <path-to-iso>
Wednesday, 10 April 2019
[CENTOS] Displaying messages in the mail queue and manually flushing them
The mail queue can be checked simply with:
mailq
Alternatively, if you are after a count of messages that have been deferred for whatever reason, you can issue:
find /var/spool/postfix/deferred -type f | wc -l
and to attempt to resend them we can issue:
postqueue -f
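You can also target an individual message using the queue ID shown in the mailq output - flush one with the first command, or delete it with the second:
postqueue -i <queue-id>
postsuper -d <queue-id>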
Tuesday, 9 April 2019
Preventing kernel modules from being loaded at the bootloader / grub in CentOS 7 / RHEL
Although this is typically done with modprobe, there are situations where specific kernel modules need to be disabled before the kernel loads. One such situation arose while I was installing a fresh instance of CentOS on an older server.
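For reference, the usual persistent approach once the system is installed is a blacklist file under /etc/modprobe.d - for example, with a hypothetical troublesome module called e1000e:
echo "blacklist e1000e" | sudo tee /etc/modprobe.d/blacklist-e1000e.conf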
At the CentOS bootloader select the relevant entry and hit tab. You should now be able to edit the Linux kernel (vmlinuz) boot parameters.
Simply append:
module_blacklist=<module_name>
and hit enter.
This should theoretically work on all modern kernels / distros - so is not just limited to CentOS / RHEL.
Source: https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt
Saturday, 16 March 2019
Using yum to download a package and all its associated dependencies
This tutorial will demonstrate how to do a download-only of a package and all of its dependencies.
To elaborate - I recently installed Fedora 29 on a Macbook, but unfortunately there was no native support for the WLAN driver.
However it was available from RPMFusion - packaged under 'akmod-wl' - however downloading this and all of its dependencies manually would have taken a long time - so instead we can use a plugin for yum called 'yum-downloadonly':
yum install yum-downloadonly
We can then issue something like the following to download the required packages on a working computer which is running Fedora 29 (though ensure it is running exactly the same minor version as well!):
sudo yum install --downloadonly --downloaddir=/tmp akmod-wl
However this is not ideal: yum will only download packages that are missing on the machine running the command, so any dependencies already installed locally will be omitted - leaving the download incomplete for the freshly installed OS.
So instead I came up with the idea of quickly building a jail with the basic packages to get yum up and running (this would mimic the newly installed OS):
mkdir -p /chroot/fedora29/var/lib/rpm
rpm --root /chroot/fedora29 --initdb
yumdownloader --destdir=/var/tmp fedora-release
cd /var/tmp
rpm --root /chroot/fedora29 -ivh --nodeps fedora-release*rpm
sudo yum install --installroot=/chroot/fedora29 --downloadonly --downloaddir=/tmp akmod-wl
Then copy everything from the temp folder onto the new workstation and issue:
rpm -ivh *.rpm
Friday, 15 March 2019
Generating a new UUID for XFS/EXT2/3/4 filesystems
Although very rare, there will be circumstances where you encounter duplicate filesystem UUIDs.
Upon mounting one e.g.:
mount -t auto /dev/sdb1
mount: wrong fs type, bad option, bad superblock on /dev/sdb1
Tailing dmesg provides the clue as to what has gone wrong:
[ 1103.580854] XFS (sdb1): Filesystem has duplicate UUID xxxxxx-yyyyyy-zzzzz-aaaa-bbbbbbbbbbb - can't mount
So we'll need to change the UUID of one of disks - to do this with an XFS filesystem we can use:
xfs_admin -U generate /dev/sdb1
and with the EXT family we can use:
uuidgen
<generated UUID>
tune2fs /dev/sdb1 -U <generated UUID>
Finally attempt to remount:
mount -t auto /dev/sdb1
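and confirm the filesystem now carries the new UUID:
blkid /dev/sdb1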
Tuesday, 12 March 2019
Checking switch port bandwidth utilisation with SNMP / Nagios
In order to monitor port bandwidth utilisation on Cisco switches via SNMP we'll firstly need to install a plugin from the Nagios Exchange called 'iftraffic2'.
Download and install the plugin:
cd /usr/local/nagios/libexec
curl "https://exchange.nagios.org/components/com_mtree/attachment.php?link_id=1720&cf_id=24" -o check_iftraffic
chmod +x check_iftraffic
The usage for the plugin is as follows:
./check_iftraffic -H <hostname> -C <community-string> -r -i <interface-name> -b <interface-capacity> -u <interface-unit> -w <warning-limit-percentage> -c <critical-limit-percentage>
We'll need to obtain the interface name of the interface we wish to poll - we can use snmpwalk to do this for us:
yum -y install net-snmp-utils
snmpwalk -v 2c -c <community-string> <hostname> 1.3.6.1.2.1.2.2.1.2
> IF-MIB::ifDescr.67 = STRING: Port-channel1
> IF-MIB::ifDescr.68 = STRING: Port-channel2
> IF-MIB::ifDescr.69 = STRING: Port-channel3
> IF-MIB::ifDescr.70 = STRING: Port-channel4
...
Note: '1.3.6.1.2.1.2.2.1.2' is the OID for interface descriptions (ifDescr) - more information can be found in the IF-MIB documentation.
We'll also need the interface capacity:
snmpwalk -v 2c -c <community-string> <hostname> 1.3.6.1.2.1.2.2.1.5
> IF-MIB::ifSpeed.67 = Gauge32: 2000000000
> IF-MIB::ifSpeed.68 = Gauge32: 2000000000
> IF-MIB::ifSpeed.69 = Gauge32: 2000000000
> IF-MIB::ifSpeed.70 = Gauge32: 2000000000
Now note that the OID '1.3.6.1.2.1.2.2.1.5' returns the interface capacity in bits per second, and as the plugin doesn't accept bits per second we'll need to convert this to gigabits per second:
2000000000 / 1000000000 = 2 (Gigabits)
For this example we'll use 'Port-channel1' - so the plugin would be executed as follows:
./check_iftraffic -H <hostname> -C <community-string> -r -i Port-channel1 -b 2 -u g -w 70 -c 85
The -b switch specifies our 2 Gbps and the -u switch instructs the plugin that we are giving the measurements in Gigabits.
If successful you should have something like the following:
> Total RX Bytes: 1000.00 MB, Total TX Bytes: 2000.00 MB<br>Average Traffic: 5.75 MB/s (2.2%) in, 1.48 MB/s (0.6%) out| inUsage=2.2,50,70 outUsage=0.6,50,70 inAbsolut=10000000 outAbsolut=20000000
Now we simply need to setup the respective command and service definitions in nagios e.g.:
### commands.conf
# check switch port bandwidth
define command{
command_name check_bandwidth
command_line /usr/local/nagios/libexec/check_iftraffic -H $HOSTADDRESS$ -C $ARG1$ -r -i $ARG2$ -b $ARG3$ -u g -w 50 -c 70
}
### switches.conf
define service{
use generic-service
host_name SWITCH-STACK
service_description HHSVRSTK Uplink: Bandwidth Utilization
check_command check_bandwidth!<community-string>!Port-channel1!2
normal_check_interval 1
retry_check_interval 1
}
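Before reloading nagios it's worth validating the new definitions (the path below assumes the usual source-install layout used elsewhere in this post):
/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg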
Monday, 11 March 2019
Setup Nagios Core for SNMP traps with snmptrapd and snmptt
We'll firstly need to download and execute the installer script from Nagios.com:
sudo yum -y install bzip2
cd /tmp
wget https://assets.nagios.com/downloads/nagiosxi/scripts/NagiosXI-SNMPTrap-setup.sh
sh ./NagiosXI-SNMPTrap-setup.sh
This will install and set up snmptrapd and snmptt while ensuring the firewall is configured properly (udp/162 - however you may wish to lock this down further).
We'll then need to add our MIBs - for this example I'll be using a combination of SG200/300 switches and so will download the MIBs from:
https://software.cisco.com/download/home/284645417/type/283965836/release/1.4.1.03
In the commercial version of Nagios you can add these with ease through the GUI - however if you (like me) are using Core you'll need to use the addmib tool:
cd /tmp
unzip MIBs_Sx200_v1.4.1.03.zip
cd MIBs_Sx200_v1.4.1.03
mkdir /usr/share/snmp/mibs/Cisco-SG200-300
mv *.mib /usr/share/snmp/mibs/Cisco-SG200-300
and add them with:
find /usr/share/snmp/mibs/Cisco-SG200-300 -maxdepth 1 -type f -exec addmib {} \;
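You can sanity-check that net-snmp resolves names from the loaded MIBs with snmptranslate (part of net-snmp-utils), e.g.:
snmptranslate -IR -On linkUp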
We can now send a test trap with:
snmptrap -v 2c -c public localhost '' linkUp ifDescr s eth0 ifAdminStatus i 1 ifOperStatus i 1
You should now see this logged to /var/log/snmptt/snmpttunknown.log
However this wasn't the case for me - everything I was sending was being dropped / didn't show up in the snmptt log. After inspecting the service log for snmptrapd I quickly noticed the following warning:
Mar 11 14:51:33 host.internal snmptrapd[2752]: NET-SNMP version 5.7.2
Mar 11 14:51:33 host.internal systemd[1]: Started Simple Network Management Protocol (SNMP) Trap Daemon..
Mar 11 15:31:28 host.internal snmptrapd[2752]: No access configuration - dropping trap.
Mar 11 15:31:41 host.internal snmptrapd[2752]: No access configuration - dropping trap.
This behaviour is expected by default, as the snmptrapd team decided (wisely) to require authorisation for incoming SNMP traps. Oddly this shouldn't have been an issue, since the installer script from nagios had added 'disableAuthorization yes' to '/etc/snmp/snmptrapd.conf' - however after reloading the service all was well, so I can only imagine the config had been added but the service did not reload / restart as intended.
Tailing /var/log/snmptt/snmptt.log shows:
Mon Mar 11 16:03:56 2019 .1.3.6.1.6.3.1.1.5.4 Normal "Status Events" localhost - A linkUp trap signifies that the SNMP entity, acting in an eth0 up up
NOTE: Traps from unknown MIBs will be sent to /var/log/snmptt/snmpttunknown.log
You'll also notice the following warning in the nagios logs:
'Warning: Passive check result was received for service 'SNMP Traps' on host 'localhost', but the service could not be found!'
So it looks like snmptt is successfully sending the information to nagios - however nagios does not know what to do with it!
We'll proceed by defining the service it's trying to submit data for:
define service {
name SNMP Traps
service_description SNMP Traps
active_checks_enabled 1 ; Active service checks are enabled
passive_checks_enabled 1 ; Passive service checks are enabled/accepted
parallelize_check 1 ; Active service checks should be parallelized
process_perf_data 0
obsess_over_service 0 ; We should obsess over this service (if necessary)
check_freshness 0 ; Default is to NOT check service 'freshness'
notifications_enabled 1 ; Service notifications are enabled
event_handler_enabled 1 ; Service event handler is enabled
flap_detection_enabled 1 ; Flap detection is enabled
process_perf_data 1 ; Process performance data
retain_status_information 1 ;
retain_nonstatus_information 1 ;
check_command check-host-alive ;
is_volatile 1
check_period 24x7
max_check_attempts 1
normal_check_interval 1
retry_check_interval 1
notification_interval 120
notification_period 24x7
notification_options w,u,c,r
contacts contact1,contact2
register 0
}
define service {
use SNMP Traps
host_name localhost
service_description TRAP
check_interval 120
}
We can then test the service is working correctly with:
/usr/local/nagios/libexec/eventhandlers/submit_check_result localhost TRAP 2 "TESTING"
The changes should now be reflected on the nagios check status.
The next step is to load the relevant MIB files so we can translate the OIDs. In this tutorial we will be checking the link status of a port so we'll need 'IF-MIB.txt' (which should come as part of the default installation - if not see here: http://www.net-snmp.org/docs/mibs/IF-MIB.txt)
We'll proceed by generating the snmptt.conf with:
rm -rf /etc/snmp/snmptt.conf
snmpttconvertmib --in=/usr/share/snmp/mibs/IF-MIB.txt --out=/etc/snmp/snmptt.conf --exec='/usr/local/nagios/libexec/eventhandlers/submit_check_result $r TRAP 2'
Open up the snmptt.conf file and ensure that the EXEC line for EVENT 'linkUp' returns 0 as opposed to '2', and leave EVENT 'linkDown' as it is.
and reload the service with:
sudo service snmptt reload
We can then test it with:
snmptrap -v 2c -c public localhost '' linkDown ifDescr s fa0/1 ifAdminStatus i 0 ifOperStatus i 0
and then check it reverts with:
snmptrap -v 2c -c public localhost '' linkUp ifDescr s fa0/1 ifAdminStatus i 1 ifOperStatus i 1
Thursday, 7 March 2019
Setting up NPS / RADIUS for use with a Cisco 2960X
Below is a sample configuration to get up and running with RADIUS:
2960X Configuration
conf t
radius server <server-name>
address ipv4 <server-ip>
key <shared-secret>
aaa new-model # create new aaa model
aaa authentication login default group radius local-case # allow radius and local user authentication by default
aaa authorization exec default group radius local-case if-authenticated # allow radius and local user authorisation by default
aaa accounting system default start-stop group radius # only account for radius
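You can then test authentication from the switch itself using the built-in aaa test facility (syntax varies slightly between IOS versions):
test aaa group radius <username> <password> new-code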
umasks: Ensuring httpd / apache is assigning the appropriate permissions on files / directories
I came across an issue the other day where the user httpd was running as was part of a group that had been assigned permissions on the www root. Typically the user httpd runs under will be the owner of these files and directories and as a result will almost always have adequate permissions to read, write and execute. However in this case, because the access came via a group instead, the default umask setting of 022 was preventing the httpd user from writing to the files.
The umask can be worked out as follows - for example a umask of 002:
Directories: 777 - 002 = 775
Files: 666 - 002 = 664
i.e. the owner and group are able to read, write and execute directories and everyone else can only read and execute them, while the owner and group can read and write files and everyone else can only read them.
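You can see this in action from a shell - the following creates two throwaway entries in the current directory, which should come out as 664 and 775 respectively:
(umask 002; touch demo-file; mkdir demo-dir; ls -ld demo-file demo-dir)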
In order to apply these to httpd we can simply add the following line under the service stanza in /lib/systemd/system/httpd.service:
vim /lib/systemd/system/httpd.service
[Service]
...
UMask = 0002
and finally ensure httpd is restarted with:
sudo systemctl daemon-reload
sudo systemctl restart httpd
Tuesday, 19 February 2019
Debugging with the puppet agent command
To enable debugging on a puppet agent you can issue:
puppet agent --test --noop --debug --trace
The above command does not actually carry out any changes - it only simulates what could have happened. Removing the '--noop' switch allows us to actually carry out those changes e.g.:
puppet agent --test --debug --trace
Tuesday, 5 February 2019
Creating your first Puppet module
This tutorial will be a quick demonstration of how to create, build and test a basic puppet module.
Firstly generate the puppet module with:
mkdir ~/workbench && cd ~/workbench
puppet module generate username-webmin --skip-interview
This will generate the following directory structure:
.
└── webmin (module directory)
├── examples
│ └── init.pp (example of how to initialize the class)
├── Gemfile (used to describe dependencies needed for the module / Ruby)
├── manifests (holds the module manifests i.e. contains a set of instructions that need to be run)
│ └── init.pp (the default manifest - it defines our main class: webmin)
├── metadata.json (contains module metadata like author, module description, dependencies etc.)
├── Rakefile (essentially a makefile for Ruby)
├── README.md (contains module documentation)
└── spec (used for automated testing - is optional)
├── classes
│ └── init_spec.rb
└── spec_helper.rb
If we do a cat on the init.pp within the examples directory:
cat examples/init.pp
include ::webmin
This include statement can be used in other manifests in Puppet and simply imports the webmin class.
In older modules you might have seen a params manifest (params.pp) - this design pattern has recently been replaced with the release of Hiera v5 with in-module data (https://github.com/puppetlabs/best-practices/blob/master/puppet-module-design.md)
This means that we need to construct our Hiera hierarchy in the form of 'hiera.yaml' in our module. This will typically look like this:
cat webmin/hiera.yaml
---
version: 5
defaults:
  datadir: 'data'
  data_hash: 'yaml_data'
hierarchy:
  - name: 'Full Version'
    path: '%{facts.os.name}-%{facts.os.release.full}.yaml'
  - name: 'Major Version'
    path: '%{facts.os.name}-%{facts.os.release.major}.yaml'
  - name: 'Distribution Name'
    path: '%{facts.os.name}.yaml'
  - name: 'Operating System Family'
    path: '%{facts.os.family}-family.yaml'
  - name: 'common'
    path: 'common.yaml'
Note: It's important to keep in mind that the hierarchy is reusable e.g. avoid adding in specific networks or environments.
We'll also need to create our data directory to hold our yaml data in:
mkdir webmin/data
touch webmin/data/Debian-family.yaml
touch webmin/data/RedHat-family.yaml
touch webmin/data/common.yaml
For the sake of time and simplicity I have confined the yaml data to a few OS families.
As seen in the hierarchy, anything defined within the 'Operating System Family' data takes precedence over anything in 'common'. However if this module were to be applied to, say, OpenSUSE, only settings in common.yaml would be applied - so it's important to ensure that there are suitable defaults in the common data.
We'll proceed by defining our variables within the init.pp (manifests directory):
class webmin (
  Boolean $install,
  Optional[Array[String]] $users,
  Optional[Integer[0, 65535]] $portnum,
  Optional[Stdlib::Absolutepath] $certificate,
) {
  notify { 'Applying class webmin...': }
}
In the above class we are defining several variables that will allow the user to customise how the module configures webmin. As you'll notice, three of them are optional - so if they are undefined we'll provide the defaults ourselves in the manifest.
At this point we might want to verify the syntax in our init.pp manifest is valid - we can do this with:
puppet parser validate webmin/manifests/init.pp
We will also create two more classes - one for setup / installation of webmin and the other for configuration of it:
touch webmin/manifests/install.pp && touch webmin/manifests/configure.pp
We'll declare these in our init.pp like follows:
class webmin (
  Boolean $install,
  String $webmin_package_name,
  Optional[Array[String]] $users,
  Optional[Integer[0, 65535]] $portnum,
  Optional[Stdlib::Absolutepath] $certificate,
) {
  notify { 'Applying webmin class...': }
  contain webmin::install
  contain webmin::configure
  Class['::webmin::install']
  -> Class['::webmin::configure']
}
And then declare the classes:
cat webmin/manifests/install.pp
# @summary
# This class handles the webmin package.
#
# @api private
#
class webmin::install {
  if $webmin::install {
    package { $webmin::webmin_package_name:
      ensure => present,
    }
    service { $webmin::webmin_package_name:
      ensure    => running,
      enable    => true,
      subscribe => Package[$webmin::webmin_package_name],
    }
  }
}
and
cat webmin/manifests/configure.pp
# @summary
# This class handles webmin configuration.
#
# @api private
#
class webmin::configure {
}
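As a rough sketch of where the configure class is heading, the port parameter could be enforced with stdlib's file_line resource - note the /etc/webmin/miniserv.conf path is an assumption for illustration, not something this post has verified:
class webmin::configure {
  if $webmin::portnum {
    file_line { 'webmin port':
      ensure => present,
      path   => '/etc/webmin/miniserv.conf',  # assumed webmin config location
      line   => "port=${webmin::portnum}",
      match  => '^port=',
    }
  }
}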
You'll notice that although we have defined variables in the parent class we have not actually defined what these will be yet. This is where Hiera will come in - so to data/common.yaml we add:
---
webmin::webmin_package_name: webmin
webmin::install: true
webmin::users: ~
webmin::portnum: 10000
webmin::certificate: ~
We'll also need to add repositories for webmin - as by default (to my knowledge at least) it's not included in any of the base repositories shipped with Red Hat or Debian.
In order to configure repositories for particular operating systems we will need to use the yum and apt modules. Module dependencies are defined in the 'metadata.json' file - for example:
"dependencies": [
{ "name": "puppet/yum", "version_requirement": ">= 3.1.0" },
{ "name": "puppetlabs/apt", "version_requirement": ">= 6.1.0" },
{ "name": "puppetlabs/stdlib", "version_requirement": ">= 1.0.0" }
],
So your metadata.json would look something like:
{
"name": "username-webmin",
"version": "0.1.0",
"author": "Joe Bloggs",
"summary": "A module used to perform installation and configuration of webmin.",
"license": "Apache-2.0",
"source": "https://yoursite.com/puppetmodule",
"project_page": "https://yoursite.com/puppetmodule",
"issues_url": "https://yoursite.com/puppetmodule/issues",
"dependencies": [
{ "name": "puppet/yum", "version_requirement": ">= 3.1.0" },
{ "name": "puppetlabs/apt", "version_requirement": ">= 6.1.0" },
{ "name": "puppetlabs/stdlib", "version_requirement": ">= 1.0.0" }
],
"data_provider": null
}
Because we haven't packaged up the module yet we'll need to install the dependencies manually i.e.:
puppet module install puppetlabs/apt
puppet module install puppet/yum
We'll then need to copy them to our working directory (~/workbench) e.g.:
cp -R /etc/puppetlabs/code/environments/production/modules/{yum,apt,stdlib} ~/workbench
We'll use Hiera to configure the OS specific settings for the repositories:
cat webmin/data/RedHat-family.yaml
---
yum::managed_repos:
  - 'webmin_repo'
yum::repos:
  webmin_repo:
    ensure: 'present'
    enabled: true
    descr: 'Webmin Software Repository'
    baseurl: 'https://download.webmin.com/download/yum'
    mirrorlist: 'https://download.webmin.com/download/yum/mirrorlist'
    gpgcheck: true
    gpgkey: 'http://www.webmin.com/jcameron-key.asc'
    target: '/etc/yum.repos.d/webmin.repo'
cat webmin/data/Debian-family.yaml
---
apt::sources:
  webmin_repo:
    location: 'http://download.webmin.com/download/repository'
    release: 'stretch'
    repos: 'contrib'
    key:
      id: '1B24BE83'
      source: 'http://www.webmin.com/jcameron-key.asc'
    include:
      src: false
Let's check everything looks good:
puppet parser validate manifests/*.pp
We'll now try and apply our manifests and see if they apply correctly:
puppet apply --modulepath=~/workbench webmin/examples/init.pp
With any luck you will now have webmin installed and should also see our notify messages we included earlier.
Finally we can publish the module with:
sudo puppet module build webmin
We can now distribute this internally or alternatively upload it for public consumption at the Puppet Forge.
To test the built module you can run:
sudo puppet module install ~/workbench/username-webmin-0.1.0.tar.gz
Firstly generate the puppet module with:
mkdir ~/workbench && cd ~/workbench
puppet module generate username-webmin --skip-interview
This will generate the following directory structure:
.
└── webmin (module directory)
├── examples
│ └── init.pp (example of how to initailize the class)
├── Gemfile (used to describe dependancies needed for the module / Ruby)
├── manifests (holds the module manifests i.e. contains a set of instructions that need to be run)
│ └── init.pp (the default manifest - it defines our main class: webmin)
├── metadata.json (contains module metadata like author, module description, dependancies etc.)
├── Rakefile (essentially a makefile for Ruby)
├── README.md (contains module documentation)
└── spec (used for automated testing - is optional)
├── classes
│ └── init_spec.rb
└── spec_helper.rb
If we do a cat on the init.pp within the examples directory:
cat examples/init.pp
include ::webmin
This include statement can be used in other manifests in Puppet and simply imports the webmin class.
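For example, a node definition in a hypothetical site.pp (the node name below is made up) could pull the class in like so:
node 'webserver01.example.com' {
  include ::webmin
}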
In older modules you might have seen a params manifest (params.pp) - since the release of Hiera v5 this design pattern has been superseded by in-module data (https://github.com/puppetlabs/best-practices/blob/master/puppet-module-design.md)
This means that we need to construct our Hiera hierarchy in the form of 'hiera.yaml' in our module. This will typically look like this:
cat webmin/hiera.yaml
---
version: 5
defaults:
  datadir: 'data'
  data_hash: 'yaml_data'
hierarchy:
  - name: 'Full Version'
    path: '%{facts.os.name}-%{facts.os.release.full}.yaml'
  - name: 'Major Version'
    path: '%{facts.os.name}-%{facts.os.release.major}.yaml'
  - name: 'Distribution Name'
    path: '%{facts.os.name}.yaml'
  - name: 'Operating System Family'
    path: '%{facts.os.family}-family.yaml'
  - name: 'common'
    path: 'common.yaml'
Note: It's important to keep the hierarchy reusable - e.g. avoid keying it on specific networks or environments.
We'll also need to create our data directory to hold our yaml data in:
mkdir webmin/data
touch webmin/data/Debian-family.yaml
touch webmin/data/RedHat-family.yaml
touch webmin/data/common.yaml
For the sake of time and simplicity I have confined the yaml data to a few OS families.
As seen in the hierarchy, anything defined within the 'Operating System Family' layer takes precedence over anything in 'common'. However, if this module were applied to, say, OpenSUSE, only the settings in common.yaml would apply - so it's important to ensure there are suitable defaults in the common data.
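If you ever want to confirm which layer a given value resolves from, the puppet lookup command can explain its decision making - a quick sketch, run on a node that has the module in its modulepath and using one of the keys we'll be defining shortly:
puppet lookup webmin::portnum --explain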
We'll proceed by defining our variables within the init.pp (manifests directory):
class webmin (
  Boolean $install,
  Optional[Array[String]] $users,
  Optional[Integer[0, 65535]] $portnum,
  Optional[Stdlib::Absolutepath] $certificate,
) {
  notify { 'Applying class webmin...': }
}
In the above class we are defining several parameters that will allow the user to customise how the module configures webmin. As you'll notice, three of them are optional - so if they are undefined we'll provide the defaults ourselves in the manifest.
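As a minimal sketch of that pattern, stdlib's pick() function returns its first non-undef argument, so somewhere in the class body we could fall back to webmin's stock port (the $effective_port variable name is purely illustrative):
# Use the user-supplied port if given, otherwise webmin's default of 10000
$effective_port = pick($portnum, 10000)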
At this point we might want to verify the syntax in our init.pp manifest is valid - we can do this with:
puppet parser validate webmin/manifests/init.pp
We will also create two more classes - one for setup / installation of webmin and the other for configuration of it:
touch webmin/manifests/install.pp && touch webmin/manifests/configure.pp
We'll declare these in our init.pp as follows:
class webmin (
  Boolean $install,
  String $webmin_package_name,
  Optional[Array[String]] $users,
  Optional[Integer[0, 65535]] $portnum,
  Optional[Stdlib::Absolutepath] $certificate,
) {
  notify { 'Applying webmin class...': }

  contain webmin::install
  contain webmin::configure

  Class['::webmin::install']
  -> Class['::webmin::configure']
}
The contain function works like include but also anchors the two subclasses inside the webmin class, so anything ordered against Class['webmin'] from elsewhere will wait for them too; the chaining arrow then orders installation before configuration. We can then define the classes themselves:
cat webmin/manifests/install.pp
# @summary
# This class handles the webmin package.
#
# @api private
#
class webmin::install {
  if $webmin::install {
    package { $webmin::webmin_package_name:
      ensure => present,
    }

    service { $webmin::webmin_package_name:
      ensure    => running,
      enable    => true,
      subscribe => Package[$webmin::webmin_package_name],
    }
  }
}
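Note that as written the class simply does nothing when $webmin::install is false. If you instead wanted false to mean 'remove webmin', a hypothetical else branch hung off the if statement above would do it:
else {
  # Assumption: install => false should strip the package rather than ignore it
  package { $webmin::webmin_package_name:
    ensure => absent,
  }
}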
And the configuration class:
cat webmin/manifests/configure.pp
# @summary
# This class handles webmin configuration.
#
# @api private
#
class webmin::configure {
}
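The configure class is just a stub for now, but as a rough sketch of where it's heading - assuming webmin keeps its default config file at /etc/webmin/miniserv.conf - we could use stdlib's file_line to pin the listening port:
class webmin::configure {
  if $webmin::portnum {
    # Rewrite the port= directive in miniserv.conf if a custom port was supplied
    file_line { 'webmin port':
      path  => '/etc/webmin/miniserv.conf',
      line  => "port=${webmin::portnum}",
      match => '^port=',
    }
  }
}
In a fuller version you'd also want this resource to notify the webmin service so a port change actually takes effect.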
You'll notice that although we have declared parameters in the parent class, we haven't actually given them values yet. This is where Hiera comes in - so to data/common.yaml we add:
---
webmin::webmin_package_name: webmin
webmin::install: true
webmin::users: ~
webmin::portnum: 10000
webmin::certificate: ~
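The tildes above are YAML's null, which surfaces in Puppet as undef - exactly what we want for the optional parameters. Any of the more specific hierarchy layers can then override these defaults; for example, a hypothetical CentOS-7.yaml dropped into the same data directory:
---
# data/CentOS-7.yaml - hypothetical override for the 'Major Version' layer
webmin::portnum: 10100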
We'll also need to add repositories for webmin - as by default (to my knowledge at least) it's not available in any of the base repositories that ship with Red Hat or Debian.
In order to configure repositories for particular operating systems we will need to use the yum and apt modules. Module dependencies are defined in the 'metadata.json' file - for example:
"dependencies": [
{ "name": "puppet/yum", "version_requirement": ">= 3.1.0" },
{ "name": "puppetlabs/apt", "version_requirement": ">= 6.1.0" },
{ "name": "puppetlabs/stdlib", "version_requirement": ">= 1.0.0" }
],
So your metadata.json would look something like:
{
  "name": "username-webmin",
  "version": "0.1.0",
  "author": "Joe Bloggs",
  "summary": "A module used to perform installation and configuration of webmin.",
  "license": "Apache-2.0",
  "source": "https://yoursite.com/puppetmodule",
  "project_page": "https://yoursite.com/puppetmodule",
  "issues_url": "https://yoursite.com/puppetmodule/issues",
  "dependencies": [
    { "name": "puppet/yum", "version_requirement": ">= 3.1.0" },
    { "name": "puppetlabs/apt", "version_requirement": ">= 6.1.0" },
    { "name": "puppetlabs/stdlib", "version_requirement": ">= 1.0.0" }
  ],
  "data_provider": null
}
Because we haven't packaged up the module yet we'll need to install the dependencies manually i.e.:
puppet module install puppetlabs/apt
puppet module install puppet/yum
We'll then need to copy them to our working directory (~/workbench) e.g.:
cp -R /etc/puppetlabs/code/environments/production/modules/{yum,apt,stdlib} ~/workbench
We'll use Hiera to configure the OS specific settings for the repositories:
cat webmin/data/RedHat-family.yaml
---
yum::managed_repos:
  - 'webmin_repo'
yum::repos:
  webmin_repo:
    ensure: 'present'
    enabled: true
    descr: 'Webmin Software Repository'
    baseurl: 'https://download.webmin.com/download/yum'
    mirrorlist: 'https://download.webmin.com/download/yum/mirrorlist'
    gpgcheck: true
    gpgkey: 'http://www.webmin.com/jcameron-key.asc'
    target: '/etc/yum.repos.d/webmin.repo'
cat webmin/data/Debian-family.yaml
---
apt::sources:
  webmin_repo:
    location: 'http://download.webmin.com/download/repository'
    release: 'stretch'
    repos: 'contrib'
    key:
      id: '1B24BE83'
      source: 'http://www.webmin.com/jcameron-key.asc'
    include:
      src: false
Let's check everything looks good:
puppet parser validate webmin/manifests/*.pp
We'll now try and apply our manifests and see if they apply correctly:
puppet apply --modulepath=~/workbench webmin/examples/init.pp
With any luck you will now have webmin installed and should also see our notify messages we included earlier.
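A quick sanity check - assuming webmin is listening on its default port of 10000 and serving its self-signed certificate - is to curl it locally (-k skips certificate verification):
curl -k https://localhost:10000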
Finally we can publish the module with:
sudo puppet module build webmin
We can now distribute this internally or alternatively upload it for public consumption at the Puppet Forge.
To test the built module you can run:
sudo puppet module install ~/workbench/webmin/pkg/username-webmin-0.1.0.tar.gz
Wednesday, 30 January 2019
php-fpm Log Location (RHEL / PHP7)
More of a note to myself, but php-fpm no longer appears to write its logs alongside the web server's (at least not with 7.0.27 on RHEL).
The various logs are written to:
/var/opt/rh/rh-php70/log/php-fpm
Friday, 25 January 2019
Linux: Server Not Syncing with NTP (Stuck in INIT state)
The service was confirmed running and set to start on boot:
sudo service ntpd status
Redirecting to /bin/systemctl status ntpd.service
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-01-25 12:11:16 GMT; 13min ago
Process: 32649 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 32650 (ntpd)
CGroup: /system.slice/ntpd.service
└─32650 /usr/sbin/ntpd -u ntp:ntp -g
We can quickly check if NTP is not properly synchronised with the 'ntpstat' command:
> ntpstat
unsynchronised
time server re-starting
polling server every 8 s
We can also check the connection status of the NTP servers with:
ntpq -p
remote refid st t when poll reach delay offset jitter
================================================
meg.magnet.ie .INIT. 16 - - 512 0 0.000 0.000 0.000
ec2-52-53-178-2 .INIT. 16 - - 512 0 0.000 0.000 0.000
chris.magnet.ie .INIT. 16 - - 512 0 0.000 0.000 0.000
From the above (specifically the INIT state) my immediate thought was that it was a firewall issue somewhere.
It's worth checking the EC2 instance SG to ensure that the server can reach udp/123 outbound. However remember to also check the Network ACL (it's stateless) and ensure that udp/123 can get out as well.
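If the security group does turn out to be missing an egress rule, something along these lines would add one (the group ID below is a placeholder):
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 --protocol udp --port 123 --cidr 0.0.0.0/0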
ntpd was attempting to initiate a connection with the NTP servers but never got past this phase. After confirming the firewall rules, SGs, NACLs etc. (i.e. that 123/udp was permitted outbound and session states were maintained) I decided to query one of the NTP servers directly with:
ntpdate -d meg.magnet.ie
This was successful so it seemed something else was causing this.
In the end I realised it was failing because ntpd was bound to localhost only and was attempting to reach the external NTP servers from there (obviously failing, because they are unroutable from a loopback device!)
Changing:
listen interface 127.0.0.1
to
listen interface 10.11.12.13
in /etc/ntp.conf and restarting ntpd resolved the issue:
remote refid st t when poll reach delay offset jitter
===============================================
x.ns.gin.ntt.ne 249.224.99.213 2 u 27 64 1 11.206 52.795 0.000
213.251.53.217 193.0.0.229 2 u 26 64 1 12.707 53.373 0.000
ntp3.wirehive.n 195.66.241.3 2 u 25 64 1 14.334 53.125 0.000
h37-220-20-12.h 82.69.97.89 2 u 24 64 1 19.211 53.350 0.000