Tuesday, 5 February 2019

Creating your first Puppet module

This tutorial will be a quick demonstration of how to create, build and test a basic Puppet module.

Firstly, generate the Puppet module with:

mkdir ~/workbench && cd ~/workbench

puppet module generate username-webmin --skip-interview

This will generate the following directory structure:

.
└── webmin (module directory)
    ├── examples
    │   └── init.pp (example of how to initialize the class)
    ├── Gemfile (used to describe dependencies needed for the module / Ruby)
    ├── manifests (holds the module manifests i.e. contains a set of instructions that need to be run)
    │   └── init.pp (the default manifest - it defines our main class: webmin)
    ├── metadata.json (contains module metadata like author, module description, dependencies etc.)
    ├── Rakefile (essentially a makefile for Ruby)
    ├── README.md (contains module documentation)
    └── spec (used for automated testing - is optional)
        ├── classes
        │   └── init_spec.rb
        └── spec_helper.rb

If we do a cat on the init.pp within the examples directory:

cat webmin/examples/init.pp

include ::webmin

This include statement can be used in other manifests in Puppet and simply imports the webmin class.
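For instance, in a node definition or a profile class you might pull the module in like this (the node name below is purely illustrative):

node 'web01.example.com' {
  include ::webmin
}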

In older modules you might have seen a params manifest (params.pp) - this design pattern has been superseded by in-module data, introduced with Hiera v5 (https://github.com/puppetlabs/best-practices/blob/master/puppet-module-design.md)

This means we need to define our Hiera hierarchy in a 'hiera.yaml' file within the module. It will typically look like this:

cat webmin/hiera.yaml

---
version: 5

defaults:
  datadir: 'data'
  data_hash: 'yaml_data'

hierarchy:
  - name: 'Full Version'
    path: '%{facts.os.name}-%{facts.os.release.full}.yaml'

  - name: 'Major Version'
    path: '%{facts.os.name}-%{facts.os.release.major}.yaml'

  - name: 'Distribution Name'
    path: '%{facts.os.name}.yaml'

  - name: 'Operating System Family'
    path: '%{facts.os.family}-family.yaml'

  - name: 'common'
    path: 'common.yaml'

Note: It's important to keep the hierarchy reusable - e.g. avoid referencing specific networks or environments in it.

We'll also need to create our data directory to hold our yaml data in:

mkdir webmin/data
touch webmin/data/Debian-family.yaml
touch webmin/data/RedHat-family.yaml
touch webmin/data/common.yaml

For the sake of time and simplicity I have confined the yaml data to a few OS families.

As seen in the hierarchy, anything defined at the 'Operating System Family' level takes precedence over anything in 'common'. However, if this module were applied to, say, openSUSE, only the settings in common.yaml would be used - so it's important to ensure there are suitable defaults in the common data.
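To make the precedence concrete: if a key existed at both levels (the values below are purely illustrative), a RedHat-family node would resolve 10001 while everything else would fall back to 10000:

# data/RedHat-family.yaml (hypothetical override)
webmin::portnum: 10001

# data/common.yaml (fallback)
webmin::portnum: 10000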

We'll proceed by defining our variables within the init.pp (manifests directory):

class webmin (
  Boolean $install,
  Optional[Array[String]] $users,
  Optional[Integer[0, 65535]] $portnum,
  Optional[Stdlib::Absolutepath] $certificate,
) {
  notify { 'Applying class webmin...': }
}

In the above class we are defining several parameters that will allow the user to customise how the module configures webmin. As you'll notice, three of them are optional - so if they are undefined we'll provide the defaults ourselves in the manifest.
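As a rough sketch of how you might do that (this is just a trimmed-down illustration of the pattern, not the final init.pp - it assumes puppetlabs-stdlib, which we declare as a dependency later), stdlib's pick() returns the first defined value:

class webmin (
  Optional[Integer[0, 65535]] $portnum = undef,
) {
  # fall back to webmin's default port when no value is supplied via Hiera
  $_portnum = pick($portnum, 10000)
  notify { "webmin would listen on port ${_portnum}": }
}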

At this point we might want to verify the syntax in our init.pp manifest is valid - we can do this with:

puppet parser validate webmin/manifests/init.pp

We will also create two more classes - one for setup / installation of webmin and the other for configuration of it:

touch webmin/manifests/install.pp && touch webmin/manifests/configure.pp

We'll declare these in our init.pp like follows:

class webmin (
  Boolean $install,
  String $webmin_package_name,
  Optional[Array[String]] $users,
  Optional[Integer[0, 65535]] $portnum,
  Optional[Stdlib::Absolutepath] $certificate,
) {
  notify { 'Applying webmin class...': }

  contain webmin::install
  contain webmin::configure

  Class['::webmin::install']
  -> Class['::webmin::configure']
}

And then declare the classes:

cat webmin/manifests/install.pp

# @summary
#   This class handles the webmin package.
#
# @api private
#
class webmin::install {

  if $webmin::install {

    package { $webmin::webmin_package_name:
      ensure => present,
    }

    service { $webmin::webmin_package_name:
      ensure    => running,
      enable    => true,
      subscribe => Package[$webmin::webmin_package_name],
    }

  }

}

and

cat webmin/manifests/configure.pp

# @summary
#   This class handles webmin configuration.
#
# @api private
#
class webmin::configure {

}

You'll notice that although we have declared parameters in the parent class, we haven't actually given them values yet. This is where Hiera comes in - so to data/common.yaml we add:

---
webmin::webmin_package_name: webmin
webmin::install: true
webmin::users: ~
webmin::portnum: 10000
webmin::certificate: ~
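If you want to double-check what Hiera will actually hand to the class, puppet lookup is useful - something along these lines, assuming the module is sitting on the modulepath of the environment you run it against:

puppet lookup webmin::portnum --explain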

We'll also need to add repositories for webmin, as by default (to my knowledge at least) it's not included in any of the base repositories that ship with Red Hat or Debian.

In order to configure repositories for particular operating systems we will need to use the yum and apt modules. Module dependencies are defined in the 'metadata.json' file - for example:

"dependencies": [
  { "name": "puppet/yum", "version_requirement": ">= 3.1.0" },
  { "name": "puppetlabs/apt", "version_requirement": ">= 6.1.0" },
  { "name": "puppetlabs/stdlib", "version_requirement": ">= 1.0.0" }
],

So your metadata.json would look something like:

{
  "name": "username-webmin",
  "version": "0.1.0",
  "author": "Joe Bloggs",
  "summary": "A module used to perform installation and configuration of webmin.",
  "license": "Apache-2.0",
  "source": "https://yoursite.com/puppetmodule",
  "project_page": "https://yoursite.com/puppetmodule",
  "issues_url": "https://yoursite.com/puppetmodule/issues",
  "dependencies": [
  { "name": "puppet/yum", "version_requirement": ">= 3.1.0" },
  { "name": "puppetlabs/apt", "version_requirement": ">= 6.1.0" },
  { "name": "puppetlabs/stdlib", "version_requirement": ">= 1.0.0" }
  ],
  "data_provider": null
}

Because we haven't packaged up the module yet we'll need to install the dependencies manually i.e.:

puppet module install puppetlabs/apt
puppet module install puppet/yum

We'll then need to copy them to our working directory (~/workbench) e.g.:

cp -R /etc/puppetlabs/code/environments/production/modules/{yum,apt,stdlib} ~/workbench
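To sanity-check that everything (including the dependencies) is now visible in the working directory, you can list the modules there - this assumes ~/workbench is usable as a modulepath:

puppet module list --modulepath ~/workbench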

We'll use Hiera to configure the OS specific settings for the repositories:

cat webmin/data/RedHat-family.yaml

---

yum::managed_repos:
    - 'webmin_repo'

yum::repos:
    webmin_repo:
        ensure: 'present'
        enabled: true
        descr: 'Webmin Software Repository'
        baseurl: 'https://download.webmin.com/download/yum'
        mirrorlist: 'https://download.webmin.com/download/yum/mirrorlist'
        gpgcheck: true
        gpgkey: 'http://www.webmin.com/jcameron-key.asc'
        target: '/etc/yum.repos.d/webmin.repo'

cat webmin/data/Debian-family.yaml

---

apt::sources:
  webmin_repo:
    location: 'http://download.webmin.com/download/repository'
    release: 'stretch'
    repos: 'contrib'
    key:
      id: '1B24BE83'
      source: 'http://www.webmin.com/jcameron-key.asc'
    include:
      src: false
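Note that these Hiera keys only take effect if the apt and yum classes are actually declared somewhere. One minimal sketch (and it is only a sketch) would be to pull the relevant class in from webmin::configure based on the OS family:

class webmin::configure {

  case $facts['os']['family'] {
    'RedHat': { contain ::yum }
    'Debian': { contain ::apt }
    default:  { }
  }

}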

Let's check everything looks good:

puppet parser validate webmin/manifests/*.pp

We'll now apply our manifests and see whether they work correctly:

puppet apply --modulepath ~/workbench webmin/examples/init.pp

With any luck you will now have webmin installed and should also see our notify messages we included earlier.
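To sanity-check the result, you could look for the service and its listening port - assuming the package registers a 'webmin' service unit and we kept the default port of 10000:

systemctl status webmin
ss -tlnp | grep 10000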

Finally we can package up the module with:

puppet module build webmin

We can now distribute this internally or alternatively upload it for public consumption at the Puppet Forge.

To test the built module you can run:

sudo puppet module install ~/workbench/webmin/pkg/username-webmin-0.1.0.tar.gz

Wednesday, 30 January 2019

php-fpm Log Location (RHEL / PHP7)

More of a note to myself, but php-fpm no longer appears to write its logs alongside the web server's (at least not on PHP 7.0.27 on RHEL).

The various logs are written to:

/var/opt/rh/rh-php70/log/php-fpm
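To follow them as requests come in, something like the below works (the exact file names depend on your pool configuration, hence the glob):

sudo tail -f /var/opt/rh/rh-php70/log/php-fpm/*.log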

Friday, 25 January 2019

Linux: Server Not Syncing with NTP (Stuck in INIT state)

The service was confirmed running and set to start on boot:

sudo service ntpd status

Redirecting to /bin/systemctl status ntpd.service
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-01-25 12:11:16 GMT; 13min ago
  Process: 32649 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 32650 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─32650 /usr/sbin/ntpd -u ntp:ntp -g

We can quickly check whether NTP is synchronised with the 'ntpstat' command:

> ntpstat
unsynchronised
  time server re-starting
   polling server every 8 s

We can also check the connection status of the ntp server with:

ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
================================================
 meg.magnet.ie   .INIT.          16 -    -  512    0    0.000    0.000   0.000
 ec2-52-53-178-2 .INIT.          16 -    -  512    0    0.000    0.000   0.000
 chris.magnet.ie .INIT.          16 -    -  512    0    0.000    0.000   0.000

From the above (specifically the INIT state) my immediate thought was that it was a firewall issue somewhere. 

It's worth checking the EC2 instance's security group to ensure that the server can reach udp/123 outbound. However, remember to also check the Network ACL - it's stateless, so both the outbound udp/123 traffic and the return traffic need to be permitted.
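One quick way to confirm whether NTP packets are actually leaving the box and whether anything comes back is to watch the traffic directly - the interface name below is an assumption, adjust to suit:

sudo tcpdump -ni eth0 udp port 123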

ntpd was attempting to initiate a connection with the NTP servers but never got past this phase. After confirming the firewall rules, SGs, ACLs etc. (i.e. 123/udp outbound, and ensuring that session state was maintained) I decided to query one of the NTP servers directly with:

ntpdate -d meg.magnet.ie

This was successful, so it seemed something else was causing the problem.

In the end I realised it was failing because ntpd was binding to localhost only and was attempting to reach the external NTP servers from there (obviously failing, because they are unreachable from a loopback address!)
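You can spot this by checking which addresses ntpd is actually bound to, for example:

sudo ss -ulnp | grep ntpd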

Changing:

interface listen 127.0.0.1

to

interface listen 10.11.12.13

in /etc/ntp.conf and restarting ntpd resolved the issue:

     remote           refid      st t when poll reach   delay   offset  jitter
===============================================
 x.ns.gin.ntt.ne 249.224.99.213   2 u   27   64    1   11.206   52.795   0.000
 213.251.53.217  193.0.0.229      2 u   26   64    1   12.707   53.373   0.000
 ntp3.wirehive.n 195.66.241.3     2 u   25   64    1   14.334   53.125   0.000
 h37-220-20-12.h 82.69.97.89      2 u   24   64    1   19.211   53.350   0.000

Thursday, 18 October 2018

Setup Active / Passive Failover Cluster on ASA 5515X

Firstly, ensure both ASAs are identical, i.e. the same software version, hardware and license - otherwise the steps below will fail.

For this tutorial we will use a single interface (m0/0 for management), two (aggregated) interfaces for the failover link (and stateful replication), and finally three interfaces for our data.

ASA1> conf t
hostname ASA1

interface m0/0
management-only
nameif management
security-level 0
ip add 10.0.18.98 255.255.255.0 standby 10.0.18.99
no shutdown
route management 10.0.18.0 255.255.255.0 10.0.18.1

Set up users / SSH / AAA with:

enable password securepassword
crypto key generate rsa general-keys modulus 2048
username yourusername password yoursecurepassword privilege 15
username yourusername attributes
service-type admin
aaa authentication ssh console LOCAL
aaa authentication http console LOCAL
ssh version 2

Enable ICMP for inside networks:

icmp permit any inside

Enable management access with:

http server enable
http 10.0.18.0 255.255.255.0 management
ssh 10.0.18.0 255.255.255.0 management

Configure our data interfaces and their associated etherchannels:

ASA1> int po1
port-channel
vlan 1000
no shut

int gi0/0
channel-group 1 mode active
no shut

int gi0/1
channel-group 1 mode active
no shut

int gi0/2
channel-group 1 mode active
no shut

We'll be serving three client VLANs - so we'll set up the trunking:

int po1.100
description InsideNetwork
vlan 100
ip address 172.16.32.2 255.255.255.248 standby 172.16.32.3
nameif inside
security-level 100
no shut

int po1.101
description OutsidePrimary
vlan 101
ip address 123.123.123.123 255.255.255.240 standby 123.123.123.124
nameif outside
security-level 0
no shut

int po1.102
description OutsideBackup
vlan 102
ip address 192.168.10.1 255.255.255.0 standby 192.168.10.2
nameif dmz
security-level 0
no shut

and on our switch stack:

int po1
switchport mode trunk
switchport trunk native vlan 1000
switchport trunk allowed vlan 100,101,102
no shutdown

int range gi1/0/1-3
channel-protocol lacp
channel-group 1 mode active
! portfast on the trunk helps speed up convergence
spanning-tree portfast trunk
spanning-tree bpduguard enable

int po2
switchport mode trunk
switchport trunk native vlan 1000
switchport trunk allowed vlan 100,101,102
no shutdown

int range gi2/0/1-3
channel-protocol lacp
channel-group 1 mode active

Note: The channel group mode has to be active as the ASA does not support non-dynamic etherchannel, PAgP etc.
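Once both sides are configured, a quick way to confirm from the switch stack that the bundles have actually formed with LACP is:

show etherchannel summary
show lacp neighbor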

We'll now configure the failover link - for this we'll add redundancy via an etherchannel again:

ASA1> int po2
no shut

int gi0/4
channel-group 2 mode active
no shut

int gi0/5
channel-group 2 mode active
no shut

and then on the switch:

int po3
description ASA-Master-Failover
switchport mode access
switchport access vlan 300
no shutdown

int po4
description ASA-Master-Backup
switchport mode access
switchport access vlan 300
no shutdown

int range gi1/0/23,gi2/0/23
channel-group 3 mode active
channel-protocol lacp
no shutdown

int range gi1/0/24,gi2/0/24
channel-group 4 mode active
channel-protocol lacp
no shutdown

And now set the failover interface (po2 in our case):

failover lan interface FAIL-OVER po2
failover interface ip FAIL-OVER 192.168.254.1 255.255.255.240 standby 192.168.254.2
failover key strongpassword
failover lan unit primary

We'll also want to ensure that our subinterfaces (outside, inside and the DMZ) are monitored for link failures:

monitor-interface outside
monitor-interface inside
monitor-interface dmz

and finally enable the failover feature with:

failover
failover link FAIL-OVER

and save:

wri mem

Now on the slave ASA:

Define our failover interface:

int po2
no shut

int gi0/4
channel-group 2 mode active
no shut

int gi0/5
channel-group 2 mode active
no shut

failover lan interface FAIL-OVER po2
failover interface ip FAIL-OVER 192.168.254.1 255.255.255.240 standby 192.168.254.2
failover key strongpassword
failover lan unit secondary
failover

And then to confirm (on either unit):

show failover

If you need to execute commands on the slave you can issue:

failover exec standby show int ip br

or alternatively the current master:

failover exec active show int ip br

Uploading and booting from ROMMON Mode on ASA 5505/5510/5515/5540

Firstly, power down the ASA. We'll now need to get into ROMMON mode - hook up to the console and make sure that you also have an ethernet cable / machine plugged into the management port. Proceed by powering on the ASA - you should see a message stating:

'Use BREAK or ESC to interrupt boot.'

Hit ESC - at this point you should be in ROMMON mode. From the prompt, enter the following to configure the IP settings so we can copy the image over the ethernet port (this can also be done over serial, but it can take a long time):

ADDRESS=192.168.10.100
SERVER=192.168.10.254
IMAGE=asa992-smp-k8.bin
PORT=Management0/0
RETRY=3

If the TFTP server is on another subnet add the following (otherwise it can be omitted):

GATEWAY=192.168.10.1

and finally run the following to initiate the TFTP copy:

tftp
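Keep in mind the image booted this way runs from RAM only. Once the ASA is up, you would typically copy the image to flash and point the boot variable at it - roughly as follows, reusing the TFTP server and image name from above:

copy tftp://192.168.10.254/asa992-smp-k8.bin disk0:
boot system disk0:/asa992-smp-k8.bin
wri mem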

Wednesday, 17 October 2018

ASA Cluster Setup

Clustering's main advantage on the ASA is the boost in throughput. It's typically employed in data centres and larger enterprise networks. However, it's important to note that this does come at a cost, because clustering limits the available feature set.

Clustering is now supported on the 5500 series (5512 and above) from IOS 9.2+. However you might need to upgrade your license (for free) in order to 'unlock' the functionality.

For this topology we will be using spanned etherchannel mode. Spanned etherchannels allow the ASA cluster to present a single IP address (the master node's IP) - with the exception of the management interface, which will operate as an individual interface on each ASA (this comes in very useful when troubleshooting).

We'll firstly configure this on our ASAs:

ASA1> cluster interface-mode spanned
ASA2> cluster interface-mode spanned

Note: A reboot may be required afterwards.

When configuring our management interface we will need to create an IP pool so that the cluster master node can allocate a management IP to each member:

ip local pool MGMT_POOL 10.0.18.97-10.0.18.99

and then configure our management interfaces on the master node:

Note: The IP pool we created earlier is used to allocate individual IPs to each management interface in the cluster, while the explicit IP defined in the interface configuration below is the main cluster IP (i.e. the shared one).

ASA1> interface m0/0
management-only
nameif management-pri
security-level 0
ip add 10.0.18.100 255.255.255.0 cluster-pool MGMT_POOL
no shutdown

http server enable
http 10.0.18.0 255.255.255.0 management-pri
ssh 10.0.18.0 255.255.255.0 management-pri

Next we will configure the data links (i.e. all of the traffic you wish to serve) - to do this we will set up a spanned etherchannel. This translates to an etherchannel spanning all of the ASAs (i.e. all ASAs are bundled together in a single channel group). We'll use two physical ports on each ASA for this example:

ASA1> int po10
port-channel span-cluster
vlan 1000
no shut

int gi0/1
channel-group 10 mode active
no shut

int gi0/2
channel-group 10 mode active
no shut

We'll be serving three client VLANs - so we'll set up the trunking:

int po10.100
description InsideNetwork
vlan 100
nameif inside
security-level 100
no shut

int po10.101
description OutsidePrimary
vlan 101
nameif outside1
security-level 0
no shut

int po10.102
description OutsideBackup
vlan 102
nameif outside2
security-level 0
no shut

Note: if you are connecting the etherchannel to a vPC (Cisco Nexus technology) or a VSS (Cisco Catalyst 6500/6800 technology) you'd need to amend as follows:

ASA1> int gi0/1
channel-group 10 mode active vss-id 1
port-channel span-cluster vss-load-balance

int gi0/2
channel-group 10 mode active vss-id 2
port-channel span-cluster vss-load-balance

Note: We don't need to apply this configuration on the slave ASA, as the settings will automatically propagate once the CCL is established.

The backend switch stack was a 3650X - for completeness I have included the other side of the spanned etherchannels mentioned above:

SWITCH-STACK> do show run ...

interface Port-channel10
 switchport trunk native vlan 1000
 switchport trunk allowed vlan 100-102
 switchport mode trunk

interface GigabitEthernet1/0/1
 switchport trunk native vlan 1000
 switchport trunk allowed vlan 100-102
 switchport mode trunk
 switchport nonegotiate
 storm-control broadcast level 10.00
 storm-control action trap
 no cdp enable
 channel-protocol lacp
 channel-group 10 mode active

interface GigabitEthernet1/0/2
 switchport trunk native vlan 1000
 switchport trunk allowed vlan 100-102
 switchport mode trunk
 switchport nonegotiate
 storm-control broadcast level 10.00
 storm-control action trap
 no cdp enable
 channel-protocol lacp
 channel-group 10 mode active

interface GigabitEthernet2/0/1
 switchport trunk native vlan 1000
 switchport trunk allowed vlan 100-102
 switchport mode trunk
 switchport nonegotiate
 storm-control broadcast level 10.00
 storm-control action trap
 no cdp enable
 channel-protocol lacp
 channel-group 10 mode active

interface GigabitEthernet2/0/2
 switchport trunk native vlan 1000
 switchport trunk allowed vlan 100-102
 switchport mode trunk
 switchport nonegotiate
 storm-control broadcast level 10.00
 storm-control action trap
 no cdp enable
 channel-protocol lacp
 channel-group 10 mode active

We'll now configure the cluster control links - these are set up as a *device local* etherchannel (i.e. ASA1 --> SW1,SW2 over po11, and ASA2 --> SW1,SW2 over po12).

As per Cisco's guidance we should try our best to ensure that the CCLs (Cluster Control Links) can handle the same throughput as the data links. By doing this we can ensure failover can happen quickly during congestion.

ASA1> int gi0/4
description Cluster Control Link
channel-group 11 mode active
no shut

int gi0/5
description Cluster Control Link
channel-group 11 mode active
no shut

int po11
description Cluster Control Link
no shut

Again for completeness I have included the backend switch configuration:

int range gi1/0/23,gi2/0/23
desc ASA Primary CCL
channel-protocol lacp
channel-group 11 mode active

int po11
desc ASA Primary CCL
switchport mode access
switchport access vlan 300
no shut

int range gi1/0/24,gi2/0/24
desc ASA Backup CCL
channel-protocol lacp
channel-group 12 mode active

int po12
desc ASA Backup CCL
switchport mode access
switchport access vlan 300
no shut

We're now in a position to enable the cluster - we should do this on our primary ASA first:

cluster group HLXCluster
local-unit ASA-Primary
cluster-interface po11 ip 192.168.120.1 255.255.255.252
priority 1
key str0ngp@55word!
enable

Note: The cluster member with the lowest 'priority' will become the master.

To confirm run: sh cluster info

and then the secondary:

cluster group HLXCluster
local-unit ASA-Secondary
cluster-interface po11 ip 192.168.120.2 255.255.255.252
priority 100
key str0ngp@55word!
enable

and finally to confirm run (again):

sh cluster info

Sources

Chapter: Configuring a Cluster of ASAs

Wednesday, 10 October 2018

Quickly identify the character encoding of a file in the shell

Using the file command as follows will allow you to identify what character encoding a specific file has. This came in handy when I was reading a file from Python, as by default it treated the file as ASCII encoded.

bash> file -i test.txt
test.txt: text/plain; charset=utf-16le
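If you then need the file in UTF-8 (e.g. to keep the Python side simple), iconv can convert it - the output filename here is just illustrative:

iconv -f utf-16le -t utf-8 test.txt -o test-utf8.txt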