Wednesday 27 December 2017

Hiera and how it works within Puppet

Hiera allows us to override module settings with our own data in Puppet - in this example I will be tweaking some of the default settings from the saz/ssh module.

Let's start by installing Hiera:

puppet module install puppet/hiera

Hiera makes use of hierarchies - for example servers in a specific location might need a particular DNS server, while all servers might require a specific SSH configuration. These settings are defined within the hiera.yaml file:

cat /etc/puppetlabs/code/environments/production/hiera.yaml

---
version: 5
defaults:
  # The default value for "datadir" is "data" under the same directory as the hiera.yaml
  # file (this file)
  # When specifying a datadir, make sure the directory exists.
  # See https://docs.puppet.com/puppet/latest/environments.html for further details on environments.
  # datadir: data
  # data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "nodes/%{::trusted.certname}.yaml"
  - name: "Other YAML hierarchy levels"
    paths:
      - "common.yaml"

We're going to modify the hierarchy a little - so let's back it up first:

cp /etc/puppetlabs/code/environments/production/hiera.yaml /etc/puppetlabs/code/environments/production/hiera.yaml.bak

and replace it with:

---
version: 5
defaults:
  # The default value for "datadir" is "data" under the same directory as the hiera.yaml
  # file (this file)
  # When specifying a datadir, make sure the directory exists.
  # See https://docs.puppet.com/puppet/latest/environments.html for further details on environments.
  # datadir: data
  # data_hash: yaml_data
hierarchy:
  - name: "Per-Node"
    path: "nodes/%{::trusted.certname}.yaml"
  - name: "Operating System"
    path: "os/%{osfamily}.yaml"
  - name: "Defaults"
    paths:
      - "common.yaml"

We now have the ability to set OS-specific settings - for example some (older) operating systems might not support particular cipher suites.

Let's run the following on our client to identify what Puppet classifies it as:

facter | grep family

  family => "RedHat",

So let's create the relevant structure (making sure the os data directory exists first):

mkdir -p /etc/puppetlabs/code/environments/production/data/os
touch /etc/puppetlabs/code/environments/production/data/os/RedHat.yaml
touch /etc/puppetlabs/code/environments/production/data/os/Debian.yaml
touch /etc/puppetlabs/code/environments/production/data/common.yaml

We'll proceed by installing the saz/ssh module:

puppet module install saz/ssh

In this example we will concentrate on hardening the SSH server:

cat <<EOT > /etc/puppetlabs/code/environments/production/data/common.yaml
---
ssh::storeconfigs_enabled: true

ssh::server_options:
    Protocol: '2'
    ListenAddress:
        - '127.0.0.1'
        - '%{::hostname}'
    PasswordAuthentication: 'no'
    SyslogFacility: 'AUTHPRIV'
    HostbasedAuthentication: 'no'
    PubkeyAuthentication: 'yes'
    UsePAM: 'yes'
    X11Forwarding: 'no'
    ClientAliveInterval: '300'
    ClientAliveCountMax: '0'
    IgnoreRhosts: 'yes'
    PermitEmptyPasswords: 'no'
    StrictModes: 'yes'
    AllowTcpForwarding: 'no'
 
EOT
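
With the Operating System level in our hierarchy we can now layer OS-specific overrides on top of these defaults. As a sketch - the Ciphers value below is purely illustrative (check what your fleet actually supports), and the lookup_options entry asks Hiera to deep-merge ssh::server_options across hierarchy levels - RedHat.yaml might contain:

cat <<EOT > /etc/puppetlabs/code/environments/production/data/os/RedHat.yaml
---
lookup_options:
  ssh::server_options:
    merge: deep

ssh::server_options:
    Ciphers: 'aes256-ctr,aes192-ctr,aes128-ctr'
EOT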

We can check / test the values with:

puppet lookup ssh::server_options --merge deep --environment production --explain --node <node-name>

Finally restart the puppet server:

sudo service puppetserver restart

and poll the server from the client:

puppet agent -t
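
Once the agent run completes we can confirm the hardened settings are live by asking sshd for its effective configuration (sshd -T requires root):

sudo sshd -T | grep -iE 'passwordauthentication|allowtcpforwarding'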


Creating files from templates with Puppet

To utilise a template when creating new files we can add something like the following to /etc/puppetlabs/code/environments/production/manifests/site.pp:

 file { '/etc/issue':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('<module-name>/issue.erb'),
  }

The source / content must be stored within a Puppet module - so in the case of Saz's SSH module we would place the template in:

/etc/puppetlabs/code/environments/production/modules/ssh/templates

touch /etc/puppetlabs/code/environments/production/modules/ssh/templates/issue.erb
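
A minimal issue.erb might render a fact into the banner - facts are exposed to ERB templates as instance variables, and the wording here is just an example:

cat <<'EOT' > /etc/puppetlabs/code/environments/production/modules/ssh/templates/issue.erb
Authorised access only - connected to <%= @fqdn %>
EOT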

and the site.pp file would look something like:

 file { '/etc/issue':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0644',
    content => template('ssh/issue.erb'),
  }

Reload the puppet server:

sudo service puppetserver reload

and pull down the configuration on the client:

puppet agent -t
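
We can then verify the rendered banner on the client:

cat /etc/issue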

Friday 22 December 2017

Changing regional settings (locale, time zone, keyboard mappings) in CentOS 7 / RHEL

Quite often when deploying new instances of CentOS, regional settings such as the time zone are taken care of by the OS installer, and more often than not there is no need to change them afterwards.

However, this post outlines the steps to take if a server has been moved geographically or was simply not configured correctly in the first place!

We'll start by changing the time zone - this is pretty straightforward and you can find the time zone definitions available to the system in:

ls -l /usr/share/zoneinfo/

In this case we'll use GB (Great Britain) - by creating a symbolic link:

sudo rm /etc/localtime

sudo ln -s /usr/share/zoneinfo/GB /etc/localtime
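
We can confirm the change with:

timedatectl

(the 'Time zone' line should now report GB.)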

We'll also ensure NTP is installed and running:

sudo yum install ntp && sudo service ntpd start

and that it starts at boot:

sudo systemctl enable ntpd (or on older init systems: sudo chkconfig ntpd on)
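
Once ntpd is running we can check that it is synchronising with its peers:

ntpq -p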

We'll also want to change the system locale - we can view the current one with:

locale

or 

cat /etc/locale.conf

and change it by firstly identifying locales available to us:

localectl list-locales

and then set it with:

localectl set-locale LANG=en_IE.utf8

or manually change locale.conf:

echo "LANG=en_IE.utf8" > /etc/locale.conf

and confirm with:

localectl status

Finally we need to change the key mappings - to view the available keymaps issue:

localectl list-keymaps

and then set with:

localectl set-keymap ie-UnicodeExpert



Wednesday 20 December 2017

vi(m) Cheat Sheet

The following is a list of common commands that I will gradually compile for working with vi / vim.

Find and Replace (Current Line)

:s/csharp/java/

Find and Replace (All Lines)

:%s/csharp/java/g
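
A variant that prompts before each substitution is also handy:

Find and Replace (All Lines, with confirmation)

:%s/csharp/java/gc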


Tuesday 19 December 2017

Adding a new disk with LVM

Identify the new disk with:

lsblk

and add the disk as a physical volume:

pvcreate /dev/sdb

Verify it with:

pvdisplay

Now create a new volume group with:

vgcreate myvg /dev/sdb

and then a new logical volume (ensuring all free space is allocated to it):

lvcreate -n mylv -l 100%FREE myvg

and verify with:

lvdisplay

and finally create a new filesystem on the volume:

mkfs.xfs /dev/myvg/mylv
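
To start using the volume we can then mount it - the mount point here is just an example:

mkdir -p /mnt/data
mount /dev/myvg/mylv /mnt/data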

Wednesday 13 December 2017

Using pip with a Python virtual environment (venv)

A venv is a way of isolating a Python project's environment from that of the host. I only came across them because PyCharm now creates one by default for new projects.

To install additional modules using pip we must first enter the virtual environment - your project should look something like:

├── bin
│   ├── activate
│   ├── activate.csh
│   ├── activate.fish
│   ├── activate_this.py
│   ├── easy_install
│   ├── easy_install-3.6
│   ├── pip
│   ├── pip3
│   ├── pip3.6
│   ├── python -> python3.6
│   ├── python3 -> python3.6
│   ├── python3.6
│   ├── python-config
│   ├── watchmedo
│   └── wheel
├── include
│   └── python3.6m -> /usr/include/python3.6m
├── lib
│   └── python3.6
├── lib64 -> lib
└── pip-selfcheck.json

In order to enter the virtual environment cd to the project directory e.g.:

cd /path/to/your/project

and then issue the following:

source venv/bin/activate

we can then use pip as usual to install modules:

pip install watchdog
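
and when we're finished working in the project, leave the virtual environment with:

deactivate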

vSphere Replication 6.5 Bug: 'Not Active' Status

This happened to me when setting up a brand new vSphere lab with vSphere 6.5 and the vSphere Replication Appliance 6.5.1.

After setting up a new replicated VM I was presented with the 'Not Active' status - although there was no information presented in the tooltip.

So to dig a little deeper we can use the CLI to query the replicated VM's status - but first we'll need to obtain the VM ID:

vim-cmd vmsvc/getallvms

and then query the state with:

vim-cmd hbrsvc/vmreplica.getState <id>

Retrieve VM running replication state:
        The VM is configured for replication. Current replication state: Group: CGID-1234567-9f6e-4f09-8487-1234567890 (generation=1234567890)
        Group State: full sync (0% done: checksummed 0 bytes of 1.0 TB, transferred 0 bytes of 0 bytes)

So it looks like it's at least attempting to perform the replication - however it is stuck at 0% - so now delving into the logs:

cat /var/log/vmkernel.log | grep Hbr

2017-12-13T10:12:18.983Z cpu21:17841592)WARNING: Hbr: 4573: Failed to establish connection to [10.11.12.13]:10000(groupID=CGID-123456-9f6e-4f09-8487-123456): Timeout
2017-12-13T10:12:45.102Z cpu18:17806591)WARNING: Hbr: 549: Connection failed to 10.11.12.13 (groupID=CGID-123456-9f6e-4f09-8487-123456): Timeout

It looks like the ESXi host is failing to connect to 10.11.12.13 (the vSphere Replication Appliance in my case) - so we can double check this with:

cat /dev/zero | nc -v 10.11.12.13 10000

(Fails)

However if we attempt to ping it:

ping 10.11.12.13

we get a response - so it looks like it's a firewall issue.

I then attempted to connect to the replication appliance from another server:

cat /dev/zero | nc -v 10.11.12.13 10000

Ncat: Version 7.60 ( https://nmap.org/ncat )
Ncat: Connected to 10.11.12.13:10000.

So it looks like the firewall on this specific host is blocking outbound connections on port 10000.

My suspicions were confirmed when I reviewed the firewall rules from within vCenter on the Security Profile tab of the ESXi host:



Usually the relevant firewall rules are created automatically - however this time, for whatever reason, they have not been - so we'll need to proceed by creating a custom firewall rule (which unfortunately is quite cumbersome...):

SSH into the problematic ESXi host and create a new firewall config with:

touch /etc/vmware/firewall/replication.xml

and set the relevant write permissions:

chmod 644 /etc/vmware/firewall/replication.xml
chmod +t /etc/vmware/firewall/replication.xml

vi /etc/vmware/firewall/replication.xml

and add the following:

<!-- Firewall configuration information for vSphere Replication -->
<ConfigRoot>
  <service>
    <id>vrepl</id>
    <rule id='0000'>
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>
        <begin>10000</begin>
        <end>10010</end>
      </port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>

Revert the permissions with:

chmod 444 /etc/vmware/firewall/replication.xml

and refresh the firewall rules:

esxcli network firewall refresh

and check it's there with:

esxcli network firewall ruleset list

(Make sure it's set to 'enabled' - if not you can enable it via the vSphere GUI: ESXi Host >> Configuration >> Security Profile >> Edit Firewall Settings.)

and the rules with:

esxcli network firewall ruleset rule list | grep vrepl

Then re-check connectivity with:

cat /dev/zero | nc -v 10.11.12.13 10000
Connection to 10.11.12.13 10000 port [tcp/*] succeeded!

Looks good!

After reviewing the vSphere Replication monitor everything had started syncing again.


Sources:

https://kb.vmware.com/s/article/2008226
https://kb.vmware.com/s/article/2059893




Thursday 7 December 2017

Using USB storage with ESXi / vSphere 6.0 / 6.5

In order to get USB drives working with ESXi (which is not officially supported) we'll need to ensure the USB arbitrator service has been stopped (this will unfortunately prevent you from using USB pass-through devices in your VMs - however in a development environment I can afford to forgo this):

/etc/init.d/usbarbitrator stop

and ensure it is also disabled upon reboot:

chkconfig usbarbitrator off

We'll now plug the device in and identify the disk with dmesg or:

ls /dev/disks

Create a new GPT table on the disk:

partedUtil mklabel /dev/disks/mpx.vmhba37\:C0\:T0\:L0 gpt

* Note: Some disks will come up as 'naa...' as well *

We now need to identify the start / end sectors:

partedUtil getptbl /dev/disks/mpx.vmhba37\:C0\:T0\:L0

>> gpt
>> 38913 255 63 625142448

To work out the end sector we do:

[quantity-of-cylinders] * [quantity-of-heads] * [quantity-of-sectors-per-track] - 1

so: 38913 * 255 * 63 - 1

which equals:

625137344
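
We can let the shell verify the arithmetic for us:

echo $((38913 * 255 * 63 - 1))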

The start sector is always 2048 (in earlier versions of VMFS - e.g. VMFS3 - this was 128), so we can now write the partition table:

partedUtil setptbl /dev/disks/mpx.vmhba37\:C0\:T0\:L0 gpt "1 2048 625137344 AA31E02A400F11DB9590000C2911D1B8 0"

and finally create the filesystem:

vmkfstools -C vmfs6 -S USB-Stick /dev/disks/mpx.vmhba37\:C0\:T0\:L0:1
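
If all went well the new datastore should now appear in the list of mounted filesystems:

esxcli storage filesystem list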

Wednesday 6 December 2017

Quickstart: Accessing an SQLite database from the command line

We'll firstly obtain the relevant packages:

sudo dnf install sqlite

or on Debian based distros:

sudo apt-get install sqlite3

Then open the database with:

sqlite3 /path/to/database.sqlite

To view the tables we should issue:

.tables

and to review the rows within them:

select * from <table-name>;

and to describe the table schema issue:

.schema <table-name>

to insert issue:

insert into <table-name> values('testing',123);

and to delete issue:

delete from <table-name> where <column-name> = 1;
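
and finally, to leave the sqlite3 shell issue:

.quit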