Saturday 31 December 2016

Using the 'nice' command to control a process's CPU utilization

The nice command can be used to control the CPU utilization of a specific process.

There are varying levels ranging from -20 (the most favourable scheduling priority - the process gets more CPU time) to 19 (the least favourable - the process yields CPU time to others).

For example you might want to ensure a large report does not consume all of the available CPU - so you could issue something like:

nice -n 19 /bin/bash /path/to/report.sh

or if we wanted to ensure that the process had priority over others:

nice -n -20 /bin/bash /path/to/report.sh
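We can also check the niceness of an already running process (and adjust it with renice) - a quick sketch, assuming the report is running with PID 1234:

ps -o pid,ni,comm -p 1234
renice -n 10 -p 1234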

Friday 23 December 2016

Vulnerability scanning with Google

Google can be a huge asset for both white-hat and black-hat hackers - with some carefully crafted queries we can help identify various security issues with a website.

I have outlined some of the common ones below.

Identifying publicly accessible directory listings:
site:website.com intitle:index.of

Identifying configuration files:
site:website.com ext:xml | ext:conf | ext:cnf | ext:reg | ext:inf | ext:rdp | ext:cfg | ext:txt | ext:ora | ext:ini

Database files exposed:
site:website.com ext:sql | ext:dbf | ext:mdb

Log files exposed:
site:website.com ext:log

Backups and old files:
site:website.com ext:bkf | ext:bkp | ext:bak | ext:old | ext:backup

Login pages:
site:website.com inurl:login

SQL errors:
site:website.com intext:"sql syntax near" | intext:"syntax error has occurred" | intext:"incorrect syntax near" | intext:"unexpected end of SQL command" | intext:"Warning: mysql_connect()" | intext:"Warning: mysql_query()" | intext:"Warning: pg_connect()"

Publicly exposed documents:
site:website.com ext:doc | ext:docx | ext:odt | ext:pdf | ext:rtf | ext:sxw | ext:psw | ext:ppt | ext:pptx | ext:pps | ext:csv

PHP configuration:
site:website.com ext:php intitle:phpinfo "published by the PHP Group"

Thursday 22 December 2016

Microsoft Exchange: Checking a specific mailbox for corruption

Using the 'New-MailboxRepairRequest' cmdlet we can quickly detect any corruption in a user mailbox.

I recommend using the '-DetectOnly' switch initially to identify what (if anything) has been corrupted e.g.:

New-MailboxRepairRequest -Mailbox user001 -CorruptionType SearchFolder,AggregateCounts,ProvisionedFolder,FolderView -DetectOnly

and then finally - to actually perform the repairs:

New-MailboxRepairRequest -Mailbox user001 -CorruptionType SearchFolder,AggregateCounts,ProvisionedFolder,FolderView

We can also check all mailboxes within a specific database with the '-Database' switch:

New-MailboxRepairRequest -Database MBDB01 -CorruptionType SearchFolder,AggregateCounts,ProvisionedFolder,FolderView -DetectOnly

If you are on Exchange 2013 / 2016 you can use the 'Get-MailboxRepairRequest' cmdlet to view the status of any repair requests:

Get-MailboxRepairRequest | FL

Otherwise (if you are on an older version) you will need to check the event log for the status - specifically event codes 10062 and 10048.

If corruption is detected in the database you will see something like the following:

Corruptions detected during online integrity check for request 1111111-2222-3333-4444-5555555555555
Mailbox:1111111-2222-3333-4444-5555555555555 (Joe Bloggs)
Database:Mailbox Database 01
Corruption Is Fixed FID Property Resolution
"Folder View", No, "1680-E3FFF4 (Inbox)", 0x00000001, "Delete the corrupted view"
"Folder View", No, "1680-E3FFF4 (Inbox)", 0x00000001, "Delete the corrupted view"

I recommend repeating the procedure until the requests come back clean.

Tuesday 20 December 2016

Get the total amount of memory used by a program running several threads

We should firstly get the pid of the parent process e.g.:

ps aux | grep mysqld

mysql     1842  1.1  7.2 2385476 748852 ?      Sl   Jun15.............

and then use ps to retrieve all of its threads and their resident memory:

ps -p 1842 -L -o rss

and use awk to add up all of the output and convert it to MB (or GB) e.g.:

ps -p 1842 -L -o rss | awk '{s+=$1/1024} END {print s}'

And to put it into a script we can do:

#!/bin/bash

application=$1

if [ -z "$1" ]; then
    echo "Please provide a process name in args!"
    exit 1
fi

# Grab the PID of the parent process (excluding wrapper processes such as mysqld_safe and the grep itself)
pid=$(ps aux | grep -v mysqld_safe | grep "$application" | grep -v grep | awk '{print $2}' | head -n1)

# Sum the RSS of every thread and convert from KB to MB
echo "The total resident memory consumed by this application is $(ps -p "$pid" -L -o rss | awk '{s+=$1/1024} END {print s}')MB"

Memory: Understanding the difference between VIRT, RES and SHR in top / htop

VIRT refers to the virtual size of a process - the total amount of address space it has mapped. For example this could include files on disk that have been mapped into it, shared libraries and other memory shared with other processes. Hence it is not reliable when attempting to work out the total amount of memory used by multiple processes - since some of it can be shared amongst them.

RES (Resident Memory) represents a more accurate picture of what the process is actually using in terms of physical RAM - it only counts pages that are currently resident in memory (anything swapped out or not yet loaded is excluded). As a result it is typically always smaller than the VIRT size, since not everything a process maps ends up in physical memory.

SHR represents how much of the resident memory is shared with (or shareable by) other processes - such as shared libraries.

We can get an overview and totals of what memory is mapped to a process with the 'pmap' utility - for example:

sudo pmap <pid>

or

sudo pmap <pid> | grep libc
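pmap's extended output also prints totals at the bottom - a quick sketch, reusing PID 1842 from the earlier example:

sudo pmap -x 1842 | tail -n 1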

Monday 19 December 2016

Quick Reference: umask examples

Like chmod - umask is used to set permissions - but in a slightly different way. The umask applies to files / folders that have not yet been created - while in contrast chmod is applied to files / folders that are already present on the filesystem.

umask stands for 'user file-creation mode mask' - which allows you to define the default set of permissions for a user or system-wide.

The normal user umask is typically set to 002 - which results in directories being created as 775 (everyone can read them but only the owner and group can write) and files as 664 - with the same effect.

The root user on the other hand is usually set to 022 - which instead results in permissions of 755 for folders and 644 for files - as above, but additionally preventing the group from writing to the files or folders.

You can convert the umask into chmod format by performing the following for directories:

777 - umask = chmod

777 - 022 = 755

and similarly for files:

666 - umask = chmod

666 - 002 = 664

You can view the umask for the current user in the terminal by simply issuing:

umask

The umask can be set from a number of locations - although there is a specific order that they are searched and as a result if you have conflicting values - the first one it detects will be applied.

You can configure the system-wide umask within: /etc/login.defs e.g.:

grep UMASK /etc/login.defs

UMASK                   077

This umask will be applied if there is not another umask defined for a user elsewhere e.g.:

cat /etc/profile

We can see the logic that sets the umask - and checks whether the user is root or not:

if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
    umask 002
else
    umask 022
fi

umask is evaluated with the following precedence:
the local user's profile, an entry in the user's GECOS field, an entry in /etc/default/login, an entry in /etc/login.defs (Source: https://linux.die.net/man/8/pam_umask)

Although if we wish to set an individual user's umask we can edit:

sudo vi ~/.bashrc

and (after verifying it doesn't already exist) add the following at the end of the file:

umask 022

Example Use Case

Let's say we have a script user that pulls configuration (using rsync or something similar) from one node to another - the configuration on the source host resides in /etc/myapp.

Now usually with a fairly static configuration you might issue something like:

chown -R root:mygroup /etc/myapp

* Where the script user is present in 'mygroup'

However the application on the server writes additional files that only the owner can read, and does not ensure that the 'mygroup' group has ownership of them - so when the script user polls a newly created file it is unable to read it.

So - in order to ensure that the 'mygroup' group has ownership and is able to read the newly created files we can issue the following:
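A minimal sketch (assuming the configuration lives in /etc/myapp and the application runs as a hypothetical 'appuser' account) would be to set the setgid bit on the directory so newly created files inherit the 'mygroup' group, and relax that account's umask so the group retains read access:

sudo chgrp -R mygroup /etc/myapp
sudo chmod g+s /etc/myapp

and in appuser's ~/.bashrc (or the service's environment file):

umask 027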


Friday 16 December 2016

Allowing a script (non-privileged) user to execute privileged commands on Linux

Quite often scripts that are run by non-privileged users (as they always should be where possible) will need to perform a privileged action such as reloading a service.

In order to do this as securely as possible we can employ sudo - telling sudo that the user can run command xyz as the 'root' user and nothing else.

To do this we need to edit the sudoers file (ideally with visudo, which validates the syntax before saving):

visudo

and add the following line

myusername ALL = (root) NOPASSWD: /usr/sbin/service myservice reload
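We can then confirm exactly what the user is permitted to run with:

sudo -l -U myusername

and the script itself would simply invoke the command via sudo e.g.:

sudo /usr/sbin/service myservice reload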

Thursday 15 December 2016

Preventing user from logging in via SSH although allowing SCP operations (scponly)

I have come across numerous scenarios where scripts and programs will require SCP to work properly - but do not require interactive SSH access.

By default on CentOS there is no shell that allows you to restrict SSH but allow SCP - so instead we have to install the 'scponly' shell from EPEL:

yum install scponly

Once installed it should be added to: /etc/shells

/bin/sh
...
/usr/bin/scponly
/bin/scponly

proceed by creating a group for it:

sudo groupadd scponly

Create directory you wish to serve e.g.:

/var/secure/uploads

and ensure the appropriate ownership information is applied (I only want the script to read the files):

sudo chown root:scponly /var/secure/uploads

and permissions e.g.:

sudo chmod 770 /var/secure/uploads

sudo chmod 640 /var/secure/uploads/*

and create a new user and ensure they are part of the 'scponly' group and the appropriate shell is assigned to them:

sudo useradd -m -d /home/script -s "/usr/bin/scponly" -c "script" -G scponly script
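We can then test the setup from another host (assuming the server is reachable as 'myserver' - substitute your own hostname) - the copy should succeed while an interactive login should be refused by the scponly shell:

scp testfile script@myserver:/var/secure/uploads/
ssh script@myserver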




Error: The file '/etc/nginx/ssl/example.pem' is mislabeled on your system

#!!!! The file '/etc/nginx/ssl/example.pem' is mislabeled on your system.

The above error was generated because the file (.pem / certificate) was placed in a location where its SELinux security context (label) does not match what the policy expects for that path.

In order to rectify the problem we should issue:

 restorecon -R -v /etc/nginx/ssl/example.pem
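We can also inspect the SELinux label on the file (before and after the restorecon) with:

ls -Z /etc/nginx/ssl/example.pem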

Wednesday 14 December 2016

Basics: Locking down a Linux system

(See baseline also)

Below is a quick must-do (or baseline if you will) list I have compiled that should be applied to any server running Linux:

1. Remove all unnecessary services - the smaller attack surface you have the better! If you can help it try to install 'minimal' or 'core' versions of the OS - both CentOS and Debian provide them!

2. Harden SSH configuration in /etc/ssh/sshd_config:

- Disable root logins
PermitRootLogin no

- Disable plain-text logins (i.e. private key auth only)
PasswordAuthentication no

- Only listen on necessary interfaces
ListenAddress 10.11.12.13

- Ensure only SSH v2 is enabled (enabled by default on most modern versions of OpenSSH)
Protocol 2

- Ensure UsePrivilegeSeparation is set to 'sandbox' mode
UsePrivilegeSeparation sandbox

- Ensure logging is enabled
SyslogFacility AUTHPRIV
LogLevel INFO

- Set login grace time (can help mitigate DDoS attacks)
LoginGraceTime 1m

- Ensure strict modes checking is enabled (ensures SSH keys, home directories etc. have the correct permissions for users)
StrictModes yes

- Public key authentication:
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

- Ensure user can't login with empty passwords:
PermitEmptyPasswords no

- Enable / disable challenge response if needed (e.g. Google Authenticator, two-factor auth etc.):
ChallengeResponseAuthentication no

- Disable weak ciphers (source: https://calomel.org/openssh.html)
Ciphers aes256-ctr,aes192-ctr,aes128-ctr

- Unless absolutely needed - disable X11 forwarding:
X11Forwarding no

and in /etc/ssh/ssh_config:

- Unless absolutely needed - disable X11 forwarding:
ForwardAgent no
ForwardX11 no
ForwardX11Trusted no

3. Kernel tweaks

# Disable IPv6 if not needed
echo 1 > /proc/sys/net/ipv6/conf/<interface-name>/disable_ipv6
Reboot and verify - if all OK make the changes permanent with:
echo 'net.ipv6.conf.all.disable_ipv6 = 1' >> /etc/sysctl.conf

# Disable IP forwarding (i.e. if not using anything that requires routing e.g. vpn server etc.)
net.ipv4.ip_forward = 0

# log any malformed packets
net.ipv4.conf.all.log_martians = 1

# disable source routing
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0

# disable icmp re-directs / icmp dos mitigation
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0

# enable syn cookies for synflood protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1280

# enable reverse-path forwarding filter
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# disable tcp timestamping
net.ipv4.tcp_timestamps = 0

Add the above settings to /etc/sysctl.conf (so they persist across reboots) and then load them with:

sysctl -p

4. Ensure selinux is enabled:

sestatus

If it is not turned on we modify:

/etc/sysconfig/selinux

and ensure 'SELINUX=enforcing' and 'SELINUXTYPE=targeted' are set.

5. Setup IPTables properly

Ensure that a default-accept policy is in place on all of the default chains (so we don't lock ourselves out while flushing the existing rules):

sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT

and then flush the chains as well as any non-standard chains:

sudo iptables -t nat -F
sudo iptables -t mangle -F
sudo iptables -F
sudo iptables -X

Now we will start by allowing traffic to freely flow out and in from our loopback interface:

sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A OUTPUT -o lo -j ACCEPT

We will also want to ensure that already established connections can get back to the server - i.e. allow stateful connections.

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

We will likely also want to allow all outbound traffic from connections that are currently established:

sudo iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED -j ACCEPT

We will also likely wish to allow SSH access from a specific host network:

sudo iptables -A INPUT -p tcp -s 10.11.12.13/32 --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

and allow the incoming SSH connection outbound back to the SSH initiator:

sudo iptables -A OUTPUT -p tcp --sport 22 -m conntrack --ctstate ESTABLISHED -j ACCEPT

We might also wish to accept ICMP echo requests:

sudo iptables -A INPUT -p icmp -j ACCEPT

Allow outbound requests for DNS and HTTP/S:

sudo iptables -A OUTPUT -p udp --dport 53 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp --dport 53 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp --dport 80 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp --dport 443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

And also log and then drop any other packets:

sudo iptables -N LOGGING
sudo iptables -A INPUT -j LOGGING
sudo iptables -A FORWARD -j LOGGING
sudo iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables-Dropped: " --log-level 4
sudo iptables -A LOGGING -j DROP

Finally we should replace the default-accept policies we set earlier with default-drop:

sudo iptables -P INPUT DROP
sudo iptables -P FORWARD DROP
sudo iptables -P OUTPUT DROP

I also wrote an article here: http://blog.manton.im/2016/06/securing-iptables-in-centos-7-attack.html that might be of interest.

6. Ensure ntp / datetime information is configured correctly.

7. Ensure you have a brute-force detection / prevention system in place e.g. fail2ban or sshguard.

8. Utilize tcpwrappers
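For example (a minimal sketch - reusing the management host 10.11.12.13 from the SSH firewall rule above) we could allow SSH via tcpwrappers only from that host and deny everything else:

echo 'sshd: 10.11.12.13' >> /etc/hosts.allow
echo 'ALL: ALL' >> /etc/hosts.deny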

9. Setup syslog forwarding to a centralized server.

10. Harden other services - dependent on intended usage e.g. web server (apache, nginx etc.)

Tuesday 6 December 2016

OpenVPN AS: 'Address already in use'

Came across a rather annoying problem today with the OpenVPN Access Server when separating out admin and user access to the server.

After adding a secondary interface to the server (for admin access) and configuring OpenVPN AS to run the 'Admin Web UI' on it, the fatal mistake was removing the secondary interface - the admin interface defaulted back to my primary interface while the 'Client Web Server' IP address was set to '0.0.0.0' - hence I assume OpenVPN AS was not aware that the 'Client Web Server' port was also bound to the 'Admin Web Server' (usually it is fine to share the same port if the exact IP is specified in both sections.)



So the access-server GUI suddenly became unavailable. Tailing the openvpnas logs returned:

2016-12-06 15:41:24+0000 [-] OVPN 0 OUT: 'Tue Dec  6 15:41:24 2016 Exiting due to fatal error'
2016-12-06 15:41:24+0000 [-] OVPN 0 ERR: 'Tue Dec  6 15:41:24 2016 TCP/UDP: Socket bind failed on local address [AF_INET]172.30.0.194:443: Address already in use'
2016-12-06 15:41:24+0000 [-] Server Agent initialization status: {'errors': {u'openvpn_0': [('error', "process started and then immediately exited: ['Tue Dec  6 15:41:24 2016 TCP/UDP: Socket bind failed on local address [AF_INET]1.2.3.4:443: Address already in use']"), ('error', 'service failed to start or returned error status')]}, 'service_status': {'bridge': 'started', 'log': 'started', 'license': 'started', 'iptables_web': 'started', 'iptables_openvpn': 'started', 'ip6tables_openvpn': 'started', 'auth': 'started', 'ip6tables_live': 'started', 'client_query': 'started', 'api': 'started', 'daemon_pre': 'started', 'web': 'started', 'db_push': 'started', 'iptables_live': 'started', u'openvpn_0': 'off', 'crl': 'started', 'user': 'started'}}

Since I was unable to edit the configuration via the GUI I ended up examining the 'config.json' configuration file:

cat /usr/local/openvpn_as/etc/config.json | grep https.port

    "admin_ui.https.port": "443",
    "cs.https.port": "443",
 
Although changes to this file didn't seem to work and the error persisted.

So eventually I found out about the 'confdba' command - which let me view the current database configuration (/usr/local/openvpn_as/etc/db/):

/usr/local/openvpn_as/scripts/confdba -a

and then modify it (either the IP address keys - or in my case I simply changed the ports around so they would not conflict with each other):

/usr/local/openvpn_as/scripts/confdba -mk "cs.https.port" -v "446"

restart the service:

sudo service openvpnas restart

and voila - the web GUI was back up and running!

Friday 2 December 2016

Creating a swap file in CentOS 7

We should firstly examine the current swap space with:

swapon -s

Identify how much space we have available for swap with:

df -h

and use the fallocate command to quickly create our swap file e.g.:

sudo fallocate -l 128m /swpfile

Set the relevant permissions on the swap file (we don't want anyone else except root to have access to the swap file!):

sudo chmod 600 /swpfile

And then instruct the system to create the swap space with:

sudo mkswap /swpfile

and enable the swap space with:

sudo swapon /swpfile

and then confirm with:

swapon -s

and

free -m
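If we also want the swap file to be activated automatically at boot we can add an entry to /etc/fstab e.g.:

echo '/swpfile swap swap defaults 0 0' | sudo tee -a /etc/fstab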

Checking whether selinux is available / enabled

The easiest way to check the current status of selinux is to either issue:

sestatus

or

cat /etc/sysconfig/selinux

If neither work we should check the kernel name / version with:

uname -a

and confirm whether the kernel supports SELinux (i.e. if it's running a version above 2.6 it should support SELinux)

If you get a 'command not found' error we can try to install:

yum install setools policycoreutils selinux-policy

and issue the following again:

sestatus

You should now (hopefully) be able to enable SELinux by ensuring 'SELINUX=enforcing' and 'SELINUXTYPE=targeted' are present in:

/etc/sysconfig/selinux

Manually creating the .ssh directory

In some cases I have found that the .ssh directory is not automatically present and hence requires us to create it manually - although it is important the permissions are set appropriately, otherwise OpenSSH will reject the 'authorized_keys' file when a user attempts to login (assuming that 'StrictModes' is enabled within sshd_config.)

So to create the file structure manually we should issue:

su username

mkdir ~/.ssh
chmod 700 ~/.ssh

touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
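and then append the user's public key to it e.g.:

cat /path/to/id_rsa.pub >> ~/.ssh/authorized_keys

(or run ssh-copy-id from the client if password authentication is still enabled.)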

Wednesday 30 November 2016

Setting up mutt notes

Mutt will by default look up the $MAIL variable in order to identify where the user mailbox is located e.g.:

echo $MAIL
/var/mail/username

If for some reason this is not set we can issue:

export MAIL=/var/mail/username

and to make it permanent:

echo 'export MAIL=/var/mail/username' >> ~/.bashrc

On first launch, if your mail directory does not exist, mutt will ask you whether you would like it to create a new mail directory.

Sometimes if after first launch the mailbox (or its folder) is deleted you might get the following error message:

/var/mail/username: No such file or directory

To re-create the mailbox we should generate a test mail to ourselves e.g.:

echo Test Email | mail $USER

and verify again:

mutt

Tuesday 29 November 2016

Mount point persistence with fstab

We should firstly identify the block device with dmesg:

dmesg | grep sd

[611156.2271561] sd 2:0:3:0: [sdd] Attached SCSI disk

Create a new partition table:

sudo fdisk /dev/sdd
o (to create a new / empty DOS partition table.)
n (to create a new primary partition.)
w (to write changes.)

Lets create the filesystem with:

mkfs.ext3 /dev/sdd1

Now grab the UUID of the partition with:

blkid /dev/sdd1

and then perform a test mount of the partition e.g.:

mkdir -p /mount/mountpoint

mount -t auto /dev/sdd1 /mount/mountpoint

and if all goes well - add our fstab entry in e.g:

echo 'UUID=1c24164-e383-1093-22225-60ced4113179  /backups  ext3 defaults  0  0' >> /etc/fstab

and reboot the system to test.
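Alternatively we can verify the fstab entry without a reboot - unmount the test mount, make sure the mount point referenced in fstab exists, and ask mount to process fstab:

sudo umount /mount/mountpoint
sudo mount -a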

Friday 25 November 2016

Setting up highly available message queues with RabbitMQ and CentOS 7

Since RabbitMQ runs on Erlang we will need to install it from the EPEL repo (as well as a few other dependencies):

yum install epel-release erlang-R16B socat python-pip

Download and install rabbitmq:

cd /tmp
wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.6/rabbitmq-server-3.6.6-1.el7.noarch.rpm
rpm -i rabbitmq-server-3.6.6-1.el7.noarch.rpm

ensure that the service starts on boot and is started:

chkconfig rabbitmq-server on
sudo service rabbitmq-server start

Rinse and repeat on the second server.

Now before creating the cluster ensure that both servers have the rabbitmq-server service in a stopped state:

sudo service rabbitmq-server stop

There is a cookie file that needs to be consistent across all nodes (master, slaves etc.):

cat /var/lib/rabbitmq/.erlang.cookie

Copy this file from the master to all other nodes in the cluster.

And then turn on the rabbitmq-server service on all nodes:

sudo service rabbitmq-server start

Now reset the app on ALL slave nodes:

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app

and the following on the master node:

rabbitmqctl stop_app
rabbitmqctl reset

and create the cluster from the master node:

rabbitmqctl join_cluster rabbit@slave

* Substituting 'slave' with the hostname of the slave(s.)

NOTE: Make sure that you use a hostname / FQDN (and that each node can resolve each others) otherwise you might encounter problems when connecting the nodes.

Once the cluster has been created we can verify the status with:

rabbitmqctl cluster_status

We can then define a policy to define to provide HA for our queues:

rabbitmqctl start_app
rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

Ensure you have the pika (Python) library installed (this provides a way of interacting with AMQP):

pip install pika

Let's enable the HTTP management interface so we can easily view our exchanges, queues, users and so on:

rabbitmq-plugins enable rabbitmq_management

and restart the server with:

sudo service rabbitmq-server restart

and navigate to:

http://<rabitmqhost>:15672

By default the guest user is only usable from localhost (guest/guest) - although if you are working from a CLI you might need to access the web interface remotely and as a result will need to enable the guest account temporarily:

echo '[{rabbit, [{loopback_users, []}]}].' > /etc/rabbitmq/rabbitmq.config

Once logged in proceed to the 'Admin' tab >> 'Add a User' and ensure they can access the relevent virtual hosts.

We can revoke guest access from interfaces other than localhost by removing the line with:

sed -i '/loopback_users/d' /etc/rabbitmq/rabbitmq.config

NOTE: You might need to delete the

and once again restart the server:

sudo service rabbitmq-server restart

We can now test the mirrored queue with a simple python publish / consumer setup.

The publisher (node that instigates the messages) code should look something like:

import pika

credentials = pika.PlainCredentials('guest', 'guest')

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='rabbitmaster.domain.internal',port=5672,credentials=credentials))
channel = connection.channel()

channel.queue_declare(queue='hello')

channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()

We should now see a new queue (that we have declared) in the 'Queues' tab within the web interface.

We can also check the queue status across each node with:

sudo rabbitmqctl list_queues

If all goes to plan we should see that the queue length is consistent across all of our nodes.

Now for the consumer we can create something like the following:

import pika

credentials = pika.PlainCredentials('guest', 'guest')

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='rabbitmaster.domain.internal',port=5672,credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue='hello')
def callback(ch, method, properties, body):
   print(" [x] Received %r" % (body,))
channel.basic_consume(callback,
                     queue='hello',
                     no_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Now the last piece of the puzzle is to ensure that the network level has some kind of HA e.g. haproxy, nginx and so on.

* Partly based on / adapted from the article here: http://blog.flux7.com/blogs/tutorials/how-to-creating-highly-available-message-queues-using-rabbitmq

Tuesday 22 November 2016

Troubleshooting certificate enrollment in active directory

Start by verifying the currently published CA(s) with:

certutil -config - -ping

and also adsiedit:

CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=yourdomain,DC=internal

Confirm whether the CA is enterprise or standalone with:

certutil –cainfo

The CA type must be Enterprise otherwise MMC enrollment will not work.

We can also verify the permissions on the CA itself by going to the Certificate Authority snap-in:

CertSrv.msc

and right-hand clicking on the server node >> Security >> and ensuring the relevant users have the 'request' permission - which should typically be 'Authenticated Users' and that Domain Admins, Enterprise Admins and Administrators have the 'enroll' permission.

We can retrieve a list of certificate templates from the CA with:

certutil –template

Example output:

EFSRecovery: EFS Recovery Agent ACCESS DENIED
CodeSigning: Code Signing
CTLSigning: Trust List Signing
EnrollmentAgent: Enrollment Agent
EnrollmentAgentOffline: Exchange Enrollment Agent (Offline request)

Verify whether the relevant template you are trying to issue has 'Access Denied' appended to it - if so it is almost certainly a permissions issue on the certificate template - go to:

certtmpl.msc

and right-hand click on the certificate >> Security tab and verify the permissions.

Also within the Certificate Template MMC snap-in - check the Subject Name tab and ensure that 'Build from this Active Directory information' is selected if you are attempting to request the certificate from MMC - as the MMC snap-in does not support providing a common name with the request! (This caught me out!)

Last one: An obvious one - but ensure that the certificate template is actually issued:

CA Authority >> Right-hand click Certificate Templates >> New >> Certificate template to issue.

Manually (painfully) generating a server certificate for LDAPS on Server 2003.

This is a bit of an odd one - as this process can be automated - but if you like me - prefer to do this manually I have documented the steps (briefly) below.

Firstly add the CA role by going to 'Add and Remove Programs' from the control panel and selecting the 'Add/Remove Windows Components' and ensure that 'Certificate Services' is checked as well as ensuring that the 'CA Web Enrollment' feature is installed as well (click on the details button.)

Now lets create a certificate template for this purpose - so go to:

mmc.exe >> 'Add Snapins' >> Certificate Authority >> Right-hand click on the 'Certificate Templates' node and select 'Manage.' We will now duplicate an existing template (as required) - so right-hand click on 'Domain Controller Authentication' and hit 'Duplicate Template.' I then named my new template 'Domain Controller Authentication Manual' and in the 'Subject Name' tab ensured 'Obtain from Active Directory' is selected. In the 'Security' tab ensure that only the 'Domain Admins' user group has the enroll permission, in the 'Extensions' tab that 'Server Authentication' (OID: 1.3.6.1.5.5.7.3.1) is present, and finally in the 'Request Handling' tab ensure that 'Allow private key to be exported' is ticked.

Click apply / OK etc. and finally hit OK on the new template form to create the template.

Then on the CA authority snapin - right-hand click the 'Certificate Templates' node >> New >> 'Certificate Template to Issue' and select the relevant template. NOTE: In my case the template wasn't present so I added the template via CLI:

certutil -SetCAtemplates +DomainControllerAuthenticationManual

Restart certificate services.

Now we need to ensure that the FQDN of the server is within the trusted sites zone in IE e.g.:

myca.domain.internal

(If you do not add the FQDN to the trusted zone you will get an 'access denied' message when attempting to generate the certificate - which can be quite misleading!)

and then browse to:

http://myca.domain.internal/certsrv

and enter your domain credentials.

Then from the task list select 'Request a certificate' >> 'Advanced certificate request' >> 'Create and submit a request to this CA'. At this point you should be prompted to install an active-x control - ensure this is installed before proceeding.

Select the 'Domain Controller Authentication Manual' template and ensure that the subject name matches that of the DC you wish to set up LDAPS for, and also ensure 'Store certificate in the local computer certificate store' is ticked - finally hit submit to import the certificate into your computer's certificate store.

We should also ensure that the "HTTP SSL" service has started and will be started automatically at boot!

and then test our ldaps connection:

ldp.exe (launched from cmd.exe or the Run dialog)

and connect to the server using the DC's FQDN (the IP will NOT work) e.g.

mydc.mydomain.internal

For more information about troubleshooting ldap over SSL see below:

https://support.microsoft.com/en-gb/kb/938703

Friday 18 November 2016

Troubleshooting netlogon problems with Windows 7/10/2008/2012

Firstly verify any DNS servers:

ipconfig /all

Sample output: 10.1.1.1

and ensure they are (all) responding with e.g.:

cmd
nslookup
server 10.1.1.1
google.com

if this fails, check with telnet e.g. (assuming the DNS server is listening on TCP):

cmd
telnet 10.1.1.1 53

and verify you get a response.

We can check if the netlogon service is able to communicate with our DNS server with:

nltest /query

we can also verify the last state of the secure channel created between the client and DC with:

nltest /sc_query:yourdomain.internal

(This will also inform you of which DC the channel was created with.)

We can also attempt to reset this with:

nltest /sc_reset:yourdomain.internal

or alternatively use sc_verify (this won't break the existing secure channel unless it's not established):

nltest /sc_verify:yourdomain.internal

If the issue is related to more than one client it could be due to loss of network connectivity or a DC related issue - to check the DC we can issue:

dcdiag /a

Turning on logging with UFW

If you are unfortunate enough to be working with Ubuntu you might have come across UFW - a wrapper for IPTables that aims to 'simplify' management of the firewall.

To enable logging in UFW you should firstly ensure it's not already turned on with:

sudo ufw status verbose | grep logging

and if not enabled issue:

sudo ufw logging on

We can also adjust the logging level with:

ufw logging [low | medium | high | full]

Low: Provides information on all dropped packets and packets that are setup to be logged.

Medium: Matches all low level events plus all Invalid packets and any new connections.

High: Matches all medium level events plus all packets with the exception of rate limiting.

Full: Logs everything.

The logs are typically located within:

/var/log/ufw.log

e.g. tail -f /var/log/ufw.log

Monday 14 November 2016

Email spoofing, SPF and P1/P2 headers

SMTP message headers comprise of two different headers types: P1 and P2.

The way I like to conceptualize it is by relating a P1 header to a network frame and a P2 header to an IP packet - the frame is forwarded via a network switch (which is unaware of any higher level PDUs encapsulated within the frame) - it is only when the frame reaches a layer 3 device that the IP packet is inspected and a decision is made.

By design SPF only checks the P1 header - not the P2 header. This presents a problem when a sender spoofs the sender address within the P2 header - while the sender in the P1 header could be completely legitimate.



The below diagram demonstrates an example of a spoofed email abusing the P2 header:



A logical approach to this would be to simply instruct your anti-spam solution to compare the relevant P1 and P2 headers and, if a mismatch is encountered, drop the email. However there are a few situations where this would cause problems - such as when a sender is sending a mail item on behalf of another sender (think mail group) or when an email is forwarded - in the event of the email bouncing the forwarder should receive the bounce notification rather than the original sender.

So instead we can pre-define IP addresses that are allowed to send on behalf of our domain:

In Exchange: In order to resolve this problem we can block inbound mail from our own domain by removing the 'ms-exch-smtp-accept-authoritative-domain-sender' permission (this blocks both the 'MAIL FROM' (P1) and 'FROM' (P2) fields) from our publicly accessible (anonymous) receive connector - however, this will cause problems if there are any senders (e.g. printers, faxes or bespoke applications) that send mail on behalf of the domain externally - so a separate receive connector (with the ms-exch-smtp-accept-authoritative-domain-sender permission) should be set up to cater for these devices.

So we should firstly block our sending domain with:

Set-SenderFilterConfig -BlockedDomains mydomain.com

and ensure internal mail can flow freely with:

Set-SenderFilterConfig -InternalMailEnabled $true

and to remove the 'ms-exch-smtp-accept-authoritative-domain-sender' permission we can issue:

Get-ReceiveConnector "Public Receive Connector" | Get-ADPermission -user "NT AUTHORITY\Anonymous Logon" | where {$_.ExtendedRights -like "ms-exch-smtp-accept-authoritative-domain-sender"} | Remove-ADPermission

and if needed - ensure that your receive connector for printers, faxes etc. can receive email from them:

Get-ReceiveConnector "Internal Receive Connector" | Add-ADPermission -user "NT AUTHORITY\Anonymous Logon" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Sender"

and finally restart Microsoft Exchange Transport Services.

Setting up client certificate authentication with Apple iPhones / iPads

Client certificates can come in very handy when you wish to expose internal applications that you wish to make publicly accessible to specific entities.

Fortunately most reverse proxies such as IIS, httpd, nginx and haproxy provide this functionality - although for this tutorial I will concentrate on nginx since the configuration is pretty straightforward and I (personally) tend to have fewer cross-platform problems when working with it.

* For this tutorial I am already assuming that you have your own server certificate (referred to as server.crt)

So let's firstly create the CA that we will use to issue our client certificates:

openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt

and then generate our client private key and CSR:

openssl req -out client.csr -new -newkey rsa:2048 -nodes -keyout client.key

and then sign our new client certificate with our CA:

openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt

Now we want to import the key pair into our iPhone / iPad - this can be performed by the Apple Configuration Utility or much more easily by simply sending an email to the device with the key pair attached.

However we must firstly create a .pfx package with both the private and public key in it - to do this we should issue:

openssl pkcs12 -inkey client.key -in client.crt -export -out client.pfx

and setup our nginx configuration:

server {
    listen        443;
    ssl on;
    server_name clientssl.example.com;

    ssl_certificate      /etc/nginx/certs/server.crt;
    ssl_certificate_key  /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://1.2.3.4:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

test the configuration with:

nginx -t

and if correct reload the server with:

sudo service nginx reload
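We can then test the setup from a client machine with curl (a quick sketch - assuming the key pair is in the current directory and clientssl.example.com resolves to the proxy):

curl --cert client.crt --key client.key https://clientssl.example.com/

A request without the client certificate should be rejected by nginx with a '400 No required SSL certificate was sent' error.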



Thursday 27 October 2016

Installing Foreman on CentOS 7 / Debian 8

Foreman provides a feature rich frontend for Puppet - that allows you to easily deploy, manage and monitor your puppet infrastructure.

Note: It is highly recommended that you use the official Puppet packages from the official Puppet repository when using it in conjunction with Foreman.

So - lets firstly add the Foreman repository with:

yum install epel-release http://yum.theforeman.org/releases/1.7/el7/x86_64/foreman-release.rpm

yum -y install foreman-installer

or for Debian 8:

echo "deb http://deb.theforeman.org/ jessie 1.13" > /etc/apt/sources.list.d/foreman.list
echo "deb http://deb.theforeman.org/ plugins 1.13" >> /etc/apt/sources.list.d/foreman.list
apt-get -y install ca-certificates
wget -q https://deb.theforeman.org/pubkey.gpg -O- | apt-key add -

apt-get update && apt-get -y install foreman-installer

and then run the installer with:

foreman-installer

I got a few errors during the initial install:

"Error: Removing mount files: /etc/puppet/files does not exist"

and

'Error executing SQL; psql returned pid 1842 exit 1: 'ERROR:  invalid locale name: "en_US.utf8"'

In order to resolve the above problem we should generate en_US.utf8 locale - in Debian we run:

dpkg-reconfigure locales

and ensure 'en_US.utf8' is selected.

Uninstall foreman with:

sudo apt-get --purge remove foreman foreman-installer foreman-proxy
sudo rm -rf /var/lib/foreman /usr/share/foreman /usr/share/foreman-proxy/logs
sudo rm -R /etc/apache2/conf.d/foreman*

and attempt to reinstall:

foreman-installer

Review the logs at /var/log/foreman/install.log for any problems:

cat /var/log/foreman/install.log | grep ERROR

and launch the web app:

https://hostname

The default login details are supposedly: admin / changeme - but this didn't seem to be the case for me - so I ended up manually resetting the password in the console with:

foreman-rake permissions:reset

Initially we will want to import all of our puppet classes - this can be performed by going to:

Configure >> Classes and then click on 'Import classes from puppet.xyz.com'

Tuesday 25 October 2016

Locking down a Linux system with the help of Puppet

Puppet - as well as being a great deployment tool - is brilliant for ensuring that your system's configuration stays as it should be.

When working with Linux (well, any operating system really) I like to create a security baseline that acts as an applied standard across the organization.

I am going to base the core configuration on the following modules (while ensuring that the configuration is compatible with the vast majority of Linux distributions):

puppet module install hardening-ssh_hardening
puppet module install hardening-os_hardening
puppet module install puppetlabs-accounts
puppet module install puppetlabs-ntp
puppet module install puppetlabs-firewall
puppet module install saz-sudo

vi /etc/puppet/manifests/site.pp

import "/etc/puppet/modules/firewall/manifests/pre.pp"
import "/etc/puppet/modules/firewall/manifests/post.pp"

node default {

  ########### Replace firewalld with IPTables ############
  package { tcpdump: ensure => installed; }
  package { nano: ensure => installed; }
  package { wget: ensure => installed; }
  package { firewalld: ensure => absent; }

  service { 'firewalld':
    ensure     => stopped,
    enable     => false,
    hasstatus  => true,
  }

  service { 'iptables':
    ensure     => running,
    enable     => true,
    hasstatus  => true,
  }

  ########### Configure firewall ###########
  resources { "firewall":
    purge   => true
  }

  Firewall {
    before  => Class['fw::post'],
    require => Class['fw::pre'],
  }

  class { ['fw::pre', 'fw::post']: }
 
}

Note: We can generate the password hash with:

makepasswd --clearfrom=- --crypt-md5
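If makepasswd is not available (e.g. on CentOS) an alternative is openssl, which will prompt for the password and print an MD5-crypt hash:

openssl passwd -1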

We can now apply our host specific configuration:

import "/etc/puppet/manifests/nodes.pp"

and in our nodes.pp file we will define our settings for the individual hosts:

vi /etc/puppet/manifests/nodes.pp

node 'hostname.domain.com' {

  include privileges

  ########### Custom Firewall Rules ###########
  firewall { '100 Allow inbound access to tcp/8000':
  dport     => 8000,
  proto    => tcp,
  action   => accept,
  }
 
  firewall { '101 Allow inbound access to tcp/80':
  dport     => 80,
  proto    => tcp,
  action   => accept,
  }

  ########### Configure Groups ###########
  group { 'admins':
   ensure => 'present',
   gid    => '501',
        }
     
  ########### Configure NTP ###########
  class { 'ntp':
    servers => ['0.uk.pool.ntp.org','1.uk.pool.ntp.org','2.uk.pool.ntp.org','3.uk.pool.ntp.org']
  }
 
  ########### Configure OS Hardening ###########
  class { 'os_hardening':
    enable_ipv4_forwarding => true,
  }
 
  ########### Configure User Accounts ############
  accounts::user { 'youruser':
  ensure => "present",
  uid      => 500,
  shell    => '/bin/bash',
  password => '$11234567890987654.',
  locked   => false,
  groups   => [admins],
  }
 
  ########### SSH Hardening #############
  class { 'ssh_hardening::server': }

  ########### Configure SSH Keys #############
  ssh_authorized_key { 'user@domain.com':
  user => 'youruser',
  type => 'ssh-rsa',
  key  => '345kj345kl34j534k5j345k34j5kl345[...]',
}

}

We need to create a new module to handle sudo for us:

mkdir -p /etc/puppet/modules/privileges/manifests && cd /etc/puppet/modules/privileges/manifests

vi init.pp

class privileges {
   user { 'root':
     ensure   => present,
     password => '$1$54j534h5j345345',
     shell    => '/bin/bash',
     uid      => '0',
   }

   sudo::conf { 'admins':
     ensure  => present,
     content => '%admin ALL=(ALL) ALL',
   }

   sudo::conf { 'wheel':
     ensure  => present,
     content => '%wheel ALL=(ALL) ALL',
   }
}

And make sure our include statement is present in our nodes.pp file.

we should also create our firewall manifests:

vi /etc/puppet/modules/firewall/manifests/pre.pp

class fw::pre {

  Firewall {
    require => undef,
  }

  # basic in/out
  firewall { "000 accept all icmp":
    chain    => 'INPUT',
    proto    => 'icmp',
    action   => 'accept',
  }

  firewall { '001 accept all to lo interface':
    chain    => 'INPUT',
    proto    => 'all',
    iniface  => 'lo',
    action   => 'accept',
  }

  firewall { '006 Allow inbound SSH':
    dport     => 22,
    proto    => tcp,
    action   => accept,
  }

  firewall { '003 accept related established rules':
    chain    => 'INPUT',
    proto    => 'all',
    state    => ['RELATED', 'ESTABLISHED'],
    action   => 'accept',
  }

  firewall { '004 accept related established rules':
    chain    => 'OUTPUT',
    proto    => 'all',
    state    => ['RELATED', 'ESTABLISHED'],
    action   => 'accept',
  }

  firewall { '005 allow all outgoing traffic':
    chain    => 'OUTPUT',
    state    => ['NEW','RELATED','ESTABLISHED'],
    proto    => 'all',
    action   => 'accept',
  }

}

vi /etc/puppet/modules/firewall/manifests/post.pp

class fw::post {

  firewall { '900 log dropped input chain':
    chain      => 'INPUT',
    jump       => 'LOG',
    log_level  => '6',
    log_prefix => '[IPTABLES INPUT] dropped ',
    proto      => 'all',
    before     => undef,
  }

  firewall { '900 log dropped forward chain':
    chain      => 'FORWARD',
    jump       => 'LOG',
    log_level  => '6',
    log_prefix => '[IPTABLES FORWARD] dropped ',
    proto      => 'all',
    before     => undef,
  }

  firewall { '900 log dropped output chain':
    chain      => 'OUTPUT',
    jump       => 'LOG',
    log_level  => '6',
    log_prefix => '[IPTABLES OUTPUT] dropped ',
    proto      => 'all',
    before     => undef,
  }

  firewall { "910 deny all other input requests":
    chain      => 'INPUT',
    action     => 'drop',
    proto      => 'all',
    before     => undef,
  }

  firewall { "910 deny all other forward requests":
    chain      => 'FORWARD',
    action     => 'drop',
    proto      => 'all',
    before     => undef,
  }

  firewall { "910 deny all other output requests":
    chain      => 'OUTPUT',
    action     => 'drop',
    proto      => 'all',
    before     => undef,
  }

}

Ensure all of our manifests are valid:

puppet parser validate site.pp

Sources: https://docs.puppet.com/pe/latest/quick_start_sudo.html
https://www.linode.com/docs/applications/puppet/install-and-configure-puppet

Monday 24 October 2016

Quickstart: Installing and configuring puppet on CentOS 7 / RHEL

For the puppet master we will need a VM with at least 8GB of RAM, 80GB of disk and 2 vCPU.

The topology will comprise two nodes - MASTERNODE (the puppet server) and CLIENTNODE (the puppet client).

Firstly we should ensure that NTP is configured on both the client and server.

We'll now install the official Puppet repository:

sudo rpm -Uvh https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm
yum install puppetserver puppetdb puppetdb-termini
sudo systemctl enable puppetserver
sudo systemctl start puppetserver

We should then set our DNS name etc. for the puppet server - append / change the following in vi /etc/puppetlabs/puppet/puppet.conf:

[main]
certname = puppetmaster01.example.com
server = puppetmaster01.example.com
environment = production
runinterval = 1h
strict_variables = true

[master]
dns_alt_names = puppetmaster01,puppetdb,puppet,puppet.example.com
reports = puppetdb
storeconfigs_backend = puppetdb
storeconfigs = true
environment_timeout = unlimited

We will also need to ensure the PuppetDB service is started - although we'll firstly need to install / setup PostgreSQL before we proceed - follow the guidance here  - however stop just before the 'user creation' and instead see below:

sudo -u postgres sh
createuser -DRSP puppetdb
createdb -E UTF8 -O puppetdb puppetdb
exit

and ensure the pg_trgm extension is installed:

sudo -u postgres sh
psql puppetdb -c 'create extension pg_trgm'
exit

Restart postgres and ensure you can login:

sudo service postgresql restart
psql -h localhost puppetdb puppetdb
\q

And define the database connection details here:

vi /etc/puppetlabs/puppetdb/conf.d/database.ini

Replacing / adding the following directives:

[database]
classname = org.postgresql.Driver
subprotocol = postgresql
subname = //127.0.0.1:5432/puppetdb
username = puppetdb
password = <yourpassword>

Note: Also ensure that you are using PostgreSQL version >= 9.6 otherwise the puppetdb service will fail to start (the EPEL release is at present only on 9.2). Uninstall the existing postgres install and install the newer version with: yum install postgresql-96 postgresql-server-96 postgresql-contrib-96

Important: By default the puppet master will attempt to connect to PuppetDB via the hostname 'puppetdb' - however we can change this behaviour by defining the following on the puppet master:

vi /etc/puppetlabs/puppet/puppetdb.conf

and adding:

[main]
server_urls = https://puppetmaster01.example.com:8081

sudo service puppetdb start

Configure ssl support with:

sudo puppetdb ssl-setup

Now either use puppet to start and ensure that the db service runs on boot with:

sudo puppet resource service puppetdb ensure=running enable=true

or

sudo systemctl enable puppetdb
sudo systemctl start puppetdb

We will proceed by generating the server certificates:

export PATH=/opt/puppetlabs/bin:$PATH
sudo puppet master --verbose --no-daemonize

Once you see 'Notice: Starting Puppet master version 5.2.0' press Ctrl + C to escape.

We can review certificates that have been created by issuing:

sudo puppet cert list --all

and start the puppet master:

sudo service puppet start

We'll also need to add an exception in for TCP/8140 and TCP/8081 (PuppetDB) (for clients to communicate with the puppet master):

sudo iptables -I INPUT 3 -i eth0 -p tcp -m state  --state NEW,ESTABLISHED -m tcp --dport 8140 -j ACCEPT
sudo iptables -I INPUT 3 -i eth0 -p tcp -m state  --state NEW,ESTABLISHED -m tcp --dport 8081 -j ACCEPT

sudo iptables-save > /etc/sysconfig/iptables

Puppet Client Installation


we should then install our client:

sudo rpm -Uvh https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm
sudo yum install puppet
sudo systemctl enable puppet

edit puppet.conf:

[main]
certname = agent01.example.com
server = puppetmaster01.example.com
environment = production
runinterval = 1h

and restart the puppet client:

systemctl restart puppet

Set path details:

export PATH=/opt/puppetlabs/bin:$PATH

The puppet server (master) utilizes PKI to ensure authenticity between itself and the client - so we must firstly generate a certificate signing request from the client:

puppet agent --enable
puppet agent -t

At this point I got an error:

Error: Could not request certificate: Error 400 on SERVER: The environment must be purely alphanumeric, not 'puppet-ca'
Exiting; failed to retrieve certificate and waitforcert is disabled

This turned out due to a version mismatch between the puppet client and server.

Note: The Puppet server version must always be >= that of the puppet client - I actually ended up removing the official puppet repo from the client and using the EPEL repo instead.

and then attempt to enable puppet and generate our certificate:

puppet agent --enable
puppet agent -t

At this point I got the following error:

Exiting; no certificate found and waitforcert is disabled.

This is because the generated certificate has not yet been approved by the puppet master!

In order to approve the certificate - on the puppet master issue:

puppet cert list

and then sign it by issuing:

puppet cert sign hostname.domain.com

We can then view the signed certificate with:

puppet cert list --all

Now head back to the client and attempt to initialise the puppet agent again:

puppet agent -t

However again - I got the following message:

Could not retrieve catalog from remote server: Error 500 on SERVER

Note: Using the following command allows you to run the puppet server in the foreground and provides a lot of help when debugging the above errors:

puppet master --no-daemonize --debug

We should (if everything goes to plan) see something like:

Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for puppetmaster.yourdomain.com
Info: Applying configuration version '1234567890'
Info: Creating state file /var/lib/puppet/state/state.yaml
Notice: Finished catalog run in 0.02 seconds

We want to install a few modules firstly:

puppet module install ghoneycutt/ssh

We will extend our modules:

vi /etc/puppet/modules/firewall/manifests/ssh.pp

class { 'ssh':
  permit_root_login => 'no',
}

Now lets create our manifest:

vi /etc/puppetlabs/code/environments/production/manifests/site.pp

import "/opt/puppetlabs/puppet/modules/firewall/manifests/*.pp"

node default {

  package { tcpdump: ensure => installed; }
  package { nano: ensure => installed; }
  package { iptables-services: ensure => installed; }
  package { firewalld: ensure => absent; }

  service { 'firewalld':
    ensure     => stopped,
    enable     => false,
    hasstatus  => true,
  }

  service { 'iptables':
    ensure     => running,
    enable     => true,
    hasstatus  => true,
  }

  resources { "firewall":
    purge   => true
  }

  include common
  include ssh
}

We should also validate the file as follows:

sudo puppet parser validate site.pp

The puppet client (by default) will poll every 30 minutes - we can change this by defining:

runinterval=900

Where 900 is the number of seconds. (This should be appended to the 'main' section in puppet.conf.)

We can also test the config by issuing:

puppet agent --test

Wednesday 19 October 2016

Retrieving the top requesting hosts from the nginx access logs

We will firstly inspect the log format:

tail -f /var/log/nginx/access.log.1

89.248.160.154 - - [18/Oct/2016:21:58:38 +0000] "GET //MyAdmin/scripts/setup.php HTTP/1.1" 301 178 "-" "-"
89.248.160.154 - - [18/Oct/2016:21:58:38 +0000] "GET //myadmin/scripts/setup.php HTTP/1.1" 301 178 "-" "-"

Fortunately nginx uses the standard (Apache-style) combined log format so we can parse the logs pretty easily - we will firstly use a regex to extract the requester IP from the log file (note the '^' is present to ensure we don't pick up an IP anywhere else e.g. in the requested URL.):

grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' /var/log/nginx/access.log.1

We then want to remove any duplicates so we are presented with unique hosts - sorting first so that uniq can collapse the adjacent entries:

grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' /var/log/nginx/access.log.1 | sort | uniq

Now we can use a while loop to retrieve the results:

#!/bin/bash

input='/var/log/nginx/access.log.1'

grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $input | sort | uniq | while read -r line ; do

count=$(grep -o $line $input | wc -l)

echo "Result for: " $line is $count

done


But you might only want the top 5 requesters - so we can expand the script as follows:



#!/bin/bash

input='/var/log/nginx/access.log.1'

grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $input | sort | uniq | while read -r line ; do

count=$(grep -o $line $input | wc -l)

# Bash creates a subshell since we are piping data - so variables within the loop will not be available outside the loop.

echo $count for $line >> temp.output

done

echo "Reading file..."

for i in `seq 1 5`; do

cat temp.output | sort -nr | sed -n $i'p'

done

# cleanup
rm -f temp.output
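Alternatively - if you just want the top 5 requesters - the same result can be achieved in a single pipeline with uniq -c:

grep -o '^[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' /var/log/nginx/access.log.1 | sort | uniq -c | sort -nr | head -n 5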

Tuesday 18 October 2016

Customer - ISP BGP Lab (Removing private ASN's)

For this lab we will have a customer site (ASN 16500) that connects to an ISP over BGP.

The customer has an inner core consisting of 3 routers (R1, R2 and R3) running OSPF as an IGP. The customer's edge has a border router that is running BGP and will peer with the ISP's router R5.

The goal here is to ensure that when clients within the core network attempt to access the internet they will be directed towards our edge router and in turn the ISP's edge router (R5.)



The GNS3 lab can be downloaded here.

The process flow is as follows:

1. Client in customer core (e.g. 172.16.0.200) attempts to ping a public IP: 8.0.0.1

2. R1 is hooked up to the OSPF backbone and is aware of a default route being advertised on R4. R2 also has a default route pointing to R5 (which is not part of the OSPF domain, rather BGP.)

3. R4 is advertising a public (BGP) prefix of 14.0.0.0/24 - so in order to make sure that return traffic for packets originating from our internal subnet (172.16.0.0/23) can reach us, we should NAT the traffic to the 14.0.0.0/24 subnet.

4. Once NAT'd, R4 will look up the appropriate route for 8.0.0.1 in its routing table - although we will configure prefix filtering, so R4 will only ever see a default route pointing to R5.

5. Once the packet arrives at R5 the route will be looked up against the routing table and it will identify that 8.0.0.0/24 is within ASN 500 and will forward the packet out lo0.

Since the customer is peering with the ISP using a private ASN, we will have to remove the private AS when the prefix hits the ISP's public ASN / router.

R1>
enable
conf t

int e0/0
ip address 192.168.10.1 255.255.255.252
no shutdown
ip ospf 1 area 0

int e0/1
ip address 192.168.20.1 255.255.255.252
no shutdown
ip ospf 1 area 0

int lo 0
ip address 172.16.0.1 255.255.254.0

router ospf 1
network 172.16.0.0 0.0.1.255 area 0

Note: At this point I expected the loopback interface's subnet to be advertised via OSPF - well it was - but only as a /32 (rather than a /23)... It turns out that by default the router treats a loopback address as a single host route (/32) even if you have defined a different mask - in order to instruct the router to advertise the actual subnet we can issue:

int lo 0
ip ospf network point-to-point
do clear ip ospf pro

R2>
enable
conf t

int e0/0
ip address 192.168.20.2 255.255.255.252
no shutdown
ip ospf 1 area 0

int e0/1
ip add 192.168.30.1 255.255.255.252
no shutdown
ip ospf  1 area 0

int e0/2
ip add 192.168.40.2 255.255.255.252
no shutdown
ip ospf 1 area 0

R3>
enable
conf t

int e0/0
ip address 192.168.10.2 255.255.255.252
no shutdown
ip ospf 1 area 0

int e0/1
ip address 192.168.30.2 255.255.255.252
no shutdown
ip ospf 1 area 0

R4>
enable
conf t

int e0/0
ip address 192.168.40.1 255.255.255.252
no shutdown
ip ospf 1 area 0

int e0/1
ip address 14.0.0.1 255.255.255.0
no shutdown

Ensure that we do not advertise anything over our link to the ISP:

router ospf 1
passive-interface e0/1

Check neighbor adjacencies etc. with:

do show ip ospf ne

and

do show ip route [ospf]

Now we will setup eBGP between R4 and R5:

R4>
router bgp 16000
neighbor 14.0.0.2 remote-as 16001

R5>
enable
conf t

int e0/0
ip address 14.0.0.2 255.255.255.0
no shutdown

router bgp 16001
neighbor 14.0.0.1 remote-as 16000
network 14.0.0.0 mask 255.255.255.0

We want the customer's edge router to use the ISP's edge router as its default gateway - in order to do this we need to use the 'default-originate' command:

R5>
neighbor 14.0.0.1 default-originate

We also want to ensure that R4 is not flooded with prefixes when it's hooked up to BGP - so we can configure a prefix list to filter out all routes except the default route R5 is advertising:

R5>
ip prefix-list default-route seq 5 permit 0.0.0.0/0
router bgp 16001
neighbor 14.0.0.1 prefix-list default-route out

This will instruct the ISP's router (R5) to inject a default route into the BGP table for the R4 peer (only).

R5>
int e0/1
ip address 192.168.60.2 255.255.255.252
no shutdown

router bgp 16001
neighbor 192.168.60.1 remote-as 500

Now review the routing table on R4 and confirm that only the default route is present:

do show ip route

R6>
int e0/0
ip address 192.168.60.1 255.255.255.252
no shutdown

int lo 0
ip address 8.0.0.1 255.255.255.0

router bgp 500
neighbor 192.168.60.2 remote-as 16001

We also want to ensure that the traffic is NAT'd to 14.0.0.0 or else return traffic from the remote subnets will not reach us (since they are unaware of our internal subnets as they are not present in our BGP table.):

R4>
int e0/0
ip nat inside

int e0/1
ip nat outside

ip access-list standard NAT
permit 172.16.0.0 0.0.1.255
ip nat inside source list NAT interface e0/1 overload

We can then attempt to ping 8.0.0.1 from R1 (source address 172.16.0.1):

R1>
do ping 8.0.0.1 source 172.16.0.1

You should see a new translation in the NAT table:

R4>
do show ip nat trans

Now let's hook R6 up to R7 and configure BGP:

R6>
int e0/1
ip address 192.168.70.2 255.255.255.252
no shutdown

router bgp 500
neighbor 192.168.70.1 remote-as 600

R7>
int e0/0
ip address 192.168.70.1 255.255.255.252
no shutdown

router bgp 600
neighbor 192.168.70.2 remote-as 500

Also note that the ISP in this scenario is using a private ASN to peer with the customer - the next hop is ASN 500, which is a public ASN, and hence we need to ensure that the private AS numbers are removed from the AS_PATH before the prefix is forwarded on to other public AS's. To do this we will apply the 'remove-private-as' command:

R6>
router bgp 500
neighbor 192.168.70.1 remove-private-as
do clear ip bgp *

and then check the 14.0.0.0/24 prefix in our BGP table:

do show ip bgp 14.0.0.0/24

and we should notice that the AS_PATH now only contains ASN 500.
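We can also confirm exactly what R6 is advertising on to R7 (and that the private AS numbers have been stripped from the AS_PATH) with:

R6>
do show ip bgp neighbors 192.168.70.1 advertised-routes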

Monday 17 October 2016

Tip: Debugging with 'debug ip packet'

The 'debug ip packet' command is a brilliant way to help diagnose problems with traffic traversing the router - although there are a few drawbacks - one being that only packets that are switched using process switching (i.e. switched with the help of the CPU) will be visible in the 'debug ip packet' output - other switching mechanisms like Fast Switching and CEF will not.

We can use 'no ip route-cache' within interface configuration mode to force packets to be process switched - although note that this can have an adverse effect on the CPU in busy environments and should only be used if absolutely necessary:
int gi0/0
no ip route-cache
In larger scale environments you might be better off using tcpdump or Wireshark to inspect traffic.
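It is also worth restricting the debug output with an access list so that only traffic of interest is logged - a quick sketch (the addresses are placeholders):

access-list 100 permit icmp host 10.0.0.1 host 10.0.0.2
debug ip packet 100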

Friday 14 October 2016

Simple eBGP / iBGP Topology

In this topology we will have two customers (Customer 1 and 2), each with their own ASN. Each customer has connectivity to a single ISP (ISP 1 and 2 respectively). In order for one customer to reach the other, packets must be routed over eBGP - traversing ISP 1 and 2's ASNs.


So we'll start by configuring eBGP between R2 (ASN 100) and R3 (ASN 200):

R3>
enable
conf t
int e0/1
ip address 10.254.0.1 255.255.255.252
no shutdown

router bgp 200
neighbor 10.254.0.2 remote-as 100

R2>
enable
conf t
int e0/1
ip address 10.254.0.2 255.255.255.252
no shutdown

router bgp 100
neighbor 10.254.0.1 remote-as 200

We should see an adjacency alert appear after a little while e.g.:

*Mar  1 01:10:27.875: %BGP-5-ADJCHANGE: neighbor 10.254.0.1 Up

We can confirm our neighbors with:

show ip bgp summary

We now want to advertise our public network (13.0.0.0/24) to ISP 1 (ASN 100) - so we do this on R3 using the 'network' command:

R3>
router bgp 200
network 13.0.0.0 mask 255.255.255.0
do wri mem

We will now setup iBGP between R2 and R1 so that routes can be distributed to ASN 400 / R5:

Note: We will use a loopback address since we commonly have multiple paths to other iBGP peers - if the BGP session is established over a physical link and that link goes down (or faults), the session is terminated even though another path exists; if the session is sourced from a loopback address it will remain up and simply use the other path instead.

OK - so how would each router (R1 and R2) know where the other's loopback address resides? Another IGP of course - e.g. OSPF - so we set up OSPF:

R2>
enable
conf t
int e0/0
ip address 172.30.0.1 255.255.255.252
no shutdown

interface loopback 0
ip address 1.1.1.2 255.255.255.255

router ospf 1
log-adjacency-changes
network 172.30.0.0 0.0.0.3 area 0
network 1.1.1.2 0.0.0.0 area 0
network 10.254.0.0 0.0.0.3 area 0

R1>
enable
conf t
int e0/0
ip address 172.30.0.2 255.255.255.252
no shutdown

interface loopback 0
ip address 1.1.1.1 255.255.255.255

router ospf 1
log-adjacency-changes
network 172.30.0.0 0.0.0.3 area 0
network 1.1.1.1 0.0.0.0 area 0

We should be able to see each router's corresponding loopback interface in each routing table:

do show ip route ospf

Now we can setup iBGP:

R1>
router bgp 100
neighbor 1.1.1.2 remote-as 100
neighbor 1.1.1.2 update-source loopback 0

Note: The 'update-source' statement instructs the BGP session to be initialized from the loopback adapter address.

R2>
router bgp 100
neighbor 1.1.1.1 remote-as 100
neighbor 1.1.1.1 update-source loopback 0

We can setup eBGP between R5 (ASN 400) and R1 (ASN 100):

R1>
int e0/1
ip address 192.168.10.1 255.255.255.252
no shutdown

int e0/2
ip address 172.16.20.1 255.255.255.0
no shutdown

router bgp 100
neighbor 192.168.10.2 remote-as 400
network 172.16.20.0 mask 255.255.255.0

R5>
int e0/0
ip address 192.168.10.2 255.255.255.252
no shutdown

router bgp 400
neighbor 192.168.10.1 remote-as 100

Now we want to ensure that the 13.0.0.0 network is accessible to Customer 1 (R5) - you will notice that we have to explicitly define which networks we wish to advertise to other AS's - we should firstly verify that the route is currently in our own BGP table on R1:

R1>
do show ip bgp 13.0.0.0/24

We should now be able to reach the 13.0.0.0 network from the 172.16.20.0 subnet:

do ping 13.0.0.1 source 172.16.20.1

Now we can move on to hooking up R4 and R8 to R3 with iBGP - as required by iBGP (to prevent loops) we will need to create a full mesh topology for all of our routers internally within the AS (or apply a workaround such as a route reflector).

To do this we will start by configuring our interfaces on R3 and R4:

R3>
enable
conf t
int e0/0
ip address 172.16.0.1 255.255.255.252
no shutdown

and again we will use loopback addresses - so that the BGP session is initialized over them:

int loopback 0
ip address 3.3.3.3 255.255.255.255

router ospf 1
log-adjacency-changes
network 3.3.3.3 0.0.0.0 area 0
network 172.16.0.0 0.0.0.3 area 0

R4>
enable
conf t
int e0/0
ip address 172.16.0.2 255.255.255.252
no shutdown

int loopback 0
ip address 4.4.4.4 255.255.255.255

router ospf 1
log-adjacency-changes
network 4.4.4.4 0.0.0.0 area 0
network 172.16.0.0 0.0.0.3 area 0

We should now see the loopback interfaces within the corresponding routing tables.

So - lets setup iBGP:

R3>
router bgp 200
neighbor 4.4.4.4 remote-as 200
neighbor 4.4.4.4 update-source loopback 0

and on R4:

R4>
router bgp 200
neighbor 3.3.3.3 remote-as 200
neighbor 3.3.3.3 update-source loopback 0

Note: At this point I was unable to see the 17.0.0.0/24 network in the routing table - a good place to start troubleshooting is by running:

do show ip bgp 17.0.0.0/24

This will let you know whether the route has been received and whether its next hop is accessible - in my case I had not advertised the 10.254.0.0/30 subnet over OSPF:

R4(config)#do show ip bgp 17.0.0.0/24
BGP routing table entry for 17.0.0.0/24, version 0
Paths: (1 available, no best path)
  Not advertised to any peer
  100
    10.254.0.2 (inaccessible) from 3.3.3.3 (10.254.0.1)
      Origin IGP, metric 0, localpref 100, valid, internal

Notice the 'inaccessible' statement after the gateway.

So to resolve this we need to add the 10.254.0.0/30 into OSPF on R3:

R3>
router ospf 1
network 10.254.0.0 0.0.0.3 area 0

and recheck the routing table on R4:

do show ip route

Now we will hook up R4 to R8:

R4>
int e0/1
ip address 192.168.90.1 255.255.255.252
no shutdown

and ensure that the 192.168.90.0 subnet is advertised by OSPF:

router ospf 1
network 192.168.90.0 0.0.0.3 area 0

and on R8:

R8>
enable
conf t
int e0/0
ip address 192.168.90.2 255.255.255.252
no shutdown

int loopback 0
ip address 8.8.8.8 255.255.255.255

router ospf 1
log-adjacency-changes
network 192.168.90.0 0.0.0.3 area 0
network 8.8.8.8 0.0.0.0 area 0

and configure BGP:

router bgp 200
neighbor 4.4.4.4 remote-as 200
neighbor 4.4.4.4 update-source loopback 0

and also on R4:

R4>
router bgp 200
neighbor 8.8.8.8 remote-as 200
neighbor 8.8.8.8 update-source loopback 0

Now we review the routing table on R8 and find that there are no BGP routes! Well, we need to remember that iBGP requires a full mesh topology and due to this we will need to hook up R8 to R3 as well!

So on R3:

R3>
int e0/3
ip address 172.19.19.1 255.255.255.252
no shutdown

router ospf 1
network 172.19.19.0 0.0.0.3 area 0

R8>
int e0/1
ip address 172.19.19.2 255.255.255.252
no shutdown

router ospf 1
network 172.19.19.0 0.0.0.3 area 0

and then configure BGP on R3 and R8:

R3>
router bgp 200
neighbor 8.8.8.8 remote-as 200
neighbor 8.8.8.8 update-source loopback 0

R8:
router bgp 200
neighbor 3.3.3.3 remote-as 200
neighbor 3.3.3.3 update-source loopback 0

and then review R8's routing table and we should now see the BGP routes!

At the moment we have not separated our OSPF domains - for example we don't want ASN 100 and 200 to be part of the same OSPF domain / Area 0 - as things stand I could ping a link interface of a router in another AS, whereas in this scenario we only want to provide access to the public IP space to Customer A and Customer B. So we will configure passive interfaces on R2 (e0/1), R3 (e0/1), R1 (e0/1) and R4 (e0/2):

R1>
router ospf 1
passive-interface e0/1

R2>
router ospf 1
passive-interface e0/1

R3>
router ospf 1
passive-interface e0/1

R4>
router ospf 1
passive-interface e0/2

This means that we will now need to ping our BGP advertised networks from another BGP advertised network i.e. you won't be able to ping a public address from a local p2p link - so for example if we wanted to access the 17.0.0.0/24 subnet from the 13.0.0.0/24 subnet we would do as follows:

R3>
do ping 17.0.0.1 source 13.0.0.1

The last step is to hook up R8 (AS 200) to R7 (AS 300) - although for AS 300 we will be injecting a default route from BGP into the IGP (OSPF.)

R4>
enable
conf t
int e0/2
ip address 192.168.245.1 255.255.255.252
no shutdown

router bgp 200
neighbor 192.168.245.2 remote-as 300

(We also need to advertise the new network (192.168.245.0/30) to our OSPF neighbors so that they can reach the next hop (192.168.245.2) for the route.)

router ospf 1
network 192.168.245.0 0.0.0.3 area 0

R7>
enable
conf t
int e0/2
ip address 192.168.245.2 255.255.255.252
no shutdown

router bgp 300
neighbor 192.168.245.1 remote-as 200

Monday 10 October 2016

firewalld / firewall-cmd quick start

We should firstly ensure that the service is running with:

firewall-cmd --state

We want to ensure any newly added interfaces will automatically be blocked before we explicitly define who can access them:

firewall-cmd --set-default-zone=block

and then configure our interface zones:

firewall-cmd --permanent --zone=public --change-interface=eno333333
firewall-cmd --permanent --zone=internal --change-interface=eno222222

We must also define the 'ZONE' variable within our interface config:

vi /etc/sysconfig/network-scripts/ifcfg-eno333333

and append:

ZONE=public

Restart the network service and ensure the firewall is reloaded:

sudo service network restart
firewall-cmd --reload

To review we can issue the following to take a look at any active zones:

firewall-cmd --get-active-zones

We will want to setup SSH access:

firewall-cmd --zone=internal --add-service=ssh --permanent
firewall-cmd --zone=public --add-service=https --permanent

and ensure that we define a source for each zone:

firewall-cmd --zone=public --add-source=0.0.0.0/0 --permanent
firewall-cmd --zone=internal --add-source=10.0.0.0/24 --permanent

If we want to lock down different sources to different ports (for example if you are using a single interface) - we can use 'rich rules', which provide us with more granular control over source / service relations:

firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="0.0.0.0/0" port protocol="tcp" port="443" accept'
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.0.0/24" port protocol="tcp" port="ssh" accept'

And to review the rules within a zone we issue:

firewall-cmd --permanent --zone=public --list-all
firewall-cmd --permanent --zone=internal --list-all

and reload the firewall to ensure changes are applied:

firewall-cmd --reload
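If a service or rule needs to be withdrawn later, the matching '--remove-*' options can be used and the firewall reloaded, e.g.:

firewall-cmd --permanent --zone=public --remove-service=https
firewall-cmd --reload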

Friday 7 October 2016

iBGP: Full mesh requirement

When implementing iBGP in your AS you are required to create a full mesh topology - that is, all routers need to be logically connected to every other BGP speaker in the AS via a neighbor / peer relationship - hence requiring you to set up individual peering sessions between them all.

The reasoning behind this is that while eBGP (routing between AS's) uses the AS_PATH attribute to avoid loops - rejecting an advertised route if the AS_PATH already contains its own AS number - iBGP does not modify this attribute and hence can't detect loops in the same way. For example:

Let's say we have three routers A, B and C - all within a single AS - plus an eBGP router. The eBGP router advertises a prefix to router A, which installs it and advertises it to router B; B installs it and advertises it to router C; C installs it and advertises it back to router A. If router A accepts the route it will cause a loop - and since the AS_PATH is not modified within the AS, router A cannot tell whether this is a new advertisement or one that has already traversed it and is being sent back.

As your network becomes larger the full mesh requirement presents serious scalability issues - to combat this we can utilize either route reflectors or confederations.

Route Reflectors: Allow you to avoid a full mesh topology between all of your BGP speakers - instead a cluster is formed where the BGP speakers peer with the route reflector node, which in turn learns all routes and re-advertises them to the other speakers. This does however introduce a single point of failure - so utilizing multiple RR's is generally good practice.
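On Cisco IOS, for example, a route reflector is created simply by marking each iBGP neighbor as a client on the reflector itself - a minimal sketch (the neighbor address and ASN are illustrative):

router bgp 200
 neighbor 4.4.4.4 remote-as 200
 neighbor 4.4.4.4 update-source loopback 0
 neighbor 4.4.4.4 route-reflector-client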

Confederations: A confederation splits the existing AS into a number of smaller internal sub-AS's - externally the confederation still appears as the single original AS when peering over eBGP.

Typically it is good practice to establish the iBGP session using a loopback interface, since the interface will remain up despite a physical port going down.

Sources:

https://www.juniper.net/documentation/en_US/junos13.3/topics/concept/bgp-ibgp-understanding.html

BGP (Border Gateway Protocol) Summary

(e)BGP is a type of EGP (Exterior Gateway Protocol) - in fact the only one in use today - and is used to provide routing information across the internet (across AS's), as opposed to an IGP (Interior Gateway Protocol) such as EIGRP or OSPF, which provides routing information across nodes within a single AS (Autonomous System).

One of the fundamental differences between BGP and other routing protocols such as EIGRP, OSPF etc. is that both parties (routers) must explicitly define the peering with each other.

There is also another type of BGP called iBGP - which, as the name suggests, is used internally to exchange routes between BGP speakers within the same AS.

iBGP and eBGP both make use of AS (Autonomous System) numbers. AS numbers were originally defined as 16-bit values, allowing a maximum of 65,535 ASN's - although 32-bit (4-byte) AS numbers have since been introduced: https://tools.ietf.org/html/rfc6793

AS numbers 1 - 64,495 are available for public use, while 64,512 to 65,534 are reserved for private use.

The vast majority of publicly available AS's are assigned to either ISPs or large enterprises, while private AS's are typically used within large ISP networks.

Typically we are used to routing protocols focusing on finding the optimal path to all destinations - BGP differs somewhat, as peering agreements between different ISPs can be very complex, and as a result BGP carries a large number of attributes (metrics) with each IP prefix.

Some of the more commonly used are:

AS Path: The complete path outlining exactly which autonomous systems a packet would have to traverse to get to its destination.

Local Preference: Used within iBGP - in the event there are multiple paths from one AS to another, this attribute defines which path is preferred (a short IOS example follows this list).

Communities: This attribute is appended to a route being advertised to a neighboring BGP router and provides specific instructions such as:

No-Advertise: Don't advertise the prefix to any BGP neighbors
No-Export: Don't advertise the prefix to any eBGP neighbors
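As an illustration, on Cisco IOS an attribute such as local preference is usually influenced with a route-map applied to a neighbor - a rough sketch (the neighbor address, names and values are placeholders):

route-map PREFER-PATH permit 10
 set local-preference 200

router bgp 100
 neighbor 192.0.2.1 route-map PREFER-PATH in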

BGP was built for security and scalability - although it is certainly not the fastest routing protocol when it comes to convergence - hence internal routing is performed by a protocol with faster convergence such as OSPF, and external / internet routes are exchanged by BGP.



The diagram above shows a good (yet simplistic) representation of a service provider - we are firstly assigned our public ASN; we run iBGP between our core routers; we peer with two ISPs (i.e. we form an adjacency with their routers over eBGP); and we also have two customers who use eBGP to peer with our network.

As a good introduction to BGP I would also highly recommend taking a look at an article by the Internet Society.


Wednesday 5 October 2016

Setting up JunOS 12.1 (Olive) with GNS3

For this setup I am presuming you have already installed GNS3.

We will need to firstly ensure VirtualBox (https://www.virtualbox.org/wiki/Downloads) is installed and that you have downloaded the JunOS virtual appliance:

https://drive.google.com/open?id=0BzJE2w8IRXVvX0U1Y0lSdndMWVk

Note: The above should not be used for commercial purposes - as well as being outdated, it is not supported by Juniper and should be used for development and testing purposes only.

From VirtualBox we do a: File >> Import Appliance, specifying our ova image.

It is also worth ensuring that the NIC attached to the VM is in bridging mode:

VM >> Properties >> Network.

and adding 1 or 2 extra interfaces - since by default only one was provisioned in my testing.

Note: The VM might take a little while to initialize on lower end systems, so be patient.

At the login prompt we can simply login with root (no password.)

and verify the JunOS version with:

show version brief

We should then promptly set the root password:

conf
set system root-authentication plain-text-password
set system host-name JUNOS

and then we can configure the em0 interface with:

set interfaces em0 unit 0 family inet address 10.0.0.100/30
commit

We should be able to verify this with:

run show interfaces em0 detail

and then attempt to ping your default gateway to verify connectivity:

run ping 10.0.0.1
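To reach anything beyond the directly connected subnet you will also want a default route - assuming 10.0.0.1 is the gateway from the example above, something along these lines should work:

set routing-options static route 0.0.0.0/0 next-hop 10.0.0.1
commit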

We should now hook up Virtualbox with GNS3 by firstly pointing it to our VBoxManage executable:

GNS3 >> Edit >> Preferences >> VirtualBox >> 'Path to VBoxManage'.

Note: This will typically be: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe

Then hit the 'VirtualBox VMs' tab and add our newly created VM.

You should then see the VM (JunOS Olive) available 'End Devices' tab within GNS3.

Route redistribution with OSPF and EIGRP

Route redistribution is simply the process of sharing routing information between two or more routing protocols - by design OSPF and EIGRP (RIP etc.) are not compatible with each other - for example OSPF uses cost as its metric, while EIGRP uses a composite metric weighted by its K-values.

For this lab we will be redistributing routing information across an EIGRP and OSPF domain - we will be using the previous lab here as a starting point.



With the addition of an EIGRP domain, we want to redistribute information across both the OSPF and EIGRP domains:

R2>
int e0/1
ip address 172.16.30.1 255.255.255.252
no shutdown
ip ospf 1 area 0

router ospf 1
network 172.16.30.0 0.0.0.3 area 0


R3>
enable
conf t
int e0/1
ip address 10.1.0.1 255.255.255.252
no shutdown

int e0/0
ip address 172.16.30.2 255.255.255.252
no shutdown
ip ospf 1 area 0

we will create a new EIGRP AS:

router eigrp 10
no auto-summary
network 10.1.0.0 0.0.0.3

router ospf 1
router-id 3.3.3.3
network 172.16.30.0 0.0.0.3 area 0

R4>
enable
conf t
int e0/0
ip address 10.1.0.2 255.255.255.252
no shutdown

int e0/1
ip address 10.16.32.1 255.255.255.0
no shutdown

router eigrp 10
no auto-summary
network 10.1.0.0 0.0.0.3
network 10.16.32.0 0.0.0.255

We should now see a new adjacency form - confirm with:

do show ip eigrp ne

Now we need to perform the redistribution - one important thing to remember about route redistribution is that it is an outbound process - in our topology it is performed on R3, the router sitting on the boundary between the OSPF and EIGRP domains.

Since different routing protocols use different metrics, we will need to convert the metric from one protocol into a seed metric that the destination routing protocol understands - for example:

Redistribution into RIP requires the seed metric of 'Infinity.'
Redistribution into EIGRP requires the seed metric of 'Infinity.'
Redistribution into OSPF requires the seed metric of '20' or '1' if originating from BGP.
Redistribution into BGP requires the IGP metric.

Because we want to redistribute routes into our EIGRP domain we also need to set a default seed metric (bandwidth, delay, reliability, load and MTU) - the routes will not show up without this information being defined first:

R3>
router eigrp 10
default-metric 1500 100 255 1 1500
redistribute ospf 1 metric 1544 2000 255 1 1500

and then the other way - redistributing EIGRP routes into OSPF:

router ospf 1
redistribute eigrp 10 metric 50000 subnets

We can then hop onto R2 and check the routes - you will notice something like the following:

O E2    10.1.0.0/30 [110/50000] via 172.16.30.2, 00:02:19, Ethernet0/1
O E2    10.16.32.0/24 [110/50000] via 172.16.30.2, 00:02:19, Ethernet0/1

Note: The E2 indicates that the network resides outside the OSPF domain - that is, within our EIGRP domain.

And again on R4 we should see something like:

D EX    172.16.30.0 [170/2195456] via 10.1.0.1, 00:05:07, Ethernet0/0
     10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks
D EX    10.0.0.0/30 [170/2195456] via 10.1.0.1, 00:05:07, Ethernet0/0