Monday 29 February 2016

Compiling NGINX 1.9.X on Debian 8 / CentOS 7 with TCP streaming support

With the introduction of NGINX 1.9.0 we now have TCP streaming support in the open-source version of NGINX (previously an NGINX Plus feature). Unfortunately Debian 8 is still on 1.6.X - so in order to enable this functionality we should compile NGINX from source:

sudo apt-get install libpcre3-dev build-essential libssl-dev libxslt-dev libxml2-dev libgd-dev libgeoip-dev

or for CentOS 7:

sudo yum install pcre-devel openssl-devel libxslt-devel libxml2-devel gd-devel geoip-devel
sudo yum groupinstall "Development Tools"

and then configure and compile:

sudo useradd nginx
sudo usermod -s /sbin/nologin nginx

cd /tmp
sudo wget http://nginx.org/download/nginx-1.9.2.tar.gz
tar zxvf nginx*

Run configure (specifically with the '--with-stream --with-stream_ssl_module' flags) - the configure command below has been taken from Debian 8, with the omission of WebDAV support:

sudo ./configure --user=nginx --group=nginx  --sbin-path=/usr/sbin/nginx --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --with-stream --with-stream_ssl_module  --add-module=/tmp/ngx_http_substitutions_filter_module-master --add-module=/tmp/nginx-upstream-fair-master --add-module=/tmp/headers-more-nginx-module-master

We should then run make and install:

sudo make
sudo make install

Systemctl script (CentOS7)

sudo vi /usr/lib/systemd/system/nginx.service

and add:
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/run/nginx.pid
# Nginx will fail to start if /run/nginx.pid already exists but has the wrong
# SELinux context. This might happen when running 'nginx -t' from the cmdline.
# https://bugzilla.redhat.com/show_bug.cgi?id=1268621
ExecStartPre=/usr/bin/rm -f /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=process
KillSignal=SIGQUIT
TimeoutStopSec=5
PrivateTmp=true

[Install]
WantedBy=multi-user.target
and then reload the systemd daemon:

systemctl daemon-reload

and attempt to launch nginx with:

systemctl start nginx

Unfortunately it failed first time around - by doing a 'systemctl status nginx' I could see it was failing the config-test portion. So we can replicate this step ourselves to see if we can identify why it's failing:

sudo /usr/sbin/nginx -t

We find: 'nginx: [emerg] mkdir() "/var/lib/nginx/body" failed (No such file or directory)' - It looks like the make script did not create this for us!

To resolve this we simply need to create the directory:

mkdir /var/lib/nginx

and attempt to launch nginx again:

systemctl start nginx

Main Mode vs Aggressive Mode and PFS

During ISAKMP negotiation the peer initiating the SA will attempt to use main mode or aggressive mode (or both) to establish the SA.

Aggressive mode is the less secure of the two (although quicker) because the negotiation is squeezed into three UDP packets - the first packet contains the proposal, key material and ID; the second is sent back to the initiator from the responder with the DH secret; and the third is sent to the responder from the initiator with the identity and hash payloads. The downside is that the initiator and responder IDs (identities) are sent in plain text (unless you use PKI!). By sniffing the connection you can extract the hashed PSK and run it against a database of cracked hashes.

Main mode is the more secure of the two, as the sender and receiver IDs are NOT sent in plain text. Although this comes at a cost of bandwidth / packets - main mode uses more packets (six messages rather than three) to establish the ISAKMP SA.

Using PFS (Perfect Forward Secrecy) ensures that a fresh DH exchange is performed each time a phase 2 (IPSec) negotiation reoccurs - so the new IPSec keys are not derived from the existing phase 1 (ISAKMP) keying material, and compromising one key does not expose past sessions. If the peer supports this - the option should always remain turned on!
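In LibreSwan, for example, these choices map onto per-connection options in ipsec.conf - a sketch ('aggrmode' and 'pfs' are real LibreSwan options; the conn name is illustrative):

```
conn example
    aggrmode=no     # insist on main mode for phase 1 (IKEv1)
    pfs=yes         # fresh DH exchange for each phase 2 rekey
```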

Friday 26 February 2016

Create VMDK disks that are larger than 4TB without the vSphere Web Client

If you are using the free version of the ESXI hypervisor you will not have access to the vSphere web client and unfortunately this comes at a disadvantage when performing certain tasks - for example creating virtual disks that are larger than 4TB in size.

Fortunately we can use the CLI (vim-cmd) to create the disk for us - we should firstly enable SSH on the host and then logon to the ESXI shell and grab the VM ID:

vim-cmd vmsvc/getallvms

From the output we can identify the VM ID - we then issue the following command to create the disk on the specific VM:

vim-cmd vmsvc/device.diskadd 1 52428800 scsi0 2 datastore1

* where '1' is the ID number of the VM, '52428800' equals the disk size in KB (50GB here), 'scsi0' identifies the disk controller and '2' equals the port on the controller.
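Since vim-cmd wants the size in KB, a quick shell conversion from GB (the 50GB figure matches the example above):

```shell
# Convert a desired disk size in GB to the KB figure vim-cmd expects
SIZE_GB=50
SIZE_KB=$(( SIZE_GB * 1024 * 1024 ))
echo "$SIZE_KB"   # 52428800
```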

Connecting two AWS VPC Regions together with LibreSwan (OpenSWAN) on Debian

In this scenario we have two VPCs, each in a separate AWS region, which we wish to connect together directly.

Unfortunately VPC peering currently only works between VPCs within the same region.

Firstly enable the af-key module on the kernel:

sudo modprobe af_key

sudo nano /etc/modules

and add:

af_key

We should also ensure that ICMP redirects are not sent or accepted by setting:

 /proc/sys/net/ipv4/conf/*/accept_redirects

and

/proc/sys/net/ipv4/conf/*/send_redirects

to '0'.
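To make this persistent across reboots we can set the equivalent keys in /etc/sysctl.conf (a sketch; apply with 'sudo sysctl -p'):

```
# /etc/sysctl.conf - do not send or accept ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
```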

To achieve this we will be using LibreSwan (a fork of Openswan). On our first VPC (in Ireland) AND our second VPC (in Singapore) we shall deploy a new Debian VM with the following security group settings:

Allow UDP 4500 (IPSec/UDP) from 0.0.0.0/0
Allow UDP 500 (IKE protocol) from 0.0.0.0/0
Allow TCP 22  (SSH protocol) from 0.0.0.0/0

and enable some prerequisites (IP forwarding etc.) as the VM will be acting as a router in this scenario:

sudo sysctl -w net.ipv4.ip_forward=1

sudo apt-get update
sudo apt-get install build-essential libnss3-dev libnspr4-dev pkg-config libpam-dev libcap-ng-dev libcap-ng-utils libselinux-dev libcurl4-nss-dev libgmp3-dev flex bison gcc make libunbound-dev xmlto libevent-dev libnss3-tools

Unfortunately the latest Debian stable (jessie) does not currently have LibreSwan packaged yet - so we will need to compile it manually:

cd /tmp
wget https://download.libreswan.org/libreswan-3.16.tar.gz
tar zxvf libre*
cd libre*

make programs
make install

We should copy the init script to our init folder:

cp /lib/systemd/system/ipsec.service /etc/init.d/
chmod 0755 /etc/init.d/ipsec.service

systemctl enable ipsec.service

Ensure /etc/ipsec.conf has an include statement for /etc/ipsec.d/* (it should be at the bottom), also uncomment 'version 2', and finally add / amend the following statements in the 'config setup' section:

    protostack=netkey
    interfaces=%defaultroute
    nat_traversal=yes
    force_keepalive=yes
    keep_alive=60
    oe=no
    nhelpers=0

We can now create a configuration for our site to site VPN on VPC1:

sudo vi /etc/ipsec.d/s2s.conf

conn sg-to-ire
    type=tunnel
    authby=secret
    left=%defaultroute
    leftid=6.6.6.6
    leftnexthop=%defaultroute
    leftsubnet=10.10.10.0/24
    right=7.7.7.7
    rightsubnet=172.16.0.0/24
    pfs=yes
    auto=start

* Where 6.6.6.6 and 7.7.7.7 are the two instances' Elastic IPs (EIPs) - 'left' is the local peer and 'right' the remote.

Create our secrets file:

sudo vi /etc/ipsec.d/sg-to-ire.secrets

and enter:

<SGIP> <IREIP>: PSK "mysecretkey"

and then on VPC2 we do:

sudo vi /etc/ipsec.d/s2s.conf

conn ire-to-sg
    type=tunnel
    authby=secret
    left=%defaultroute
    leftid=7.7.7.7
    leftnexthop=%defaultroute
    leftsubnet=172.16.0.0/24
    right=6.6.6.6
    rightsubnet=10.10.10.0/24
    pfs=yes
    auto=start

* Where 6.6.6.6 and 7.7.7.7 are the two instances' Elastic IPs (EIPs) - 'left' is the local peer and 'right' the remote.

Create our secrets file:

sudo vi /etc/ipsec.d/ire-to-sg.secrets

and enter:

<IREIP> <SGIP>: PSK "mysecretkey"

Now on both hosts, to create the tunnel, run:

sudo service ipsec restart

We can verify VPN connectivity with:

tcpdump -n -i eth0 esp or udp port 500 or udp port 4500

We should also run the following command on both hosts to ensure IPSec will function correctly on them:

sudo ipsec verify

I had some problems starting ipsec:

/usr/local/sbin/ipsec start

After reviewing 'journalctl -xn' I noticed the following error:

Failed to initialize nss database sql:/etc/ipsec.d

So I proceeded to test nss:

/usr/local/sbin/ipsec checknss

And noticed the following error:

/usr/local/sbin/ipsec: certutil: not found

So we can install certutil with:

sudo apt-get install libnss3-tools

And then re-check IPSec with:

sudo ipsec verify

and finally if all OK - start the service:

sudo service ipsec restart


Thursday 25 February 2016

Reclaiming disk space / shrinking VMDK disks

For Linux: using cat we can recover free space by zero-filling the unused blocks:

cat /dev/zero > zero.fill;sync;sleep 1;sync;rm -f zero.fill
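On more recent guests, if the filesystem and virtual disk controller support discard/UNMAP (an assumption to verify on your own setup), fstrim is a quicker alternative to the zero-fill trick:

```
sudo fstrim -v /
```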

The same can be achieved on Windows with sdelete:

sdelete -c C:\

Now within the ESXi shell we can punch out the zeroed blocks with:

vmkfstools --punchzero your-disk.vmdk

If you are using VMware Player / Workstation you can use vmware-vdiskmanager.exe instead (found within the VMware Workstation program files) as follows:

vmware-vdiskmanager.exe -k "D:\path\to\your.vmdk"

Resizing ext2/3/4 filesystems with resize2fs

This technique will come in handy for those who are not using LVM!

Warning: Always take a backup of the entire system before performing anything like this!

Firstly identify the partition you wish to shrink (MAKE A RECORD OF THIS DATA!):

df -h

Verify details with fdisk - or use parted for GPT disks (MAKE A RECORD OF THIS DATA!):

fdisk -l

or (for GPT)

parted /dev/sda
unit s
p

and it's file system:

parted -l

Now if you are attempting to resize a system partition you will need to boot from some kind of Linux live CD (e.g. the Debian live rescue CD); if it's a data partition we can simply omit this step.

Unmount the partition:

umount /dev/sda2

At the time (to my surprise) I got a message saying the device was busy - to identify anything still accessing the disk we can run:

lsof /dev/sda2

Now we ensure that the filesystem is clean:

e2fsck -f /dev/sda2

We should now proceed by resizing the filesystem with resize2fs:

resize2fs -p /dev/sda2 500G

This could take several hours dependent on disk speeds etc. - it took me around 4 - 5 hours on relatively slow disks to resize a 5TB filesystem to 500GB.

You should see the output as something like:

'The filesystem on /dev/sda2 is now 131072000 (4K) blocks long'

So the total size will be: 131072000 blocks * 4KB = 524288000 KB - which converted from KB to GB is roughly 524GB, i.e. exactly 500GiB, the size we asked resize2fs for (this is a good sanity test!)
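The sanity check above as a quick shell calculation:

```shell
# blocks (from the resize2fs output) x block size = filesystem size
BLOCKS=131072000
BLOCK_KB=4
SIZE_KB=$(( BLOCKS * BLOCK_KB ))
echo "${SIZE_KB} KB"                     # 524288000 KB
echo "$(( SIZE_KB / 1024 / 1024 )) GiB"  # 500 GiB
```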

Now we should delete the old partition using fdisk (or parted for GPT):

parted /dev/sda

Let's get a list of partitions by running 'p'

Identify the partition number and issue the following to delete the partition:

rm
X (where X is the partition number)

Ensure the unit is set to GB!:

unit GB

Then create the new partition:

So the start sector is 2048 - which is 1049KB - converted to GB that is 0.001049, i.e. 0GB rounded to the nearest GB.

For the end point we know it's going to be roughly 525GB with an extra margin of at least 5% or so - so for my example I chose 550GB:

mkpart primary ext4 0 550

Now we ensure that the filesystem is clean:

e2fsck -f /dev/sda2

Finally re-mount / restart your system.

Reverse-engineering the OpenVPN AS login process

When logging into OpenVPN AS via its web-based frontend it automatically connects by instructing the VPN client to do so. I wondered how exactly it was doing this - so running Fiddler I observed it was using some kind of RPC call over HTTPS (on port 946) to a server named openvpn-client.openvpn.yourdomain.com - which funnily enough resolves to a loopback address (172.27.232.2) on the local machine.

It appears that when installing the OpenVPN client on a user's computer it adds this host to the hosts file.

So me being me - I was not particularly happy with the OpenVPN AS web-based interface and its somewhat lacking aesthetic appeal, so I decided to implement my own version - this post briefly describes how the login process works so others can build their own versions if desired.

So on the OpenVPN AS web login - after the user has entered their credentials - the browser sends an RPC call as follows to instruct the client to connect to the VPN server in non-interactive mode:

HTTP POST to https://openvpn-client.openvpn.yourdomain.com:946/RPC2

NOTE: The 'X-OpenVPN' header MUST be present in all RPC requests and must equal '1'.

Headers:

Host: openvpn-client.openvpn.yourdomain.com:946
Connection: keep-alive
Content-Length: 784
X-OpenVPN: 1
Origin: https://openvpn-client.openvpn.yourdomain.com:946
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36
Content-Type: text/xml
Accept: */*
DNT: 1
Referer: https://openvpn-client.openvpn.yourdomain.com:946/?_ts=1456406537206
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8
Cookie: __utma=158273187.1431551106.1455640284.1456310756.1456331497.4; __utmc=158273187; __utmz=158273187.1456310756.3.2.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not%20provided)


<?xml version="1.0" encoding="UTF-8"?>
<methodCall>
   <methodName>Connect</methodName>
   <params>
      <param>
         <value>
            <struct>
               <member>
                  <name>type</name>
                  <value>
                     <string>dynamic</string>
                  </value>
               </member>
               <member>
                  <name>profile_id</name>
                  <value>
                     <string>openvpn_yourdomain_com_dynamic_p3151</string>
                  </value>
               </member>
               <member>
                  <name>non_interactive</name>
                  <value>
                     <boolean>0</boolean>
                  </value>
               </member>
            </struct>
         </value>
      </param>
      <param>
         <value>
            <array>
               <data>
                  <string>STATE</string>
                  <string>PASSWORD</string>
                  <string>ACTIVE</string>
                  <string>CERT_APPROVAL</string>
                  <string>INFO</string>
                  <string>CONNECTED_USER</string>
                  <string>BYTECOUNT</string>
                  <string>FATAL</string>
                  <string>SCRIPT</string>
                  <string>CHALLENGE</string>
                  <string>DELETE_PENDING</string>
               </data>
            </array>
         </value>
      </param>
      <param>
         <value>
            <struct />
         </value>
      </param>
   </params>
</methodCall>

We can replay this with something like the Advanced REST Client addon for Chrome.
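Or, from the command line, the same Connect call can be replayed with curl - a sketch, assuming the XML payload above is saved as connect.xml (the hostname and port are as observed in Fiddler; -k skips certificate verification for the loopback name):

```
curl -k -X POST "https://openvpn-client.openvpn.yourdomain.com:946/RPC2" \
     -H "X-OpenVPN: 1" \
     -H "Content-Type: text/xml" \
     --data @connect.xml
```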

Response:

<?xml version='1.0'?>
<methodResponse>
<params>
<param>
<value><string>sess_openvpn_yourdomain_com_dynamic_p3151_vmXGe0u27f6ebAGD_1</string></value>
</param>
</params>
</methodResponse>

I believe this is returning a session ID that we can then use as a reference point for any further RPCs we make.

It then sends a poll with our session ID:

<?xml version="1.0" encoding="UTF-8"?>
<methodCall>
   <methodName>Poll</methodName>
   <params>
      <param>
         <value>
            <string>sess_openvpn_yourdomain_com_dynamic_p3151_vmXGe0u27f6ebAGD_1</string>
         </value>
      </param>
      <param>
         <value>
            <int>10</int>
         </value>
      </param>
   </params>
</methodCall>

We then get a response asking us for the credentials:

<?xml version="1.0" encoding="UTF-8"?>
<methodResponse>
   <params>
      <param>
         <value>
            <array>
               <data>
                  <value>
                     <struct>
                        <member>
                           <name>status</name>
                           <value>
                              <string>need</string>
                           </value>
                        </member>
                        <member>
                           <name>timestamp</name>
                           <value>
                              <int>1456409871</int>
                           </value>
                        </member>
                        <member>
                           <name>need</name>
                           <value>
                              <array>
                                 <data>
                                    <value>
                                       <string>username</string>
                                    </value>
                                    <value>
                                       <string>password</string>
                                    </value>
                                 </data>
                              </array>
                           </value>
                        </member>
                        <member>
                           <name>type</name>
                           <value>
                              <string>PASSWORD</string>
                           </value>
                        </member>
                        <member>
                           <name>auth_type</name>
                           <value>
                              <string>Dynamic</string>
                           </value>
                        </member>
                     </struct>
                  </value>
               </data>
            </array>
         </value>
      </param>
   </params>
</methodResponse>

So we send another RPC this time sending the login details:

<?xml version="1.0" encoding="UTF-8"?>
<methodCall>
   <methodName>SubmitCreds</methodName>
   <params>
      <param>
         <value>
            <string>sess_openvpn_mydomain_com_dynamic_p3151_vmXGe0u27f6ebAGD_1</string>
         </value>
      </param>
      <param>
         <value>
            <struct>
               <member>
                  <name>username</name>
                  <value>
                     <string>myuser</string>
                  </value>
               </member>
               <member>
                  <name>password</name>
                  <value>
                     <string>SESS_ID_fWrz/SDV511111111ZDA==</string>
                  </value>
               </member>
            </struct>
         </value>
      </param>
      <param>
         <value>
            <string>Dynamic</string>
         </value>
      </param>
      <param>
         <value>
            <boolean>1</boolean>
         </value>
      </param>
   </params>
</methodCall>

We then need to keep POST'ing the 'Poll' method until we get an XML node == '<string>CONNECTED</string>':

We need to look out (apply error handling) for '<string>pyovpn.client.asxmlcli.AuthError</string>' - which indicates that there is an authentication problem and as a result the VPN will drop!

We should also look out for '<string>twisted.internet.defer.TimeoutError</string>' which indicates a connection problem e.g. timeout, dns lookup problems and so on.

<?xml version="1.0" encoding="UTF-8"?>
<methodCall>
   <methodName>Poll</methodName>
   <params>
      <param>
         <value>
            <string>sess_openvpn_yourdomain_com_dynamic_p3151_vmXGe0u27f6ebAGD_1</string>
         </value>
      </param>
      <param>
         <value>
            <int>10</int>
         </value>
      </param>
   </params>
</methodCall>

Wednesday 24 February 2016

Forwarding the real IP address of nginx clients to apache backends

Firstly we should add the following directives to our proxy configuration in our virtual host:

# Reverse proxy configuration
location / {
    proxy_pass  https://192.168.0.1;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header        Host            $host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
}

Although unfortunately by default apache does not see the forwarded IP addresses (at least not on Debian) - we need to install and configure the libapache2-mod-rpaf package:

sudo apt-get install libapache2-mod-rpaf

We then configure it:

nano /etc/apache2/mods-available/rpaf.conf

And ensure that RPAFproxy_ips is set to your upstream (proxy / nginx) server(s).
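A minimal rpaf.conf would look something like this (directive names from mod_rpaf; the IP is illustrative and should be your nginx box):

```
<IfModule rpaf_module>
    RPAFenable On
    RPAFproxy_ips 192.168.0.1
    RPAFheader X-Forwarded-For
</IfModule>
```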

Restart apache:

sudo service apache2 restart

We should then be able to see the real IP address of the client in the logs e.g.:

tail -f /var/log/apache2/access.log

Force DNS changes to propagate through AD with repadmin and dnscmd

In order to ensure a DNS change is propagated quickly through AD we should firstly perform replication between the DCs with the following command:

repadmin /syncall /AdeP

(The above will actually update all partitions)

Although this is not enough - we also need to instruct the DC you are on to poll the directory with:

dnscmd /zoneupdatefromds myzone.com

You should now be able to see DNS changes / additions on the DC immediately. 

Shrinking a partition with an EXT3 filesystem

Warning: Always take a backup of the entire system before performing anything like this!

Firstly identify the partition you wish to shrink (MAKE A RECORD OF THIS DATA!):

df -h

Verify details with fdisk (MAKE A RECORD OF THIS DATA!):

fdisk -l

and it's file system:

parted -l

Now if you are attempting to resize a system partition you will need to boot from some kind of Linux live CD (e.g. the Debian live rescue CD); if it's a data partition we can simply omit this step.

Unmount the partition:

umount /dev/sda2

Now we ensure that the partition is clean:

fsck -n /dev/sda2

Because you are unable to shrink EXT3 filesystems we must instead remove the partition's journal - effectively making it an EXT2 filesystem - which can be shrunk!

tune2fs -O ^has_journal /dev/sda2

And then we should force a filesystem check with:

e2fsck -f /dev/sda2

We should now shrink the partition size using the resize2fs utility:

WARNING: Be very careful here to ensure that the new size will have enough room to cater for the existing data in use on the partition - there's no safety net here! e.g. if sda2 was a total of 500GB and the used space was 150GB we would need to ensure that the new allocation will be something like 155GB:

resize2fs /dev/sda2 155G

IMPORTANT: Ensure you keep the output of the resize2fs command as you will need to take note of the number of blocks allocated (and the block size) later!

Sample output:

resize2fs 1.XX
Resizing the filesystem on /dev/sda2 to 5120000 (4k) blocks.

We will now have to delete our partition (sda2) from the partition table using fdisk (this will not lose any data!):

fdisk /dev/sda

Press 'd' >> Specify partition number: '2' >> Press 'n' to create a new partition >> Press 'p' for primary partition (or logical).

We are now asked for the size of the partition - the start and finish cylinders. We already know what the start cylinder is (it's on our earlier 'fdisk -l' output) - but we do not know the end cylinder - this can be calculated as follows:

Blocks (from the resize2fs output): 5120000 x 4 (KB per block) x 1.05 = 21504000 KB

Note: The extra 5% is to ensure the partition is big enough.

We should also enter it in the format fdisk expects - so we would enter it as:

+21504000K
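The calculation above as a quick shell sanity check:

```shell
# end-size for fdisk: blocks x 4KB, plus a 5% margin
BLOCKS=5120000
SIZE_KB=$(( BLOCKS * 4 ))               # 20480000 KB
WITH_MARGIN=$(( SIZE_KB * 105 / 100 )) # 21504000 KB
echo "+${WITH_MARGIN}K"                # the value we give fdisk
```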

We should also ensure that our partition is set to active if it was originally (you can toggle this by pressing 'a').

Press 'w' to write changes.

Then either reboot (if you are using a live CD) or simply continue if you are running this on the OS itself.

Run a filesystem check on the partition:

fsck -n /dev/sda2

And then convert it to EXT3 (create a journal):

tune2fs -j /dev/sda2

Reboot your system with:

shutdown -r now

and then check the mount points and partitions:

df -H

fdisk -l




Tuesday 23 February 2016

Migrating an ESXI virtual machine to AWS EC2

AWS allows you to migrate existing machines to its EC2 platform - although there are specific pre-requisites we should ensure are in place before attempting to migrate a machine:

- SSH must be enabled on the host

- The IP configuration must be set to DHCP

- OS must be Linux or Windows - BSD distros are available on EC2, but I don't see any reference to them being supported for import

- Partitions for Windows and Linux must be MBR for system volumes (no GPT).

- Filesystems for Windows must be NTFS - Linux should be ext2, ext3, ext4, Btrfs, JFS, or XFS.

- Linux VM's must be imported as x64!

There are quite a few others (I've tried to cover the main ones here) - more info here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html#vmimport-limitations

We will start by creating an OVA file - from the vSphere Client (or Web Client) we need to shut down the VM - select it and then go to: File >> Export >> Export OVF Template...

Ensure that 'OVA' format is selected, specify a name and output directory and hit OK.

Download and run the Amazon AWS CLI installer: https://s3.amazonaws.com/aws-cli/AWSCLI64.msi

Open a command prompt here: C:\Program Files\Amazon\AWSCLI

We then configure our account information by issuing:

aws configure

We will need to create a specific IAM role to allow VM import/export operations - as when running the vmimport function it requires access to other services such as S3:

So we should create a file as follows: trust-policy.json

With the following content:

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":"vmie.amazonaws.com"
         },
         "Action":"sts:AssumeRole",
         "Condition":{
            "StringEquals":{
               "sts:ExternalId":"vmimport"
            }
         }
      }
   ]
}

And then create the role in the CLI:

aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json

We should also attach a policy to the role - by creating a new file called: role-policy.json

and adding the following to it (remembering to replace both instances of '<disk-image-file-bucket>' with the bucket you wish to upload the OVA to):

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":[
            "arn:aws:s3:::<disk-image-file-bucket>"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject"
         ],
         "Resource":[
            "arn:aws:s3:::<disk-image-file-bucket>/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}

and creating the policy via the CLI:

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json


If you're logged in as an IAM user you will additionally need the following permissions in your policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:DeleteObject",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::mys3bucket","arn:aws:s3:::mys3bucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CancelConversionTask",
        "ec2:CancelExportTask",
        "ec2:CreateImage",
        "ec2:CreateInstanceExportTask",
        "ec2:CreateTags",
        "ec2:DeleteTags",
        "ec2:DescribeConversionTasks",
        "ec2:DescribeExportTasks",
        "ec2:DescribeInstanceAttribute",
        "ec2:DescribeInstanceStatus",
        "ec2:DescribeInstances",
        "ec2:DescribeTags",
        "ec2:ImportInstance",
        "ec2:ImportVolume",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:ImportImage",
        "ec2:ImportSnapshot",
        "ec2:DescribeImportImageTasks",
        "ec2:DescribeImportSnapshotTasks",
        "ec2:CancelImportTask"
      ],
      "Resource": "*"
    }
  ]
}

We should now ensure we have uploaded the OVA file into our bucket - you can do this by logging into S3 on the AWS portal or use the CLI to upload directly to your S3 bucket.

and then we proceed by importing the OVA file from the bucket into EC2:

aws ec2 import-image --cli-input-json "{  \"Description\": \"Windows 2008 OVA\", \"DiskContainers\": [ { \"Description\": \"First CLI task\", \"UserBucket\": { \"S3Bucket\": \"my-import-bucket\", \"S3Key\" : \"my-windows-2008-vm.ova\" } } ]}"

Note: Where 'my-windows-2008-vm.ova' is the OVA you have uploaded to your S3 bucket.

You can checkup on the import progress by running the following command:

aws ec2 describe-import-image-tasks

When it has completed - we can then go to the AWS Console and should see our imported image in the user-created AMIs section.

Monday 22 February 2016

Secure nginx (1.10+) server configuration with reverse proxy

Below is an nginx (version 1.10+) configuration that I like to use as a baseline. It has been adapted from several sources (all mentioned below.)

server {
  # We bind to a specific interface - NOT 0.0.0.0
  listen X.X.X.X:443 ssl;
  server_name yourwebsite.com www.yourwebsite.com;

  ssl_certificate /etc/nginx/ssl/yourwebsite_cert.pem;
  # Ensure that your key has the correct permissions set e.g. chmod 600.
  ssl_certificate_key /etc/nginx/ssl/yourwebsite_key.key;

  # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
  ssl_session_timeout 1d;
  ssl_session_cache shared:SSL:50m;
  ssl_session_tickets off;

  # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
  ssl_dhparam /etc/nginx/ssl/dhparam.pem;

  # enables server-side protection from BEAST attacks
  ssl_prefer_server_ciphers on;
  # disable SSLv3 - we only want TLSv1/1.1/1.2 (ideally just 1.2 if we can get away with it.)
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  # ciphers chosen for forward secrecy and compatibility: https://mozilla.github.io/server-side-tls/ssl-config-generator/
  ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";

  # enable ocsp stapling (mechanism by which a site can convey certificate revocation information to visitors in a privacy-preserving, scalable manner)
  ssl_stapling on;
  ssl_stapling_verify on;

  # verify chain of trust of OCSP response using Root CA and Intermediate certs
  ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;

  # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
  add_header Strict-Transport-Security max-age=15768000;

  # Prevent clickjacking: https://www.owasp.org/index.php/Clickjacking
  add_header X-Frame-Options "SAMEORIGIN";

  # Disable sniffing in browsers: https://blogs.msdn.com/b/ie/archive/2008/09/02/ie8-security-part-vi-beta-2-update.aspx?Redirected=true
  add_header X-Content-Type-Options nosniff;

  # X-XSS Protection header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
  add_header X-XSS-Protection "1; mode=block";

  # Define DNS resolver (required for OCSP stapling)
  resolver 8.8.8.8;

  access_log  /var/log/nginx/yourdomain.log  combined;
  error_log  /var/log/nginx/yourdomain.error.log;

  # Reverse proxy configuration
     location / {
     proxy_pass  https://10.11.12.13:443;
     proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
     proxy_redirect off;
     proxy_buffering off;
     proxy_set_header        Host            $host;
     proxy_set_header        X-Real-IP       $remote_addr;
     proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
     client_max_body_size 10M;
     allow      1.2.3.4;
     deny       all;
   }

}

# redirect all http traffic to https
server {
  listen 80;
  server_name yourserver.com;
  return 301 https://$host$request_uri;
}
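The ssl_dhparam file referenced in the configuration can be generated with OpenSSL - a quick sketch (the output path here is just an example; move the file to wherever your nginx config expects it, e.g. /etc/nginx/ssl/):

```shell
# Generate a 2048-bit Diffie-Hellman parameter file for the ssl_dhparam directive.
# This can take anywhere from a few seconds to a couple of minutes.
openssl dhparam -out dhparam.pem 2048

# Lock down permissions before moving it into place.
chmod 600 dhparam.pem
```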


Sources:

https://mozilla.github.io/server-side-tls/ssl-config-generator/ (nginx 1.10.1 | intermediate profile | OpenSSL 1.0.1e)
https://geekflare.com/add-x-frame-options-nginx/
https://blog.dnsimple.com/2017/01/introducing-caa-records/

Prioritize SIP / voice traffic with QoS / traffic-shaping on the ASA

The following configuration demonstrates how you can shape traffic on the outside interface - limiting it to 5Mbps while ensuring SIP traffic is prioritized:
priority-queue outside
queue-limit 2048
tx-ring-limit 256
class-map Voice
match dscp ef sip rtp
exit
policy-map p1_priority
class Voice
priority
exit
exit
policy-map p1_shape
class class-default
shape average 5000000
service-policy p1_priority
exit
exit
Note: 'class-default' matches all traffic except traffic matched by any other class in the policy.
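Note also that the shaping policy still needs to be attached to the interface - assuming the interface is named 'outside', something like:

```
service-policy p1_shape interface outside
```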

Setting up QoS with the ASA 5510

We can use two methods to control traffic with the ASA - policing the traffic and shaping the traffic - in this post I will describe each method and provide some real-world examples.

Traffic Policing: This allows you to set a throughput limit (in bits/second) - anything above it will be dropped. It also allows you to set maximum burst limits.

For example we may wish to prevent a specific public-facing web server from saturating all of the bandwidth by limiting www traffic to 5Mbps - to do this we apply MPF (Modular Policy Framework):

access-list WEBSITE-LIMIT permit tcp any host 66.77.88.99 eq www

class-map WEBSITE-TRAFFIC
match access-list WEBSITE-LIMIT
exit

policy-map WEBTRAFFIC-POLICY
class WEBSITE-TRAFFIC
police output 5000000 conform-action transmit exceed-action drop
exit
exit

service-policy WEBTRAFFIC-POLICY interface outside

Traffic Shaping: This also allows you to restrict traffic throughput - but rather than dropping excess traffic it will attempt to buffer the data and send it later. For example:

access-list SHAPED-ACL permit ip interface DMZ interface OUTSIDE

class-map SHAPED-TRAFFIC
match access-list SHAPED-ACL
exit

policy-map qos_outside_policy
class SHAPED-TRAFFIC
shape average 2000000 
exit
exit

service-policy qos_outside_policy interface outside
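Once applied, we can check that traffic is actually being matched and policed / shaped with the following show commands (a sketch - output varies by version):

```
show service-policy police
show service-policy shape
show service-policy interface outside
```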

Sunday 21 February 2016

Hardening / threat mitigation with Cisco ASA

This post will briefly describe a number of techniques that can be used to harden / help mitigate attacks targeted against the ASA.

There are two types of threat-detection that we can utilize: basic (which is enabled by default and applied system-wide) and advanced (which gives you more granular control e.g. specifying source/destination).

conn-limit: This variable allows you to set the maximum number of connections to the device before it will start to drop connections:

threat-detection rate conn-limit-drop rate-interval 1000 average-rate 3 burst-rate 3

We can review the configuration with:

show threat-detection rate

We can also use policy maps to police connections on a more granular level i.e. a specific source / destination:

access-list CUSTOMER-TRAFFIC extended permit tcp any host CUSTOMER-WEBSITE-IP eq www

class-map CONNECTIONS
match access-list CUSTOMER-TRAFFIC
exit

policy-map CONNECTION-POLICY
class CONNECTIONS
set connection per-client-max 30 per-client-embryonic-max 10

service-policy CONNECTION-POLICY interface OUTSIDE

We can then review the policy with (allowing you to check for current connections and how many have been dropped):

show service-policy interface OUTSIDE

Smurf Attacks: This is when an attacker broadcasts a large number of ICMP messages with a spoofed source address so that other hosts on the network all respond - hence generating a large amount of traffic directed towards the target.

This attack can be mitigated by using the following command:

no ip directed-broadcast

Directed broadcasts are turned off by default in IOS 12.0 and later.

SYN Flood Attacks: This kind of attack is where an attacker starts the three-way handshake with the victim by sending a SYN packet; the victim responds with a SYN-ACK, but the attacker never replies with an ACK (or spoofs the source), leaving the session half-open. Done continually, this can exhaust the connection state table if not mitigated properly.

By default the basic threat detection defines the following:

threat-detection rate syn-attack rate-interval 600 average-rate 100 burst-rate 200
threat-detection rate syn-attack rate-interval 3600 average-rate 80 burst-rate 160
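These defaults can be overridden if needed - for example, a (hypothetical) tighter policy for the 10-minute interval:

```
threat-detection rate syn-attack rate-interval 600 average-rate 50 burst-rate 100
```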


Saturday 20 February 2016

ASA: Layer 2 Security

This post will outline some of the most common attacks at layer 2 and how to help prevent / mitigate such attacks.

VLAN Hopping - Switch Spoofing: This is when an attacker attempts to connect to a switchport in the hope that dynamic negotiation of trunking is turned on - the attacker will then use DTP to spoof a switch in order to access the VLANs on the trunk.

This attack can be mitigated by simply ensuring that the switchports are set to access mode e.g.:

switchport mode access

or simply shutting down any unused ports!

VLAN Hopping - Double Tagging: This is where an attacker sends a frame through a switchport assigned to the native VLAN (otherwise it won't work). The frame has an outer and an inner tag - the outer tag matches the native VLAN of the access port on the first switch and is stripped, leaving the inner tag (which carries the VLAN you are trying to reach) in place. The frame then gets flooded out all ports because its destination is unknown - including the trunk to a second switch where the target VLAN resides - the second switch sees the inner VLAN tag and forwards the frame onto the target VLAN to the victim.

Note: This attack is unidirectional - as when the receiving host receives the frame it will have no path back to the source VLAN!

In order to mitigate this attack you should ensure that access ports are never assigned to the native VLAN.

ARP Spoofing: This occurs when an attacker responds to ARP requests it should not - claiming that its MAC address corresponds to an IP address it has not been assigned.

Since ARP technically involves layer 2 and 3 this kind of attack will only affect layer 3 devices e.g. routers, firewalls, servers and so on.

Each type of device typically has its own ARP cache timeout - most Cisco routers hold ARP entries in their table for 4 hours.

MAC (CAM) Spoofing: This is when an attacker sends a frame with someone else's MAC address in the source - in turn the CAM table is updated with this information and then all frames are forwarded out to the attacker.

Typically Cisco devices hold / update this information for 5 minutes by default.

This kind of attack can be mitigated using port-security - by defining a single MAC address for the switchport:

switchport port-security mac-address sticky

MAC (CAM) Flooding: This is when an attacker sends a huge number of frames with spoofed source MAC addresses in the hope of exhausting the CAM table in a VLAN. This in turn prevents the majority of the legitimate traffic from being re-learnt / added to the CAM table and as a result traffic is flooded out of all ports (except the source obviously!)

MAC Flooding attacks can be mitigated with port security in a number of ways - for example by limiting the number of source MAC addresses entering the switchport:

switchport port-security maximum 10

or by allowing only a single MAC address on the switchport:

switchport port-security mac-address sticky

and assign a corresponding action:

switchport port-security violation shutdown
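Putting the above together, a minimal port-security sketch for a single access port might look like the following (the interface name is just an example):

```
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 1
 switchport port-security mac-address sticky
 switchport port-security violation shutdown
```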

STP - BPDU Denial of Service: This is simply where the switch is flooded with TCN (Topology Change Notification) frames or other information - which hits the CPU.

STP - Root Bridge Takeover: This is where a rogue switch claims to be root - hence changing the physical paths / patterns of traffic.

We can deploy the 'root guard' feature on a specific switchport to ensure that any superior BPDUs received on that port are rejected, so the current switch remains the root. Alternatively we can use BPDU guard, which shuts down the port when it sees BPDUs entering it - preventing someone from plugging a switch into the port.

Thursday 18 February 2016

Setting up route redundancy on the ASA

For this tutorial we will have an ASA with an inside interface and two outside interfaces as below:

conf t
int g0/0
nameif inside
security-level 100
ip address 10.0.0.2 255.255.255.0
no shut

int g0/1
nameif outside
security-level 0
ip address 55.66.77.2 255.255.255.240
no shut

int g0/2
nameif outsidebackup
security-level 0
ip address 66.77.88.2 255.255.255.240
no shut
exit

We should now configure dynamic PAT:

object network Inside_Network
 subnet 10.0.0.0 255.255.255.0
 nat (inside,outside) dynamic interface
object network Inside_Network_Backup
 subnet 10.0.0.0 255.255.255.0
 nat (inside,outsidebackup) dynamic interface

Proceeding by defining primary default route:

route outside 0.0.0.0 0.0.0.0 55.66.77.1 1 track 1

Note: The route will not be present in the route table until we have setup the SLA monitor - don't worry! The 'track 1' defines which SLA tracking number the route will be tied to.

We should now define the backup / secondary default route with a metric of 254:

route outsidebackup 0.0.0.0 0.0.0.0 66.77.88.1 254

Proceed by creating an SLA monitor that will use ICMP to check whether the remote gateway is available:

sla monitor 100
 type echo protocol ipIcmpEcho 55.66.77.1 interface outside
 num-packets 3
 frequency 10

Schedule the monitoring process to start now:

sla monitor schedule 100 life forever start-time now

Now associate the tracked static route we created with the SLA monitor:

track 1 rtr 100 reachability

We can now review the state of the monitor with:

show sla monitor operational-state

We are specifically interested in 'Latest operation return code' - which should equal 'OK' if all is good.

Finally we can review and debug the SLA configuration with:

show sla monitor configuration

and

debug sla monitor

Wednesday 17 February 2016

NAT hairpinning configuration with ASA 8.3+

A NAT hairpin is used to access hosts in a network from the same network that are using an outside address. Traffic goes from the inside to outside and then back to the inside - visualizing this it resembles a 'hairpin.'

Now let's say we have a server name that resolves to 66.77.88.99 internally (this might be a server attached to the outside interface or simply one on the internet) - we want this IP address to translate to an internal IP on our inside interface.

Firstly, in order to allow an inside address to access the outside address and be re-translated, you are required to use the following command (which permits traffic to enter and exit the same interface):

same-security-traffic permit intra-interface

IOS Pre-8.3

static (inside,inside) 192.168.0.100 66.77.88.99 netmask 255.255.255.255

on IOS 8.3+

We can do this with twice NAT:

object network outside-server
host 66.77.88.99

object network inside-server
host 192.168.0.100
exit

object network inside-network
subnet 192.168.240.0 255.255.255.0

nat (inside,inside) source dynamic inside-network interface destination static outside-server inside-server

or alternatively with autonat:

object network myWebServer
host 66.77.88.99
nat (inside,inside) static 192.168.0.100

In conclusion - when we ping 66.77.88.99 we should see we are being translated to 192.168.0.100 - the easiest way to verify is to use either packet tracer or attempt to connect to the web server's HTTP port.
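As mentioned, packet-tracer is the easiest way to verify the hairpin - for example, simulating an inside host (the source IP and port here are just examples) connecting to the web server's HTTP port:

```
packet-tracer input inside tcp 192.168.240.10 12345 66.77.88.99 80
```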

Tuesday 16 February 2016

Setting up an active/standby failover with the ASA

For this guide I will be using two ASAv 9.2(1) appliances in an ESXi environment - although the procedure should be very similar on other models such as the ASA 5510, 5540 etc.

There are some requirements we must meet when setting up a failover pair of ASAs:

- Firstly: The two devices should be IDENTICAL - that is: the same model, same amount of interfaces, same licenses and so on.

- If you are using ASAv's to create a failover cluster you are limited to Active/Standby - you are not able to do an Active/Active setup.

- Ensure you are running exactly the same IOS version on each device and also that you have the same ASDM images in flash / set on each device.

- Obviously all of the physical port setup on ASA1 should be mirrored on ASA2 - i.e. if int g0/0 on ASA1 is connected to the GZ LAN switch - so should int g0/0 on ASA2.

For this tutorial I will have a simple topology of two ASA's that are both connected to the inside and outside networks - the config looks something like the below:

We should now make a backup of ASA1 config:

copy run flash:/orig_config.cfg

And then setup the interface we will use for failover (management 0/0 in my case):

clear configure interface m0/0
int m0/0
no shut

And configure the inside and outside interfaces:

conf t
int g0/0
nameif inside
duplex full
security-level 100
ip address 10.0.0.1 255.255.255.0 standby 10.0.0.2
no shut

int g0/1
nameif outside
duplex full
security-level 0
ip address 192.168.240.1 255.255.255.0 standby 192.168.240.2
no shut
exit

We should proceed to set up dynamic NAT:

Pre IOS 8.3:
nat (inside) 1 10.0.0.0 255.255.255.0

and enable the outside interface for NAT:
global (outside) 1 80.90.110.121

On IOS 8.3+
object network obj-10.0.0.0
subnet 10.0.0.0 255.255.255.0
nat (inside,outside) dynamic interface

Now setup the ASA1 as the primary unit in the failover:

conf t
failover lan unit primary

We should now define the interface that will be used for the failover:

failover lan interface FAILOVER m0/0

Set the failover link IP addresses:

failover interface ip FAILOVER 192.168.5.1 255.255.255.0 standby 192.168.5.2

For security we should also ensure a shared key is set:

failover key 212121

And then turn on failover:

failover

Now we can activate stateful failover and save the changes:

failover link FAILOVER m0/0
write memory

Now we have setup ASA1 - we should proceed to setup ASA2 as follows:

conf t
clear configure interface m0/0
int m0/0
no shut
exit

and turn on the interface for the failover:

failover lan int FAILOVER m0/0

and set the failover IP addresses (note: although this duplicates what we entered on ASA1, this is expected):

failover interface ip FAILOVER 192.168.5.1 255.255.255.0 standby 192.168.5.2

Set the pre-shared key and instruct ASA2 to be the secondary / standby unit:

failover key 212121
failover lan unit secondary

Finally turn on the failover feature:

failover

We can then verify with (on each ASA):

show failover

A good / quick test to check everything is working is to power off ASA1, wait 30 seconds and issue the 'show failover' command on ASA2 again - you should see that it has now taken over the primary unit's interface IPs.

We can then turn on ASA1 again and manually change it back to the active unit of the pair by issuing:

failover active

Note: If you have a backup link (e.g. for the internet) and have set up SLA monitoring - you will probably want to stop the failover from monitoring the primary outside / internet interface - or else it will fail over to the standby ASA when the primary internet link goes down!

This can be done with:

no monitor outside

Monday 15 February 2016

Dissociating an Exchange mailbox from its AD user / changing the AD user

We should firstly disconnect the mailbox (this will not delete the mailbox immediately - but it will be marked for deletion - 30 days by default.)
Disable-Mailbox "user@example.com"
This will remove all of the Exchange-related attributes from the AD user.

We should then connect the mailbox again - ensuring we use the '-User' switch to point it at our new user (otherwise it will automatically attempt to pick up a matching user from AD):
Connect-Mailbox -Database "Mailbox Database Name" -Identity "Old User" -User "newuser"
*** Where "Old User" == the display name of the mailbox.
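If you need to locate the disconnected mailbox first (e.g. to confirm its display name), something along these lines should list mailboxes marked for deletion - treat this as a sketch:

```powershell
# List disconnected mailboxes per database (DisconnectDate is set once disabled)
Get-MailboxDatabase | Get-MailboxStatistics |
    Where-Object { $_.DisconnectDate -ne $null } |
    Select-Object DisplayName, Database, DisconnectDate
```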

Setting up licensing on the ASAv Appliance

Since the ASAv no longer supports the manual 'activation-key' method, we are required to license it via the Smart Licensing (Smart Call Home) feature.

In order to setup licensing we should firstly ensure that DNS has been setup:

conf t
dns domain-lookup outside

and then simply specify the DNS server:

dns name-server 8.8.8.8

We can now issue the following commands to setup smartcall:

conf t
call-home
profile License
destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService

and specify the license type ('Standard' is currently the only type that is supported!):

license smart
feature tier standard
throughput level 100M

finally exit out of configuration mode to apply the changes.
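We can then confirm the registration / license state with (command availability varies slightly between versions):

```
show license status
show license all
```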

Tunneling VPN out of a secondary interface

Just a quick checklist of things to ensure you have performed when you wish to terminate a VPN tunnel on a secondary internet uplink.

In my case there were two internet uplinks - one active and another standby - being controlled by an SLA monitor / static routing.

- Ensure that ISAKMP has been enabled on the secondary interface.

- Ensure you have a static route in place to route the remote VPN subnet / traffic out of the secondary interface: route outsideSecondary 10.11.0.0 255.255.0.0 <secondary-int-default-gw>

- Ensure you have a static route in place that will route VPN traffic destined for the other side's endpoint e.g. route outsideSecondary 80.70.60.50 255.255.255.255 <secondary-int-default-gw>

Friday 12 February 2016

ASA Service Policies

Policy maps allow us to apply specific actions to traffic that is defined by a class map.

They are applied either to a single interface or globally (all interfaces).

For this scenario I would like to apply a global policy that will DENY any DNS traffic attempting to look up the domain name testing.com originating from the source IP 10.0.0.182.

To do this we firstly have to build a class-map to identify the traffic:

access-list mytraffic extended permit tcp 10.0.0.182 255.255.255.255 any eq 53
access-list mytraffic extended permit udp 10.0.0.182 255.255.255.255 any eq 53

class-map myclassmap
match access-list mytraffic

We will then create an inspection policy map:

regex urllisttest "testing.com"

policy-map type inspect dns strictdns
match domain-name regex urllisttest
drop log
exit

policy-map global_policy
class myclassmap
inspect dns strictdns

(global_policy is the default service policy that is applied to all interfaces)

Note: Also ensure that if you have any global source / dst rules in the 'global policy' that the new policy map we are creating is before these (otherwise they could cause issues.)
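We can verify that the inspection policy is applied (and that drops are being counted) with:

```
show service-policy inspect dns
```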

Thursday 11 February 2016

Setting up and configuring an identity certificate on the ASA

I want to demonstrate a few scenarios:

- A: Where we need to generate a self-signed certificate

The following command generates a new RSA key
crypto key generate rsa label ssl-vpn-key modulus 2048

We should then create a 'trustpoint' (this is simply a container that holds certificates):
crypto ca trustpoint localtrust

And then set the certificate type
enrollment self

Specify the FQDN:
fqdn mysslvpn.test.internal

Specify the subject:
subject-name CN=mysslvpn.test.internal

Specify the private key:
keypair ssl-vpn-key

Enroll the trustpoint:
crypto ca enroll localtrust

and finally apply the trustpoint to the interface:
ssl trust-point localtrust outside

To review any trustpoint configurations we can issue:
show ssl

- B: Where we need to apply a certificate from a public CA (or local CA like AD Certificate Authority)

The following command generates a new RSA key
crypto key generate rsa label ssl-vpn-key modulus 2048

We should then create a 'trustpoint' (this is simply a container that holds certificates):
crypto ca trustpoint publictrust

And then set the certificate type (in this case we want it to be in interactive mode so we can copy the CSR):
enrollment terminal

Specify the FQDN:
fqdn mysslvpn.mydomain.com

Specify the subject:
subject-name CN=mysslvpn.mydomain.com

Specify the private key:
keypair ssl-vpn-key

Enroll the trustpoint:
crypto ca enroll publictrust

This will then generate the CSR we can copy and paste to our public CA portal from the terminal.

Once we have a certificate from our CA - we should then proceed to get hold of the root certificate and any intermediate certificates and export them in BASE64 format - we should then copy all of them (chained) into the terminal:

crypto ca authenticate publictrust

Finally we should then proceed by importing the identity certificate with:

crypto ca import publictrust certificate

(again BASE64 needed)

To review any trustpoint configurations we can issue:
show ssl


Backing up / exporting SSL certificates:

We can generate a PKCS12 file (which includes both the private and public key) using something like:
crypto ca export publictrust pkcs12 NotAStrongPassword

Wednesday 10 February 2016

NAT'ing with IOS 8.3+

With the introduction of IOS 8.3 there were some fundamental changes to the way NAT is done.

One of these changes was that NAT exemptions (nat 0) no longer exist - you are now required to use an identity NAT instead. Depending on the scenario this can be done using either Auto NAT or Manual NAT. For example, if we wanted to ensure traffic between two networks is omitted from being NAT'd - we could define a policy-based identity NAT - e.g.:
object network internal_network
subnet 10.0.0.0 255.255.255.0
object network vpn_network
subnet 172.30.20.0 255.255.255.0
exit 
nat (dmz,outside) source static internal_network internal_network destination static vpn_network vpn_network no-proxy-arp route-lookup
The NAT rule above translates the source to itself if the destination matches - otherwise, if the destination is different, the rule simply isn't used.

The way in which ACLs are applied on interfaces has also changed with 8.3 - pre-8.3, when allowing traffic that was to be NAT'd on an interface, you would define an explicit rule allowing the untranslated packet access inbound - for example:

In the event that a packet was destined for your outside interface (assigned a public IP of 88.77.66.55), with a NAT rule forwarding the packet to an IP (192.168.10.10) in your DMZ - you would add an ACL permitting traffic to 88.77.66.55. In 8.3, however, the packet is untranslated before the interface ACLs are checked - meaning we would instead add a rule allowing access to the DMZ IP (192.168.10.10)!

Auto NAT is configured within a network object. An advantage of Auto NAT is that it will automatically organize NAT rules, preventing any collisions. This comes at the price of granularity, as unlike manual NAT you are unable to make a translation decision based on the destination.

An example of auto nat that provides dynamic PAT for inside clients out to the internet:

object network inside-subnet
 subnet 10.0.0.0 255.255.255.0
 nat (inside,outside) dynamic interface

Manual NAT (twice NAT) 

An example of manual NAT:

object network inner_ip
host 10.0.0.100

object network outside_ip
host 44.55.66.77
exit

nat (inside,outside) source static inner_ip outside_ip
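After applying NAT rules we can confirm how they are being interpreted (and in what order), plus see active translations, with:

```
show nat detail
show xlate
```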

Tuesday 9 February 2016

Setup AnyConnect VPN for ASA



The AnyConnect VPN allows clients to establish a VPN that is tunneled over TLS / SSL rather than the traditional method of using an VPN Client that utilizes IPSec to protect the traffic.

It appears that the AnyConnect method is picking up in popularity - no doubt because of some of its advantages over the traditional client - to name a few:

- No existing client required
- Everything tunneled over HTTPS - so will work in corporate environments
- Will work on much larger array of platforms than the client

So to setup AnyConnect we must firstly grab hold of the appropriate AnyConnect client:

copy tftp flash
anyconnect-win-2.3.0254-k9.pkg

We should now allocate an IP pool for remote VPN clients:

ip local pool ANYCONNECT-POOL 10.11.0.10-10.11.0.200 mask 255.255.255.0

And then create an new object defining the client VPN subnet:

object network VPNClientSubnet
subnet 10.11.0.0 255.255.255.0
exit

We should then turn on the webvpn and enable it on our relevant interface:

webvpn
enable outside
INFO: WebVPN and DTLS are enabled on 'outside'.
tunnel-group-list enable
exit

We will now need to instruct the ASA which anyconnect image it should use from the flash:

anyconnect image disk0:/anyconnect-win-3.1.12020-k9.pkg 1
anyconnect enable
exit

Now we should configure an ACL for split tunneling - this will define which traffic will traverse through the VPN:

access-list SPLIT-TUNNEL standard permit 10.0.0.0 255.255.255.0

Note: I had issues with pinging the loopback on Windows 7 clients, so I ended up needing to add the IP pool range into the split-tunnel ACL too:

access-list SPLIT-TUNNEL standard permit 10.11.0.0 255.255.255.0

Now proceed by creating an internal group policy which will define the attributes for our VPN clients.

Note: A group policy is a set of attributes that can be applied to a VPN connection.

group-policy GroupPolicy_ANYCONNECT-PROFILE internal
group-policy GroupPolicy_ANYCONNECT-PROFILE attributes
vpn-tunnel-protocol ssl-client
dns-server value 10.0.0.254
default-domain value domain.internal

We will also now apply the ACL for split tunneling we created before:

split-tunnel-policy tunnelspecified
split-tunnel-network-list value SPLIT-TUNNEL

Note: We could simply omit the above two lines if we wished to tunnel all client traffic.

Finally we should create a tunnel group:

tunnel-group ANYCONNECT-PROFILE type remote-access
tunnel-group ANYCONNECT-PROFILE general-attributes

Point it to the group policy we created:

default-group-policy GroupPolicy_ANYCONNECT-PROFILE

and to the IP pool we created earlier:

address-pool ANYCONNECT-POOL

and finally:

tunnel-group ANYCONNECT-PROFILE webvpn-attributes
group-alias ANYCONNECT-PROFILE enable

We will want a test user - so we can do:

username remote password access
username remote attributes
service-type remote-access

We will also want to ensure the user's connection profile is locked down:
group-lock value ANYCONNECT-PROFILE
NOTE: Even if you DON'T have NAT enabled on the device (for example in a lab environment) - in my testing I found it necessary to add the identity NAT / NAT exemption! You should also ensure that traffic going to (and from) the remote clients is not NAT'd:

nat (inside,outside) 2 source static any any destination static VPNClientSubnet VPNClientSubnet no-proxy-arp route-lookup

To test - I had already installed the pre-deploy client (anyconnect-win-3.1.12020-pre-deploy-k9.iso)

I also made extensive use of syslog - that when enabled will help no end when debugging! :)

As I was using an unlicensed / trial copy of ASAv I was limited to only 2 sessions - and hence I was receiving a 'login failed' message from the client sometimes when attempting to connect.

Filtering

There are a number of ways of filtering the VPN traffic:

We can apply the system option "sysopt connection permit-vpn" - this will instruct the ASA NOT to filter / apply ACL's against the VPN traffic - so it can freely get where it wants.

We can also apply a VPN filter that will restrict VPN traffic based on an ACL e.g. the following allows an anyconnect client (10.11.12.13) to access SSH on a local subnet:

access-list VPN-FILTER extended permit tcp host 10.11.12.13 192.168.1.0 255.255.255.0 eq 22

group-policy VPN-POLICY attributes
vpn-filter value VPN-FILTER


Upgrading / applying a new license for ASA (CLI and GUI)

To register a license on your device we can either activate the key from privileged mode on the CLI:

activation-key xxxxxx xxxxxxx xxxxxxx xxxxxxx xxxxxxx

When generating a new license we are required to provide the device's serial number - this can be obtained as follows:

show version | include Serial

Alternatively we can also activate the license from within ASDM:

ASDM >> Device Management >> Licensing >> Activation Key.

You will need to reload the device once the license has been applied.

Note: Some newer versions of IOS now only support smart-licensing - which requires a remote connection to Cisco's activation servers. So instead you have to run:

licensing smart register <id-token>

Setting up SSH on the Cisco ASA

We should firstly ensure we have set up AAA on the ASA by creating a server group:

aaa-server myTACASServers protocol tacacs+

(and add the relevant AAA servers)

Instruct SSH authentication to be performed by the server group:

aaa authentication ssh console myTACASServers LOCAL

* The 'LOCAL' keyword allows the authentication mechanism to fallback to local users on the device if there are no available aaa servers. *

We should now create a local account as a backup:

username cisco password myStr0ngP@55w0rd! privilege 15
username cisco attributes
service-type nas-prompt
aaa authorization exec authentication-server

Create an RSA key and set SSH version:

crypto key generate rsa modulus 2048

ssh version 2

and finally set access-control up:

ssh 10.0.0.0 255.255.255.0 management

Monday 8 February 2016

Control Plane Protection (CPPr)

CPPr provides the ability to restrict and/or police traffic destined for the route processor of the IOS device. Simply put it provides a way of hardening your IOS device.

It is fairly similar to CoPP, although provides the benefit of being able to provide more granular control.

CPPr divides the control-plane into three categories, known as subinterfaces:

Host subinterface: This subinterface receives incoming traffic from one of the device's interfaces - this traffic is typically services like SSH, Telnet, OSPF, VPN terminations etc. (This excludes layer 2 traffic like CDP and ARP - these are classed within the CEF-exception subinterface.)

Transit subinterface: This subinterface controls all of the IP traffic traversing the router that is switched by the route processor - but not traffic that is destined directly for the router.

CEF-exception subinterface: This subinterface receives traffic that is either redirected as a result of a configured input feature in the CEF packet forwarding path for process switching, or directly enqueued in the control plane input queue by the interface driver (that is, ARP, external BGP (eBGP), OSPF, LDP, Layer 2 keepalives, and all non-IP host traffic). Control plane protection allows specific aggregate policing of this type of control plane traffic.

CPPr comprises three main features:

Port-filtering - This provides enhanced protection by allowing the device to drop packets destined for closed or non-listening TCP/UDP ports earlier in the processing path. This feature is exclusive to the host subinterface.

Queue-thresholding - This provides a way of limiting the number of unprocessed packets a protocol can have at process level. This provides the benefit of ensuring that no single protocol can consume all of the bandwidth, preventing the other protocols from working. Again this can only be applied on the host subinterface.

Aggregate control-plane services - Control-plane policing provides granular control over control-plane traffic destined for any one of the subinterfaces, and hence can be used on any control-plane traffic type (including layer 2 traffic such as CDP, ARP and so on).
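To make the port-filtering feature more concrete, below is a minimal sketch of dropping traffic to closed ports on the host subinterface - the class and policy names are hypothetical, and the syntax follows Cisco's CPPr documentation:

```
class-map type port-filter match-all PF-CLOSED-PORTS
 match closed-ports
policy-map type port-filter PF-DROP-CLOSED
 class PF-CLOSED-PORTS
  drop
control-plane host
 service-policy type port-filter input PF-DROP-CLOSED
```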

Sources: http://www.cisco.com/c/en/us/td/docs/ios/12_4t/12_4t4/htcpp.html

Networking Devices and the three planes (Control, Data and Management)

Control-plane: This is where routing decisions (layer 2 and 3) are made and neighbor information is distributed - for example OSPF, EIGRP and CDP.

Data-plane: This is where data such as routing tables, ARP caches and topology databases reside. This is the space where the actions decided by the control-plane are carried out.

Management-plane: This is where all of the management traffic occurs, like SNMP, SSH, Telnet etc.

Probing an SNMP device, MIB's and OID's

When talking to an SNMP device it is important that you are aware of all of the information available to you (i.e. which information can be polled from the device) - fortunately this is where MIBs (Management Information Bases) come into play - they define exactly what information can be polled from a device.

MIB files are typically available for download from the hardware vendor's support site - although unfortunately they are not always available.

In some cases it might be necessary to instead probe the device - again there are various tools to do this (including snmpget and snmpwalk). To list all available information we can issue something like:

snmpwalk -v2c -c public 10.0.0.1

Now OIDs are a bit like MAC addresses - in that portions of the OID are carved out for specific identities:

[Vendor OID][Store Identifier].[Message element Identifier]

Vendors typically have a root (or base) OID under which their information is located. For example Cisco's is 1.3.6.1.4.1.9 - you can find a complete list here: http://www.alvestrand.no/objectid/1.3.6.1.4.1.9.html

So alternatively (and better yet) we could also issue the following command that only scans Cisco's namespace:

snmpwalk -v2c -c public 10.0.0.1 .1.3.6.1.4.1.9

Example output:

SNMPv2-SMI::enterprises.9.6.1.101.29.4.1.1.1774375000 = INTEGER: 1774375000
SNMPv2-SMI::enterprises.9.6.1.101.29.4.1.1.1818787036 = INTEGER: 1818787036
SNMPv2-SMI::enterprises.9.6.1.101.29.4.1.2.1774375000 = INTEGER: 1
SNMPv2-SMI::enterprises.9.6.1.101.29.4.1.2.1818787036 = INTEGER: 1
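As a quick (hypothetical) illustration of working with this output in a shell script, the vendor prefix and value can be split apart with awk - the sample lines below are copied from the output above:

```shell
# Strip the "SNMPv2-SMI::enterprises." prefix and the " = INTEGER: " separator,
# leaving the OID suffix and its value side by side.
awk -F' = INTEGER: ' '{ sub(/^SNMPv2-SMI::enterprises\./, "", $1); print $1, $2 }' <<'EOF'
SNMPv2-SMI::enterprises.9.6.1.101.29.4.1.2.1774375000 = INTEGER: 1
SNMPv2-SMI::enterprises.9.6.1.101.29.4.1.2.1818787036 = INTEGER: 1
EOF
```

In a real pipeline you would of course feed the live snmpwalk output into awk rather than a here-document.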

There are various tools that can help you make more sense of OIDs - for example to get a description of the following OID (.1.3.6.1.4.1.9.6.1.101.29.4.1.1.1774375000) we could issue:

snmpwalk -v2c -c public 10.0.1.1 .1.3.6.1.4.1.9.6.1.101.29.4.1.1.1774375000


Testing an SNMP enabled device via the command line with snmpwalk

With snmpwalk we can quickly test SNMP connectivity with devices such as SNMP-enabled routers, UPSes, switches etc.

For this tutorial I will be working on CentOS - although it is worth noting that there are Windows-based implementations of snmpwalk available!

To test connectivity we can issue something like the following at the SNMP-enabled device:

snmpwalk -Os -c [community string] -v [SNMP version] [IP] [OID]

e.g.:

yum install net-snmp-utils

snmpwalk -Os -c public -v 2c 10.0.3.3 1
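If you find yourself running this against many devices, a tiny (hypothetical) wrapper that assembles the snmpwalk command line from the template above can be handy in scripts:

```shell
# Build the snmpwalk invocation from: community, version, host, OID.
# This only prints the command; pipe to sh (or drop the printf) to execute it.
snmp_walk_cmd() {
  printf 'snmpwalk -Os -c %s -v %s %s %s\n' "$1" "$2" "$3" "$4"
}

snmp_walk_cmd public 2c 10.0.3.3 1
```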

Setup and configure a host for SNMP with NAGIOS

With the check_snmp plugin for NAGIOS you are able to easily monitor network devices such as switches and routers that support SNMP.

By default check_snmp is included within the plugins package provided by Nagios (https://www.nagios.org/downloads/nagios-plugins/) so as long as this is installed we can proceed...

It *should* be present in /usr/local/nagios/libexec - although in my case it was not - even with the plugin package installed...

It actually turns out that when compiling the Nagios plugins, if you do not have the following packages installed (net-snmp AND net-snmp-utils), the check_snmp plugin will not be built!

So to resolve the problem I ran:

yum install net-snmp net-snmp-utils

and then recompiled the plugins:

cd /tmp/nagios-plugins-2.1.1
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
make install

and then check the plugin has been installed:

ls /usr/local/nagios/libexec | grep check_snmp

Now there are a few pre-requisites we need to do:

Ensure the device you wish to poll has SNMP enabled, that you have the community string and it is set up accordingly, and that the relevant OIDs are known.

We should proceed by enabling the switch definitions within the nagios.cfg configuration file:

vi /usr/local/nagios/etc/nagios.cfg

and uncomment the following line:

cfg_file=/usr/local/nagios/etc/objects/switch.cfg

We should then edit the definitions:

vi /usr/local/nagios/etc/objects/switch.cfg

And then fill in / modify the relevant values - defining the switch, host group and its checks.

For more information on available checks please see here: https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/monitoring-routers.html

Finally verify the configuration:

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

and restart nagios:

service nagios restart

Setup a minimal CentOS installation on ESXI

After installation we should ensure that VMware Tools is installed - so after mounting the VMware Tools ISO, from the bash prompt we should issue:
mkdir -p /mount/cdrom
mount -t auto /dev/cdrom /mount/cdrom
cd /mount/cdrom
cp VMwareTools-9* /tmp
cd /tmp
tar zxvf VMwareTools-9*
cd vmware-tools-distrib
./vmware-install.pl
Although I got a message complaining about perl..

Oh no... It looks like the CentOS 'minimal' install does not include a perl interpreter!

Now we could simply install the package with yum - although I opted for the VMXNET3 NIC and hence the NIC will not function on the guest OS until VMware Tools is installed!

So we can either mount a full CentOS ISO, update the yum sources and attempt to install the perl package OR we could simply be lazy and add a legacy E1000 adapter - configure it and download the relevant perl packages via the internet using yum.

I opted for the latter option and once the NIC had been installed - we can either use dhclient to configure a temporary IP for our uses (that is, if there is a DHCP server reachable!) or manually configure the NIC.

For DHCP:
dhclient enoXXXXXXX
and verify with:
ip addr
To setup a static IP address:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.0.0.0
NETMASK=255.255.255.0
IPADDR=10.0.0.2
USERCTL=no
Setup a default gateway:
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=linuxbox
GATEWAY=10.0.0.1
and DNS:
vi /etc/resolv.conf 
nameserver 8.8.8.8
nameserver 8.8.4.4
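Once these files have been edited the network service needs to be restarted for the settings to take effect - the service name below is the CentOS 7 default, but verify on your system:

```
systemctl restart network
```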
And then install perl (we should also ensure the ifconfig utility is installed, as it's required by VMware Tools!):
yum install perl net-tools
and finally attempt to install VMware Tools again!
cd /tmp/vmware-tools-distrib
./vmware-install.pl
Once installed - give the VMXNET3 a static IP and simply remove the E1000 NIC.


Sunday 7 February 2016

Setting up Dynamic ARP Inspection (DAI)

DAI is a mechanism applied to prevent ARP spoofing - it intercepts ARP packets (on untrusted ports) and ensures that the IP-to-MAC association is valid, which it does by checking the DHCP snooping binding database.

If it identifies a bad (spoofed) ARP packet it will simply drop the packet before it is forwarded or added to the CAM table.

DAI is enabled on a per VLAN basis with:
ip arp inspection vlan 100
end
and then to verify the configuration:
show ip arp inspection vlan 100
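Since DAI validates against the DHCP snooping binding database, DHCP snooping must already be enabled for the VLAN - a minimal sketch, using the same VLAN number as the example above:

```
ip dhcp snooping
ip dhcp snooping vlan 100
```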
We can also set a limit on the maximum ARP requests received per second - by default this is usually 10 - however on busy networks I usually like to increase this to 100 - although of course this will vary greatly depending on your network!

int range gi0/1-48
ip arp inspection limit rate 100

By default all ports are untrusted, but if for example we had a trunk to another switch that was hooked up to a load of servers with statically assigned IPs, we might wish to trust a port:
int g0/15
ip arp inspection trust
Now since DAI is reliant on the DHCP snooping binding database, any static IPs will be absent and when DAI is enabled you will see something like the following in the logs:

*Mar  1 00:44:05.783: %SW_DAI-4-DHCP_SNOOPING_DENY: 1 Invalid ARPs (Res) on Gi0/4, vlan 100.([0001.807c.1234/10.111.111.111/54ee.7534.1234/10.111.111.1/00:44:05 UTC Mon Mar 1 1993])

So in order to exclude these hosts configured with static IPs (for example routers, servers, printers etc.) we can create an ARP ACL and use it as a DAI filter:
arp access-list mydaifilter
permit ip host 10.0.0.1 mac host 54ee.7534.1234
exit
and then apply the filter:
ip arp inspection filter mydaifilter vlan 100 
and to verify the entry we can use:
show ip arp inspection vlan 100 
Finally we can check up and see whether any ARP packets are being dropped by defining a buffer size and turning on the logging with:
conf t
ip arp inspection log-buffer entries 512
end
and then review the log with something like:
show ip arp inspection log

Friday 5 February 2016

Packet capture via the command line with ASA 5510

For this kind of thing I typically much prefer the CLI with the ASA!

To start a new capture we can issue something like:

capture mycapture1 interface <interface> match <protocol> host <source-ip> host <destination-ip> eq <port> buffer 5242880
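As a concrete (hypothetical) example - capturing HTTPS traffic between two hosts on the 'inside' interface, with the host addresses purely illustrative:

```
capture mycapture1 interface inside match tcp host 10.0.0.5 host 10.0.0.6 eq 443 buffer 5242880
```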

To view the capture we can copy it from the device to an FTP server:

copy /pcap capture:mycapture1 ftp://user:pass@123.123.123.123/mycapture1.pcap

or simply download it directly from the web interface:

https://192.168.1.1/admin/capture/mycapture1/pcap

and we also have the ability to view it in the terminal with:

show capture mycapture1

How to configure NetFlow on an ASA 5510

Unlike tools such as tcpdump, Wireshark and the like, which inspect all of the traffic (and its payload too), NetFlow is typically used to capture information about a specific 'stream' - this incorporates details such as source, destination, protocols and ports - which it identifies by analyzing the packet headers only.

NetFlow is typically transmitted over UDP port 2055 (although the port is configurable).

NetFlow is only available in ASA software version 8.2 and above and can be set up via the GUI or via the CLI - for the purposes of this tutorial I will be setting it up via the CLI.

We should firstly create a new class map and define an ACL to include our traffic:

access-list flow_export_acl permit ip host 10.0.0.1 host 10.0.0.2
exit

class-map flow_export_class
match access-list flow_export_acl

OR

match any (to match all traffic)

Now either create a new policy map and assign it to the global service policy:

policy-map flow_export_policy
class flow_export_class
service-policy flow_export_policy global

or use the existing 'global_policy':

policy-map global_policy
class flow_export_class

We can then define a NetFlow server to receive the data:
flow-export destination <interface> server1 2055
flow-export destination <interface> server2 2055
flow-export event-type all destination server1

To set the source interface that the NetFlow data will be sent from:
ip flow-export management
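Putting the pieces together, a hypothetical end-to-end sketch of the above might look like the following - the collector IP (10.0.0.50) and the 'management' interface name are illustrative assumptions, not taken from a real deployment:

```
flow-export destination management 10.0.0.50 2055
access-list flow_export_acl permit ip any any
class-map flow_export_class
 match access-list flow_export_acl
policy-map global_policy
 class flow_export_class
  flow-export event-type all destination 10.0.0.50
```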

To check whether anything is being picked up you can issue:

show flow-export counters

or to review the configuration:

show flow-export