Thursday 28 July 2016

Writing your own udev rules with CentOS 7

Introduced with the 2.6 Linux kernel, udev is a system that automatically maps devices that are plugged in - such as hard drives, USB drives and so on - and creates a corresponding entry in the /dev filesystem.

udev uses a collection of pre-defined rules to decide how devices will be presented and what other actions are needed. The default rules ship in /usr/lib/udev/rules.d, while custom rules belong within:

/etc/udev/rules.d

In the below example I will be automatically mapping a specific USB mass storage device to /dev/myspecialusb and then executing a custom script (which could, for example, mount it).

Firstly we should collect some information about the device - we do this using the udevadm command:

udevadm info -a /dev/sdb

Which should return something like:
P: /devices/pci0000:00/0000:00:14.0/usb1/1-3/1-3:1.0/host8/target8:0:0/8:0:0:0/block/sdb
N: sdb
S: disk/by-id/usb-090c_1000_AA04012700008713-0:0
S: disk/by-path/pci-0000:00:14.0-usb-0:3:1.0-scsi-0:0:0:0
E: DEVLINKS=/dev/disk/by-id/usb-090c_1000_AA04012700008713-0:0
.................
What we are really after is some way of uniquely identifying the device - for example we might use the device's serial number (the ATTRS{serial} value):

AA04012700008713
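
If you just want to pull the serial out of the (fairly verbose) output, you can pipe the same command through grep:

udevadm info -a /dev/sdb | grep -i serial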

Now let's create a new rule within the rules directory - although keep in mind there is a file naming convention in place - e.g.:

Rule files beginning with '60' will override any default rules and rule files beginning with '90' (or above) should be run last - refer to the man pages for more information.
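
For example, the rule below could live in a file named something like the following (the exact name is just an example that follows the convention above):

vi /etc/udev/rules.d/90-myspecialusb.rules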

KERNEL=="sd?1", ATTRS{serial}=="AA04012700008713", SYMLINK+="myspecialusb", RUN+="/usr/local/scripts/test.sh"
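
The RUN script itself can do whatever you need - below is a minimal sketch of what /usr/local/scripts/test.sh might look like (the contents and log path are purely an example; note that long-running work such as mounting is better handed off to something like a systemd unit, since udev kills long-running RUN processes):

#!/bin/bash
# udev exports details of the triggering device as environment variables
echo "$(date): udev ${ACTION} event for ${DEVNAME} (serial ${ID_SERIAL_SHORT})" >> /var/log/spusb-events.log

Remember to make the script executable (chmod +x /usr/local/scripts/test.sh).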

udev should pick up rule changes automatically since it uses inotify to detect changes in the /etc/udev/rules.d folder - although if you wish to manually force a reload (without rebooting) you can issue:

udevadm control --reload-rules

Now simply re-insert the usb drive - use dmesg to help debug any problems.

Sources:

http://unix.stackexchange.com/questions/39370/how-to-reload-udev-rules-without-reboot

Tuesday 26 July 2016

Importing a yum repository key with CentOS 6/7

More often than not (especially on CentOS) it is necessary to rely upon third-party software repositories.

The example below explains how to add an additional repository, including importing its key file.

cd /tmp

wget http://rpm.playonlinux.com/public.gpg

sudo rpm --import ./public.gpg

sudo curl -o /etc/yum.repos.d/playonlinux.repo http://rpm.playonlinux.com/playonlinux.repo

Unlike apt, yum refreshes its repository metadata automatically - so you can simply install the desired software as usual:

sudo yum install playonlinux
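
If you want to confirm that the repository and key were picked up, the following should now list the new repo and the imported public key:

yum repolist
rpm -qa 'gpg-pubkey*'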

Monday 25 July 2016

Working with the x-forwarded-for header and IIS 7.0 / 7.5

Sadly IIS 7.0 / 7.5 does not provide built-in support for the X-Forwarded-For header - which, when working behind load balancers, is a very important element when it comes to obtaining useful logging information.

In order to get x-forwarded-for headers working with IIS we must firstly install the Advanced Logging add-on - which can be obtained here:

http://www.iis.net/downloads/microsoft/advanced-logging

Click on the webserver node in the IIS manager, double click 'Advanced Logging' and finally hit the 'Enable Advanced Logging' link in the right-hand Actions pane.

** NOTE: As far as I am aware you must create / define all logging fields from the 'Advanced Logging' feature on the server node - not on the website / web application node you wish to enable the logging for - i.e. select the server node and double click 'Advanced Logging' under the IIS section.

From the right-hand navigation pane select 'Edit Logging Fields' then hit 'Add Field' and enter as follows:

Field ID = ClientSourceIP
Category = Default
Source Type = Request Header
Source Name = X-Forwarded-For

Then click 'OK' on the 'Add Logging Field' dialog and the 'Edit source fields' dialog.

Now in the main view right-click on the '%COMPUTERNAME%' item and select 'Edit Log Definition'.

Then under the 'Selected fields' section click 'Select fields...' and ensure 'ClientSourceIP' is ticked.

Hit apply and then restart IIS with iisreset.

Generate some traffic and you should now find the logs being generated in:

X:\inetpub\logs\AdvancedLogs

Securing HAProxy - SSL/TLS Termination with HAProxy on CentOS

The following is an example configuration file that performs SSL/TLS termination securely (this can be verified with tools such as Qualys' SSL Server Test):

global
    daemon
    maxconn 4000
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m
    log 127.0.0.1 local2 notice
    user haproxy
    group haproxy
    # Disable SSLv3 and stateless session resumption (TLS session tickets)
    ssl-default-bind-options no-sslv3 no-tls-tickets
    # Explicitly define the ciphers available for use with the server
    ssl-default-bind-ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
    # Ensure DH key size is 2048!
    tune.ssl.default-dh-param 2048

defaults
    log     global
    mode    http
    # Useful for long-lived sessions (e.g. mixed HTTP and WebSocket connections)
    timeout tunnel 1h
    # Amount of time before connection request is dropped
    timeout connect 5000ms
    # Amount of time to wait for a half-closed connection to finish before it is dropped
    timeout client-fin      50000ms
    # Amount of time before connection is dropped when waiting for client data response
    timeout client 50000ms
    # Amount of time before connection is dropped when waiting for server to reply.
    timeout server 50000ms
    # Amount of time before http-request should be completed
    timeout http-request 15s
    # Ensure the backend server connection is closed when request has completed - but leave client connection intact
    option http-server-close
    # Enable HTTP logging
    option  httplog
    # Ensure we do not log null / empty requests
    option  dontlognull
    # Insert forward-for header into request to preserve origin ip
    option forwardfor
    # Error pages (shipped in /usr/share/haproxy - e.g. ln -s /usr/share/haproxy /etc/haproxy/errors)
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend https-in
    bind *:80
    # SSL binding and certs
    bind *:443 ssl crt /etc/haproxy/ssl/bundle.crt
    # Redirect any HTTP traffic to HTTPS
    redirect scheme https if !{ ssl_fc }
    log global
    default_backend webserver_pool
    http-response set-header Strict-Transport-Security max-age=31536000;\ includeSubdomains;\ preload
    http-response set-header X-Frame-Options DENY
    http-response set-header X-Content-Type-Options nosniff

backend webserver_pool
    mode http
    log global
    balance source
    cookie SERVERID insert indirect nocache
    # Perform get request
    option httpchk GET /WebApp/GetStatus.php
    # Check whether response is 200 / OK
    http-check expect status 200
    server serverA 10.0.0.1:80 check cookie serverA inter 5000 downinter 500
    server serverB 10.0.1.1:80 check cookie serverB inter 5000 downinter 500
    server serverC 10.1.1.1:80 check cookie serverC inter 5000 downinter 500
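
Before (re)loading it is worth validating the file - haproxy can check the configuration syntax without actually starting up:

haproxy -c -f /etc/haproxy/haproxy.cfg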

Setting up HAProxy and RSyslog with CentOS 7

HAProxy provides built-in syslog support and can be enabled pretty easily - in your configuration within the 'defaults' stanza add something like the following:

log         127.0.0.1 local2 notice

and then in your rsyslog.conf we add / uncomment:

$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514

We will also want to add the following to ensure that haproxy logs are written to their own file -

sudo vi /etc/rsyslog.conf

and insert the following at the top of the '#### RULES ####' section:

if $programname startswith 'haproxy' then /var/log/haproxy.log
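
Optionally, to stop the same messages also being written to /var/log/messages, the rule can be followed by a discard action (this works on the rsyslog version shipped with CentOS 7):

& stop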

Reload rsyslog and haproxy:

sudo service rsyslog reload
sudo service haproxy reload

and then generate some traffic on the proxy and review the logs:

tail -f /var/log/haproxy.log

Friday 22 July 2016

Script: Drop / identify all orphaned users from all databases - MSSQL

Sometimes when importing / restoring backups onto another MSSQL server instance you will encounter orphaned users - the following script (full source credited at the bottom) allows you to identify all orphaned users across all databases:

/*************************
*
* Script written by Dale Kelly 11/23/2011
* Revision 1.0
* Purpose: This script searches all databases for orphaned users
* and displays a list. If desired the orphaned users can be deleted
*
***********************/
Use master
Go
Create Table #Orphans 
 (
  RowID     int not null primary key identity(1,1) ,
  TDBName varchar (100),
  UserName varchar (100),
  UserSid varbinary(85)
 )
SET NOCOUNT ON 
 DECLARE @DBName sysname, @Qry nvarchar(4000)
 SET @Qry = ''
 SET @DBName = ''
 WHILE @DBName IS NOT NULL
 BEGIN
   SET @DBName = 
     (
  SELECT MIN(name) 
   FROM master..sysdatabases 
   WHERE
   /** to exclude named databases add them to the Not In clause **/
   name NOT IN 
     (
      'model', 'msdb', 
      'distribution'
     ) And 
     DATABASEPROPERTY(name, 'IsOffline') = 0 
     AND DATABASEPROPERTY(name, 'IsSuspect') = 0 
     AND name > @DBName
      )
   IF @DBName IS NULL BREAK
         
                Set @Qry = 'select ''' + @DBName + ''' as DBName, name AS UserName, 
                sid AS UserSID from [' + @DBName + ']..sysusers 
                where issqluser = 1 and (sid is not null and sid <> 0x0) 
                and suser_sname(sid) is null order by name'
 Insert into #Orphans Exec (@Qry)
 
 End
Select * from #Orphans
/** To drop orphans uncomment this section 
Declare @SQL as varchar (200)
Declare @DDBName varchar (100)
Declare @Orphanname varchar (100)
Declare @DBSysSchema varchar (100)
Declare @From int
Declare @To int
Select @From = 0, @To = @@ROWCOUNT 
from #Orphans
--Print @From
--Print @To
While @From < @To
 Begin
  Set @From = @From + 1
  
  Select @DDBName = TDBName, @Orphanname = UserName from #Orphans
   Where RowID = @From
      
   Set @DBSysSchema = '[' + @DDBName + ']' + '.[sys].[schemas]'
   print @DBsysSchema
   Print @DDBname
   Print @Orphanname
   set @SQL = 'If Exists (Select * from ' + @DBSysSchema 
                          + ' where name = ''' + @Orphanname + ''')
    Begin
     Use ' + @DDBName 
                                        + ' Drop Schema [' + @Orphanname + ']
    End'
   print @SQL
   Exec (@SQL)
     
    Begin Try
     Set @SQL = 'Use ' + @DDBName 
                                        + ' Drop User [' + @Orphanname + ']'
     Exec (@SQL)
    End Try
    Begin Catch
    End Catch
   
 End
**/
 
Drop table #Orphans

In order to actually drop the orphaned users - simply uncomment the bottom section of the script.
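
As an alternative to dropping them - if the matching login already exists on the new instance you can simply remap the orphaned user to it instead, e.g. (the server, database and user names below are placeholders):

sqlcmd -S MYSERVER -d MyDatabase -Q "ALTER USER [someuser] WITH LOGIN = [someuser];"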

Full credit / source goes to: MSSQLTips.com /Dale Kelly 

Wednesday 20 July 2016

HAProxy Example Configurations

See: HAProxy Secure / Hardened Configuration Example

Below are some common configurations (or skeleton configurations if you like) that can be built upon.

Two node backend cluster running two websites in an active / passive configuration:

global
    daemon
    maxconn 4000
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    # Useful for long-lived sessions (e.g. mixed HTTP and WebSocket connections)
    timeout tunnel 1h
    # Amount of time before connection request is dropped
    timeout connect 5000ms
    # Amount of time to wait for a half-closed connection to finish before it is dropped
    timeout client-fin      50000ms
    # Amount of time before connection is dropped when waiting for client data response
    timeout client 50000ms
    timeout server 50000ms
    # Enable HTTP logging
    option  httplog
    # Ensure we do not log null / empty requests
    option  dontlognull
    # Insert forward-for header into request to preserve origin ip
    option forwardfor

frontend www
    bind 0.0.0.0:80
    default_backend webserver_pool

backend webserver_pool
    balance roundrobin
    mode http
    option httplog
    option  httpchk    GET /someService/isAlive
    server  serverA 10.11.12.13:8080 check inter 5000 downinter 500    # active node
    server  serverB 10.12.13.14:8080 check inter 5000 backup           # passive node

Two node backend cluster running a TCP application where maintaining session affinity is needed (with an HTTP application we can simply insert a cookie - since this is a TCP application we have to use the 'source' balance method instead):

global
    daemon
    maxconn 4000
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    # Useful for long-lived sessions (e.g. mixed HTTP and WebSocket connections)
    timeout tunnel 1h
    # Amount of time before connection request is dropped
    timeout connect 5000ms
    # Amount of time to wait for a half-closed connection to finish before it is dropped
    timeout client-fin      50000ms
    # Amount of time before connection is dropped when waiting for client data response
    timeout client 50000ms
    timeout server 50000ms
    # Enable HTTP logging
    option  httplog
    # Ensure we do not log null / empty requests
    option  dontlognull
    # Insert forward-for header into request to preserve origin ip
    option forwardfor

frontend app
    bind 0.0.0.0:10000
    default_backend app_pool

backend app_pool
    balance source
    mode tcp
    option tcplog
    server  serverA 10.11.12.13:1234 check inter 5000 downinter 500
    server  serverB 10.12.13.14:1234 check inter 5000

Two node backend cluster running an HTTP application where maintaining session affinity is needed:


global
    daemon
    maxconn 4000
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m
    user haproxy
    group haproxy

defaults
    log     global
    mode    http
    # Useful for long-lived sessions (e.g. mixed HTTP and WebSocket connections)
    timeout tunnel 1h
    # Amount of time before connection request is dropped
    timeout connect 5000ms
    # Amount of time to wait for a half-closed connection to finish before it is dropped
    timeout client-fin      50000ms
    # Amount of time before connection is dropped when waiting for client data response
    timeout client 50000ms
    timeout server 50000ms
    # Enable HTTP logging
    option  httplog
    # Ensure we do not log null / empty requests
    option  dontlognull
    # Insert forward-for header into request to preserve origin ip
    option forwardfor

frontend app
    bind 0.0.0.0:10000
    default_backend app_pool

backend app_pool
    balance source
    cookie SERVERID insert indirect nocache
    server serverA 192.168.10.11:80 check cookie serverA inter 5000 downinter 500
    server serverB 192.168.10.21:80 check cookie serverB inter 5000 

Filtering logs with journalctl

Coming over from Debian to CentOS / RHEL was a mostly smooth / similar transition - although one of the little differences I encountered was that CentOS 7 uses journalctl for general application / service logging - as opposed to Debian, where everything is written to /var/log/syslog (which translates to /var/log/messages in the RHEL world.)

The journalctl tool is actually pretty cool and provides some in-built filters to allow you to quickly find the information you need rather than grepping everything!

I have included a few examples below of how information can be extracted:

To look at logs for a specific service we could issue something like the following for cron jobs:

journalctl SYSLOG_IDENTIFIER=crond

or something like follows for identifying selinux problems:

journalctl SYSLOG_IDENTIFIER=setroubleshoot

We can also filter dependent on time:

journalctl SYSLOG_IDENTIFIER=setroubleshoot --since "17:00" --until "19:00"

or filter on a specific priority range (e.g. emergency through to error):

journalctl --priority 0..3
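
Filters can also be combined with the more common flags - for example, following a specific systemd unit in real time (the unit name below is just an example):

journalctl -u haproxy.service -f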

Setting up HA with HAProxy and Keepalived in AWS

Typically (or rather by default) keepalived uses multicast to make decisions dependent on host availability - although on cloud platforms like AWS, Google Developer Cloud etc. multicast is not currently supported and hence we must instruct keepalived to use unicast instead.

For this exercise there will be two HAProxy instances (a master and a slave node) that will share an elastic IP between them, using keepalived to perform the switch-over where necessary.

These two load balancers will then interact with two backend application servers - each of which in turn interacts with its own backend database server, with SQL replication set up between the databases.

On the master we should firstly ensure the system is up-to-date and it has the relevant version of haproxy installed (which is anything > 1.2.13.)

yum update && yum install haproxy keepalived

Ensure both of them startup on boot:

systemctl enable keepalived
systemctl enable haproxy

Now in a normal environment keepalived does a great job of automatically assigning the shared IP to the necessary host - although due to (static IP configuration) limitations within AWS this is not possible, so instead we should instruct keepalived to run a script when a failover occurs - which will simply utilise the AWS API to re-associate an elastic IP from the master to the slave (or vice versa.)

We should replace the keepalived.conf configuration as follows:

sudo mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig
sudo vi /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {
    script "pidof haproxy"
    interval 5
    fall 2 # fail twice before failing test
    rise 2 # ensure is successful twice before passing test
}

vrrp_instance VI_1 {
   debug 2
   interface eth0              
   state MASTER
   virtual_router_id 51        
   priority 101                
   unicast_src_ip 10.11.12.201    
   unicast_peer {
       10.11.13.202            # the slave node's IP
   }
   track_script {
       chk_haproxy
   }
   notify_master /usr/libexec/keepalived/failover.sh
}

and add the following on the slave node:

sudo cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.orig
sudo vi /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {
    script "pidof haproxy"
    interval 2
}

vrrp_instance VI_1 {
   debug 2
   interface eth0              
   state BACKUP
   virtual_router_id 51        
   priority 100                
   unicast_src_ip 10.11.13.202    
   unicast_peer {
       10.11.12.201            # the master node's IP
   }
   track_script {
       chk_haproxy
   }
   notify_master /usr/libexec/keepalived/failover.sh
   notify_fault  /usr/libexec/keepalived/failover_fault.sh
}

Now we will create the script defined in the 'notify_master' section - although before we do this we should use AWS IAM to create and configure the relevant role for our servers so they are able to use the AWS CLI to switch the elastic IP's.

I created a policy with something like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:AssignPrivateIpAddresses",
                "ec2:AssociateAddress",
                "ec2:DescribeInstances"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

* Although I would recommend specifying the resource explicitly to tighten it up.

Now we will create the script (on both nodes):

sudo vi /usr/libexec/keepalived/failover.sh
chmod 700 /usr/libexec/keepalived/failover.sh

#!/bin/bash

ALLOCATION_ID=eipalloc-123456
INSTANCE_ID=i-123456789
SECONDARY_PRIVATE_IP=172.30.0.101

/usr/bin/aws ec2 associate-address --allocation-id $ALLOCATION_ID --instance-id $INSTANCE_ID --private-ip-address $SECONDARY_PRIVATE_IP --allow-reassociation

and then (on each node) configure the AWS CLI:

aws configure
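
It is worth sanity-checking that the instance role and CLI are working before relying on them for failover - for example (using the allocation ID from the script above):

aws ec2 describe-addresses --allocation-ids eipalloc-123456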

For the networking side we will have a single interface on each node - although both of them will have a secondary IP (which we will use to associate with our elastic IP.) The IPs of the two machines will also be in separate subnets since they are spread across two availability zones.

We should now start keepalived on both hosts:

sudo service keepalived start
sudo service haproxy start

** You WILL almost certainly come across problems with SELinux (if it's enabled) - ensure you check your audit.log for any related messages and resolve those problems before continuing! **

We should now see the following on the master node:

tail -f /var/log/messages

Jul 18 15:34:40 localhost Keepalived_vrrp[27585]: VRRP_Script(chk_haproxy) succeeded
Jul 18 15:34:40 localhost Keepalived_vrrp[27585]: Kernel is reporting: interface eth0 UP
Jul 18 15:34:40 localhost Keepalived_vrrp[27585]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 18 15:34:41 localhost Keepalived_vrrp[27585]: VRRP_Instance(VI_1) Entering MASTER STATE
Jul 18 15:34:59 localhost Keepalived_vrrp[27585]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
Jul 18 15:34:59 localhost Keepalived_vrrp[27585]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election

and then on the slave node:

tail -f /var/log/messages

Jul 18 15:34:54 localhost Keepalived_vrrp[27641]: VRRP_Script(chk_haproxy) succeeded
Jul 18 15:34:55 localhost Keepalived_vrrp[27641]: Kernel is reporting: interface eth0 UP
Jul 18 15:34:59 localhost Keepalived_vrrp[27641]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jul 18 15:34:59 localhost Keepalived_vrrp[27641]: VRRP_Instance(VI_1) Received higher prio advert
Jul 18 15:34:59 localhost Keepalived_vrrp[27641]: VRRP_Instance(VI_1) Entering BACKUP STATE
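
At this point you can test the failover by stopping haproxy on the master - the check script will fail and the slave should transition to MASTER and run the failover script:

sudo service haproxy stop      # on the master
tail -f /var/log/messages      # on the slave - look for 'Entering MASTER STATE'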

We will now configure the HAProxy portion by replacing the existing haproxy config:

mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig

vi /etc/haproxy/haproxy.cfg

global
    daemon
    maxconn 4000
    stats socket /var/run/haproxy.sock mode 600 level admin
    stats timeout 2m
    user haproxy
    group haproxy

defaults
    log     global
    mode    tcp
    option  tcplog
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend www
    bind 172.30.0.241:80
    default_backend webserver_pool

backend webserver_pool
    balance roundrobin
    mode http
    option httplog
    option  httpchk    GET /someService/isAlive
    server  serverA 10.11.12.13:8080 check inter 5000 downinter 500    # active node
    server  serverB 10.12.13.14:8080 check inter 5000 backup           # passive node

listen admin
    bind 172.30.0.241:8777
    stats enable
    stats realm   Haproxy\ Statistics
    stats auth    adminuser:secure_pa$$word!

Finally reload both haproxy instances to apply the new configuration.

Wednesday 13 July 2016

NSS (Name Service Switch): nsswitch.conf

The nsswitch.conf file defines how (and in what order) different types of objects (e.g. passwords, hosts) are looked up by the configured name service sources.

Following is an example of an nsswitch.conf file (/etc/nsswitch.conf) from a base CentOS 7 system with LDAP configured:

# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.
passwd:         compat ldap
group:          compat ldap
shadow:         compat
gshadow:        files
hosts:          files dns
networks:       files
protocols:      db files
services:       db files
ethers:         db files
rpc:            db files
netgroup:       nis

If we take hosts as an example we can see that 'files' is listed first - so NSS will check /etc/hosts first and then fall back to querying the locally defined DNS resolver(s).

Typically changes take effect immediately (i.e. no services need to be restarted) - although if you are running nscd you may need to restart it before changes are picked up.
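
A handy way of checking which source actually answers a lookup is getent, which resolves through NSS in the configured order - for example (the names below are just examples):

getent hosts localhost
getent passwd someuser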

Tuesday 12 July 2016

Quick and dirty sed command line reference

Below are some commonly used functions performed with sed:

Deleting / stripping the first line of a file:

sed -e '1d' /etc/services

or we can strip the last line of the file with:

sed -e '$d' /etc/services

Deleting a range of lines:

sed -e '1,50d' /etc/services

We can also use regex's with sed - for example a grep style search:

sed -n -e '/^host/p' /etc/services

(The -n option suppresses sed's default output, so only the lines printed by the 'p' command - i.e. the matches - are shown.)

We can also replace patterns in a file using a regex (this replaces a leading 'host' with 'nonhost' on any line beginning with 'host'):

sed -e 's/^host/nonhost/' /etc/services

If you wanted to print only the substituted lines to stdout we could issue something like:

sed -n -e 's/^host/nonhost/p' /etc/services
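
sed can also edit a file in place with the -i flag - the example below works on a copy of the file and keeps a .bak backup of the original:

cp /etc/services /tmp/services
sed -i.bak -e 's/^host/nonhost/' /tmp/services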

Thursday 7 July 2016

Configuring DansGuardian with CentOS 7

A quick and dirty note on how to get DansGuardian up and running with squid on CentOS 7.

Firstly download the dansguardian rpm:

cd /tmp
curl -O ftp.gwdg.de/opensuse/repositories/home:/Kenzy:/packages/CentOS_7/x86_64/dansguardian-2.12.0.3-1.1.x86_64.rpm

install the rpm:

rpm -i dansguardian-2.12.0.3-1.1.x86_64.rpm

Edit the default configuration:

vi /etc/dansguardian/dansguardian.conf

and ensure 'filterport' is set to the port you want dansguardian to listen on, and 'proxyport' is set to the port squid is currently listening on.

Finally ensure dansguardian will start on boot:

systemctl enable dansguardian

and restart the service:

sudo service dansguardian restart

To test add a domain to /etc/dansguardian/lists/bannedsitelist e.g.:

vi /etc/dansguardian/lists/bannedsitelist

and add test.com
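
To test from a client you can point a request at the dansguardian filter port directly (8080 is the shipped default filterport - substitute whatever you configured):

curl -x http://<proxy-ip>:8080 http://test.com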

There are some good publicly available blacklists around - one being:

http://urlblacklist.com/?sec=download 

Setting up a transparent proxy with squid and an ASA

Important Note: To stop you banging your head against a wall - please note that the WCCP server (the ASA in this case) and the cache / client (squid in this case) should be on the SAME subnet, otherwise WCCP will not function correctly!

Because the ASA automatically picks the WCCP router ID (it picks the highest interface IP), the router ID is where the ASA will send the GRE traffic from - you have to ensure that the cache can communicate with this subnet. I believe with later versions of IOS you can manually define the router ID.

Firstly define your proxy server(s):

access-list wccp-servers extended permit ip host 192.168.15.25 any

and the client range:

access-list wccp-clients extended deny ip host 192.168.15.25 any
access-list wccp-clients extended permit tcp host <client ip> any eq 80
access-list wccp-clients extended deny ip any any

and then assign them to WCCP:

wccp web-cache redirect-list wccp-clients group-list wccp-servers

Enable WCCP version 2 (not required on some older IOS versions):

wccp version 2

and finally enable it on the relevant interface:

wccp interface inside web-cache redirect in

wccp interface inside 70 redirect in

Note: ('70' is a dynamic service number and represents HTTPS traffic in this case.)

** Very important: Verify that the 'Router Identifier' is the address of your INSIDE interface (it takes the highest IP address of your interfaces by default), as the 'Router Identifier' specifies where the GRE packets will originate from - if it selects another interface (e.g. DMZ) that is unable to reach the squid server, the GRE tunnel will fail! **

Install squid with:

sudo yum install squid

ensure it starts on boot:

sudo systemctl enable squid

We need to ensure that the squid build we have was compiled with the following flags:

--enable-linux-netfilter and --enable-wccpv2

To verify we can issue something like:

squid -v
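
The configure options are printed on one long line, so something like the following makes the relevant flags easier to spot:

squid -v | tr ' ' '\n' | grep -E 'wccp|netfilter'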

WCCP Squid Configuration

Add the following to /etc/squid/squid.conf:

# WCCP Router IP
wccp2_router 192.168.15.1

# forwarding
wccp2_forwarding_method gre

# GRE return method
wccp2_return_method gre

# ensures that wccp is not enabled during a cache rebuild
wccp2_rebuild_wait on

# used for load balancing purposes
wccp2_weight 10000

# Assignment method hash|mask
wccp2_assignment_method hash

# standard web cache, no auth
wccp2_service standard 0

# the following defines HTTPS
wccp2_service dynamic 70
wccp2_service_info 70 protocol=tcp flags=src_ip_hash,src_port_alt_hash priority=240 ports=443

Option A: MITM HTTPS Proxy

This is where HTTPS requests will be intercepted by the proxy and we can then decrypt the streams to see exactly what users are doing - however the downside of this is that you must deploy a self-signed CA certificate to your users in order for this to work correctly.

We will need to generate a new CA for dynamic certificate generation to work properly:

mkdir -p /etc/squid/cert && cd /etc/squid/cert
openssl req -new -newkey rsa:2048 -sha256 -days 1095 -nodes -x509 -extensions v3_ca -keyout squidCA.pem -out squidCA.pem

We add / amend the following to the squid configuration:

http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=8MB cert=/etc/squid/cert/squidCA.pem
http_port 3126 intercept
https_port 3127 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=8MB cert=/etc/squid/cert/squidCA.pem

Note: Make sure that your SSL certificate cache path is in the appropriate location, otherwise SELinux will likely complain and squid will fail to start.

sslcrtd_program /usr/lib64/squid/ssl_crtd -s /usr/lib64/squid/ssl_db -M 50MB
sslcrtd_children 5

Note: You should also ensure that the certificate is installed on each client machine - e.g. deployed through group policy.

Option B: Direct HTTPS Proxy

This is my preferred option - and it works pretty well if you use DNS to block nasty sites rather than inspecting traffic as it traverses the proxy.

http_port 3128 ssl-bump cert=/etc/squid/cert/squidCA.pem
http_port 3126 intercept
https_port 3127 intercept ssl-bump cert=/etc/squid/cert/squidCA.pem

ssl_bump none all
sslcrtd_program /usr/lib64/squid/ssl_crtd -s /usr/lib64/squid/ssl_db -M 50MB
sslcrtd_children 5

But before we start squid we should ensure that the certificate cache is initialised:

mkdir -p /var/lib/squid

/usr/lib64/squid/ssl_crtd -c -s /usr/lib64/squid/ssl_db

and that the user running squid has access to the directory:

chown -R <squid-user>:<squid-user> /usr/lib64/squid/ssl_db

And then we will need to create a pseudo interface for our GRE tunnel on our squid server:

vi /etc/sysconfig/network-scripts/ifcfg-wccp0

DEVICE=wccp0
TYPE=GRE
DEVICETYPE=tunnel
ONBOOT=yes
MY_OUTER_IPADDR=192.168.15.25
PEER_OUTER_IPADDR=192.168.15.1

Note: The outer tunnel address does not matter with WCCP!

and bring it up:

ip link set wccp0 up

and on the ASA issue the following to ensure the proxy and ASA are communicating:

debug wccp packets

You should see something like:

WCCP-PKT:S00: Received valid Here_I_Am packet from 1.1.1.1 w/rcv_id 00001AF0

WCCP-PKT:S00: Sending I_See_You packet to 2.2.2.2 w/ rcv_id 00001AF1

Proceed by setting up ip forwarding, reverse path filter etc.:

sudo vi /etc/sysctl.conf

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.ip_forward = 1

and apply them immediately (they will persist as they are now in /etc/sysctl.conf):

sysctl -p

And add the relevant rules to accept the traffic and redirect web traffic arriving over the tunnel to squid's intercept ports (3126 / 3127):

sudo iptables -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -m tcp -p tcp --dport 3128 -j ACCEPT
sudo iptables -A INPUT -i eno123 -s <router-identifier-id> -d <squid-proxy-ip> -p gre -j ACCEPT

sudo iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 80 -j DNAT --to-destination <squid-ip>:3126
sudo iptables -t nat -A PREROUTING -i wccp0 -p tcp -m tcp --dport 443 -j DNAT --to-destination <squid-ip>:3127
sudo iptables -t nat -A POSTROUTING -o eno123 -j MASQUERADE
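
Note that these iptables rules will not survive a reboot on a stock CentOS 7 box (which uses firewalld by default) - assuming you have switched to the iptables-services package you can persist them with something like:

sudo yum install iptables-services
sudo service iptables save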

Then restart squid and run tcpdump:

sudo service squid restart
tcpdump -i eno123 udp and port 2048

you should see output similar to:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eno123, link-type EN10MB (Ethernet), capture size 65535 bytes
15:43:55.697487 IP 10.11.12.13.dls-monitor > 10.13.14.254.dls-monitor: UDP, length 144
15:43:55.704418 IP 10.13.14.254.dls-monitor > 10.11.12.13.dls-monitor: UDP, length 140
15:44:05.697649 IP 10.11.12.13.dls-monitor > 10.13.14.254.dls-monitor: UDP, length 144
15:44:05.704967 IP 10.13.14.254.dls-monitor > 10.11.12.13.dls-monitor: UDP, length 140
15:44:10.704542 IP 10.11.12.13.dls-monitor > 10.13.14.254.dls-monitor: UDP, length 336

and you should also see GRE packets on the main interface:

tcpdump -i eno123 proto gre

** Both nodes communicate with each other via udp/2048 **

** Important: Also ensure that you see the RX counter on the wccp0 interface increasing e.g.

ifconfig wccp0

Now on your ASA we can issue:

show wccp

and you should notice that 'Number of Cache Engines:' and 'Number of routers' are now showing as 1.

Now attempt to connect from the client:

telnet google.com 80

Review the tunnel device and your NIC:

tcpdump -i wccp0 port 80

tcpdump: WARNING: wccp0: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wccp0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
14:55:29.026958 IP 10.0.0.122.57628 > vhost.ibiblio.org.http: Flags [S], seq 4203563477, win 29200, options [mss 1460,sackOK,TS val 4078141366 ecr 0,nop,wscale 7], length 0
14:55:29.027136 IP 10.0.0.122.57630 > vhost.ibiblio.org.http: Flags [S], seq 640886751, win 29200, options [mss 1460,sackOK,TS val 4078141366 ecr 0,nop,wscale 7], length 0
14:55:29.027235 IP 10.0.0.122.57632 > vhost.ibiblio.org.http: Flags [S], seq 2568663163, win 29200, options [mss 1460,sackOK,TS val 4078141366 ecr 0,nop,wscale 7], length 0
14:55:29.027345 IP 10.0.0.122.57628 > vhost.ibiblio.org.http: Flags [.], ack 692773047, win 229, options [nop,nop,TS val 4078141366 ecr 21645736], length 0

And then finally, to see the content being forwarded back to your client on the client-facing / ingress interface:

tcpdump -i eth0 host vhost.ibiblio.org.http

14:57:20.593381 IP vhost.ibiblio.org.http > 10.0.0.122.57646: Flags [S.], seq 4206693825, ack 3973858615, win 28960, options [mss 1460,sackOK,TS val 21757302 ecr 4078252932,nop,wscale 7], length 0
14:57:20.593507 IP vhost.ibiblio.org.http > 10.0.0.122.57648: Flags [S.], seq 1070691703, ack 1857333684, win 28960, options [mss 1460,sackOK,TS val 21757302 ecr 4078252932,nop,wscale 7], length 0
14:57:20.607557 IP vhost.ibiblio.org.http > 10.0.0.122.57622: Flags [.], ack 4079575675, win 270, options [nop,nop,TS val 21757316 ecr 4078252946], length 0
14:57:20.608243 IP FORWARDPROXY01.hlx.int.41274 > vhost.ibiblio.org.http: Flags [S], seq 3249023064, win 29200, options [mss 1460,sackOK,TS val 21757317 ecr 0,nop,wscale 7], length 0

14:57:20.696048 IP vhost.ibiblio.org.http > FORWARDPROXY01.hlx.int.41274: Flags [S.], seq 3393942316, ack 3249023065, win 14480, options [mss 1380,sackOK,TS val 1828797864 ecr 21757317,nop,wscale 7], length 0

So the process flow is as follows:

- Client requests website (goes to default gateway / ASA)
- The ASA redirects the request (WCCP) to the proxy server over the GRE tunnel
- The proxy server receives the GRE traffic on its physical interface and terminates it on our GRE interface
- The web traffic on the GRE tunnel is then redirected to our local proxy server - which serves the request
- When the local proxy server gets a reply it forwards it directly back to the client (i.e. not via the GRE tunnel again) - in this case straight out of the client-facing interface.

Sources:

http://wiki.squid-cache.org/Features/DynamicSslCert

Wednesday 6 July 2016

Setting up a GRE tunnel between two CentOS 7 instances

GRE provides a way of encapsulating traffic between two endpoints (not encrypting it.)

Note that GRE itself does not provide confidentiality or integrity protection - in order to encrypt the traffic it would need to go over an IPSec tunnel.

One of the major advantages of GRE over other tunneling protocols like IPSec is that it supports multicast traffic (for example routing protocols such as OSPF and EIGRP), while IPSec only supports unicast traffic.

In this tutorial I will be setting up a point-to-point link between two linux instances running CentOS 7.

We should firstly ensure that the relevant gre modules are loaded with:

lsmod | grep gre

and if they are not we should issue:

sudo modprobe ip_gre

(although unless you are running a custom kernel it's unlikely that you will have this missing.)

PUPPETTEST #1

sudo ip tunnel add tun0 mode gre remote 10.0.2.152 local 10.0.2.154 ttl 255
sudo ip link set tun0 up
sudo ip addr add 10.10.10.1/24 dev tun0

PUPPETTEST #2

sudo ip tunnel add tun0 mode gre remote 10.0.2.154 local 10.0.2.152 ttl 255
sudo ip link set tun0 up
sudo ip addr add 10.10.10.2/24 dev tun0

We should also ensure that our IPTables chains are setup correctly - typically you will want to add something like the following before the default-deny statement in the filter table:

sudo iptables -A INPUT -p gre -j ACCEPT

or to lock it down even further:

sudo iptables -A INPUT -i eth0 -s <remote-endpoint> -j ACCEPT

and then attempt to ping each other e.g. from 10.10.10.1:

ping 10.10.10.2

** Note: tcpdump is your friend here - e.g.:

tcpdump -i eth0 proto gre
or
tcpdump -i eth0 icmp
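
You can also confirm the tunnel parameters and addressing at any point with:

ip tunnel show tun0
ip addr show tun0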

Now we should add the interface configuration permanently on HOST #1:

vi /etc/sysconfig/network-scripts/ifcfg-tun0

DEVICE=tun0
BOOTPROTO=none
ONBOOT=no
TYPE=GRE
PEER_INNER_IPADDR=10.10.10.2   # the tunnel (p2p link) IP address of the remote peer
MY_INNER_IPADDR=10.10.10.1     # the tunnel (p2p link) IP address of this peer
PEER_OUTER_IPADDR=10.0.2.152   # the remote peer's outer (e.g. eth0 network) IP address that the actual tunnel is going over

and then bring the interface up:

ip link set tun0 up

and then on HOST #2:

vi /etc/sysconfig/network-scripts/ifcfg-tun0

DEVICE=tun0
BOOTPROTO=none
ONBOOT=no
TYPE=GRE
PEER_INNER_IPADDR=10.10.10.1
MY_INNER_IPADDR=10.10.10.2
PEER_OUTER_IPADDR=10.0.2.154

and then bring the interface up:

ip link set tun0 up

to kill the tunnel and bring down the interface we can issue:

ip link set tun0 down
ip tunnel del tun0

Tuesday 5 July 2016

Setting up route maps with the ASA (squid proxy)

Route maps are a convenient way of re-routing traffic dependent on specific criteria (like source, destination and so on.)

The other day I came across a good use case for them when implementing a transparent proxy with squid.

I wanted to ensure that a specific subnet would get its web traffic (tcp/80 and tcp/443) re-routed to a different next hop (rather than the default gateway.)

To create the relevant route map we should firstly create the following ACL's to define our traffic:

access-list squidfilter extended permit tcp 10.11.12.0 255.255.255.0 any eq www
access-list squidfilter extended permit tcp 10.11.12.0 255.255.255.0 any eq https
access-list squidfilter extended deny ip any any

route-map squidredirect permit 10
 match ip address squidfilter
 set ip next-hop <squid-ip>
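
Note that the route-map also needs to be applied to the ingress interface before it takes effect - on ASA versions that support policy based routing (9.4(1) and later, if I recall correctly) that looks something like the following (the interface name is just an example):

interface GigabitEthernet0/1
 policy-route route-map squidredirect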

Setup VRRP on CentOS 7 with Keepalived

VRRP is commonly used with routers and firewalls as an alternative to Cisco's own HSRP, and is similar to CARP in many ways.

Keepalived allows us to utilize VRRP on Linux systems - which in this case will be a cluster of NGINX servers.

In this scenario we want to ensure that clients access the reverse proxy cluster via a single IP - and if one of the nodes in the cluster goes down, the other will take over the shared IP address.

I would recommend using a dedicated interface on each node that will have the shared IP address assigned to it, and a separate management interface for administrative purposes.

We will firstly need to install the following on each node:

yum install keepalived

and then create a new keepalived configuration:

mv /etc/keepalived/keepalived.conf /etc/keepalived/_keepalived.conf
vi /etc/keepalived/keepalived.conf

and add the following to NODE 1 (primary) - replacing where necessary:

! Configuration File for keepalived

global_defs {
   notification_email {
     alerts@yourdomain.com
   }
   notification_email_from keepalived@yourdomain.com
   smtp_server 10.11.12.13
   smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 10
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass yoursecurepassword
    }
    virtual_ipaddress {
        10.11.12.254
    }
}

and then add the following on NODE 2 (secondary):

mv /etc/keepalived/keepalived.conf /etc/keepalived/_keepalived.conf
vi /etc/keepalived/keepalived.conf

global_defs {
   notification_email {
     alerts@yourdomain.com
   }
   notification_email_from keepalived@yourdomain.com
   smtp_server 10.11.12.13
   smtp_connect_timeout 30
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 10
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass yoursecurepassword
    }
    virtual_ipaddress {
        10.11.12.254
    }
}

** Some points to keep in mind - the node with the highest priority will take precedence over any other nodes with lower priorities. The virtual_router_id attribute should be the same for each router that is part of the same VRRP set.

and then - on BOTH nodes we should ensure that the services starts up automatically at boot:

sudo systemctl enable keepalived
sudo systemctl start keepalived

Now we can verify the ip configuration with:

ip addr show
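
You can also confirm that VRRP advertisements are being sent out on the keepalived interface with something like:

tcpdump -i eth1 vrrp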

and then turn off the primary node and ensure that we can still ping the shared IP address we set up.

You can also verify the failover by tailing the messages file:

tail -f /var/log/messages

Monday 4 July 2016

How to 'generalize' a linux system

I came across this situation the other day while creating a vTemplate with vSphere on CentOS 7 Core.

I wanted a way of automatically ensuring that all unique attributes of the system (for example disk GUIDs, machine IDs and so on) would not remain after deploying from the vTemplate.

Fortunately there is a tool called 'virt-sysprep' that comes within the 'libguestfs-tools-c' package:

yum install libguestfs-tools-c

And then I tend to run the following to 'generalize' the system (run against the template VM or its disk image, e.g. with -d <vm-name> or -a <disk-image>):

virt-sysprep --enable crash-data,firewall-rules,hostname,logfiles,lvm-uuids,machine-id,net-hostname,random-seed,ssh-hostkeys,tmp-files,yum-uuid

* There are a lot of further options to play around with - so take a look at the man page.
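
For reference, virt-sysprep can also list every available operation (and which ones are enabled by default):

virt-sysprep --list-operations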