Thursday, 29 September 2016

Understanding OSPF stub, NSSA, totally stub and totally NSSA areas

Stub Areas
OSPF uses stub areas to control the advertisement of routes into an area - configuring an area as a stub (on every router in that area) suppresses external route advertisements through the ABR (Area Border Router), keeping LSA flooding to a minimum; otherwise the routing table could grow very large. Instead of advertising the external routes, the ABR injects a default route into the area pointing at itself.



For context, OSPF defines several LSA types:

Type 1: Router LSA – Generated by every router, for each area it participates in – it contains the router’s ID and describes its links within that area.

Type 2: Network LSA – These LSAs are generated by the DR (Designated Router) and contain the router ID of the DR and all of the routers attached to the transit network – they are flooded within the originating area only. They are not generated on point-to-point / point-to-multipoint links, since no DR is elected there.

Type 3: Summary LSA – These LSAs are generated by ABRs (Area Border Routers) – they summarise the networks of one area and advertise them into adjacent areas, informing the rest of the routing domain of those networks.

Type 4: Summary ASBR LSA – These LSAs are generated by ABRs and describe the route to an ASBR (Autonomous System Boundary Router). The link-state ID is the router ID of the ASBR being advertised.

ASBR (Autonomous System Boundary Router):
A router that redistributes routes from another routing domain (for example EIGRP or BGP) into OSPF.

Type 5: External LSA – These LSAs are generated by ASBRs and contain routes to networks that are not part of the AS (autonomous system) – for example routes redistributed from another OSPF or EIGRP domain.

Type 6: Multicast LSA – Defined for MOSPF (Multicast OSPF); rarely used in practice.

Type 7: NSSA External LSA – These LSAs allow external routes to be injected into an NSSA (not-so-stubby area) – ordinarily Type 5 LSAs do this job, but they are not allowed in stub areas, so Type 7 is used instead.

Stub areas do not accept Type 5 (external) LSAs or contain ASBRs – meaning they cannot carry external routes; external destinations are reached via the default route from the ABR instead.
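
For example (assuming Cisco IOS - the process ID and area number here are illustrative), a stub area is configured on every router participating in that area:

router ospf 1
 area 1 stub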

Totally Stub Areas
These areas block Type 3 (summary) LSAs and Type 5 (external) LSAs and do not contain ASBRs – like a normal stub area external routes are not accepted, with inter-area (Type 3) routes filtered as well. The only exception is a single Type 3 LSA carrying the default route – meaning the ABR will simply (and only) present a default gateway (itself) to the devices in the area.
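
The configuration is the same as a plain stub area, except the ABR additionally carries the no-summary keyword (again assuming Cisco IOS):

router ospf 1
 area 1 stub no-summary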

NSSA (Not-so-stubby area)
NSSAs are the same as stub areas except that they allow ASBRs to be present within the area – since stub areas do not allow Type 5 (external) LSAs, Type 7 (NSSA external) LSAs are used instead; the ABR translates these back into Type 5 LSAs as they leave the area.
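
Again as a sketch under the same Cisco IOS assumption, an NSSA is configured on every router in the area with:

router ospf 1
 area 1 nssa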

Totally NSSA
This type of area blocks Type 3 (summary) and Type 5 (external) LSAs while still allowing an ASBR in the area – external routes enter as Type 7 LSAs, and everything else is reached via the default route from the ABR.
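
As with a totally stubby area, only the ABR needs the no-summary keyword (Cisco IOS assumed):

router ospf 1
 area 1 nssa no-summary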

Route redistribution
Route redistribution is the process of redistributing routes from one routing domain into another – for example from one OSPF domain to another, or between routing protocols such as EIGRP to OSPF – I will cover this process in a later tutorial.

HAProxy Remote Desktop Services Example Configuration

The below configuration load balances two RDS servers for trusted clients (defined in trustedservers.lst), while untrusted clients (defined in untrustedservers.lst) are pinned to a single restricted server - anything else attempting to connect to the RDS service is rejected.
frontend localnodes
    bind *:3389
    mode tcp
    default_backend restricted
    timeout client          1h
    option tcpka
    acl trustedclients src -f /etc/haproxy/trustedservers.lst
    acl untrustedclients src -f /etc/haproxy/untrustedservers.lst
    acl world src 0.0.0.0/0
    tcp-request connection reject if !trustedclients !untrustedclients
    tcp-request inspect-delay 2s
    tcp-request content accept if RDP_COOKIE
    use_backend unrestricted if trustedclients
    use_backend restricted if untrustedclients
backend unrestricted
    mode tcp
    balance source
    option tcpka
    server rds-server-01 10.0.0.1:3389 check port 3389 weight 256 inter 2s
    server rds-server-02 10.0.0.2:3389 check port 3389 weight 1 inter 2s
    timeout connect        10s
    timeout server          1h
backend restricted
    mode tcp
    balance source
    option tcpka
    server rds-server-02 10.0.0.2:3389
    timeout connect        10s
    timeout server          1h
Important: option tcpka - this ensures that the TCP sessions from client to frontend (and from proxy to backend) are kept alive - since RDP sessions can remain idle for long periods of time.

What is SPF and how to set it up for your domain

SPF (Sender Policy Framework) is an anti-spoofing system that provides a way for the administrator of a domain to authorize the mail servers that may send mail on behalf of that domain. The list of authorized hosts / mail servers is published as a TXT record in the domain's DNS zone.

For example, if mail server 203.0.113.10 sends mail on behalf of domain.com and the receiving mail server is able to perform an SPF lookup on domain.com, it will verify that 203.0.113.10 is present within the authorized hosts - if it is not, the mail will be rejected (or marked), otherwise delivered to the desired recipient.

There are some drawbacks however - not all mail servers perform SPF lookups - so your mileage may somewhat vary.

Now to set up an SPF record for your domain - this is done by adding a TXT record to your DNS zone - for example:
v=spf1 mx a ip4:203.0.113.10 ~all
The above instructs SPF aware mail servers that:

- Any MX servers listed in your DNS zone may send mail on behalf of your domain
- In addition, any host your domain's A record resolves to, plus the IP 203.0.113.10, may send mail on behalf of your domain
- The tilde (~) in ~all instructs the receiving server to still deliver any mail that fails SPF validation, but to mark it

There are three main SPF failure actions (qualifiers):

- Hard Fail (-all): simply reject (do not deliver) the mail.
- Soft Fail (~all): deliver the mail, although it will be marked.
- Neutral (?all): mail will usually be delivered, with no assertion made either way.
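
To check what a domain currently publishes, any DNS lookup tool will do - for example with dig:

dig +short TXT domain.com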

SPFWizard provides a great tool for automatically generating SPF records for your domain.

Friday, 23 September 2016

Enabling access to legacy public folders from Exchange Online

Although there is a lot of information available on this topic, it can be quite overwhelming given the number of potential scenarios and the sheer amount of documentation.

This post is a quick summary of the steps I performed to provide Exchange Online users (who were part of a hybrid Exchange setup) with access to an on-premises Exchange 2010 (SP3) organization.

In my experience, after completing the initial hybrid wizard when first setting up the hybrid environment, I found that users within Exchange Online did not have any access to the public folders located on the on-premises site - in this particular setup public folders must be located either on-premises or on the Exchange Online tenant.

To begin we should first create a proxy mailbox that will allow you to access legacy public folders from Office 365:

(Step 2) https://technet.microsoft.com/en-gb/library/dn249373(v=exchg.150).aspx

Then synchronize our mail-enabled public folders (you don't need to do this for normal public folders):

(Step 3) https://technet.microsoft.com/en-gb/library/dn249373(v=exchg.150).aspx

We should then ensure that the mailboxes are available remotely from Office 365:

Set-OrganizationConfig -PublicFoldersEnabled Remote -RemotePublicFolderMailboxes PFMailbox

ensure that the directory synchronization is up to date:

Start-ADSyncSyncCycle -PolicyType Delta

and then finally set a default public folder mailbox for our Office 365 users, e.g.:

$session = New-PSSession -ConfigurationName:Microsoft.Exchange -Authentication:Basic -ConnectionUri:https://ps.outlook.com/powershell -AllowRedirection:$true -Credential:(Get-Credential)

Import-PSSession $session

Set-Mailbox -Identity [email protected] -DefaultPublicFolderMailbox PFMailbox
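
To confirm the change took effect (run within the same remote session - I'm assuming here that the DefaultPublicFolderMailbox property is exposed by Get-Mailbox on your tenant):

Get-Mailbox -Identity [email protected] | Format-List DefaultPublicFolderMailbox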

Note: It took a good hour for the public folders to appear on my Outlook client (even after I forced the directory synchronization) - so be patient!

Thursday, 22 September 2016

RDP re-connection / disconnection events when running behind haproxy

In my case clients were being disconnected during RDP sessions, and although the disconnections appeared to be random, sometimes there was a noticeable pattern, e.g. a disconnection every x minutes - which led me to believe that the problem might be related to server / client configuration (e.g. keep alive / timeout issues).

This kind of issue could also be caused by problems such as an unreliable network connection somewhere between the client and the backend server.

We should firstly verify whether the RDP keep alive / timeout configuration is set up as needed on the RDS server:

https://technet.microsoft.com/en-us/library/cc754272(v=ws.11).aspx

There are also several settings within haproxy itself that could potentially cause these kinds of problems:
timeout server
timeout client
To further isolate the issue we should monitor the RDP session while accessing the RDP server directly. Using the 'Applications and Services Logs >> Microsoft >> Windows >> TerminalServices-LocalSessionManager >> Operational' event log we can monitor disconnection / re-connection events on the local RDS server by filtering the following events:

Event ID 25: Re-connection
Event ID 24: Disconnection
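
As a quick sketch, these can be pulled with PowerShell (filtering on the two event IDs above):

Get-WinEvent -LogName 'Microsoft-Windows-TerminalServices-LocalSessionManager/Operational' |
    Where-Object { $_.Id -in 24,25 } | Select-Object TimeCreated, Id, Message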

Since in my case the RDP session was actually re-establishing itself part way through a user session, this information should come in handy when diagnosing issues affecting the application layer.

We can utilize a utility such as mtr (or smokeping) to monitor latency, and a utility such as iperf to monitor bandwidth availability - we will use these utilities to help us gain an overall picture of the connection quality across several different paths:

Client >> Reverse Proxy
Reverse Proxy >> RDP Server

We should firstly use mtr (WinMTR for Windows machines) to monitor the connection between the client and reverse proxy - such as:

mtr 1.2.3.4

We will then monitor the bandwidth throughput between the client and reverse proxy - on the reverse proxy we should issue something like:

iperf3 -s -p 5003 &

and on the client:

iperf3 -c 1.2.3.4 -i 1 -t 720 -p 5003

* Where -t defines how long (in seconds) to run the test for and -i defines how often (in seconds) results should be reported to the user.

Now, if we can recreate the problem (or at least wait for it to happen), we can use tcpdump (or something similar) to capture the traffic on the above paths while it is happening, to get a deeper view of what exactly is going on - e.g. on the server we could issue something like:

tcpdump -i ethX -w out.pcap host 1.2.3.4 and port 3389

* where 1.2.3.4 is the client.

Interestingly, it appeared (at least from the pcap dump) that RDP only sent data while my RDP session window was active, and stopped sending data once a visible RDP window had been idle for more than 60 seconds - which I imagine is part of RDP's bandwidth optimization techniques. After several minutes I noticed a TCP segment being sent with a FIN flag - that (obviously) closed the TCP session and hence forced the end-user to re-establish the connection to the reverse proxy again.

The solution involved increasing the 'timeout server' and 'timeout client' directives to something a little higher than the defaults - although this does carry a risk (especially with public facing endpoints) of clients potentially performing a Slowloris-style attack.
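
As a sketch against the configuration above (the values here are illustrative, not recommendations):

frontend localnodes
    timeout client          4h
backend unrestricted
    timeout server          4h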

Another (more suitable) alternative is to set a keep-alive on the RDS server side to ensure the connection remains open.

Thursday, 1 September 2016

Anti-DDoS Setup for IPTables and Linux Kernel (CentOS 7)

Note: This article is for my own reference - full credit goes to the source found here: https://javapipe.com/iptables-ddos-protection. The below has been tested on CentOS 7 - although other OSes / kernel versions might not be completely compatible with everything below.

Kernel Tweaks:

kernel.printk = 4 4 1 7
kernel.panic = 10
kernel.sysrq = 0
kernel.shmmax = 4294967296
kernel.shmall = 4194304
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
vm.swappiness = 20
vm.dirty_ratio = 80
vm.dirty_background_ratio = 5
fs.file-max = 2097152
net.core.netdev_max_backlog = 262144
net.core.rmem_default = 31457280
net.core.rmem_max = 67108864
net.core.wmem_default = 31457280
net.core.wmem_max = 67108864
net.core.somaxconn = 65535
net.core.optmem_max = 25165824
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
net.ipv4.neigh.default.gc_interval = 5
net.ipv4.neigh.default.gc_stale_time = 120
net.netfilter.nf_conntrack_max = 10000000
net.netfilter.nf_conntrack_tcp_loose = 0
net.netfilter.nf_conntrack_tcp_timeout_established = 1800
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 10
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 20
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 20
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 20
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 20
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 10
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.ip_no_pmtu_disc = 1
net.ipv4.route.flush = 1
net.ipv4.route.max_size = 8048576
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_congestion_control = htcp
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.udp_rmem_min = 16384
net.ipv4.tcp_wmem = 4096 87380 33554432
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 400000
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_fack = 1
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.rp_filter = 1
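To apply these, place them in /etc/sysctl.conf (or a dedicated file under /etc/sysctl.d/ - the file name below is arbitrary) and reload:

sudo cp ddos-tweaks.conf /etc/sysctl.d/90-ddos-tweaks.conf
sudo sysctl --system
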
IPTables Rules
### 1: Drop invalid packets ###
/sbin/iptables -t mangle -A PREROUTING -m conntrack --ctstate INVALID -j DROP

### 2: Drop TCP packets that are new and are not SYN ###
/sbin/iptables -t mangle -A PREROUTING -p tcp ! --syn -m conntrack --ctstate NEW -j DROP

### 3: Drop SYN packets with suspicious MSS value ###
/sbin/iptables -t mangle -A PREROUTING -p tcp -m conntrack --ctstate NEW -m tcpmss ! --mss 536:65535 -j DROP

### 4: Block packets with bogus TCP flags ###
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags FIN,SYN FIN,SYN -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags FIN,ACK FIN -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags ACK,URG URG -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags ACK,FIN FIN -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags ACK,PSH PSH -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL ALL -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL NONE -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
/sbin/iptables -t mangle -A PREROUTING -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP

### 5: Block spoofed packets where ppp0 is the internet facing interface ###
/sbin/iptables -t mangle -A PREROUTING -i ppp0 -s 224.0.0.0/3 -j DROP
/sbin/iptables -t mangle -A PREROUTING -i ppp0 -s 169.254.0.0/16 -j DROP
/sbin/iptables -t mangle -A PREROUTING -i ppp0 -s 172.16.0.0/12 -j DROP
/sbin/iptables -t mangle -A PREROUTING -i ppp0 -s 192.0.2.0/24 -j DROP
/sbin/iptables -t mangle -A PREROUTING -i ppp0 -s 192.168.0.0/16 -j DROP
/sbin/iptables -t mangle -A PREROUTING -i ppp0 -s 10.0.0.0/8 -j DROP
/sbin/iptables -t mangle -A PREROUTING -i ppp0 -s 0.0.0.0/8 -j DROP
/sbin/iptables -t mangle -A PREROUTING -i ppp0 -s 240.0.0.0/5 -j DROP
/sbin/iptables -t mangle -A PREROUTING -s 127.0.0.0/8 ! -i lo -j DROP

### 6: Drop ICMP (you usually don't need this protocol) ###
/sbin/iptables -t mangle -A PREROUTING -p icmp -j DROP

### 7: Drop fragments in all chains ###
/sbin/iptables -t mangle -A PREROUTING -f -j DROP

### 8: Limit connections per source IP ###
/sbin/iptables -A INPUT -p tcp -m connlimit --connlimit-above 111 -j REJECT --reject-with tcp-reset

### 9: Limit RST packets ###
/sbin/iptables -A INPUT -p tcp --tcp-flags RST RST -m limit --limit 2/s --limit-burst 2 -j ACCEPT
/sbin/iptables -A INPUT -p tcp --tcp-flags RST RST -j DROP

### 10: Limit new TCP connections per second per source IP ###
/sbin/iptables -A INPUT -p tcp -m conntrack --ctstate NEW -m limit --limit 60/s --limit-burst 20 -j ACCEPT
/sbin/iptables -A INPUT -p tcp -m conntrack --ctstate NEW -j DROP

### 11: Use SYNPROXY on all ports (disables connection limiting rule #10 - uncomment to enable) ###
#/sbin/iptables -t raw -A PREROUTING -p tcp -m tcp --syn -j CT --notrack
#/sbin/iptables -A INPUT -p tcp -m tcp -m conntrack --ctstate INVALID,UNTRACKED -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460
#/sbin/iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
### 12: SSH brute-force protection ###
/sbin/iptables -A INPUT -p tcp --dport ssh -m conntrack --ctstate NEW -m recent --set
/sbin/iptables -A INPUT -p tcp --dport ssh -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 10 -j DROP

### 13: Protection against port scanning ###
/sbin/iptables -N port-scanning
/sbin/iptables -A port-scanning -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s --limit-burst 2 -j RETURN
/sbin/iptables -A port-scanning -j DROP
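
Note that a user-defined chain only processes traffic that is explicitly steered into it - the rules above create and populate the chain, so something along these lines (placement is up to you) is needed for it to take effect:

/sbin/iptables -A INPUT -p tcp -j port-scanning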

Protection against SYN Flooding with SYNPROXY

The problem: SYN flood attacks (while quite unsophisticated in nature) can be devastating to systems that do not have the relevant protection mechanisms in place - the basic premise behind a SYN flood attack is to exhaust the connection state table with invalid (or partially established) handshakes from (more often than not) spoofed sources.

A synproxy is a mechanism for protecting against SYN flooding and is built into (or rather, implemented by) many popular firewalls such as iptables and pf (used by pfSense).

The basic principle of a synproxy is as follows:

1. The client sends an initial SYN to the server.

2. When this packet hits the firewall on the server it is marked with an 'UNTRACKED' state.

3. A SYN ACK is sent back to the client and then upon the final ACK from the client the connection is validated.

4. If the connection is valid, the synproxy then automatically initiates a three-way handshake with the real server (e.g. apache), spoofing the SYN packets so that the real server sees the original connecting client. Otherwise the connection is marked as 'INVALID' and dropped.

5. Communication then continues directly between the client and the real server.

We can setup synproxy with iptables as follows (we will use apache running on port 80 in this example):

We first need to tweak the kernel by ensuring that the connection tracking system is stricter in its categorization:

sudo sysctl -w net/netfilter/nf_conntrack_tcp_loose=0

and enable TCP timestamps:

sysctl -w net/ipv4/tcp_timestamps=1

and finally, to ensure these settings survive a reboot, add them to /etc/sysctl.conf and reload with:

sysctl -p

Proceed by ensuring that the packets destined for apache are not tracked:

sudo iptables -t raw -A PREROUTING -i eth0 -p tcp --dport 80 --syn -j NOTRACK

And then redirect the traffic destined for apache to synproxy:

iptables -A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state INVALID,UNTRACKED -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460

and instruct iptables to drop any invalid packets with:

sudo iptables -A INPUT -m state --state INVALID -j DROP

Increasing the connection table capacity in CentOS 7

Another consideration is to increase the connection tracking table capacity - firstly by increasing the hash size (this is for RHEL/CentOS):

echo 1200000 > /sys/module/nf_conntrack/parameters/hashsize

and then setting the table size:

sudo sysctl net.netfilter.nf_conntrack_max=2000000

(My default on Debian 8 was 15388 and on CentOS 7 262144)

Note: On Debian systems I had to load the nf_conntrack module by adding a reference to 'nf_conntrack' within /etc/modules and reboot the system for changes to take effect.
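
The hash size written to /sys above will not persist across reboots either - assuming the nf_conntrack module is loaded at boot, it can be pinned via a module option (the file name is arbitrary):

echo 'options nf_conntrack hashsize=1200000' | sudo tee /etc/modprobe.d/nf_conntrack.conf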

Sources:

http://rhelblog.redhat.com/2014/04/11/mitigate-tcp-syn-flood-attacks-with-red-hat-enterprise-linux-7-beta/
https://lwn.net/Articles/563151/