
Tuesday, 27 March 2018

Policing / Shaping traffic in CentOS with tc

This can be achieved with the tc command. Below is a simple example that polices all traffic on the interface; however, you can also mark traffic with iptables and apply throttling based on those marks for more complicated scenarios.

Policing Example

# Police inbound traffic to 256 Mbit/s (note: a non-zero burst is required; burst 0 is rejected)
tc qdisc add dev enp0s25 handle ffff: ingress
tc filter add dev enp0s25 parent ffff: protocol ip prio 50 u32 match ip src 0.0.0.0/0 police rate 256mbit burst 256k drop flowid :1
# Shape outbound traffic with a token bucket filter
tc qdisc add dev enp0s25 root tbf rate 256mbit latency 25ms burst 256k
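The iptables-based approach mentioned above can be sketched roughly as follows; the interface name, subnet, mark value and rates are illustrative, not from the original post:

```shell
# Mark traffic from an example subnet in the mangle table
iptables -t mangle -A PREROUTING -i enp0s25 -s 10.0.0.0/24 -j MARK --set-mark 6

# HTB tree: unmarked traffic falls into the default class (1:30)
tc qdisc add dev enp0s25 root handle 1: htb default 30
tc class add dev enp0s25 parent 1: classid 1:1 htb rate 256mbit
tc class add dev enp0s25 parent 1:1 classid 1:10 htb rate 64mbit ceil 128mbit
tc class add dev enp0s25 parent 1:1 classid 1:30 htb rate 192mbit ceil 256mbit

# The fw classifier matches the iptables mark and steers it into class 1:10
tc filter add dev enp0s25 parent 1: protocol ip handle 6 fw flowid 1:10
```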

Friday, 23 March 2018

QoS Congestion Avoidance

An excellent article on QoS Congestion Avoidance from netcerts:

QoS Congestion Avoidance

Tail Drop
Tail drop occurs when packets arriving on a congested interface are simply dropped. It is bad not just for voice packets but for data packets as well, and it hurts the efficiency of network bandwidth utilisation. When the output queue is full and packets arrive on the input queue, the arriving packets are dropped - it does not matter whether it is a voice packet or a data packet; everything is dropped by default when tail drop is in action.

TCP has a mechanism to identify packet loss and retransmit the lost packets. TCP senders increase the rate at which they send while there is no congestion and no packet loss - the increase is exponential over time. When packets start being dropped, the senders infer that the network is congested: the sending rate is suddenly cut by a large margin, and it climbs back only slowly once the sender sees no further packet loss for a certain interval.

When this back-off kicks in, all senders on the network slow down at once and you can see a drop in bandwidth; then, seeing no more packet loss, they all ramp up again together and you see peaks in network utilisation. At that point the interface can become congested again and packets are dropped again, which makes all senders cut their rates and wait once more. This goes on in cycles, and the behaviour means a lot of bandwidth is simply wasted - if you are monitoring the link you will see a characteristic sawtooth pattern in the utilisation charts. This behaviour is known as "Global TCP Synchronization" and it is responsible for a great deal of network bandwidth wastage.
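The sawtooth behaviour described above can be sketched with a toy additive-increase / multiplicative-decrease loop. This ignores slow start's exponential phase and models a single sender; all numbers are illustrative:

```python
def aimd(rounds, capacity, rate=1):
    """Toy AIMD: add 1 each round; halve when the 'link' capacity is hit."""
    rates = []
    for _ in range(rounds):
        rates.append(rate)
        if rate >= capacity:          # queue overflows -> packet loss
            rate = max(1, rate // 2)  # multiplicative decrease (back-off)
        else:
            rate += 1                 # additive increase
    return rates

print(aimd(15, 10))  # rises to 10, halves to 5, climbs again - a sawtooth
```

With many synchronized senders, these sawtooths line up, which is exactly the wasted-bandwidth pattern the article describes.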
Tail drop is the default congestion avoidance mechanism. More efficient congestion avoidance techniques can be configured on Cisco routers:
1. RED – Random Early Detection
2. WRED – Weighted Random Early Detection
Congestion avoidance techniques monitor network traffic and analyse bandwidth load, then anticipate and avoid congestion at common network bottlenecks by dropping packets in advance.
Random Early Detection (RED)
With RED, packets are discarded randomly before the queue gets full, and the random drop rate increases as the queue fills. With RED in place, the problem of global TCP synchronization can be effectively avoided.
RED Terminology
1. Minimum Threshold: the average queue depth at which random packet dropping starts.
2. Maximum Threshold: the point where tail-drop behaviour comes into action.
3. Mark Probability Denominator: controls the fraction of packets randomly dropped - at the maximum threshold, 1 in every <denominator> packets is dropped.
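The three values combine into a linear drop curve; a small sketch (the thresholds and denominator below are example numbers, not Cisco defaults):

```python
def red_drop_probability(avg_qlen, min_th, max_th, mpd):
    """RED drop probability for a given average queue depth.

    mpd is the mark probability denominator: at max_th, 1 packet in
    every mpd packets is dropped; at or beyond max_th everything is
    dropped (tail-drop behaviour)."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return (avg_qlen - min_th) / (max_th - min_th) / mpd

print(red_drop_probability(30, 20, 40, 10))  # halfway between thresholds -> 0.05
```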
Weighted Random Early Detection (WRED)
WRED is Cisco's implementation of RED: Cisco routers do not do the random dropping with the plain RED algorithm, but with WRED.
What WRED adds is ensuring that important packets do not get dropped in the random-dropping process; a less important packet is dropped instead.
With WRED, multiple RED profiles are automatically created by the router based on IP Precedence or DSCP. If WRED is based on IPP, the router generates 8 different profiles for the dropping decision; if you use DSCP, the router creates 64 different profiles.
A WRED profile consists of:
1. Minimum Threshold: the point where random packet dropping starts.
2. Maximum Threshold: the point where tail-drop behaviour comes into action.
3. Drop Probability: determines how many packets are dropped.
All three values are calculated for each profile – 8 profiles when IPP is used and 64 when DSCP is used.
By Default:
1. EF will get a very high minimum threshold.
2. AF Classes get their minimum thresholds as per their Drop Priorities.
These values can also be set manually.
Command to Configure WRED
# random-detect
Class Based WRED (CB-WRED)
When CB-WRED is used in conjunction with CBWFQ, it enables the DiffServ Assured Forwarding Per-Hop Behavior.
Configuration of CB-WRED
under policy-map (MQC)
# random-detect ! –> This will give the IPP based WRED by default
To give DSCP based CB-WRED
# random-detect dscp-based
To modify the thresholds
# random-detect precedence <precedencevalue> <min-threshold> <max-threshold> <mark-probability>
# random-detect dscp <dscpvalue> <min-threshold> <max-threshold> <drop-probability>
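Putting those commands together, a policy might look like the sketch below; the class name, DSCP values and thresholds are purely illustrative:

```
policy-map WAN-EDGE
 class BULK-DATA
  bandwidth percent 20
  random-detect dscp-based
  random-detect dscp af11 32 40 10
  random-detect dscp af13 24 40 10
```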
Explicit Congestion Notification (ECN)
DSCP uses 6 bits of the ToS byte in the IP header; the remaining 2 bits are used for ECN.
ECN marks packets when the average queue length exceeds a specified value, and routers and end hosts can lower their sending rates based on the marking. However, all routers and end hosts in the path must support ECN for this to work.
ECN bits    Description
00          Not ECN capable
01          ECN capable
10          ECN capable
11          Indicates congestion
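Since ECN lives in the bottom two bits of the ToS byte (DSCP in the top six), both fields can be pulled apart with simple bit operations - a sketch:

```python
def dscp(tos_byte):
    """Top 6 bits of the ToS byte."""
    return tos_byte >> 2

def ecn(tos_byte):
    """Bottom 2 bits: 00 = not ECN capable, 01/10 = ECN capable, 11 = congestion."""
    return tos_byte & 0b11

tos = 0xB8                  # EF (DSCP 46) with no ECN bits set
print(dscp(tos), ecn(tos))  # 46 0
print(ecn(tos | 0b11))      # 3 -> congestion indicated
```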
If the average queue length is between the minimum and maximum thresholds, the ECN bits are marked and the ECN process begins. If the endpoints (routers and hosts) are not ECN capable, packets may simply be dropped instead.
Configuration
Under Policy-map
# random-detect ecn
Allowing for bursts
The exponential weighting constant N determines how sensitive WRED is to bursts, by controlling how quickly the average queue size tracks the instantaneous queue size.
Command: # random-detect exponential-weighting-constant <N>
By default the value of N is 9.
Lower values of N – the average follows the queue closely, so WRED is more burst-sensitive and can cause more packet loss.
Higher values of N – the average is smoother, allowing longer bursts.
Note:
1. If the value of N gets too low, WRED will overreact to temporary traffic bursts and drop traffic unnecessarily.
2. If the value of N gets too high, WRED will not react to congestion; packets will be sent or dropped as if WRED were not in effect.
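Cisco documents the average-queue calculation as avg = old_avg + (current − old_avg) / 2^N; a sketch showing why a lower N reacts faster to a burst (queue depths are made-up numbers):

```python
def wred_avg(old_avg, current_qlen, n=9):
    """One step of WRED's exponentially weighted moving average (default N = 9)."""
    return old_avg + (current_qlen - old_avg) / 2 ** n

# A sudden burst of 512 packets arriving on an idle queue (average = 0):
print(wred_avg(0, 512, n=9))  # 1.0   - default barely moves, burst tolerated
print(wred_avg(0, 512, n=1))  # 256.0 - low N tracks the burst, may start dropping
```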
Monitoring CB-WRED
# show policy-map interface 
Full credit to netcerts.net

Wednesday, 21 March 2018

Example: Applying QoS on a Cisco 3650 / 3850

Note: On older models / OSes we needed to issue 'mls qos' in order to enable QoS on the switch - however on the 3650, QoS is enabled by default.

The aim is to tag voice traffic from VLAN 10 hitting either gi1/0/1 or gi1/0/2 with a DSCP value, while ensuring data traffic is spread evenly between vlan20 and vlan30. Note: in most cases AutoQoS is a much better solution and greatly simplifies the configuration - however I created this lab to demonstrate a simple manual example:

vlan 10
desc priority traffic
name VLAN10

vlan 20
desc non-priority data traffic 1
name VLAN20

vlan 30
desc non-priority data traffic 2
name VLAN30

We'll need to use MQC (Modular QoS CLI) in order to apply the QoS - so let's firstly create our class map:

####### Set ingress service policy

class-map qos_vlan10_cm_in
match vlan 10
exit

class-map qos_vlan20_cm_in
match vlan 20
exit

class-map qos_vlan30_cm_in
match vlan 30
exit

policy-map qos_pm_in
class qos_vlan10_cm_in
set ip dscp 40
class qos_vlan20_cm_in
set ip dscp 0
class qos_vlan30_cm_in
set ip dscp 0
exit

int gi1/0/1
service-policy input qos_pm_in

int gi1/0/2
service-policy input qos_pm_in

And now for the egress QoS:

class-map qos_vlan10_cm_out
match ip dscp 40
exit

class-map qos_vlan20_cm_out
match ip dscp 0
exit

class-map qos_vlan30_cm_out
match ip dscp 0
exit

policy-map qos_pm_out
class qos_vlan10_cm_out
priority 1 percent 10
class qos_vlan20_cm_out
bandwidth percent 20
class qos_vlan30_cm_out
bandwidth percent 20
class class-default
bandwidth remaining percent 100
exit

# The 'bandwidth percent' command provides 20% of the interface's bandwidth to vlan20 and 20% to vlan30 - however, when there is no contention, other classes can utilise the spare bandwidth. I.e. the command guarantees bandwidth rather than reserving it. The remaining bandwidth is shared between all other traffic via the class-default entry.

# The 'priority' command ensures that traffic from VLAN10 is always served before anything else and guarantees 10% of the link (i.e. 100 Mbit/s on a gigabit interface) for the voice traffic - however, during congestion it is not able to utilise any more than that 10%!

Finally, apply the service policy to the outgoing interface with:

int gi1/0/24
service-policy output qos_pm_out









Monday, 12 March 2018

Understanding VFS (Virtual File System), inodes and their role

VFS (Virtual File System)

The virtual file system manages all of the real filesystems mounted at a given time - for example xfs, ext4, etc. Each filesystem registers itself with the VFS during initialisation, and support for each real filesystem is either built directly into the kernel or provided in the form of modules.
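On a running Linux system you can list the filesystems currently registered with the VFS from procfs:

```shell
# 'nodev' entries have no backing block device (e.g. proc, tmpfs)
cat /proc/filesystems
```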

Each filesystem mounted by the VFS has a corresponding superblock (that is, a VFS superblock - not an ext3 superblock; they are similar in nature but distinct). A VFS superblock contains the following information:

- Device (e.g. /dev/sda1)
- Inode pointers (the mounted inode pointer points to the first inode on the filesystem)
- Block size of the filesystem
- Superblock operations (A pointer to a set of superblock routines for this file system)
- File system type (A pointer to the mounted file system's file_system_type  data structure e.g. XFS)
- File system specific (A pointer to information needed by this file system)

VFS inodes

Inodes are used to describe each filesystem object on a system - for example a file or directory. They consist of the following information:

- File Size
- File Type (e.g. regular file, directory, block device etc.)
- Group
- Number of links
- Permissions
- File Access / Modify and Change times
- Extended Attributes
- ACL's

Each inode is identified by a unique inode number - for example we can inspect the inode information about '/etc/passwd' with the stat command:

>> stat /etc/passwd
File: ‘/etc/passwd’
Size: 1578            Blocks: 8          IO Block: 4096   regular file
Device: ca01h/51713d    Inode: 18082       Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:passwd_file_t:s0
Access: 2018-03-12 11:46:01.201484317 +0000
Modify: 2017-01-19 11:45:27.790745205 +0000
Change: 2017-01-19 11:45:27.793745185 +0000

It's also important to realise that the inode itself does not hold any data - it's purely used for descriptive purposes. 
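The same inode metadata that stat prints is also exposed programmatically - for example via Python's os.stat (the field values will of course differ per system):

```python
import os

st = os.stat("/etc/passwd")
# st_ino is the inode number; st_size, st_mode etc. mirror stat's output
print(st.st_ino, st.st_size, oct(st.st_mode))
```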

We can delete a filesystem object directly via its associated inode number - for example:

cd / && find . -inum 18082 -exec rm -i {} \;

Note: inodes in the VFS are different from those of filesystems such as ext2, ext3, etc. A VFS inode may reference a file situated on one of several different file systems.


Friday, 9 March 2018

Manually adding a host SSH fingerprint into known_hosts

I noticed that when connecting to SSH on a non-standard port, e.g. host.com:2020, the host fingerprint was not being added to the user's known_hosts file. To fetch it manually we should use ssh-keyscan (note: ssh-keygen -p changes a key's passphrase - it does not scan remote hosts):

ssh-keyscan -p 2020 host.com

[host.com]:2020 ssh-rsa AAAABBBBBBCCCCCC.....

and then append the whole line to our known_hosts - or simply pipe the scan straight in:

ssh-keyscan -p 2020 host.com >> ~/.ssh/known_hosts
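Once the entry is in place, ssh-keygen -F can confirm which known_hosts line matches. Note that entries for non-standard ports use the bracketed [host]:port form (the host name below is a placeholder):

```shell
# Look up the known_hosts entry for host.com on port 2020
ssh-keygen -F "[host.com]:2020"
```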