Wednesday 30 November 2016

Setting up mutt notes

Mutt will by default look up the $MAIL environment variable in order to identify where the user's mailbox is located, e.g.:

echo $MAIL
/var/mail/username

If for some reason this is not set we can issue:

export MAIL=/var/mail/username

and to make it permanent:

echo 'export MAIL=/var/mail/username' >> ~/.bashrc

On first launch, if your mail directory does not exist, mutt will ask you whether you would like it to create a new mail directory.

If the mailbox (or its folder) has been deleted after the first launch, you might get the following error message:

/var/mail/username: No such file or directory

To re-create the mailbox we can send a test mail to ourselves, e.g.:

echo Test Email | mail $USER

and verify again:

mutt
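
Alternatively - just a sketch, and the exact group / permissions vary between distributions - the mbox file can be created by hand:

sudo touch /var/mail/$USER
sudo chown $USER:mail /var/mail/$USER
sudo chmod 660 /var/mail/$USER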

Tuesday 29 November 2016

Mount point persistence with fstab

We should firstly identify the block device with dmesg:

dmesg | grep sd

[611156.2271561] sd 2:0:3:0: [sdd] Attached SCSI disk

Create a new partition table:

sudo fdisk /dev/sdd
o (to create a new / empty DOS partition table.)
n (to create a new primary partition.)
w (to write changes.)

Let's create the filesystem with:

mkfs.ext3 /dev/sdd1

Now grab the UUID of the partition with:

blkid /dev/sdd1

and then perform a test mount of the partition e.g.:

mkdir -p /mount/mountpoint

mount -t auto /dev/sdd1 /mount/mountpoint

and if all goes well - add our fstab entry, e.g.:

echo 'UUID=1c24164-e383-1093-22225-60ced4113179  /backups  ext3  defaults  0  0' >> /etc/fstab
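
Before rebooting we can also sanity-check the new entry - mount -a attempts to mount everything listed in fstab that is not already mounted (adjust the mount points below to match your own):

sudo umount /mount/mountpoint
sudo mkdir -p /backups
sudo mount -a
mount | grep sdd1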

and reboot the system to test.

Friday 25 November 2016

Setting up highly available message queues with RabbitMQ and CentOS 7

Since RabbitMQ runs on Erlang we will need to install it from the EPEL repo (as well as a few other dependencies):

yum install epel-release erlang-R16B socat python-pip

Download and install rabbitmq:

cd /tmp
wget https://www.rabbitmq.com/releases/rabbitmq-server/v3.6.6/rabbitmq-server-3.6.6-1.el7.noarch.rpm
rpm -i rabbitmq-server-3.6.6-1.el7.noarch.rpm

ensure that the service starts on boot and is started:

chkconfig rabbitmq-server on
sudo service rabbitmq-server start

Rinse and repeat on the second server.

Now before creating the cluster ensure that the rabbitmq-server service is in a stopped state on both servers:

sudo service rabbitmq-server stop

There is a cookie file that needs to be consistent across all nodes (master, slaves etc.):

cat /var/lib/rabbitmq/.erlang.cookie

Copy this file from the master to all other nodes in the cluster.
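
For example (the 'rabbitslave' hostname below is just a placeholder) - note the cookie must remain owned by the rabbitmq user and readable only by it:

scp /var/lib/rabbitmq/.erlang.cookie root@rabbitslave:/var/lib/rabbitmq/.erlang.cookie
ssh root@rabbitslave "chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie"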

And then turn on the rabbitmq-server service on all nodes:

sudo service rabbitmq-server start

Now reset the app on ALL slave nodes:

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app

and the following on the master node:

rabbitmqctl stop_app
rabbitmqctl reset

and create the cluster from the master node:

rabbitmqctl join_cluster rabbit@slave

* Substituting 'slave' with the hostname of the slave node(s).

NOTE: Make sure that you use a hostname / FQDN (and that each node can resolve the others' hostnames) otherwise you might encounter problems when connecting the nodes.

Once the cluster has been created we can verify the status with:

rabbitmqctl cluster_status

We can then define a policy to provide HA for our queues:

rabbitmqctl start_app
rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
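
We can confirm the policy has been applied with:

rabbitmqctl list_policies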

Ensure you have the pika (Python) library installed (this provides a way of interacting with AMQP):

pip install pika

Let's enable the HTTP management interface so we can easily view our exchanges, queues, users and so on:

rabbitmq-plugins enable rabbitmq_management

and restart the server with:

sudo service rabbitmq-server restart

and navigate to:

http://<rabbitmqhost>:15672

By default the guest user is only usable from localhost (guest/guest) - although if you only have CLI access to the server you might need to reach the web interface remotely, and as a result will need to temporarily allow the guest account to log in from other hosts:

echo '[{rabbit, [{loopback_users, []}]}].' > /etc/rabbitmq/rabbitmq.config

Once logged in proceed to the 'Admin' tab >> 'Add a User' and ensure they can access the relevant virtual hosts.

We can revoke guest access from interfaces other than localhost by removing the line with:

sed -i '/loopback_users/d' /etc/rabbitmq/rabbitmq.config

NOTE: You might need to delete the

and once again restart the server:

sudo service rabbitmq-server restart

We can now test the mirrored queue with a simple python publish / consumer setup.

The publisher (node that instigates the messages) code should look something like:

import pika

credentials = pika.PlainCredentials('guest', 'guest')

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='rabbitmaster.domain.internal',port=5672,credentials=credentials))
channel = connection.channel()

channel.queue_declare(queue='hello')

channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()

We should now see a new queue (that we have declared) in the 'Queues' tab within the web interface.

We can also check the queue status across each node with:

sudo rabbitmqctl list_queues

If all goes to plan we should see that the queue length is consistent across all of our nodes.

Now for the consumer we can create something like the following:

import pika

credentials = pika.PlainCredentials('guest', 'guest')

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='rabbitmaster.domain.internal',port=5672,credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue='hello')
def callback(ch, method, properties, body):
    print(" [x] Received %r" % (body,))

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

Now the last piece of the puzzle is to ensure that there is some kind of HA at the network level, e.g. haproxy, nginx and so on.
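
As a rough sketch (the hostnames are just examples, and this assumes haproxy runs on a separate host or VIP) a TCP frontend for AMQP might look something like:

listen rabbitmq_cluster
    bind *:5672
    mode tcp
    balance roundrobin
    server rabbit1 rabbitmaster.domain.internal:5672 check
    server rabbit2 rabbitslave.domain.internal:5672 check

Clients then point at the haproxy address rather than an individual node.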

* Partly based on / adapted from the article here: http://blog.flux7.com/blogs/tutorials/how-to-creating-highly-available-message-queues-using-rabbitmq

Tuesday 22 November 2016

Troubleshooting certificate enrollment in Active Directory

Start by verifying the currently published CA(s) with:

certutil -config - -ping

and also adsiedit:

CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=yourdomain,DC=internal

Confirm whether the CA is enterprise or standalone with:

certutil -cainfo

The CA type must be Enterprise otherwise MMC enrollment will not work.

We can also verify the permissions on the CA itself by going to the Certificate Authority snapin:

CertSrv.msc

and right-hand clicking on the server node >> Security >> and ensuring the relevant users have the 'request' permission - which should typically be 'Authenticated Users' and that Domain Admins, Enterprise Admins and Administrators have the 'enroll' permission.

We can retrieve a list of the certificate templates published by the CA with:

certutil -template

Example output:

EFSRecovery: EFS Recovery Agent ACCESS DENIED
CodeSigning: Code Signing
CTLSigning: Trust List Signing
EnrollmentAgent: Enrollment Agent
EnrollmentAgentOffline: Exchange Enrollment Agent (Offline request)

Verify whether the relevant template you are trying to issue has 'Access Denied' appended to it - if so it is almost certainly a permissions issue on the certificate template - go to:

certtmpl.msc

and right-hand click on the certificate >> Security tab and verify the permissions.

Also within the Certificate Template mmc snapin - check the Subject Name tab and ensure that 'Build from this Active Directory information' is selected if you are attempting to request the certificate from MMC - as the MMC snapin does not support providing a common name with the request! (This caught me out!)

Last one: An obvious one - but ensure that the certificate template is actually issued:

CA Authority >> Right-hand click Certificate Templates >> New >> Certificate template to issue.

Manually (painfully) generating a server certificate for LDAPS on Server 2003.

This is a bit of an odd one - as this process can be automated - but if you - like me - prefer to do this manually, I have documented the steps (briefly) below.

Firstly add the CA role by going to 'Add or Remove Programs' in the control panel, selecting 'Add/Remove Windows Components' and ensuring that 'Certificate Services' is checked, as well as the 'CA Web Enrollment' feature (click on the Details button.)

Now lets create a certificate template for this purpose - so go to:

mmc.exe >> 'Add Snapins' >> Certificate Authority >> Right-hand click on the 'Certificate Templates' node and select 'Manage.' We will now duplicate an existing template (as required) - so right-hand click on 'Domain Controller Authentication' and hit 'Duplicate Template.' I then named my new template 'Domain Controller Authentication Manual'. In the 'Subject Name' tab ensure 'Build from this Active Directory information' is selected; in the 'Security' tab ensure that only the 'Domain Admins' group has the enroll permission; in the 'Extensions' tab ensure that 'Server Authentication' (OID: 1.3.6.1.5.5.7.3.1) is included; and finally in the 'Request Handling' tab ensure that 'Allow private key to be exported' is ticked.

Click apply / OK etc. and finally hit OK on the new template form to create the template.

Then on the CA authority snapin - right-hand click the 'Certificate Templates' node >> New >> 'Certificate Template to Issue' and select the relevant template. NOTE: In my case the template wasn't present so I added the template via CLI:

certutil -SetCAtemplates +DomainControllerAuthenticationManual

Restart certificate services.

Now we need to ensure that the FQDN of the server is within the trusted sites zone in IE e.g.:

myca.domain.internal

(If you do not add the FQDN to the trusted zone you will get an 'access denied' message when attempting to generate the certificate - which can be quite misleading!)

and then browse to:

http://myca.domain.internal/certsrv

and enter your domain credentials.

Then from the task list select 'Request a certificate' >> 'Advanced certificate request' >> 'Create and submit a request to this CA'. At this point you should be prompted to install an ActiveX control - ensure this is installed before proceeding.

Select the 'Domain Controller Authentication Manual' template, ensure that the subject name matches that of the DC you wish to set up LDAPS for, also ensure 'Store certificate in the local computer certificate store' is ticked, and finally hit submit to import the certificate into your computer's certificate store.

We should also ensure that the "HTTP SSL" service has started and will be started automatically at boot!
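
On Server 2003 the underlying service name is (I believe) HTTPFilter, so something along these lines should do it:

sc config HTTPFilter start= auto
net start HTTPFilter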

and then test our LDAPS connection with the ldp tool:

ldp.exe

and connect to the server using the DC's FQDN (the IP will NOT work) e.g.

mydc.mydomain.internal

For more information about troubleshooting ldap over SSL see below:

https://support.microsoft.com/en-gb/kb/938703

Friday 18 November 2016

Troubleshooting netlogon problems with Windows 7/10/2008/2012

Firstly verify any DNS servers:

ipconfig /all

Sample output: 10.1.1.1

and ensure they are (all) responding with e.g.:

cmd
nslookup
server 10.1.1.1
google.com

If this fails, check with telnet, e.g. (assuming the DNS server also listens on TCP):

cmd
telnet 10.1.1.1 53

and verify you get a response.

We can check the state of the netlogon service with:

nltest /query

we can also verify the last state of the secure channel created between the client and DC with:

nltest /sc_query:yourdomain.internal

(This will also inform you of which DC the channel was created with.)

We can also attempt to reset this with:

nltest /sc_reset:yourdomain.internal

or alternatively use sc_verify (this won't break the existing secure channel unless it's not established):

nltest /sc_verify:yourdomain.internal

If the issue affects more than one client it could be due to a loss of network connectivity or a DC-related issue - to check the DC we can issue:

dcdiag /a
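
If DNS looks suspect, dcdiag also has a dedicated (and fairly verbose) DNS test which can be run on the DC:

dcdiag /test:dns /v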

Turning on logging with UFW

If you are unfortunate enough to be working with Ubuntu you might have come across UFW - a wrapper for iptables that aims to 'simplify' management of the firewall.

To enable logging in UFW you should firstly ensure it's not already turned on with:

sudo ufw status verbose | grep logging

and if not enabled issue:

sudo ufw logging on

We can also adjust the logging level with:

ufw logging [low|medium|high|full]

Low: Provides information on all dropped packets and packets that are setup to be logged.

Medium: Matches all low level events plus all Invalid packets and any new connections.

High: Matches all medium level events plus all packets, with rate limiting applied.

Full: Logs everything, without rate limiting.
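
For example, to bump the level up to medium and confirm the change:

sudo ufw logging medium
sudo ufw status verbose | grep -i logging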

The logs are typically written to:

/var/log/ufw.log

e.g. tail -f /var/log/ufw.log

Monday 14 November 2016

Email spoofing, SPF and P1/P2 headers

SMTP messages comprise two different header types: P1 (the SMTP envelope - e.g. the MAIL FROM address used during the SMTP transaction) and P2 (the message headers - e.g. the From: field that the recipient's mail client displays).

The way I like to conceptualize it is to relate a P1 header to a network frame and a P2 header to an IP packet - the frame is forwarded via a network switch (which is unaware of any higher level PDUs encapsulated within the frame) - it is only when the frame reaches a layer 3 device that the IP packet is inspected and a decision is made.

By design SPF only checks the P1 header - not the P2 header. This presents a problem when a sender spoofs the address within the P2 header - while the sender in the P1 header could be completely legitimate.

A logical approach would be to simply instruct your anti-spam solution to compare the relevant P1 and P2 headers and, if a mismatch is encountered, drop the email. However there are a few situations where this would cause problems - such as when a sender is sending a mail item on behalf of another sender (think mail group) or when an email is forwarded - in the event of the email bouncing, the forwarder should receive the bounce notification rather than the original sender.
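
To make the P1/P2 mismatch concrete, here is a rough illustration (the domains, addresses and mail host below are made up) - SPF is evaluated against the MAIL FROM (P1) domain, while the recipient's mail client displays the From: (P2) header:

telnet mx.mydomain.com 25
HELO mail.attacker.example
MAIL FROM:<bounce@attacker.example>
RCPT TO:<victim@mydomain.com>
DATA
From: "The CEO" <ceo@mydomain.com>
To: victim@mydomain.com
Subject: Urgent payment request

Please process the attached invoice today.
.
QUIT

Here attacker.example can publish a perfectly valid SPF record for its own domain, so the P1 check passes - even though the From: header the recipient sees claims to be from mydomain.com.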

So instead we can pre-define IP addresses that are allowed to send on behalf of our domain:

In Exchange: In order to resolve this problem we can block inbound mail from our own domain by removing the 'ms-exch-smtp-accept-authoritative-domain-sender' permission (this blocks both the 'MAIL FROM' (P1) and 'FROM' (P2) fields) from our publicly accessible (anonymous) receive connector - however, this will cause problems if there are any senders (e.g. printers, faxes or bespoke applications) that send mail on behalf of the domain externally - so a separate receive connector (with the ms-exch-smtp-accept-authoritative-domain-sender permission) should be set up to cater for these devices.

So we should firstly block our sending domain with:

Set-SenderFilterConfig -BlockedDomains mydomain.com

and ensure internal mail can flow freely with:

Set-SenderFilterConfig -InternalMailEnabled $true
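
We can sanity-check both settings afterwards with:

Get-SenderFilterConfig | Format-List BlockedDomains,InternalMailEnabled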

and to remove the 'ms-exch-smtp-accept-authoritative-domain-sender' permission we can issue:

Get-ReceiveConnector "Public Receive Connector" | Get-ADPermission -user "NT AUTHORITY\Anonymous Logon" | where {$_.ExtendedRights -like "ms-exch-smtp-accept-authoritative-domain-sender"} | Remove-ADPermission

and if needed - ensure that your receive connector for printers, faxes etc. can receive email from them:

Get-ReceiveConnector "Internal Receive Connector" | Add-ADPermission -user "NT AUTHORITY\Anonymous Logon" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Sender"

and finally restart Microsoft Exchange Transport Services.

Setting up client certificate authentication with Apple iPhones / iPads

Client certificates can come in very handy when you wish to expose internal applications publicly, but only to specific entities.

Fortunately most reverse proxies such as IIS, httpd, nginx and haproxy provide this functionality - although for this tutorial I will concentrate on nginx since the configuration is pretty straightforward and I (personally) tend to have fewer cross-platform problems when working with it.

* For this tutorial I am assuming that you already have your own server certificate (referred to as server.crt)

So let's firstly create the CA that we will use to issue our client certificates:

openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -out ca.crt

and then generate our client private key and CSR:

openssl req -out client.csr -new -newkey rsa:2048 -nodes -keyout client.key

and then sign the client CSR with our CA:

openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
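
We can quickly confirm that the client certificate chains back to our CA with:

openssl verify -CAfile ca.crt client.crt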

Now we want to import the key pair into our iPhone / iPad - this can be performed by the Apple Configuration Utility or much more easily by simply sending an email to the device with the key pair attached.

However we must firstly create a .pfx package with both the private and public key in it - to do this we should issue:

openssl pkcs12 -inkey client.key -in client.crt -export -out client.pfx

and setup our nginx configuration:

server {
    listen        443;
    ssl on;
    server_name clientssl.example.com;

    ssl_certificate      /etc/nginx/certs/server.crt;
    ssl_certificate_key  /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://1.2.3.4:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

test the configuration with:

nginx -t

and if correct reload the server with:

sudo service nginx reload
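
Finally, once the .pfx has been imported we can verify the client certificate requirement from any machine holding the key pair (assuming clientssl.example.com resolves to the nginx host - the -k flag simply skips verification of the server certificate, which is outside the scope of this example):

curl -vk --cert client.crt --key client.key https://clientssl.example.com/

A request made without the client certificate should instead be rejected by nginx with a '400 No required SSL certificate was sent' error.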