Friday 29 April 2016

Perform a mailbox migration request between Exchange 2010 and Exchange Online

Firstly ensure that the admin user is within the 'Recipient Management' (and also 'Organizational Management') role group:
Add-RoleGroupMember "Recipient Management" -Member admin1
Verify with:
Get-RoleGroupMember "Recipient Management"
We should also ensure that our MRS proxy service is enabled with EWS by issuing:
Get-WebServicesVirtualDirectory | Set-WebServicesVirtualDirectory -MRSProxyEnabled $true
Now let's connect to our Exchange Online tenant:
$UserCredential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session
and then from the shell we can run something like:
$RemoteCredential = Get-Credential
New-MoveRequest -Identity <user-to-migrate> -Remote -RemoteHostName <your-on-premises-exchange-server> -TargetDeliveryDomain <destination-domain> -RemoteCredential $RemoteCredential
** If you get any access denied messages while performing the move request, ensure that the account you are performing the operation with ($RemoteCredential) is inheriting security permissions from parent objects. See here for more information: https://support.microsoft.com/en-us/kb/2975731 **

We can get the progress of our move request with something like:
Get-MoveRequestStatistics <username>
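To keep an eye on all in-flight moves at once, something like the following works (a rough sketch - the properties used are part of the standard move request statistics output):

Get-MoveRequest | Get-MoveRequestStatistics | Format-Table DisplayName,StatusDetail,TotalMailboxSize,PercentComplete -AutoSize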

Wednesday 27 April 2016

Setting up client SSL/TLS authentication / certificates with NGINX / Apache for the Amazon API Gateway

The Amazon API Gateway has the ability to authenticate itself to a remote (back-end) API. This comes in very handy because, for your back-end API to work with the Amazon API Gateway, you will need to make your site accessible to the entire AWS netblock for your region.

Fortunately NGINX makes this process very easy - when setting up client certificate verification on NGINX we must also ensure (as usual) that the relevant server key and certificate are defined, and then add the 'ssl_client_certificate' directive, which should point to the certificate provided by the Amazon API Gateway portal.
ssl_certificate      /etc/nginx/certs/server.pem;
ssl_certificate_key  /etc/nginx/certs/server.key;
ssl_client_certificate /etc/nginx/certs/ca.pem;
ssl_verify_client on;

On apache / httpd you will need something like the following:

SSLVerifyClient require
SSLCACertificateFile /etc/nginx/certs/ca.pem

Adding an additional network interface onto an EC2 instance

In this example I will be adding a secondary interface onto an EC2 instance running Linux.

AWS allows you to assign up to three (at the time of writing) additional IP addresses to each interface - with the potential of allowing a machine to have up to four public IPs per interface.

We should firstly go to the EC2 Portal >> 'Network and Security' >> 'Network Interfaces' >> right-click on the relevant interface, click on 'Manage Private IP Addresses' and hit 'Assign new IP'.

Unfortunately this restricts us when it comes to security groups - as they can only be applied on a per-interface basis and can't be applied to individual destination IPs.

Once it has been created, proceed by going to 'Network and Security' >> 'Elastic IPs' >> 'Allocate New Address' >> select the relevant interface and then assign it to our newly created private IP address.
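For reference, the same two steps can also be scripted with the AWS CLI - a rough sketch, using placeholder interface / allocation IDs:

# add a secondary private IP to the interface
aws ec2 assign-private-ip-addresses --network-interface-id eni-1a2b3c4d --private-ip-addresses 10.11.12.123

# allocate a new Elastic IP and associate it with that private IP
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --allocation-id eipalloc-1a2b3c4d --network-interface-id eni-1a2b3c4d --private-ip-address 10.11.12.123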

Lastly we should create a sub-interface on our Linux box - so to test we can issue something like:
ifconfig eth0:0 10.11.12.123 netmask 255.255.255.0
And to ensure the changes persist after a reboot we should append the following to /etc/network/interfaces (presuming you are using Debian or a variant):
auto eth0:0
iface eth0:0 inet static
address 10.11.12.123
netmask 255.255.255.0
And finally restart networking:

sudo service networking restart

Tuesday 26 April 2016

MRSProxy and NGINX = Fail

Unfortunately MRSProxy requires that you enable 'Windows Authentication' (NTLM), which is not supported by the free version of NGINX - I have not had a chance to test it on the commercial (NGINX Plus) edition yet, which does support NTLM.

If anyone manages to get it working please drop me a line!

Sunday 24 April 2016

Android Studio: Plugin with id 'com.android.application' not found (Fedora)

When opening a sample project on a fresh installation of Android Studio I got the following message when attempting to compile it:

Plugin with id 'com.android.application' not found.

This is typically because Gradle is not installed - fortunately, as I'm on Fedora, it's a matter of doing something like:

sudo dnf install gradle

or on Debian / Ubuntu:

sudo apt-get install gradle

Friday 22 April 2016

Setting up a hybrid Exchange 2010 environment with Exchange Online (Office 365)

We will need to ensure a few prerequisites are met:

- Exchange 2010 should be running SP3
- Ensure your current Exchange environment is accessible externally (you can verify this using the Remote Connectivity Analyzer)

We will firstly set up DirSync (now Azure AD Connect) between our on-premises environment and Azure:

Download and install: Azure AD Connect

** During the installation of Azure AD Connect ensure 'Exchange hybrid deployment' is selected under 'Optional Features'. **

Download and install the Azure PowerShell add-on

Download and install Microsoft Online Services Sign-In Assistant for IT Professionals RTW 

Download and install Azure Active Directory Module for Windows PowerShell (64-bit version)

Once installed, launch PowerShell with administrative privileges and import the Azure module:

Import-Module Azure

authenticate yourself with:

$login = Get-Credential

and connect to the Azure Active Directory (MSOnline) service:

Connect-MsolService -Credential $login

and enable dirsync:

Set-MsolDirSyncEnabled -EnableDirSync $true
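You can confirm directory synchronisation is switched on (and, once a sync has completed, see when it last ran) with something like:

Get-MsolCompanyInformation | Select-Object DirectorySynchronizationEnabled, LastDirSyncTime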

We should proceed by hooking up our Exchange Online environment with our on-premises install.

** Note: I have sometimes had problems connecting this way and received the following message:

Format of the Exchange object version is wrong parameter name: ExchangeBuild

Apparently Microsoft is working on a 'fix' for this - but they haven't provided any update since late February - so please refer to the workaround below:

I had to obtain the Office 365 Hybrid Wizard and run it on the local Exchange server instead:

http://aka.ms/HybridWizard

(Run the above link from Internet Explorer - not Chrome, Firefox etc..)

After launching the wizard enter your local Exchange details (i.e. a user in the 'Organizational Management' security group.)

You will then need to enter a TXT record on your domain's zone file / DNS for verification of ownership of your domain name.

Configure the Hub Transport service and specify the external IP address you would like to use to communicate with the Exchange Online service.

** Note that you should update your firewall ruleset in order to allow the Exchange Online range of IPs to communicate with the above IP - more info can be found here: https://support.office.com/en-gb/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2#BKMK_EXO

I encountered the following error at the last hurdle:

HCW8073 - 
PowerShell failed to invoke 'Set-EmailAddressPolicy': The recipient policy "Default Policy" with mailbox manager settings cannot be managed by the current version of Exchange Management Console. Please use a management console with the same version as the object.

This was because the email policy had not been upgraded when migrating from an older version of Exchange. The specific feature causing problems is the 'Mailbox Manager' (now defunct in later versions of Exchange.) To fix this we should refer to the following article and then run the following command to upgrade our email address policy:

Get-EmailAddressPolicy | where {$_.RecipientFilterType -eq "Legacy"} | Set-EmailAddressPolicy -IncludedRecipients AllRecipients

After this I re-ran the configuration wizard and it finally went through.

Launch the Exchange Management Console and review the Send Connectors and Receive Connectors - you should notice that there are two Office 365 connectors (inbound and outbound.)

Ensure you have included all of the whitelisted hosts on the receive connector in your firewall config! 

Also ensure that your firewall will allow outbound access to EOP (Exchange Online Protection) as this is what your send connector will use - the IPs are here.

If you are sharing a domain between your on-premises environment and Exchange Online (i.e. both have @domain.com addresses) we need to ensure that the 'remote domain' entry for the domain under the 'Office 365 Tenant' domain tab has 'Use this domain for my Office 365 tenant' ticked.

We should also ensure the 'accepted domain' entry for the domain is set to 'Internal Relay' if it is a shared domain.
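If you prefer to do this from the shell, the accepted domain change is roughly equivalent to the following (using the example domain from above):

Set-AcceptedDomain -Identity "domain.com" -DomainType InternalRelay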

We can now create a new user in our Exchange Online Administration Portal - so log in to your Office 365 portal, go to Users >> Active Users >> New User and, once the user is created, select it, hit 'Assign Licenses' from the right-hand navigation pane and select 'Exchange Online.'

Now proceed to go to Admin >> Exchange to launch the Exchange admin center and then go to: Recipients >> and you should now see your new user.

We can quickly propagate settings across our on-premises and Office 365 environments with the following commands (make sure you run this with administrative privileges!):

$OnPremisesCreds = Get-Credential
$TenantCreds = Get-Credential

Update-HybridConfiguration -OnPremisesCredentials $OnPremisesCreds -TenantCredentials $TenantCreds

To view federation information for your domain we can issue:

Get-FederationInformation yourdomain.mail.onmicrosoft.com

and to view organizational relationships we can issue:

Get-OrganizationRelationship

We can connect to our Exchange Online tenant via PS like follows:

$session = New-PSSession -ConfigurationName:Microsoft.Exchange -Authentication:Basic -ConnectionUri:https://ps.outlook.com/powershell -AllowRedirection:$true -Credential:(Get-Credential)

Import-PSSession $session


Wednesday 20 April 2016

Configuring CORS on apache / httpd

Web development is an area where I typically experience the most trauma - and that is no exception when it comes to CORS (Cross-Origin Resource Sharing).

I was building a pretty simple interface for an API the other day - it used basic HTTP authentication and the originating domain was different from that of the API - hence CORS comes into play.

I spent a good while experimenting with configurations trying to get CORS to function properly with my jQuery interface - but kept receiving 401 errors claiming that the request was unauthorized.

The jQuery GET request was something like the following:

$.ajax
({
  xhrFields: {
    withCredentials: true
  },
  type: "GET",
  url: "http://mydomain.com/api.cgi",
  dataType: 'json',
  async: false,
  username: 'myuser',
  crossDomain: true,
  password: 'mystr0ngpa55w0rd',
  //data: { 'param1': '12345', 'param2': '67890' },
  success: function (){
    alert('Success!');
  },
  error: function (xhr, status) {
    // note: jQuery's $.ajax uses 'error' (not 'failure') for the failure callback
    alert('Error!');
  }
});
I finally got it working with the following configuration applied on the apache virtual host:

   # CORS configuration for apache
   Header always set Access-Control-Allow-Origin "http://mysource.domain.com"
   Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS, DELETE, PUT"
   Header always set Access-Control-Max-Age "1000"
   Header always set Access-Control-Allow-Headers "x-requested-with, Content-Type, origin, authorization, accept, client-security-token"
   Header always set Access-Control-Allow-Credentials true

Tuesday 19 April 2016

Using sudo to allow root access to a specific file

There are sometimes situations where you might need to allow a user root privileges for a specific command - a typical case of this is with monitoring scripts / script users.

We can define the following in your /etc/sudoers file:
nagios ALL=(ALL) NOPASSWD:/usr/local/nagios/libexec/check_xyz
Which will allow the 'nagios' user to execute '/usr/local/nagios/libexec/check_xyz' as root.
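You can confirm the entry has taken effect by listing the commands sudo will allow for that user:

sudo -l -U nagios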

Monday 18 April 2016

AWS: The specified VPC does not support DNS resolution, DNS hostnames, or both.

After creating a new VPC and attempting to use the RDS service I came across the following error when attempting to launch the new database instance:

The specified VPC does not support DNS resolution, DNS hostnames, or both.

You can enable DNS support and DNS hostnames with the AWS CLI by running the following commands (note that modify-vpc-attribute only accepts one attribute per call):

aws ec2 modify-vpc-attribute --vpc-id vpc-123456 --enable-dns-support

aws ec2 modify-vpc-attribute --vpc-id vpc-123456 --enable-dns-hostnames

and re-running the step that flagged up the error initially.
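To confirm the attributes have actually been applied you can query them back, for example:

aws ec2 describe-vpc-attribute --vpc-id vpc-123456 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-123456 --attribute enableDnsHostnames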

NAGIOS: Handling state changes with event handlers

Event handlers allow Nagios to perform an action based on the state of a specific check.

For example, if a service goes into a hard problem state we could instruct Nagios to execute a remote command through the NRPE plugin to restart the service.

We also have the ability to define global host (or service) event handlers - for example if you wanted to run a script every time a host changes state. This can be performed by adding either (or both) of:

global_host_event_handler

and

global_service_event_handler

within nagios.cfg.

Although personally - I am much more interested in performing an action based on when a specific service changes state. To do this we need to firstly define our new command (that will restart our service) in commands.cfg e.g.

define command{
        command_name    restart-httpd
        command_line    /usr/local/nagios/libexec/eventhandlers/restart-servicexyz $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$
}

and add the event_handler directive, which references our command, to our service definition e.g.:

event_handler restart-httpd
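For context, the directive sits alongside the rest of the service definition - a rough sketch with made-up host / service names:

define service{
        use                     generic-service
        host_name               webserver1
        service_description     HTTP
        check_command           check_http
        event_handler           restart-httpd
}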

Lastly we need to create a script that will act as our event handler - there is a good example here: https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/eventhandlers.html

Friday 15 April 2016

Breaking (resetting) into a Cisco IOS Router from the console

If you have purchased a second-hand Cisco router or switch and the seller has happened to leave the existing configuration on it (with a password, mind!) - or worse yet, you have forgotten your login details - we can reset the device from the console.

Connect to the device's console and as soon as the device starts booting, issue the 'Break' key (this can be done with 'Alt + B' in HyperTerminal, or in PuTTY you can right-click the title bar >> Special Commands >> and select 'Break').

This will get you into ROMmon mode - here we should issue the following so that the router ignores the startup configuration on the next boot:

confreg 0x2142

Then reload the router as normal and, once booted in and reconfigured, run the following (in global configuration mode) so that the router goes back to loading the startup configuration on subsequent reboots:

config-register 0x2102
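For reference, the end-to-end password recovery sequence looks roughly like this (prompts abbreviated, and the new enable secret is obviously just an example):

rommon 1 > confreg 0x2142
rommon 2 > reset

Router> enable
Router# copy startup-config running-config
Router# configure terminal
Router(config)# enable secret MyNewPassword
Router(config)# config-register 0x2102
Router(config)# end
Router# copy running-config startup-config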

Monday 11 April 2016

Monitoring vSphere with NAGIOS

In this tutorial we will look at how NAGIOS can be used to monitor different parts of your vSphere environment.

To get started we should firstly ensure that we have the relevant prerequisites:

- vSphere SDK for Perl: https://developercenter.vmware.com/web/sdk/60/vsphere-perl
- Linux host (Debian, CentOS, RH etc.)
- check_vmware_esx (NAGIOS plugin): https://exchange.nagios.org/directory/Plugins/Operating-Systems/*-Virtual-Environments/VMWare/check_vmware_esx-2Epl/details

Firstly install the vSphere SDK

** Make sure you copy the Perl modules bundled with check_vmware_esx to somewhere like '/usr/share/perl5' **

Once the vSphere SDK for Perl is installed - firstly ensure it's working OK with something like:

/usr/bin/vmware-cmd --server vcenter.domain.com -l -h esxhost1 --username [email protected] --password 'yourpassword'

Which should return a list of the virtual machines on the specified ESXi host.

Now we will need to install a few pre-requisites:

yum install perl-Time-Duration
cpan install Time::Duration::Parse

./check_vmware_esx.pl -D vcenter.host -u nagios -p mySec0reP@55w0rd* --select=volumes --subselect=Datastore1 --spaceleft --gigabyte -w 10% -c 5%

The above command will check the available free space on the specified datastore and return the percentage of free space.

Add the command to your command definitions and reference it from a new service definition - e.g. with a check_command of:

check_vmware_esx!vcenter.host!-u nagios -p mySec0reP@55w0rd* --select=volumes --subselect=Datastore1 --spaceleft --gigabyte -w 10% -c 5%
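For completeness, the matching command definition might look something like the following - a sketch, assuming the plugin lives in your standard plugin directory ($USER1$):

define command{
        command_name    check_vmware_esx
        command_line    $USER1$/check_vmware_esx.pl -D $ARG1$ $ARG2$
}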

Sunday 10 April 2016

Troubleshooting SELinux on CentOS / RHEL / Fedora

We should firstly verify whether SELinux is turned on with:

sestatus
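If you just want to quickly rule SELinux in or out as the culprit, you can also temporarily switch it to permissive mode (violations are logged but not blocked), re-test, and then switch back:

sudo setenforce 0
(re-run the failing application / request)
sudo setenforce 1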

By default SELinux writes to /var/log/audit/audit.log file when it blocks a process.

As this log can get pretty noisy we should probably clear it beforehand to make it a little easier for ourselves:

> /var/log/audit/audit.log

We should now launch the suspected program / process that is being blocked and monitor this file:

tail -f /var/log/audit/audit.log

We can also produce more human readable output with the ausearch tool:

ausearch -m avc --start recent (looks at events from the last 10 minutes)

or events since the start of today with:

ausearch -m avc --start today | audit2why

Most of the time we can use audit2why to help us tweak the SELinux configuration to get things working again - for example I often find that web servers (particularly reverse proxies) will use odd ports for downstream servers.

We can take a few approaches:

1. Change the backend server ports (this is not always possible obviously!)

2. Add the ports in the SELinux module ruleset - we can see existing ports with:

semanage port -l | grep -w "http_port_t"

http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000

and then add an additional port in with:

semanage port -a -t http_port_t -p tcp 8888

However this is not always possible if the port number is defined elsewhere i.e. in another policy.

3. As a last resort you can toggle SELinux booleans to relax specific restrictions - such as allowing the web server to connect to any remote port:

setsebool -P httpd_can_network_connect 1

4. We can also create our own custom policies by installing some utilities to help us analyze the logs:

dnf install setroubleshoot setools

This will generate a report explaining what process and action has triggered the alert and how you can remediate it - if we want to add an exception (system-wide) for a function we can use setsebool - for example:

setsebool -P selinuxuser_execheap 1

or better yet we can generate a custom rule for SELinux with the '-M' switch on audit2allow:

cat /var/log/audit/audit.log | audit2allow -M myselinuxcustomrules

and then import it with semodule:

semodule -i myselinuxcustomrules.pp

Saturday 9 April 2016

Setting up NFS on CentOS 7 / Fedora 23

We should firstly install the relevant packages from yum / dnf:

dnf install nfs-utils nfs-utils-lib

Ensure that the NFS service is started and will start on boot:

systemctl enable nfs-server
systemctl start nfs-server
systemctl status nfs-server

We now want to define which directories (exports) we want to provide to our NFS clients - we define this in the /etc/exports file:

sudo vi /etc/exports

and add something like:

/home 11.12.13.14(rw,sync,no_subtree_check)
/srv/nfs/anonaccess 11.12.13.14(rw,sync)

The options are explained below:

rw - Provides read/write access to the client.

sync - Ensure that any calls that write data to the mount point are flushed (committed) on the server before control is given back to the user space.

no_subtree_check - When a directory you are sharing is part of a larger filesystem / hierarchy, NFS will check each directory above it to verify its permissions / details. Disabling subtree checking can be a security risk on some exports - although on filesystems like /home you can generally safely turn it off (as above) - note that on the anonymous share I have excluded 'no_subtree_check' (by default it is set to 'subtree_check').

It is very important to ensure that anonymous access to NFS shares uses a UID and GID of 65534 when working across different Linux variants - this is because they will quite often use different IDs for the 'nobody' user - so on our open share we can issue:

mkdir -p /srv/nfs/anonaccess
chown 65534:65534 /srv/nfs/anonaccess
chmod 755 /srv/nfs/anonaccess

When you have finished defining your exports we should use the exportfs utility to apply our configuration:

exportfs -a

Now we can move onto the client portion - you should firstly install the following packages on the NFS client machine:

dnf install nfs-utils nfs-utils-lib

and mount it on the client:

mkdir -p /mnt/nfs/home
mount -t nfs 1.2.3.4:/home /mnt/nfs/home
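It can also be useful to check which exports the server is actually advertising before mounting, and to add an fstab entry so the mount persists across reboots - for example:

showmount -e 1.2.3.4
echo '1.2.3.4:/home  /mnt/nfs/home  nfs  defaults  0 0' >> /etc/fstab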

Fedora: The nouveau kernel driver is currently in use by your system

When attempting to install the official NVIDIA drivers for my graphics card I received the following message during the installation:

'The nouveau kernel driver is currently in use by your system'

The nouveau driver is built into the kernel and is an attempt by the open-source community to build a unified driver that will support a wide range of NVIDIA cards.

In order to install the official drivers we must firstly disable or rather ensure the nouveau module is not loaded into the kernel at startup.

To do this we can simply make an exclusion:

sudo vi /etc/modprobe.d/blacklist.conf

and add:

blacklist nouveau

Rebuild the init ram disk with something like:

mkinitrd /boot/initramfs-4.2.3-300.fc23.x86_64.img 4.2.3-300.fc23.x86_64

and simply reboot and attempt the installation again.
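Incidentally, on recent Fedora releases mkinitrd is just a thin wrapper around dracut, so the equivalent dracut invocation for the running kernel would be:

sudo dracut --force /boot/initramfs-$(uname -r).img $(uname -r)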

Friday 8 April 2016

(Painfully) installing the vSphere SDK for Perl on CentOS 7

We should firstly install cpan (if not already installed) so that the installer can download the relevant Perl modules:

yum install cpan

cd /tmp
wget https://download2.vmware.com/software/sdk/VMware-vSphere-Perl-SDK-6.0.0-2503617.x86_64.tar.gz
tar zxvf VMware-vSphere-Perl-SDK-6.0.0-2503617.x86_64.tar.gz
cd VMware-vSphere-Perl*
./vmware-install.pl

Ensure you select 'NO' when the following prompt appears:

'Do you want to install precompiled Perl modules for RHEL?'

This is because the perl version differs from that of the one that compiled the 'precompiled' modules.

So instead the installer will attempt to retrieve the precompiled modules using cpanm.

Unfortunately I then got an error complaining that the following modules were available but that the installer was unable to retrieve the required versions of them:

Module: ExtUtils::MakeMaker, Version: 6.96
Module: Module::Build, Version: 0.4205
Module: LWP, Version: 5.837

So we will have to install these older versions manually with cpan:

cpan BINGOS/ExtUtils-MakeMaker-6.96.tar.gz
cpan LEONT/Module-Build-0.4205.tar.gz
cpan GAAS/libwww-perl-5.837.tar.gz

and then attempt the installation again:

./vmware-install.pl

I then received yet another error complaining that it couldn't find any of the following modules:

Class::MethodMaker 2.10 or newer
UUID 0.03 or newer
XML::LibXML::Common 0.13 or newer
XML::LibXML 1.63 or newer

So we do:

cpan install YAML
cpan install -f GnuPG::Interface
cpan install Fatal
cpan install Env
yum install uuid-devel libuuid-devel libxml2-devel
cpan install UUID
cpan install XML::LibXML
cpan Class::MethodMaker

and finally ran the install again:

./vmware-install.pl

I then got a message complaining about some missing / outdated modules:

The following Perl modules were found on the system but may be too old to work
with vSphere CLI:

MIME::Base64 3.14 or newer
LWP::Protocol::https 5.805 or newer
Socket6  0.23 or newer
IO::Socket::INET6 2.71 or newer

So we do an install:

cpan install MIME::Base64
cpan install LWP::Protocol::https
cpan install Socket6
cpan install IO::Socket::INET6

and we are ready to go!

Configure NGINX with Exchange 2010, 2013 and 2016 (including RPC / Outlook Anywhere access)

I have seen many threads on the internet with people complaining about RPC and Exchange (getting Outlook Anywhere to work.)

I have also seen several configurations all of which did not work correctly for me.

My configuration should work for 2010, 2013 and 2016:

server {
  listen 192.168.0.1:443 ssl;
  server_name owa.myserver.com;
  ssl_certificate /etc/nginx/ssl/cert.pem;
  ssl_certificate_key /etc/nginx/ssl/key.key;
  access_log  /var/log/nginx/mydomain.access.log  combined;
  error_log  /var/log/nginx/mydomain.error.log;
  client_max_body_size 3G;
  proxy_request_buffering off;
  ssl_session_timeout     5m;
  tcp_nodelay on;
    proxy_http_version      1.1;
    proxy_read_timeout      360;
    proxy_pass_header       Date;
    proxy_pass_header       Server;
    proxy_pass_header      Authorization;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For  $proxy_add_x_forwarded_for;
    proxy_pass_request_headers on;
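    # note: the two more_set_* directives below require the (third-party) headers-more-nginx-module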
    more_set_input_headers 'Authorization: $http_authorization';
    proxy_set_header Accept-Encoding "";
    more_set_headers -s 401 'WWW-Authenticate: Basic realm="fqdnofyourexchangeserver"';
    proxy_buffering off;
    proxy_set_header Connection "Keep-Alive";
  location / {
  return 301 https://owa.myserver.com/owa;
  }
  location ~* ^/owa { proxy_pass https://fqdnofyourexchangeserver; }
  location ~* ^/Microsoft-Server-ActiveSync { proxy_pass https://fqdnofyourexchangeserver; }
  location ~* ^/ecp { proxy_pass https://fqdnofyourexchangeserver; }
  location ~* ^/rpc { proxy_pass https://fqdnofyourexchangeserver; }
}
# redirect all http traffic to https
server {
  listen 80;
  server_name owa.myserver.com;
  return 301 https://$host$request_uri;
}

** Note: Remember to use 'Basic' authentication within the Outlook Anywhere connection setup - as NGINX does not support NTLM authentication - that is unless you have the commercial (NGINX Plus) edition! **

** Note 2: Also ensure that 'Windows Authentication' is disabled in your IIS application settings for EWS, OWA etc., as NGINX will return a 401 error if 'Basic Authentication' is not enabled! **

Wednesday 6 April 2016

How to remediate the logjam vulnerability with IIS

The Logjam attack works by downgrading the key strength used in the TLS connection via a man-in-the-middle attack.

This happens while the server and client are negotiating which cipher suites should be used - the MITM intercepts the client's list of supported cipher suites and strips out the strong ones, leaving only the less secure / vulnerable cipher suites.

Unfortunately (to my knowledge) it is not possible to increase the DH key size on Windows - so instead we should disable all Ephemeral Diffie-Hellman (DHE) cipher suites.

Fortunately this is fairly simple to do - we should firstly open up the local group policy console:

gpedit.msc

Then expand: Computer Configuration >> Administrative Templates >> Network >> SSL Configuration Settings.

Now double click the 'SSL Cipher Suite Order' setting and remove any DH or DHE entries from the string (i.e. cipher suites beginning with 'TLS_DHE'.)

For example, on a Windows Server 2008 R2 system the string looked as follows after removal of the DHE cipher suites:

TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P384,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384_P384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256_P384,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384_P384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256_P384,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P384,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P384,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_RC4_128_MD5,TLS_RSA_WITH_NULL_SHA256,TLS_RSA_WITH_NULL_SHA,SSL_CK_RC4_128_WITH_MD5,SSL_CK_DES_192_EDE3_CBC_WITH_MD5
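If you would rather script the change than edit the cipher suite string by hand, the Diffie-Hellman key exchange provider can also be disabled outright via the SCHANNEL registry keys - a sketch only (standard SCHANNEL registry path; take a backup first and reboot afterwards):

New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\KeyExchangeAlgorithms\Diffie-Hellman' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\KeyExchangeAlgorithms\Diffie-Hellman' -Name 'Enabled' -Value 0 -Type DWord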

Restart the server and then re-test it.

Performing a remote wipe of a device associated with a mailbox in Exchange 2007, 2010, 2013+

Exchange 2007 - 2010:

To view all devices associated with a specific mailbox we can issue:
Get-ActiveSyncDeviceStatistics -Mailbox "[email protected]"

and then, once you have found the relevant device, take a note of its "Identity" and run the following to remotely wipe the device when it next syncs with Exchange:

Clear-ActiveSyncDevice -Identity "domain.com/Users ../My User/ExchangeActiveSyncDevices/iPad"

Exchange 2013 and above:

The commands have changed with the release of Exchange 2013 - to list mobile devices associated with a mailbox we can issue:

Get-MobileDevice -Identity "MyUser"

and to clear it we can issue:

Clear-MobileDevice -Identity User1iPad -NotificationEmailAddresses "[email protected]"


For security purposes you will need to remove the device after the device has been successfully wiped. To do this we can run:

Remove-ActiveSyncDevice -Identity "domain.com/Users ../My User/ExchangeActiveSyncDevices/iPad"

or

Remove-MobileDevice -Identity User1iPad

Tuesday 5 April 2016

Increasing the size of a virtual disk with CentOS / LVM

This assumes you have already expanded the virtual disk / VMDK within VMware vSphere, Player or Workstation.

If you have the ability to dynamically increase the disk size (i.e. without rebooting the system) we will need to rescan the SCSI bus - so we should find out what the device is with:

ls /sys/class/scsi_device/

and then tell the kernel to rescan it:

echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan

Now leave it for a minute and the changes should then be reflected. (You can verify this with 'fdisk -l'.)

We should now proceed by creating a new partition (you can alternatively delete the existing partition and recreate it larger - although this will require unmounting the filesystem, and if it is the OS disk you would need to do it from a live CD).

fdisk /dev/sdX

** Note: ensure that there are not already four primary partitions on the disk (the MBR maximum) or this will fail **

'n' (for new partition.)

'p' (for primary partition type.)

You can usually accept the defaults for the sector size as fdisk will typically use all available sectors.

't' (to change the partition type.)

and ensure '8e' (Linux LVM) is defined.

Finally hit 'w' to write the changes.

We now need to get the kernel to pickup the new partition tables / block device (/dev/sda3 in my case) - we can either reboot the system or simply use the partprobe command:

partprobe -s

We then want to create a new physical volume for LVM:

pvcreate /dev/sda3

and run pvdisplay to review the addition:

pvdisplay

We now want to add the new physical volume to our volume group - we should firstly identify what it's called with:

vgdisplay

In my case it's 'centos.'

So to add the physical volume in we issue:

vgextend centos /dev/sda3

and then issue lvdisplay to get the relevant logical volume we wish to extend:

lvdisplay

In my case it's called '/dev/centos/root' - so to extend it over the new physical volume (lvextend requires a size, so here we take all of the remaining free space in the volume group) we issue:

lvextend -l +100%FREE /dev/centos/root /dev/sda3

and finally resize the file system:

resize2fs /dev/mapper/centos-root

OR if you are using CentOS 7 - which uses XFS by default you will need to use:

xfs_growfs /dev/mapper/centos-root

Keeping your App Pools under control with IIS (CPU Monitoring)

IIS provides a built-in feature for monitoring and automatically restarting an app pool if a rogue process decides to consume a large amount of CPU time.

In order to set this up we should go to the IIS Console >> Server >> Application Pools, right-click on the relevant application pool and hit 'Advanced Settings'.

From here we have a 'CPU' section - we are looking at the following specific settings:

'Limit': Defines the threshold at which the 'Limit Action' will occur - this is measured in 1/1000ths of a percent, e.g. 10000 equates to a threshold of 10%.

'Limit Action': This setting defines what will happen when the 'Limit' threshold is reached - there are a few options to choose from: KillW3wp (restarts the application pool), Throttle (the app pool's CPU time will be throttled according to what is defined in 'Limit') and ThrottleUnderLoad (which is similar to the 'Throttle' option but only actually throttles when there is contention on the CPU).
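The same settings can also be applied from the command line with appcmd - a rough sketch, assuming an application pool named 'MyAppPool' and a 10% limit (10000 in 1/1000ths of a percent):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /cpu.limit:10000 /cpu.action:KillW3wp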


Setting up and using X Forwarding

SSH provides the ability to tunnel X sessions, allowing remote administrators to run graphical applications without having to log on at the console.

Firstly, on the machine we wish to connect to, we should ensure X11 forwarding is enabled within the SSH daemon configuration:

sudo nano /etc/ssh/sshd_config

and ensure the following line is present / uncommented:

X11Forwarding yes

We should also ensure that the user we will be connecting as has the ability to launch X sessions (otherwise we might see something like: 'X: user not authorized to run the X server, aborting.') - this is defined within /etc/X11/Xwrapper.config.

sudo nano /etc/X11/Xwrapper.config

There are three available values - root, anybody and console - and it is often set to console only - so as we are performing this remotely we will need to set it to anybody.

Now on the remote machine we should ensure that the SSH client is configured to allow X11 forwarding:

sudo nano /etc/ssh/ssh_config

and ensure the following is present / uncommented:

Host *
ForwardX11 yes
ForwardAgent no

(You should ideally specify the specific host rather than using an asterisk, to tighten up security.)

and then SSH to the host running X11 (using the -X switch):

ssh -X [email protected]

Once logged in, simply launch an X application (e.g. xterm) from the prompt and it will be displayed on your local machine!

Sunday 3 April 2016

Shrinking the tempdb in SQL Server

The tempdb is (as the name suggests) a temporary database that is re-created every time the SQL server instance is restarted.

It is typically used to store global and local temporary tables, temporary stored procedures and so on.

There are also some major restrictions (or differences) compared to standard SQL databases:

- The tempdb can't be dropped.
- You are not able to perform a backup on it.
- You are unable to change its recovery model.

It is usually a good idea to ensure that the tempdb has its own separate partition. By default the database is 8 MB in size (with a log file size of 1 MB) and autogrowth is enabled (at 10%).

In some scenarios it might be necessary to shrink the tempdb - we have two options: restart the SQL Server instance (tempdb is re-created at its configured initial size on startup), or shrink it online.

To do the latter without restarting SQL Server we can do the following:

** Warning: Performing the following will (to some degree) temporarily degrade performance of the SQL server - read below for more information. **

1. Clear the procedure cache with FREEPROCCACHE. By doing this, all queries and stored procedures that were cached will have to be recompiled - hence degrading performance:

DBCC FREEPROCCACHE;
GO

2. The DROPCLEANBUFFERS will clear cached indexes and data pages:

DBCC DROPCLEANBUFFERS;
GO

3. The FREESYSTEMCACHE command will clear unused cached entries for the system:

DBCC FREESYSTEMCACHE ('ALL');
GO

4. The FREESESSIONCACHE command clears the distributed query connection cache used between SQL Servers.

DBCC FREESESSIONCACHE;
GO

5. ** Before performing the following, ensure that there are no open transactions against the tempdb, as shrinking it could otherwise corrupt the DB! **

DBCC SHRINKFILE (TEMPDEV, 1024); -- target size in MB
GO
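Once the shrink has completed you can confirm the new file sizes with a quick query against the tempdb system catalog (size is reported in 8 KB pages, so dividing by 128 gives MB):

SELECT name, size/128 AS size_mb
FROM tempdb.sys.database_files;
GO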