Thursday 30 April 2015

Exchange 2013: Job is waiting for resource reservation. MRS will continue trying to pick up this request.

Job is waiting for resource reservation. MRS will continue trying to pick up this request. Details: Resource reservation failed for 'Mailbox Database/MdbWrite' (CiAgeOfLastNotification(Managed Services)): load ratio X.XXXXXXX, load state 'Critical', metric 2147483647. This resource is currently unhealthy.

This message simply indicates that the server's resources are currently under pressure.

Resolution: Suspend any other resource-intensive operations or wait until an off-peak time to resume the migration with:

Resume-MoveRequest -Identity ""

RDM (Raw Device Mapping): Physical vs Virtual

RDM (Raw Device Mapping) allows your VM to have direct access to a disk / LUN - a special mapping file on a VMFS volume proxies I/O between the VM and the physical device.

When setting up an RDM with a VM there are two compatibility modes: Physical or Virtual.

Physical: Allows the guest operating system to access the storage device directly as if it were attached to a physical machine. This mode is useful when utilizing SAN-aware software on the VM, although it also has limitations - for example, you cannot take snapshots of the disk or perform storage migration on it.

Virtual: Supports full virtualization of the mapped device - meaning that features such as snapshots, vMotion and so forth are fully supported. However, the real characteristics of the drive are hidden and the guest operating system sees the disk as it would a typical virtual disk file in a VMFS volume.
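For reference, an RDM can also be attached from PowerCLI. The sketch below assumes example VM, host and LUN names (vm01, esx01.lab.local and the naa identifier) - substitute your own:

```powershell
# Connect to vCenter (prompts for credentials)
Connect-VIServer -Server "vcenter.lab.local"

# Identify the LUN's console device name on the host (example host / LUN names)
$lun = Get-ScsiLun -VmHost "esx01.lab.local" -LunType disk |
       Where-Object { $_.CanonicalName -eq "naa.600508b1001c0000" }

# Attach it as a physical-mode RDM; use -DiskType RawVirtual for virtual compatibility mode
New-HardDisk -VM "vm01" -DiskType RawPhysical -DeviceName $lun.ConsoleDeviceName
```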

Disabling SMB Signing in Server 2012 R2

Disabling SMB signing can dramatically increase I/O performance over SMB shares - in order to do this we can use group policy:

Go to Computer Configuration/Policies/Windows Settings/Security Settings/Local Policies/Security Options, and set Domain member: Digitally encrypt or sign secure channel data (always) and Microsoft network server: Digitally sign communications (always) to Disabled.

You should now do a gpupdate /force and drop any existing SMB sessions with net use * /delete, OR restart the computer for the changes to take effect.
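On Server 2012 R2 the same change can also be made per-server with the SMB cmdlets rather than group policy - a sketch affecting only the local machine (group policy will re-apply over the top if it conflicts):

```powershell
# Stop the server side from requiring (or offering) SMB signing
Set-SmbServerConfiguration -RequireSecuritySignature $false -EnableSecuritySignature $false -Force

# And stop the client side from requiring it
Set-SmbClientConfiguration -RequireSecuritySignature $false -Force
```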

Enabling Jumbo Frames on VMware and Windows

Once you have enabled jumbo frames on the physical switch, you should proceed by enabling them within vSphere and the VM.

To turn on jumbo frames for a specific vSwitch within vSphere: From within the vSphere Web Client go to: Manage >> Networking >> Virtual Switches >> Edit settings of desired vSwitch >> Properties >> Change the MTU parameter. This configuration will then apply to all physical NICs connected to the vSwitch.

To turn on jumbo frames for a specific VM running Server 2012:
From within Device Manager identify the virtual NIC - right-click >> Properties >> Advanced >> Jumbo Packet.
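Both changes can be scripted - the vSwitch MTU via PowerCLI and the guest NIC via the NetAdapter cmdlets. Note the advanced-property display value varies by NIC driver (the "9014 Bytes" string below is typical but not universal), so treat this as a sketch with assumed host / switch / adapter names:

```powershell
# vSphere side: raise the MTU on the vSwitch (PowerCLI)
Get-VirtualSwitch -VMHost "esx01.lab.local" -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000

# Guest side (Server 2012): list the valid jumbo values first, as they differ per driver
Get-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Jumbo Packet"

# Then enable jumbo frames on the vNIC
Set-NetAdapterAdvancedProperty -Name "Ethernet0" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
```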

Optimizing your backup speeds

I have listed a series of areas that can help you increase backup performance and avoid over-running a maintenance window:


- Enable port aggregation on your switches, creating an EtherChannel with something like LACP or PAgP (Cisco switches only).

- Enable jumbo frames - by increasing the frame size you reduce header overhead, although on unreliable connections this can be more trouble than it's worth!

Disk Configuration:

- Utilize fast media such as SSD or 15k SAS drives.

- Consider modifying RAID configuration for faster read / write speeds e.g. RAID 0, RAID 10, RAID 50.

- Perform defragmentation on the disks.

OS Configuration:

- Disable SMB signing: if you are backing up over SMB this can dramatically increase performance. For instructions see the section above.

- Configure NIC teaming in Windows Server 2012
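NIC teaming in Server 2012 can be configured in one line - a sketch assuming two member NICs named NIC1 / NIC2 and a hypothetical team name:

```powershell
# Create a switch-independent team with dynamic load balancing
New-NetLbfoTeam -Name "BackupTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```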

Removing an ESXI host from a cluster in vSphere 6

In order to remove a host from a cluster, simply put the host in question into maintenance mode.

The VMs on the host will either be migrated to another host in the cluster or remain on the host and, as a result, be removed from the cluster along with it.
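The same can be done from PowerCLI - a sketch assuming example host and datacenter names:

```powershell
# Put the host into maintenance mode (VMs are evacuated automatically if DRS is fully automated)
Set-VMHost -VMHost "esx02.lab.local" -State Maintenance

# Then move it out of the cluster, e.g. back up to the datacenter level
Move-VMHost -VMHost "esx02.lab.local" -Destination (Get-Datacenter "Lab")
```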

Wednesday 29 April 2015

Insufficient resources to satisfy configured failover level for vSphere HA.

I encountered the following error message while attempting to power on an FT VM in my lab environment:

Insufficient resources to satisfy configured failover level for vSphere HA.

While in a production environment you should obviously remediate this as soon as possible, as a temporary workaround you can disable Admission Control by going to:

Inventory >> Hosts and Clusters >> Right-hand click the Cluster >> Edit Settings... >> vSphere HA >> Admission Control >> Disable.
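The equivalent from PowerCLI - a sketch assuming the cluster name:

```powershell
# Temporarily disable HA admission control on the cluster
Set-Cluster -Cluster "Cluster1" -HAAdmissionControlEnabled:$false -Confirm:$false

# ...and remember to re-enable it once resources have been remediated
Set-Cluster -Cluster "Cluster1" -HAAdmissionControlEnabled:$true -Confirm:$false
```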

Thick Provision Lazy Zeroed vs Thick Provision Eager Zeroed

Thick Provision Lazy Zeroed: Consumes the allocated disk space at creation time, although data remaining on the underlying storage is not wiped - rather, it is zeroed out on demand later. Hence old data could theoretically be retrieved.

Thick Provision Eager Zeroed: Consumes the allocated disk space at creation time, although in contrast to the lazy zeroed approach, remaining data on the disk is zeroed out up front. This option is best for performance, although it takes much longer to initially create the disk.
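When creating disks from PowerCLI the provisioning type is selected with -StorageFormat - a sketch with an assumed VM name:

```powershell
# Eager zeroed must be requested explicitly:
New-HardDisk -VM "vm01" -CapacityGB 40 -StorageFormat EagerZeroedThick

# For comparison, 'Thick' gives you a lazy zeroed disk:
New-HardDisk -VM "vm01" -CapacityGB 40 -StorageFormat Thick
```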

Setup and deploy a Fault Tolerant VM on vSphere 6.0

Fault tolerance provides real-time replication of a VM to a secondary VM (which sits on another host) - allowing the identical secondary copy of the VM to take over if the primary VM's host goes offline.

While this can be extremely useful in some circumstances, there are a few drawbacks to note:

- CPU architectures / vendors must be compatible with each other (i.e. they must be in the same FT processor group).
- If the primary VM crashes (e.g. a BSOD) the secondary dies too! This is where a load-balanced solution has the edge.

Before enabling fault tolerance (which I will refer to as FT from here on) on a cluster there are specific pre-requisites that must be met.

The two ESXi hosts should be part of a HA cluster - to set this up we should go to:

Inventory >> Hosts and Clusters >> Right-click on our Datacentre >> New Cluster and make sure "Turn on vSphere HA" is selected, configuring the failover settings and so forth.

We should now drag and drop our two ESXi hosts directly onto the cluster.

vSphere FT imposes the following cluster requirements:

- Both ESXi hosts should have access to the same VM datastores and networks.

- Host certificate checking must be enabled within vCenter - this can be performed within the vSphere client >> Administration >> vCenter Server Settings >>

- Fault Tolerance logging and vMotion networking configured: this can be performed by adding a VMkernel connection via a point-to-point link between the two ESXi hosts. To set this up we go to Inventory >> Hosts and Clusters >> ESXi Host >> Configuration >> Networking >> Hardware >> Add Networking... >> VMkernel >> Create a vSphere standard switch >> Under connection types ensure that you only check either "Use this port for vMotion" or "Use this port for Fault Tolerance Logging" - as they can't reside on the same subnet! i.e. you will need to perform this process for both vMotion and Fault Tolerance Logging on each ESXi host.

Since they are both going to be point-to-point links, we can address the vMotion and FT Logging networks with something like a /30 subnet each.
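The two VMkernel adapters can also be created from PowerCLI - a sketch with assumed switch / port group names and point-to-point addressing, run once per host (with the second host taking the .2 addresses):

```powershell
# FT logging VMkernel port on its own p2p subnet
New-VMHostNetworkAdapter -VMHost "esx01.lab.local" -VirtualSwitch "vSwitch2" `
    -PortGroup "FT-Logging" -IP "10.0.1.1" -SubnetMask "255.255.255.252" `
    -FaultToleranceLoggingEnabled:$true

# vMotion VMkernel port on a separate p2p subnet
New-VMHostNetworkAdapter -VMHost "esx01.lab.local" -VirtualSwitch "vSwitch3" `
    -PortGroup "vMotion" -IP "10.0.2.1" -SubnetMask "255.255.255.252" `
    -VMotionEnabled:$true
```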

We also have specific host requirements we must meet:

- As mentioned above, hosts must have processors from the same FT-compatible processor group.

- Appropriate licensing must be in place to enable the FT technology on the host.

Finally, the VMs themselves have requirements that must be met:

- Incompatible features / devices: such as RDMs, physical CD/DVD-ROM drives, USB devices, serial ports and so on (see VMware's documentation for the full list).

- VMs must be stored on shared media such as Fibre Channel, iSCSI, NFS, or NAS.

- Prior to vSphere 6.0, virtual machines were limited to 1 vCPU in order to work with Fault Tolerance; vSphere 6.0 raises this limit to 4 vCPUs (licensing dependent).

Now that all of the initial FT pre-requisites and setup have been performed, we can FT-enable one of our VMs:

Firstly ensure that your VM is turned off! Now from the vSphere web interface simply right-click on the relevant machine >> Fault Tolerance >> and hit Enable. We will be prompted to indicate where we wish to store the secondary copy of our VM (which should be on shared storage on another device!)

Install and Configure Veeam Backup and Availability Suite

We should install a fresh copy of Server 2012 R2 - making sure we have separated the backup disk(s) and the system disk (e.g. C:\ and D:\). Depending on our disk options we will probably want to enable data de-duplication on our backup disk - so we go to:

Server Manager >> Add Roles and Features >> File and Storage Services >> File and iSCSI Services >> Data-deduplication

We should then setup the data-deduplication on our backup drive (D:\ in this case.):

Enable-DedupVolume D: -UsageType Default

We should proceed by installing Veeam (VeeamAvailabilitySuite), also specifying our (trial) licensing key (veeam_availability_suite_trial_32_32.lic).

Once installed we should proceed by adding a new "Managed Server" - in our case an ESXi host - but before we add this server we must enable SSH on our ESXi host. In order to do this we should go to:

Inventory >> Hosts and Clusters >> ESXi Host >> Configuration >> Software >> Security Profile >> Services >> Properties >> SSH >> Options >> Start with host. (Make sure you manually start it as well!)
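SSH can also be enabled from PowerCLI rather than the GUI - a sketch assuming the host name:

```powershell
# Start the SSH service and set it to start with the host
$ssh = Get-VMHostService -VMHost "esx01.lab.local" | Where-Object { $_.Key -eq "TSM-SSH" }
Start-VMHostService -HostService $ssh
Set-VMHostService -HostService $ssh -Policy "On"
```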

We should now add our vSphere or vCenter host: Managed Servers >> VMware vSphere and enter the relevant details (web services login, SSH login etc.)

We will now add a backup repository: Backup Infrastructure >> Right-click Backup Repositories >> Add Backup Repository.

We can then create a quick backup job by going to: Backup and Replication >> Right-click Jobs >> Backup... and select one of our VMs on our ESXi host.

Creating and attaching a SAN via iSCSI to an ESXI host

For this environment we will have a Server 2012 R2 box hosting the iSCSI target server and an ESXi box that will act as the initiator.

We should firstly install the File and Storage Service roles on our Server 2012 R2 box and ensure that the "iSCSI Target Server" component is also checked.

We then go to Server Manager >> File and Storage Services >> Servers >> iSCSI >> New iSCSI Virtual Disk - configuring the name, location, size and so on.

We should create a target name and also specify the access servers by their IQN or simply by the DNS name / IP address.
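The target-side steps can equally be scripted with the iSCSI Target cmdlets on the 2012 R2 box - a sketch with assumed paths, target name and initiator IQN:

```powershell
# Create the backing VHDX for the LUN
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\LUN1.vhdx" -SizeBytes 100GB

# Create the target and restrict access to the ESXi initiator's IQN
New-IscsiServerTarget -TargetName "ESXiTarget" `
    -InitiatorIds "IQN:iqn.1998-01.com.vmware:esx01-12345678"

# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "ESXiTarget" -Path "D:\iSCSIVirtualDisks\LUN1.vhdx"
```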

Then from within vSphere we shall connect to the target server and add the LUN: Inventory >> Hosts and Clusters >> ESXi Host >> Configuration >> Storage Adapters >> Right-click on the iSCSI Software Adapter, go to Dynamic Discovery and add the IP address of the 2012 R2 iSCSI server.

We should then re-scan the iSCSI bus and go to: Storage >> Add Storage ... >> Disk / LUN >> Add Storage and select our LUN.

Creating a Server 2012 R2 VM template in vSphere 6.0

Within vSphere 6.0 (and 5.0 / 5.5) you have the ability to create VM templates that can be utilized in scenarios where you have multiple operating systems that need to be deployed.

There are three ways of creating a VM template:

- Converting a virtual machine to a template in the vSphere Client
- Cloning a virtual machine to a template in the vSphere Client
- Cloning an existing template in the vSphere Client

For this tutorial I will be creating an initial Server 2012 R2 VM with the following specs:

Operating System: Windows Server 2012 R2
Disk Controller (OS / Boot Disk): LSI Logic SAS (must not be part of a MS cluster!)
Secondary Disks (Non-boot disks): Paravirtual Controller
Disk: 40GB, Thick Provision Eager Zeroed, SCSI Node 0:0
Floppy Drive 1: Removed
CPU Virtual Sockets: 1
Cores per Socket: 2
VMware Tools: Check and upgrade Tools during power cycling
Virtual Hardware Version: 11

We will make the following tweaks to the VM bios settings:

Advanced >> I/O Device Configuration:

- Serial Port A = Disabled
- Serial Port B = Disabled
- Parallel Port = Disabled

We can also make the following configurations to the OS itself:

- Enable Remote Desktop
- Change the computer name e.g. 2012R2-TEMPLATE
- Perform a Windows Update
- Internet Explorer Enhanced Security (Admins) = Turned off
- Internet Explorer Enhanced Security (Users) = Turned on
- Local User Accounts >> Administrator >> Password Never Expires = Checked.
- Power Plans >> High Performance = Checked
- Disable Hibernation: powercfg.exe -h off
- Folder options >> Show hidden files, folders and drives = Checked
- Folder options >> Hide extensions for known file types = Unchecked
- Event Logs: Clear System, Security, Application
- Start bar, desktop customizations.
- Defragment the C: / system drive.

I would like to ensure that the customizations I have made to my administrator profile are replicated throughout all users' environments - so we can use a utility called defprof that copies a specified user's profile data to the default Windows user profile like so:

defprof source_profile_folder

We should then shutdown the VM.

We can now take advantage of the Customization Specifications Manager which provides us with the ability to further customize certain Windows and Linux distributions by going to the vSphere Web Client >> Customization Specifications Manager >> Create a new specification:

Target VM Operating System: Windows
Customization Spec Name: Windows 2012 R2 Base
Set Computer Name = Enter a name in the Clone / Deploy Wizard
Domain Settings
Set Operating System Options >> Generate New Security ID (SID) = Checked.
and so on...

We can now simply convert our VM to a template by right-clicking on the VM and selecting Template >> Clone to Template.

Then to create a new VM from this template - right-click on the template and select "Deploy virtual machine from this template". During the wizard, ensure that in the "Guest Optimization" section you select "Customize using an existing customization specification."
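The specification and the deployment can both be driven from PowerCLI as well - a sketch with assumed names (vm name, host, org details), roughly matching the wizard settings above:

```powershell
# Build a customization spec: generate a new SID and prompt for the computer name at deploy time
New-OSCustomizationSpec -Name "Windows 2012 R2 Base" -OSType Windows `
    -FullName "Administrator" -OrgName "Lab" -Workgroup "WORKGROUP" `
    -ChangeSid -NamingScheme Prompt

# Deploy a new VM from the template using that spec
New-VM -Name "web01" -Template (Get-Template "2012R2-TEMPLATE") `
    -VMHost "esx01.lab.local" -OSCustomizationSpec "Windows 2012 R2 Base"
```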

Tuesday 28 April 2015

Troubleshooting the user access problems to the mailbox server in Exchange 2013

I have put together a list of quick checks that can be performed on a mailbox server if users are experiencing issues accessing their mailbox.

Check networking settings (DNS, default gateway) on the client's workstation:

Run outlook.exe with the rpcdiag switch:

outlook.exe /rpcdiag

and verify with a network sniffer such as Process Hacker or System Explorer (or netstat) that you can see RPC requests (over TCP port 443, i.e. RPC over HTTPS) connected to your Exchange server.

Check that the mailbox database associated with the user is mounted:

Get-Mailbox "" | FL *Database*
Get-MailboxDatabaseCopyStatus -Identity "Database Name"

Ensure there is MAPI connectivity to the mailbox - this can be performed using the Test-MapiConnectivity cmdlet which indirectly checks the Exchange Store / Store Worker, MAPI Server and Directory Service Access:

Test-MapiConnectivity -Identity ""
OR for testing system mailboxes of a specific database we issue:
Test-MapiConnectivity -Database "Database Name"
OR for testing system mailboxes of a specific server we issue:
Test-MapiConnectivity -Server "server-name"

Check the throttling policy applied to the user - specifically the RCA* attributes (RPC Client Access limits):

Get-ThrottlingPolicy | FL *RCA*

Monday 27 April 2015

Creating Device Collection with SCCM 2012 to categorise Hardware

Device collections provide us with a way of categorizing hardware so we can perform operations such as applying driver updates to the appropriate hardware.

In order to classify the hardware we need to find out something distinguishable about the laptop, e.g. its model number. To do this we should use the Resource Explorer by going to Assets and Compliance >> Devices >> Device Name >> right-click >> Start >> Resource Explorer. We should now expand the Hardware node and then the Computer System node; there should be an attribute named "Model".

We should create a new query (Monitoring >> Overview >> Queries >> Create Query) as follows:

select distinct SMS_G_System_COMPUTER_SYSTEM.Manufacturer, SMS_G_System_COMPUTER_SYSTEM.Model from  SMS_R_System inner join SMS_G_System_COMPUTER_SYSTEM on SMS_G_System_COMPUTER_SYSTEM.ResourceID = SMS_R_System.ResourceId

We can then proceed to create a device collection based on the results of this query.

So we should go to Assets and Compliance >> Overview >> Device Collections >> Create Device Collection.

In this example I will be creating a device collection to identify all of the Dell Latitude E7440 laptops - so by selecting an appropriate name such as "Latitude E7440" and choosing a limiting collection e.g. "Company Laptops", we proceed to add a membership rule. A membership rule simply defines a set of criteria that the hardware in scope (the limiting collection) should match in order to become a member of the collection.

We should select "Query Rule" and then "Import Query Statement" - selecting the query we created earlier - and then "Edit Query Statement". Here we should select the "Criteria" tab and create a "New Criteria" as follows:

Criterion Type: Simple Value
Where: Attribute Class = Computer System, Attribute = Model
Operator: is equal to
Value: Latitude E7440

That's it! Refreshing the collection should now return all laptops with the model ID: E7440.

Monitoring task sequence deployments (the easy way!)

More often than not I refer to the TS log files that reside on the client and/or distribution point - although using SCCM's built in reporting functionality we can track 'status messages' that can also help us to diagnose / debug task sequence statuses.

Go to Monitoring >> Overview >> Deployments and identify the "Deployment ID", e.g. S0111111.

We should then proceed to Monitoring >> Overview >> System Status >> Status Message Queries >> right click and choose Create Status Message Query.

Name: Deployment History for Windows 8.1 Task Sequence

Proceed by going to Edit Query Statement >> Show Query Language and enter the following query:

select SMS_StatusMessage.*, SMS_StatMsgInsStrings.*, SMS_StatMsgAttributes.*
from SMS_StatusMessage
left join SMS_StatMsgInsStrings
on SMS_StatMsgInsStrings.RecordID = SMS_StatusMessage.RecordID
left join SMS_StatMsgAttributes
on SMS_StatMsgAttributes.RecordID = SMS_StatusMessage.RecordID
where SMS_StatMsgAttributes.AttributeID = 401 and SMS_StatMsgAttributes.AttributeValue = "P0120125"
and SMS_StatMsgAttributes.AttributeTime >= ##PRM:SMS_StatMsgAttributes.AttributeTime## order by SMS_StatMsgAttributes.AttributeTime DESC

Upon creation, simply right-click on the Status Message Query and select Show Messages.

Keeping Dell computers up to date / applying driver updates with SCCM

We will now need to deploy the driver set(s) to the relevant machines. The easiest way to perform this is to create a task sequence to deploy the updates - using the dpinst.exe utility from the Windows Driver Kit.

So we should download the relevant drivers and ideally create a driver pack. We can then use Content Library Explorer to make some sense of the folders within the driver store and ultimately help us identify which driver is contained in which folder.

So we shall firstly create a new package for the driver:

Software Library >> Overview >> Application Management >> Packages >> Add Package.

Package Name: Latitude E7440 Driver Package 1.0
This package contains source files = Ticked.
Select the root folder of where your drivers for the E7440 are stored.

We should also place "dpinst.exe" (from the Windows Driver Kit) in the package source and create an xml file called "DPInst.xml" with the following in:

<?xml version="1.0" ?>

Make sure you save the file with UTF-8 encoding.

These two files should be placed in the root of the driver package directory!

Next >> Do not create a program >> Next >> Close.

We should then distribute the content to the relevent Distribution Points. We will proceed by creating a new task sequence:

Software Library >> Overview >> Operating Systems >> Task Sequences >> New Task Sequence >> "Create a new custom task sequence"

We will now edit the new task sequence and add a new phase to it e.g.:

"New Command Line" >> Name: Update Graphics Card Driver >> Command line: dpinst.exe /S /SA /SE /SW /F

The working directory should be set to the directory where the package content resides.

Tick the "Package" checkbox and select the package we created earlier.

And finally make sure "Enable Continue on Error" is ticked!

We should then simply deploy this task sequence to a collection group that holds all of our E7440 laptops.

Integrating hardware drivers into SCCM 2012

We should firstly identify the relevant models within our organization.

For the purpose of this tutorial we will simply use the Latitude E7440.

Dell have released a utility called "Driver Pack Catalog" that we can use to automatically download the relevent drivers for our Dell models:

This provides us with a way of ensuring we get the latest updates from Dell - in a quick, efficient and automated fashion.

We can utilize a script by Dustin Hedges that will do the hard work for us:

Firstly make sure script-signing is disabled temporarily and execute the following within PowerShell for a specific model:

.\Download-DellDriverPacks.ps1 -DownloadFolder "D:\Drivers\Downloads" -TargetModel "Latitude E7440" -TargetOS Windows_8.1_64-bit -Verbose

or for WinPE drivers:

.\Download-DellDriverPacks.ps1 -DownloadFolder "D:\Drivers\Downloads" -TargetOS 64-bit_-_WinPE_5.0 -Verbose

We should extract the downloaded .cab file(s) to a location accessible by SCCM e.g. C:\Sources\Drivers\Dell

We can now import the cab file(s) into SCCM from Configuration Manager Console >> Software Library >> Operating Systems >> Right-click 'Drivers' >> Import Drivers and specify the folder where we copied the .cab contents.

Make sure "Import the driver and append a new category to the existing categories" is selected >> Next >> and ensure "Enable these drivers and allow computers to install them" is ticked >> Next >> New Package... >> Enter a name for the new driver package and select a location where its contents (and future additions) will be stored e.g.:

C:\Sources\Drivers Packages\Dell

Make sure "Update distribution points when finished" is ticked >> Next >> Add them to any boot images if necessary >> Next >> Next >> SCCM should then begin to import the drivers.

Finally right-click on the new driver package, select "Distribute Content" and choose which Distribution Points you wish to publish it to.

Thursday 23 April 2015

Clearing out (saving space) old log files in Exchange 2013

Database log files can soon grow wildly out of control and consume all of your disk space if they are not monitored properly. The following will demonstrate how you can remove log files that are no longer required by the database, in order to clear some disk space.

We should firstly identify the database location (.edb) and the log file directory:

Get-MailboxDatabase -Identity "Database Name" | FT LogFolderPath,EdbFilePath

We should then use eseutil to identify the last checkpoint. A checkpoint simply tells us where the database is up to with the logs it has consumed / read - although we must firstly dismount the database:

Dismount-Database <database-name>

and then find the last checkpoint (pointing it at the .CHK file within the log file directory!) for an offline database:

eseutil /mk "D:\Database1\Logs\E01.chk"

OR for an online database:

eseutil /mk "D:\Database1\Logs\E01.chk" /vss /vssrec eNN "D:\Database1\Logs\"

Within the output you should see something like:

Checkpoint: (0xDA8EE,C0,0)

The hexadecimal value 'DA8EE' represents the newest logfile that has not been committed, i.e. anything older than this can be deleted!

So in this example we would delete DA8ED.log, DA8EC.log and so on.
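To avoid fat-fingering the hex arithmetic, here is a quick sketch that lists the log files older than the checkpoint generation (the log path and E01 prefix are assumptions - review the output before deleting anything):

```powershell
# Checkpoint generation reported by eseutil /mk, e.g. (0xDA8EE,C0,0)
$checkpoint = [Convert]::ToInt64("DA8EE", 16)

# Log file names are the ENN prefix followed by a hex generation number
Get-ChildItem "D:\Database1\Logs\E01*.log" | Where-Object {
    # Skip the current log (E01.log), then compare generations
    $_.BaseName.Length -gt 3 -and
    [Convert]::ToInt64($_.BaseName.Substring(3), 16) -lt $checkpoint
}   # pipe to Remove-Item only once you are happy with the list
```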

Wednesday 22 April 2015

Identifying corrupted mailboxes in Exchange 2013

We can use the -DetectOnly switch of New-MailboxRepairRequest in order to identify whether any of the mailboxes within a database have been corrupted.

To perform a mass check of all the mailboxes in a database we can use:

Get-MailboxDatabase -Identity "database-name" | Get-Mailbox |  New-MailboxRepairRequest -CorruptionType ProvisionedFolder,SearchFolder,FolderView,AggregateCounts -DetectOnly

or to perform a check on a single mailbox:

New-MailboxRepairRequest -Mailbox "" -CorruptionType ProvisionedFolder,SearchFolder,FolderView,AggregateCounts -DetectOnly

And to check the progress of the request:

Get-MailboxRepairRequest -Mailbox "Ann Beebe" | FL

or to check all of them for a specific mailbox database we can use the following PowerShell script:

$mailboxes = Get-MailboxDatabase -Identity "database-name" | Get-Mailbox
foreach ($mailbox in $mailboxes) {
  $MailboxGuid = Get-MailboxStatistics $mailbox
  Get-MailboxRepairRequest -Database $MailboxGuid.Database -StoreMailbox $MailboxGuid.MailboxGuid
}

Checking for database corruption with eseutil in Exchange 2013

eseutil /mh D:\database1\db.edb /vss /vssrec eNN "D:\database1\logs"

This will enable you to run a check on the database while it is mounted - using the /vss switch to utilize the Volume Shadow Copy Service and /vssrec to replay the existing logs (otherwise the database status would return "Dirty Shutdown" as the logs are missing!)

Save disk space by removing superseded / expired updates in SCCM 2012

We should firstly remove any expired updates from any deployments - this can be performed by going to the SCCM Console >> Software Library >> Overview >> Software Updates >> All Software Updates and adding "Expired" to the column view. Select all of the expired updates, right-click and choose "Edit Membership".

There is an internal clean-up process that is invoked 7 days after an update becomes redundant and is no longer assigned to any deployments.

Alternatively there is a script (provided by Ben Morris) that automates the entire task and works with SCCM 2012 R2.

Usage: .\Remove-SCCMExpiredSupersededContent_1.0.ps1 [PrimarySiteServerName]

Tuesday 21 April 2015

Understanding Transport Agents in Exchange 2013

Transport agents allow you to process / manipulate messages that go through the transport pipeline:

- Front End Transport service on Client Access Servers
- Transport Service on Mailbox Servers
- Mailbox Transport Service on Mailbox Servers

There are a series of default (Microsoft-created) transport agents, such as (to name a few):

- Malware Agent on the Mailbox Server
- Transport Rule Agent on the Mailbox Server
- Recipient Filter Agent on the Edge Server
- Sender Filter Agent on the Edge Server AND Mailbox Server.

Microsoft have also provided the ability for third-party vendors to create their own transport agents - for example for anti-spam products.

Agents also have an associated priority so that a specific transport agent can process mail first - note that a lower priority number means the agent runs earlier. For example some custom transport agents are required to run first - so we would make sure they have the highest priority e.g.

Set-TransportAgent -Identity "MyCustomTransportAgent" -TransportService FrontEnd -Priority 1

We can get a list of transport agents with the following command:

Get-TransportAgent
and we can disable / enable a transport agent with:

Disable-TransportAgent "Test App" -TransportService Hub

Enable-TransportAgent -Identity "Test App" -TransportService Hub

You can find out which transport agents are triggered by which event with:

Get-TransportPipeline
Understanding the Information Store in Exchange 2013

The information store controls access to mailbox databases (AKA private exchange store) and public folder databases (AKA public exchange store).

You will notice when creating a new database that you are required to restart the Microsoft Exchange Information Store so that it picks up the addition of the new database.

Previous versions of Exchange (e.g. 2007 / 2010) had a single information store process that managed all databases - meaning that if a single database caused a problem it could potentially affect all the other databases. This is why the "Managed Store" was introduced in Exchange 2013 - instead there is a single store service controller process (Microsoft.Exchange.Store.Service.exe, aka MSExchangeIS) and one worker process (Microsoft.Exchange.Store.Worker.exe) for each database that is mounted - and when a database is dismounted the dedicated / associated worker process is also killed.

The store service process controller (Microsoft.Exchange.Store.Service.exe) manages all of the store worker processes for all of the databases. If this process dies, all of the worker processes also die!

Each store worker process is responsible for executing RPC operations against its database.

You can easily identify which store worker belongs to each database simply by issuing:

Get-MailboxDatabase -Status | ft name, workerprocessid

You can then match the PID to a worker process using a tool such as Process Hacker or System Explorer.

Error: Receive connector Default Frontend rejected an incoming connection from IP address...

Error Message: Receive connector Default Frontend rejected an incoming connection from IP address The maximum number of connections per source (20) for this connector has been reached by this source IP address.

This error occurs when too many clients access the receive connector simultaneously. In my experience this is sometimes caused by a load balancer performing SNAT (all of the source IPs are that of the load balancer and NOT the original client IP address) - although in some busy environments you might also want to increase the receive connector's MaxInboundConnectionPerSource value:

Set-ReceiveConnector -Identity "Internet Receive Connector" -MaxInboundConnectionPerSource 500

Recovering mailbox database whitespace in Exchange 2013

Freeing up space in an Exchange database can be a daunting task - in this article I will discuss the process of cleaning up and clearing space within a mailbox database.

We should firstly take into account that when we delete mailboxes from our mailbox database they are not actually permanently deleted - rather they are put into (by default) a "soft-deleted" state and are referred to as "disconnected mailboxes". You should refer to this article in order to check and permanently delete any of these mailboxes.

Even with the above performed, the size of the NTFS database file does not decrease - the unused space within the file is commonly referred to as "white space". In order to check how much white space is contained within your database we issue the following:

Get-MailboxDatabase "Database1" -Status | select Name, AvailableNewMailboxSpace

Although this does NOT give a fully accurate picture of the whitespace - for that, the eseutil utility must be run against a dismounted database:

eseutil /ms <database-name>

You can also refer to the event ID 1221 in the event log to find out how much white space is available.

Now we will need to perform a defragmentation of the mailbox database in order to recover this space, with the help of eseutil again - although the following pre-requisites must be taken into account:

- The disk space available on the mailbox database drive should be: MailboxDatabaseSize - FreeSpace * 2.
- Backup the database!

We can now start the defragmentation process by firstly dismounting the database:

Dismount-Database <database-name>

Using eseutil to perform the defragmentation:

eseutil /d databasefile.edb /tD:\temp\tempdefrag.edb

Mounting the database after the operation completes:

Mount-Database <database-name>

And finally confirming the database whitespace:

Get-MailboxDatabase -Status | ft name,databasesize,availablenewmailboxspace -auto

Monday 20 April 2015

Permanently deleting disconnected mailboxes that have been deleted / migrated

When deleting / disabling a mailbox it becomes a "disconnected mailbox" and is retained until the mailbox database's retention period (the MailboxRetention property, 30 days by default) expires.

We should firstly identify the mailbox GUID of the deleted / migrated mailbox:

Get-MailboxDatabase | Get-MailboxStatistics | Where { $_.DisconnectReason -eq "Disabled" } | ft DisplayName,Database,DisconnectDate,MailboxGuid

Get-MailboxDatabase | Get-MailboxStatistics | Where { $_.DisconnectReason -eq "SoftDeleted" } | ft DisplayName,Database,DisconnectDate,MailboxGuid

To bulk-delete all soft-deleted mailboxes in a given database:

Get-MailboxStatistics -Database MBD01 | where {$_.DisconnectReason -eq "SoftDeleted"} | foreach {Remove-StoreMailbox -Database $_.database -Identity $_.mailboxguid -MailboxState SoftDeleted}

We can also permanently delete an individual disconnected mailbox that has been soft-deleted as follows:

Remove-StoreMailbox -Database MBD01 -Identity "2ab32ce3-fae1-4402-9489-c67e3ae173d3" -MailboxState SoftDeleted

OR (for a mailbox that has been disabled):

Remove-StoreMailbox -Database MBD01 -Identity "2ab32ce3-fae1-4402-9489-c67e3ae173d3" -MailboxState Disabled

Creating and assigning a certificate for encryption of a domain / non-domain instance of Data Protection Manager

If your DPM server is within the same domain as your CA you can perform the following:

DPM can take advantage of certificate-based encryption - to enable this we can duplicate the RAS and IAS Server template on our CA:

So we go to the Certification Authority snap-in >> right-click Certificate Templates and select Manage >> right-click on "RAS and IAS Server" and hit "Duplicate" >> name the certificate template "DPM Authentication" >> make sure "Publish certificate in Active Directory" is enabled >> and make sure "Allow private key to be exported" is enabled in the "Request Handling" tab.

We will then close the certificate templates management window and right-click on the "Certificate Templates" folder >> New >> Certificate Template to Issue >> "DPM Authentication".

Via Certificate Services Web Enrollment select "Advanced Certificate Request" and then "Create and submit a request to this CA" - the key should ideally be 2048-bit and must also be marked exportable!

Otherwise, if your DPM server is not within your local ADCS domain, you can generate a self-signed certificate within IIS and import it into the Computer certificate store under the DPMBackup store.

We should then modify the backup set and choose the "encryption of data" option - the backup process will automatically pick up the certificate stored in the DPMBackup store.

Considerations when defragmenting disks in Server 2012 R2

When creating virtual machines, consider creating the virtual disks as thick provisioned - doing this theoretically allocates the disk's blocks contiguously and prevents fragmentation - whereas with thin-provisioned disks growth is sporadic and a file's blocks will not always sit in the same block space.

Defragmenting drives will help block lookup times - since related block data will be close together. Also bear in mind the following when defragmenting a hard drive:

- Make sure free space is consolidated - this process ensures that all blocks which contain data are together and that blocks with free space are separated.

- Ensure the operation is performed during a period in which users are not active, e.g. a weekend.
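On Server 2012 R2 both steps can be run from the shell - a sketch (Optimize-Volume ships with the OS, and free space consolidation is exposed via the classic defrag utility's /X switch):

```powershell
# Analyse fragmentation first, then defragment the volume.
Optimize-Volume -DriveLetter D -Analyze -Verbose
Optimize-Volume -DriveLetter D -Defrag -Verbose

# Consolidate free space on the same volume.
defrag.exe D: /X
```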

Exchange 2013: Internal error while processing request

The FastFeeder component received a connection exception from FAST. Error details: System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: Internal error while processing request (Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is: Microsoft.Ceres.InteractionEngine.Component.ProcessingEngineException: Internal error while processing request
Event ID: 1006
This is due to the Exchange FAST Search component attempting to look up a security group called "ContentSubmitters" - which is not created automatically by Exchange 2013 and must be created manually - simply create a security group in the "Microsoft Exchange Security Groups" OU called "ContentSubmitters" and grant "Administrators" and "NetworkService" Full Control. Then restart the following services:

Microsoft Exchange Search
Microsoft Exchange Search Host Controller
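A sketch of the fix from the shell, assuming the ActiveDirectory module is available; the -Path DN is a placeholder for your own domain, the service short names are my assumption for the two display names above (confirm with Get-Service), and the group permissions still need to be granted via the Security tab as described:

```powershell
Import-Module ActiveDirectory

# Create the missing group (replace the -Path DN with your own domain).
New-ADGroup -Name "ContentSubmitters" -GroupScope Universal -GroupCategory Security `
    -Path "OU=Microsoft Exchange Security Groups,DC=example,DC=local"

# Restart the search services so the change is picked up.
Restart-Service MSExchangeFastSearch
Restart-Service HostControllerService
```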

Error: Couldn't read registry key:SOFTWARE\Microsoft\ExchangeServer\v15\ContentIndex\CatalogHealth

.\Troubleshoot-CI.ps1 -Database "database name" -Action DetectAndResolve

[PS] C:\Program Files\Microsoft\Exchange Server\V15\Scripts>.\Troubleshoot-CI.ps1
The troubleshooter failed with error:System.InvalidOperationException: Couldn't read registry key:SOFTWARE\Microsoft\ExchangeServer\v15\ContentIndex\CatalogHealth\{017222b7fdf254-4111-bd99-619687ghg1j96}At C:\Program Files\Microsoft\Exchange Server\V15\Scripts\CITSLibrary.ps1:1023 char:9

This is because the location of the catalog health information has moved in Exchange 2013, and the script calls "CITSLibrary.ps1", which contains the old location - it looks like Microsoft never updated the scripts.

At the time of writing I am not aware of any work-around to get Troubleshoot-CI.ps1 working.

Thursday 16 April 2015

Setting up Out-of-band (OOB) access in SCCM with Intel AMT

This is a pretty lengthy process, so I am going to summarise it below:

- Install Intel SCS Driver on all workstations and SCCM server
- Create certificate templates for AMT clients and SCCM OOB Server Role and our Web Server.
- Install the SCS Tools, perform discovery and provisioning.
- Install SCS Addon for SCCM Server, perform OOB management.

Machines with an Intel vPro chipset come equipped with a feature called AMT (Intel Active Management Technology). To my knowledge there is not a utility that can simply tell you whether or not you have an Intel vPro chipset - although in my experience the Intel sticker on your laptop / desktop will usually say something like "Intel Core i7 vPro" or "Intel Core i5 vPro" and so on (of course you could always check the manufacturer's website as well).

You should first enable / provision AMT via the BIOS under "Intel Management Engine BIOS Extension (MEBx)" (although this may vary by laptop / desktop manufacturer) - you will need to log in first - the default password is "admin".

We will now setup the following in AD:

- A user account (e.g. AMT_Provisioning) that will be used for AMT Provisioning
- A Universal security group for AMT Provisioned computers (e.g. AMT Provisioned Computers Sec)
- An OU for AMT-provisioned computers (e.g. AMT Provisioned Computers) *Do not place computers you are planning AMT provisioning for in this OU! The AMT provisioning process will create a separate object in here for you!*

We will then need to add two new site system roles (the enrollment point and the out of band service point):

Administration >> Site Configuration >> Sites >> Add System Role.

We should now generate a client workstation certificate (and auto-enroll GPO for users):

We should create a new certificate template - duplicating the existing "User" template - and in the Properties of New Template dialog box, on the General tab, enter a template name for the client certificates that will be used on Configuration Manager client computers, such as ConfigMgr Client Certificate - and make sure "Publish Certificate in Active Directory" is unticked!

In the cryptography tab make sure "Microsoft Enhanced Cryptographic Provider 1.0" and "Microsoft Strong Cryptographic Provider" are ticked.

Under the subject tab ensure that "Supply in the request" is selected.

We should then ensure that the user account that will be used for AMT provisioning (e.g. AMT_Prov) has the "Read" and "Enroll" permissions!
** Also if you are going to run the provisioning using the "AMT Utility" you will also have to grant "Read" and "Enroll" rights to the computer you are running the utility on! **

Finally go to Extensions >> Application Policies >> Edit >> Add and select "Server Authentication"; then click Add again and hit "New", typing in the name "AMT Local Access" and OID "2.16.840.1.113741.1.2.2"; then do the same again with the name "AMT Remote Access" and OID "2.16.840.1.113741.1.2.1"; and finally select all three policies and hit OK.

In the Certification Authority console, right-click Certificate Templates, click New, and then click Certificate Template to Issue.

In the Enable Certificate Templates dialog box, select the new template that you have just created, ConfigMgr Client Certificate, and then click OK.

We will also need to supply an AMT provisioning certificate - which we will generate from our CA.

We will create a new certificate template: CA Console > Certificate Templates >> Right-hand click and select 'Manage.'

Once created go to the properties under the General tab and call it: 'ConfigMgr AMT Provisioning'. 

Click the Subject Name tab, select Build from this Active Directory information, and then select Common name.

Click the Extensions tab, make sure Application Policies is selected, and then click Edit.

In the Edit Application Policies Extension dialog box, click Add.

In the Add Application Policy dialog box, click New.

In the New Application Policy dialog box, type AMT Provisioning in the Name field, and then type the following number for the Object identifier: 2.16.840.1.113741.1.2.3.

Click OK, and then click OK in the Add Application Policy dialog box.

Click OK in the Edit Application Policies Extension dialog box.

In the Properties of New Template dialog box, you should now see the following listed as the Application Policies description: Server Authentication and AMT Provisioning.

Click the Security tab, and remove the Enroll permission from the security groups Domain Admins and Enterprise Admins.

Click Add, enter the name of a security group that contains the computer account for the out of band service point site system role, and then click OK.

Select the Enroll permission for this group, and do not clear the Read permission.

Click OK, and close the Certificate Templates console.

In Certification Authority, right-click Certificate Templates, click New, and then click Certificate Template to Issue.

In the Enable Certificate Templates dialog box, select the new template that you have just created, ConfigMgr AMT Provisioning, and then click OK.

We will now go to the SCCM server and open up the Certificates mmc snapin (Computer Account!) and under the Personal node - right hand click and select "Request New Certificate." We should select the Active Directory Enrollment policy when prompted and then select the template we created before (ConfigMgr AMT Provisioning.)

Now back on the "Add Site System Roles Wizard" and choose our newly issued certificate and complete the wizard (selecting the IIS settings and so on.)

After this you should now have an Enrollment point and Out of band service point role in place on your SCCM server.

We should also configure the web server - by creating a certificate template for this:

Go to the Certificate Authority snapin - Manage Templates >> Duplicate the "Web Server" template.

In the Properties of New Template dialog box, on the General tab, enter a template name to generate the web certificates that will be used for out of band management on AMT computers, such as ConfigMgr AMT Web Server Certificate.

Click the Subject Name tab, click Build from this Active Directory information, select Common name for the Subject name format, and then clear User principal name (UPN) for the alternative subject name.

Click the Security tab, and remove the Enroll permission from the security groups Domain Admins and Enterprise Admins.

Click Add and enter the name of the security group that you created for AMT provisioning. Then click OK.

Select the following Allow permissions for this security group: Read and Enroll.

Click OK, and close the Certificate Templates console.

In the Certification Authority console, right-click Certificate Templates, click New, and then click Certificate Template to Issue.

In the Enable Certificate Templates dialog box, select the new template that you have just created, ConfigMgr AMT Web Server Certificate, and then click OK.

We proceed by going to our SCCM server and requesting a new certificate from the template we have just created to assign to our IIS server:

Load up the Certificates mmc snapin (Computer Account!) and right-hand click on the personal node and select "New certificate enrollment" >> Select "Active Directory Enrollment Policy" >> ConfigMgr AMT Web Server Certificate" >> Click on the details button >> Properties >> Private Key and tick "Make private key exportable" >> Enroll.

Export the new certificate (with its associated private key) and import it into IIS. We will now have to edit the site bindings to allocate the SSL certificate to the relevant website:

Right-click the website >> Edit Bindings >> select the "https" type >> Edit >> and choose the appropriate SSL certificate.

We will now configure the Out of Band Management component in SCCM by going to:

Administration >> Site Configuration >> Sites select the appropriate site and on the 'Home' tab select "Configure Site Components" and then "Out of band management"

Under the general tab:-

- Select the enrollment point (make sure IIS is configured properly / works with the chosen URL!)
- Create and select the OU for AMT-Based computer accounts
- Create and select the global security group for the AMT-Based computer accounts
- Assign a password for the MEBx account (the default is 'admin' - although quite rightly SCCM would like you to choose a more secure password!).
- Under the provisioning tab make sure you add any custom passwords used for the MEBx account e.g. Name: admin Password: custom-password.

** Note: MEBx stands for 'Intel Management Engine BIOS Extension' and is the password that is used to access the AMT. **

Under the 'AMT Settings' tab:-

- Specify the security group that will contain the users who will use AMT (e.g. helpdesk, admins and so on...) by clicking on the yellow star in the upper right-hand corner.
- Advanced and Audit Settings sections can be left as is (dependent on your requirements obviously!)

Under the 'Provisioning' tab:-

- You can define a "provisioning account" IF your manufacturer has set a custom MEBx password or a user has already manually configured this password.

I am not concerned about wireless - so I will omit this configuration - click OK to exit the configuration view.

On the target machines we must ensure the following are installed:

- Intel SCS
- the Intel HD Graphics drivers, for features such as remote control / KVM
- the relevant Intel Management Engine driver for your chipset / motherboard series

Within the Intel SCS package there is a utility called SCSDiscovery.exe - we can run the following (on the target computer) to find out whether AMT is present and enabled in the BIOS:

SCSDiscovery.exe SystemDiscovery

An XML file should be generated in the working directory - we are looking for the following elements to return true:


There is another utility called ACUWizard.exe that will help us provision AMT - either manually or by producing a settings file so that machines can be provisioned via script / automatically.

But before we launch this tool we must use a tool called "RCS" in order to facilitate a few things for the ACUConfig utility - taken from Intel:

"The RCSutils utility is a Command Line Interface (CLI) that was created to make some of the RCS setup tasks easier. These tasks include installing certificates and giving Windows Management Instrumentation (WMI) permissions to user accounts so that they can access the RCS."

Within the IntelSCS folder there should be a subfolder called RCS and within here we launch the RCS installer:


Upon installation completing, we will launch the Intel SCS Console and create a new profile. We must select "TLS Authentication" as an optional feature, because SCCM 2012 no longer supports insecure connections to AMT (port 16993)!

We will also select Active Directory Integration: Here we specify the OU containing computers for AMT deployment.

and Access Control List: here we will click "Add" >> "Active Directory User Group" >> select the user / security group of users that will have access to AMT - the realm should be set to "PT Administration" and the "Access Type" should be set to "Both."

So on the TLS Configuration page select our DC and also the ConfigMgr Client Certificate we created earlier.

** Intel SCS will install the root CA in the AMT and also request a certificate from the template we setup and install it in the AMT. **
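At this point you can sanity-check that the AMT TLS port is reachable from the SCCM server - a sketch using Test-NetConnection (available on Server 2012 R2; "amt-client01" is a placeholder hostname):

```powershell
# Check TCP connectivity to the AMT TLS port (16993) on a provisioned client.
Test-NetConnection -ComputerName "amt-client01" -Port 16993
```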

After this, you should save the XML file as 'profile.xml' and place it in the "ACU_Wizard" directory. We will now run ACUWizard.exe

** Note: If you run into any errors while running this tool you should refer to the log file that is generated in the working directory: ACUConfig.log **

You should now use the SCSDiscovery utility to identify whether AMT has been successfully provisioned.


And we confirm the following line is equal to true:


We can now test the AMT with one of Intel's own tools called: Intel® vPro™ Platform Solution Manager - downloaded from:

I got the following error when attempting to reboot the computer with AMT:

"The sender was not authorized to access the resource"

This was because I was invoking the action from a local workstation and NOT on the SCCM server!

If this goes well, we should now install the Intel SCS SCCM Addon available below:

** Note: Although SCCM has built-in support for AMT - this is only up to version 6.1 - in this scenario we are installing version 9, so we must download the above addon package from Intel. **

Before we install the addon we will need to configure hardware inventory classes: sms_def_AMT.mof and sms_def_SCSDiscovery.mof - these files can be found in the addon installer directory. In order to import these into SCCM we do the following: Administration >> Overview >> Client Settings >> right-click on your "Default Client Configuration" (any changes made here will replicate onto your custom client settings!) and select Properties >> Hardware Inventory >> Set Classes >> Import.

During the installation process of the Addon you will be prompted to specify the location of some of the SCS components (Solutions Framework etc.) - you should point them to the SCS package we downloaded from here:

We should install the Discover, Configure, Maintain and Unconfigure components and also specify the XML profile we generated earlier using the Intel SCS Console. Finally select a directory to store the generated packages that is accessible to SCCM at all times! E.g. D:\Sources\Intel AMT Packages.

Now back in the SCCM Console you should see a series of new Device Collections - including: Intel SCS Platform Discovery and Intel AMT Configured.

We should now go to Assets and Compliance >> Overview >> Devices >> Add the "AMT Status" column to the list view and right-hand click on an AMT configured host >> Manage Out of Band >> Discover AMT Status.

We can now start provisioning AMT via task sequences. By default the addon creates several task sequences - one of which, Intel AMT Configuration, will help us achieve this. Simply put, it runs the following command:

Configure.bat ".\Profile.xml" "SCCM2012" <SCCM-SERVER-FQDN> "C:\temp" <SCCM-SITE-CODE>

It is a command line version of the ACUWizard utility we used before.

Wednesday 15 April 2015

Re-creating arbitration mailboxes in Exchange 2013

To check the arbitration mailboxes we issue:

Get-Mailbox -Arbitration

I encountered the following warning:

WARNING: The object Exchange System Objects/Monitoring Mailboxes/SystemMailbox35ea0e8476eb6c21b9c02c63c31 has been corrupted, and it's in an inconsistent state. The following validation errors happened:
WARNING: Database is mandatory on UserMailbox.
WARNING: Database is mandatory on UserMailbox.

In order to resolve this issue we will have to firstly delete the mailboxes and re-create them:

Open AD Users and Computers, expand the domain, go to the Users OU and identify the following accounts and delete them:
"SystemMailbox{1f05a927-****-****-****-*******}" (make a note of the GUID of the previous system mailbox, as it varies in every environment)

Open a command prompt, navigate to your Exchange setup files and run setup.exe /PrepareAD

Open the Exchange Management Shell and run the below:

Enable-Mailbox -Arbitration -Identity "FederatedEmail.4c1f4d8b-8179-4148-93bf-00a95fa1e042"
Enable-Mailbox -Arbitration -Identity "SystemMailbox{e0dc1c29-89c3-4034-b678-e6c29d823ed9}"
Enable-Mailbox -Arbitration -Identity "SystemMailbox{1f05a927-****-****-****-*******}"  (remember to change *** to your GUID)
Set-Mailbox -Arbitration -Identity "SystemMailbox{e0dc1c29-89c3-4034-b678-e6c29d823ed9}" -DisplayName "Microsoft Exchange"
Set-Mailbox -Arbitration -Identity "FederatedEmail.4c1f4d8b-8179-4148-93bf-00a95fa1e042" -ProhibitSendQuota 1MB

We should then verify the mailboxes with:

get-user -arbitration
get-mailbox -arbitration

Re-creating health mailboxes in Exchange 2013

I received the following error message when attempting to run 'Test-ExchangeSearch -MailboxDatabase "Example Database" -IndexingTimeoutInSeconds 30 | FL':

"The monitoring mailbox could not be found in the Mailbox Database"

First check whether you can perform the indexing checks on any other mailboxes successfully, and then verify that the relevant health monitoring mailboxes exist for the appropriate mailbox database:

Get-Mailbox -monitoring | fl name,*database*

or for a specific database:

Get-Mailbox -monitoring -database "My Database" | fl name,*database*


Name                         : HealthMailboxy983865947agfj668c692cb19d1
Database                     : My Database
UseDatabaseRetentionDefaults : True
UseDatabaseQuotaDefaults     :
ArchiveDatabase              : My Database
DisabledArchiveDatabase      :

In my case there was only one health mailbox - although by default there should be two (one for user mailboxes and the other for public folders)...

** I also noticed other mailbox databases had corrupted health mailboxes as follows:

WARNING: The object Exchange System Objects/Monitoring Mailboxes/HealthMailbox35ea0e8476eb6c21b9c02c63c31 has been corrupted, and it's in an inconsistent state. The following validation errors happened:
WARNING: Database is mandatory on UserMailbox.
WARNING: Database is mandatory on UserMailbox.

So in this case I had to manually re-create them:

Firstly delete ALL of the health mailbox objects in the "Monitoring Mailboxes" folder as follows:

Go to ADSIEdit >> Connect to "Default naming context" >> DC=your,DC=domain >> CN=Microsoft Exchange System Objects >> CN=Monitoring Mailboxes >> Delete all of the Health Mailbox objects.

We will then restart the Exchange Health Manager service and after a few minutes Exchange will re-create the health mailboxes:

Restart-Service MSExchangeHM

and verify the new health mailboxes have been created:

Get-Mailbox -monitoring -database "My Database" | fl name,*database*

Debugging Exchange Search / Content Indexing Errors

We should firstly check the content indexing status of the mailbox databases:
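One way to check this is via Get-MailboxDatabaseCopyStatus, which reports the ContentIndexState even for standalone (non-DAG) databases:

```powershell
# Show the content index state for every database copy on this server.
Get-MailboxDatabaseCopyStatus * | Format-Table Name, Status, ContentIndexState -AutoSize
```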


We can issue the Test-ExchangeSearch cmdlet to test a number of aspects of Exchange Search.

To test search indexing in a user mailbox we issue:

Test-ExchangeSearch -Identity

We can also refer to the event log - under the following event sources:

- MSExchangeFastSearch
- MSExchangeIS

Tuesday 14 April 2015

Unable to connect to Exchange and throttling policies

If you have received any one of the following errors, have checked other elements such as DNS, firewalling etc., and can see that the Exchange client has successfully connected to the Exchange server:

- Unable to open your default email folders. The Microsoft Exchange Server computer is not available
- Unable to connect to Outlook (when launching Outlook)
- Unable to connect to Outlook server (when adding a new Exchange profile)

Check your throttling policy:

Get-ThrottlingPolicy | FL *RCA*

Sometimes when the RcaMaxConcurrency entry is set quite low you might encounter one of the above error messages.
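If you need to raise the limit, RcaMaxConcurrency can be adjusted on a throttling policy - a sketch with illustrative values (the policy name, limit and user are placeholders; prefer a custom policy over modifying the default):

```powershell
# Inspect the RPC Client Access limits on all throttling policies.
Get-ThrottlingPolicy | Format-List Name, RcaMaxConcurrency

# Illustrative: create a policy with a higher concurrency limit
# and associate it with a hypothetical user.
New-ThrottlingPolicy -Name "HighRcaPolicy" -RcaMaxConcurrency 100
Set-ThrottlingPolicyAssociation -Identity "jsmith" -ThrottlingPolicy "HighRcaPolicy"
```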

Monitoring and Debugging RPC Client Access in Exchange 2013

By default RPC Client Access logs are stored in:

<exchange-installation-path>\Logs\RPC Client Access

We can use Log Parser to once again format the logs nicely:

** Note that we must REMOVE the following part of line 5 from our RPC Client Access logs (otherwise Log Parser will run into issues):


C:\Program Files (x86)\Log Parser 2.2>logparser "SELECT date-time,client-name as User,client-software,client-software-version as Version,client-mode,client-ip,protocol,client-ip,client-connection-info from 'C:\input.txt' GROUP BY User,client-software,Version,client-mode,client-ip,protocol,date-time,client-ip,client-connection-info ORDER BY date-time" -nSkipLines:4 -i:CSV -rtp:-1 > C:\output.txt

Debugging the autodiscover process in Exchange 2013

A script provided by Mike Pfeiffer that can retrieve autodiscover information can be found below:

Save the script to a file and then dot-source it (e.g. .\autodiscover.ps1) to register the Test-Autodiscover cmdlet, and then run one of the following:

- For external auto-discover settings: Test-Autodiscover -EmailAddress -Location external
- For internal auto-discover settings: Test-Autodiscover -EmailAddress -Location internal

function Test-Autodiscover {
    [CmdletBinding()]
    param(
      [Parameter(Position=0, Mandatory=$true)]
      [String]$EmailAddress,

      [Parameter(Position=1, Mandatory=$false)]
      [ValidateSet("Internal", "External")]
      [String]$Location = "External",

      [Parameter(Position=2, Mandatory=$false)]
      [System.Management.Automation.PSCredential]$Credential,

      [Parameter(Position=3, Mandatory=$false)]
      [switch]$TraceEnabled,

      [Parameter(Position=4, Mandatory=$false)]
      [bool]$IgnoreSsl = $true,

      [Parameter(Position=5, Mandatory=$false)]
      [String]$Url
    )

    begin {
      Add-Type -Path 'C:\Program Files\Microsoft\Exchange\Web Services\1.1\Microsoft.Exchange.WebServices.dll'
    }

    process {
      $autod = New-Object Microsoft.Exchange.WebServices.Autodiscover.AutodiscoverService
      $autod.RedirectionUrlValidationCallback = {$true}
      $autod.TraceEnabled = $TraceEnabled

      if($IgnoreSsl) {
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
      }

      if($Credential) {
        $autod.Credentials = New-Object Microsoft.Exchange.WebServices.Data.WebCredentials -ArgumentList $Credential.UserName, $Credential.GetNetworkCredential().Password
      }

      if($Url) {
        $autod.Url = $Url
      }

      switch($Location) {
        "Internal" {
          $autod.EnableScpLookup = $true
          $response = $autod.GetUserSettings(
            $EmailAddress,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalRpcClientServer,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalWebClientUrls,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalEcpUrl,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalEwsUrl,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalOABUrl,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalUMUrl
          )
          New-Object PSObject -Property @{
            RpcClientServer = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalRpcClientServer]
            InternalOwaUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalWebClientUrls].urls[0].url
            InternalEcpUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalEcpUrl]
            InternalEwsUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalEwsUrl]
            InternalOABUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalOABUrl]
            InternalUMUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::InternalUMUrl]
          }
        }
        "External" {
          $autod.EnableScpLookup = $false
          $response = $autod.GetUserSettings(
            $EmailAddress,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalMailboxServer,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalWebClientUrls,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalEcpUrl,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalEwsUrl,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalOABUrl,
            [Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalUMUrl
          )
          New-Object PSObject -Property @{
            HttpServer = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalMailboxServer]
            ExternalOwaUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalWebClientUrls].urls[0].url
            ExternalEcpUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalEcpUrl]
            ExternalEwsUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalEwsUrl]
            ExternalOABUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalOABUrl]
            ExternalUMUrl = $response.Settings[[Microsoft.Exchange.WebServices.Autodiscover.UserSettingName]::ExternalUMUrl]
          }
        }
      }
    }

    <#
    .SYNOPSIS
        This function uses the EWS Managed API to test the Exchange Autodiscover service.

    .DESCRIPTION
        This function will retrieve the Client Access Server URLs for a specified email address
        by querying the autodiscover service of the Exchange server.

    .PARAMETER  EmailAddress
        Specifies the email address for the mailbox that should be tested.

    .PARAMETER  Location
        Set to External by default, but can also be set to Internal. This parameter controls whether
        the internal or external URLs are returned.

    .PARAMETER  Credential
        Specifies a user account that has permission to perform this action. Type a user name, such as
        "User01" or "Domain01\User01", or enter a PSCredential object, such as one from the Get-Credential cmdlet.

    .PARAMETER  TraceEnabled
        Use this switch parameter to enable tracing. This is used for debugging the XML response from the server.

    .PARAMETER  IgnoreSsl
        Set to $true by default. If you do not want to ignore SSL warnings or errors, set this parameter to $false.

    .PARAMETER  Url
        You can use this parameter to manually specify the autodiscover url.

    .EXAMPLE
        PS C:\> Test-Autodiscover -EmailAddress -Location internal

        This example shows how to retrieve the internal autodiscover settings for a user.

    .EXAMPLE
        PS C:\> Test-Autodiscover -EmailAddress -Credential $cred

        This example shows how to retrieve the external autodiscover settings for a user. You can
        provide credentials if you do not want to use the Windows credentials of the user calling
        the function.
    #>
}


In order to run this script you will need to download the Exchange Web Services Managed API from:

A sample breakdown of the RpcClientServer value output can be found below:

The RpcClientServer value can be broken down into two parts:

- The GUID portion is the ExchangeGUID property (not the GUID property!) of the mailbox.
- The domain portion represents where the mailbox is situated.

get-mailbox |ft Name, ExchangeGuid > c:\guid.txt

Monday 13 April 2015

What is the Microsoft Exchange Recipient in Exchange 2013? The Microsoft Exchange Recipient is a special recipient object that is used as an internal postmaster to perform operations such as sending DSNs, journal reports and error messages.

Within Exchange 2013 I noticed what looked like a routing loop occurring - messages were originating from this address. I noticed there were thousands of DSNs being generated with the following error in the event information:

"BadmailReason, NDRing a mail recepient that requires a DSN"

Using the Get-OrganizationConfig cmdlet you can verify the addresses assigned to this recipient:

Get-OrganizationConfig | FL *MicrosoftExchangeRecipientEmailAddresses*

Simply creating the non-existent user stopped the DSNs in their tracks.

NDR Storms in Exchange 2013

I came across an NDR storm in Exchange 2013 the other day (yes - Exchange 2013 is not immune to them!)

I noticed thousands of DSN failures in the message tracking logs. Exchange was sending NDRs due to a recipient lookup failure, although the sender address also no longer existed, and hence an NDR storm was created.

In order to remedy the situation I attempted to re-create the sending account so the NDR would be delivered and the loop would end - although when attempting to create the alias address I received the following message:

The proxy address "" is already being used by the proxy addresses or LegacyExchangeDN of "nonexistentuser". Please choose another proxy address.

To verify whether it already exists we can run the following to output a list of all addresses associated with users:

Get-Mailbox | Select Name -ExpandProperty EmailAddresses

Strangely it wasn't listed - although I did notice it was in a similar format to that of the arbitration mailboxes (such as the ones used for health reporting), which still didn't shed much light!

In the end I decided to turn off NDRs temporarily to halt the infinite loop:

Unknown to me at the time, loop detection in Exchange 2013 is NOT enabled by default - so I enabled it:

Set-TransportConfig -AgentGeneratedMessageLoopDetectionInSmtpEnabled $true
Set-TransportConfig -AgentGeneratedMessageLoopDetectionInSubmissionEnabled $true

And then finally restarted the Microsoft Exchange Transport service for the changes to be picked up immediately.

Although to no avail! So finally I stumbled across a transport rule that helps mitigate NDR storms:

New-TransportRule "Prevent NDRs Storm - MichaelG" -Comments "Prevent NDRs Storm" -From "" -SentToScope "NotInOrganization" -SubjectContainsWords "FW: There was an error sending your mail", "FW: Mail delivery failed", "FW: failure notice", "Undeliverable:" -RedirectMessageTo "" -Enabled $True

Understanding mail flow with Exchange Server 2013

Exchange 2013 comprises three transport services:

- Front-end Transport service: Situated on the CAS, it simply acts as a stateless proxy for inbound and outbound external SMTP traffic.

- Transport service: Situated on the mailbox server, it is almost identical to the Exchange Server 2010 Hub Transport role - simply providing mail routing across your organization (e.g. to different mailbox servers) - although this service does NOT interact with any mailboxes directly!

- Mailbox Transport service: Situated on the mailbox server, it comprises two services: the Mailbox Transport Submission service and the Mailbox Transport Delivery service. The Mailbox Transport Delivery service receives mail from the Transport service on the local server (or a server in the same site) and delivers it directly to the mailbox via RPC, while the Mailbox Transport Submission service submits mail to the relevant Transport service.

Below is a summary of a mail item being received from the internet, destined for an Exchange mailbox:

1. The external mail server sending the mail looks up the MX records for the destination domain and then routes it to the appropriate mail server.

2. The mail should then be received by a receive connector set up in your Exchange environment on the CAS.

3. The mail is picked up by the front-end transport service and then forwarded to the transport service on the mailbox server.

4. The Transport service on the mailbox server then categorises the email, performs message content inspection, etc. Since it does not connect directly to the mailbox database, it hands the email over SMTP to the Mailbox Transport service.

5. Within the Mailbox Transport service, the Mailbox Transport Delivery service receives the SMTP message from the Transport service.

6. The Mailbox Transport Delivery service, using the Store Driver, connects to the mailbox database via RPC and delivers the e-mail to the mailbox.
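The three components above run as distinct Windows services on a multi-role server - a quick way to confirm they are all up (service names as found on Exchange 2013):

```powershell
# Front-end proxy, Transport, and the two Mailbox Transport services
Get-Service MSExchangeFrontEndTransport, MSExchangeTransport,
            MSExchangeDelivery, MSExchangeSubmission | ft Name, Status
```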

Routing internal mail from another mailsystem via Exchange 2013

Add an accepted domain, selecting "Internal Relay" - as the sending mail server (e.g. running Postfix) is within our network boundary, although it is not part of our Exchange organization.

Create a new receive connector:

Role: Frontend Transport
Type: Custom

Specify the interface bindings, port numbers and so on.

We then specify the remote servers which will be able to access the receive connector.

Once finished we will then go to the receive connector properties >> Authentication >> and tick the following:

- Externally secured

and under Permission Groups:

- Exchange Servers
- Anonymous Users
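The same steps can be scripted from the Exchange Management Shell - a rough sketch (the domain, connector name, bindings and remote IP ranges are all placeholders):

```powershell
# Accepted domain for the internal mail system (placeholder values throughout)
New-AcceptedDomain -Name "Internal Relay Domain" -DomainName "internal.example.com" -DomainType InternalRelay

# Custom frontend receive connector restricted to the sending server
New-ReceiveConnector -Name "Internal Relay" -TransportRole FrontendTransport -Custom `
  -Bindings "0.0.0.0:25" -RemoteIPRanges "10.0.0.10" `
  -AuthMechanism ExternalAuthoritative -PermissionGroups "ExchangeServers","AnonymousUsers"
```

ExternalAuthoritative corresponds to the "Externally secured" tick box in the GUI.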

Sunday 12 April 2015

IPSec Process Flow

In order to setup an IPSec connection you must firstly establish a secure channel with the help of an IKE (Internet Key Exchange) policy. IKE itself relies upon ISAKMP (Internet Security Association and Key Management Protocol) and defines how it should establish the secure connection e.g. type of encryption, authentication, timeouts and so on.

The main attributes attached to an IKE Policy are:

- Encryption Algorithm (e.g. DES, 3DES, AES)
- Hash Algorithm (MD5, SHA1, SHA256)
- Authentication Method: (Pre-shared key or certificate)
- DH (Diffie-Hellman) group: (1, 2, 5 and so on)
- Lifetime (in seconds - 86400 by default)

During this phase Diffie-Hellman (asymmetric) key exchange is used to secure communication, and once complete symmetric keys are used instead (to save CPU time.) Once this phase (aka IKE Phase 1) has successfully completed, a 'security association' (SA) is formed.

The next step is to establish the IPSec SA (aka IKE Phase 2) - by creating 'transform sets' we are able to define the encryption and hashing algorithms that will be used for this phase. Unlike Phase 1, the Phase 2 SAs are unidirectional - one SA is negotiated for each direction of traffic.

After implementing an IPSec tunnel the ISAKMP SA will be established immediately, although the IPSec SA will only be established when some "interesting traffic" (traffic that is destined to go over the tunnel) traverses the tunnel.

Both Authentication Header (AH) and Encapsulating Security Payload (ESP) require IKE to function correctly. AH (IP protocol 51) provides data integrity by performing a hash computation over the whole IPv4 packet (minus any mutable fields in the IP header) using keys known only to the hosts within the IPSec SA - hence providing data integrity and authenticity of the source, but no confidentiality. ESP (IP protocol 50) provides its own authentication and encryption services - as AH alone will not encrypt any data! Typically data will be encrypted using ESP and then wrapped by AH.

Setting up an IPSec tunnel between a Cisco ASA and another security appliance

We have three available interfaces on the ASA - they will be provisioned as follows:

Ethernet0/0 (outside - connected directly to the internet)
Ethernet0/1 (inside - the internal network where one side of the VPN tunnel terminates)
Ethernet0/2 (management)

We will firstly configure the interfaces accordingly:

configure terminal

We configure the outside interface:

int e0/0
nameif outside
security-level 0
ip address
no shutdown

We configure the inside interface:

int e0/1
nameif inside
security-level 100
ip address
no shutdown

int e0/2
nameif management
security-level 100
ip address
no shutdown

And setup routing on the outside interface:
route outside 1
( represents the next hop and the '1' indicates the cost)

ISAKMP / IKE Phase 1 - this is the process where IKE creates an initial SA using Diffie-Hellman, forming an asymmetrically encrypted channel between the two VPN endpoints; this forms the foundation for IKE Phase 2.

We firstly need to make sure that ISAKMP is enabled on the outside interface - this can be checked with:

show run crypto

ISAKMP was not enabled on my outside interface by default - so we should enable it with:

crypto ikev1 enable outside
crypto ikev2 enable outside

We also want stronger Diffie-Hellman parameters - by default I was only using DH Group 2, so let's set it to 5:

isakmp policy 1 authentication pre-share
isakmp policy 1 encryption 3des
isakmp policy 1 hash sha
isakmp policy 1 group 5
isakmp policy 1 lifetime 86400

(or, from ASA software version 8.4 onwards, you can explicitly define IKEv1 and IKEv2 policies with: crypto ikev1 policy 1 and crypto ikev2 policy 1.) E.g.

crypto ikev1 policy 1
  authentication pre-share
  encryption 3des
  hash sha
  group 5
  lifetime 86400
crypto ikev1 enable outside


crypto ikev2 policy 1
  encryption 3des
  group 5
  prf sha
  lifetime seconds 43200
crypto ikev2 enable outside

** Note: The lowest policy priority (1 in this case) takes precedence. Policy priorities can range from 1-65535 **

** We do not use the isakmp key on an ASA (unlike Cisco IOS routers) instead we configure a tunnel group **

We will now create an IPSec transform set - which sets the authentication and encryption that the IPSec SA's (IKE Phase 2) will use.

crypto ipsec transform-set L2L esp-3des esp-sha-hmac


crypto ipsec ikev1 transform-set trans1 esp-3des esp-sha-hmac
crypto ipsec ikev2 ipsec-proposal secure
 protocol esp encryption 3des aes des
 protocol esp integrity sha-1
crypto ipsec ikev2 ipsec-proposal aescustom
 protocol esp encryption aes-256
 protocol esp integrity sha-1
crypto ipsec ikev2 ipsec-proposal AES256
 protocol esp encryption aes-256
 protocol esp integrity sha-1 md5
crypto ipsec ikev2 ipsec-proposal AES192
 protocol esp encryption aes-192
 protocol esp integrity sha-1 md5
crypto ipsec ikev2 ipsec-proposal AES
 protocol esp encryption aes
 protocol esp integrity sha-1 md5
crypto ipsec ikev2 ipsec-proposal 3DES
 protocol esp encryption 3des
 protocol esp integrity sha-1 md5
crypto ipsec ikev2 ipsec-proposal DES
 protocol esp encryption des
 protocol esp integrity sha-1 md5

We should now create an ACL to match our "interesting traffic" (traffic that will be traversing through the VPN):

access-list Interesting_Traffic extended permit ip

The next step is to define our tunnel group - which defines properties such as the connection type used and the authentication parameters (typically either a pre-shared key or certificate-based) - for this tutorial we will stick with a pre-shared key:

tunnel-group type ipsec-l2l
tunnel-group ipsec-attributes
  pre-shared-key your-password
  ikev1 pre-shared-key 0 your-password
  ikev2 local-authentication pre-shared-key 0 your-password
  ikev2 remote-authentication pre-shared-key 0 your-password

The final process is to create a crypto map (called L2L) - that simply ties our IPSec transform set, access lists and tunnel group together:

crypto map L2L 1 match address Interesting_Traffic
crypto map L2L 1 set peer
crypto map L2L 1 set transform-set L2L
crypto map L2L 1 set ikev1 transform-set trans1
crypto map L2L 1 set ikev2 ipsec-proposal secure aescustom AES256 AES192 AES 3DES DES
crypto map L2L interface outside

We will finally need to make sure that the interesting traffic is not NAT'ed - if the two sites are connected over public (e.g. RIPE-assigned) addresses, a no-nat / NAT exemption rule needs to be in place.

If you are using ASA software 8.3 or below:
access-list NO-NAT permit ip
nat (inside) 0 access-list NO-NAT

or if you are using 8.4 and above:
object network obj-local
object network obj-remote
nat (inside,outside) 1 source static obj-local obj-local destination static obj-remote obj-remote

We will commit our changes to the startup-configuration:
write memory

We can now check the status of ISAKMP (IKE Phase 1) with the following command:

show isakmp sa

IKEv1 SAs:

   Active SA: 1
    Rekey SA: 0 (A tunnel will report 1 Active and 1 Rekey SA during rekey)
Total IKE SA: 1

1   IKE Peer:
    Type    : L2L             Role    : responder
    Rekey   : no              State   : MM_ACTIVE

show ipsec sa

You might not get any SA output initially - this might be because no "interesting traffic" has traversed the VPN yet, as the Phase 2 SAs are not established until interesting traffic crosses the tunnel.

We can use an extended ping to generate some traffic from one network to the other as follows:

asa# ping
TCP Ping [n]:
Interface: inside
Target IP address:
Repeat count: [5]
Datagram size: [100]
Timeout in seconds: [2]
Extended commands [n]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to, timeout is 2 seconds:
Success rate is 0 percent (0/5)

or we can use the packet-tracer command to manually generate some traffic (this is also a very good command to debug packet flow):

packet-tracer input inside tcp 1250 80

Drop-reason: (acl-drop) Flow is denied by configured rule

You can also debug both phases with the following commands:

debug crypto isakmp
debug crypto ipsec

We can reset the IPSec SA or ISAKMP with:

clear crypto ipsec sa <ip-address>
clear crypto ikev1 sa <ip-address>
clear crypto ikev2 sa <ip-address>

Cisco ASA Fundamentals

Class Maps: Identify the traffic - for example by protocol, access-list etc. For example:

access-list ICMP_ACCESSLIST extended permit icmp any any
class-map ICMP_TRAFFIC
match access-list ICMP_ACCESSLIST

Policy Maps: Tell you what to do with the traffic e.g. PASS, DROP, INSPECT, LOG. For example:

policy-map ALLOW_ICMP
 description Allow ICMP Traffic
 class ICMP_TRAFFIC
  police input 100000

Service Policy: Applies a policy map to an interface (or globally) to control traffic flow through it:

service-policy ALLOW_ICMP interface inside

So we have firstly identified the traffic, applied a rate limit to it, and then applied the policy to an interface.

Zone Pairs: Traffic flows between two zones; an explicit rule must be in place to allow traffic between different zones. They are unidirectional. For example: Inside Zone to Outside Zone.

Interface Security Levels: From 0 to 100 - indicates how trusted traffic from that interface is:

- 100 indicates traffic can flow to all other interfaces freely.
- 0 indicates that all traffic originating from the interface is untrusted.

Interfaces with a higher security-level than others can freely access them - although by default interfaces with the same security-level are unable to access each other.
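If you do need two interfaces with the same security-level to communicate, the ASA provides a global override:

```
same-security-traffic permit inter-interface
```

(There is also an intra-interface variant, for hairpinning traffic back out of the interface it arrived on.)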

Friday 10 April 2015

Get a Cisco ASA working with GNS3

GNS3 is able to successfully emulate an ASA running version 8.4 of the ASA software:

Firstly download a copy of the ASA's boot images (the initrd and kernel - e.g. asa842-initrd.gz and asa842-vmlinuz).

Launch GNS3 and go to: Edit >> Preferences >> QEMU >> QEMU VMs >> New  and enter the following:

RAM: 1024 MiB
Number of NICs: 6
QEMU options: -m 1024 -icount auto -hdachs 980,16,32

Initrd: C:\ASA\asa842-initrd.gz
Kernel: C:\ASA\asa842-vmlinuz
Kernel cmd line: -append ide_generic.probe_mask=0x01 ide_core.chs=0.0:980,16,32 auto nousb console=ttyS0,9600 bigphysarea=65536

Finally activate the ASA's features with the following command:

activation-key 0x4a3ec071 0x0d86fbf6 0x7cb1bc48 0x8b48b8b0 0xf317c0b5

SNAT vs DNAT vs Masquerading

SNAT (Source NAT): Simply changes the source IP address in the IP header, and sometimes the TCP / UDP port as well (PAT / Port Address Translation.) Typically this is used to allow internal (private IP) clients to access public IP addresses on the internet (e.g. a web server.)

DNAT (Destination NAT): Simply changes the destination IP address in the IP header, and sometimes the TCP / UDP port as well (PAT.) Typically this is used to allow incoming packets from an internet host to reach internal (private IP) hosts.

Masquerading: Similar to SNAT, although the NAT address is not known at the time of rule creation - rather it is determined when the rule is triggered. Typically used when the outside interface gets its address via DHCP (i.e. the IP is variable.)
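On a Linux gateway the three map onto iptables NAT targets roughly as follows (the addresses and interface names are illustrative):

```shell
# SNAT: rewrite the source of outbound LAN traffic to a fixed public IP
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j SNAT --to-source 203.0.113.10

# DNAT: forward inbound port 80 to an internal web server
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.50:80

# Masquerading: like SNAT, but the source IP is read from eth0 when the rule fires (DHCP-friendly)
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE
```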


ISAKMP vs IKE

These two terms are quite often used interchangeably - although the two are very different - this post will aim to describe and contrast them:

ISAKMP (Internet Security Association and Key Management Protocol) is one of a group of protocols that make up IKE (ISAKMP, SKEME and Oakley). IKE establishes the security association and the authenticated keys; ISAKMP defines how the key exchange process itself works.

Thursday 9 April 2015

Debugging Windows Time Service issues

To test whether the w32tm sync process is working we can force a resync:

We enable debugging mode:
w32tm /debug /enable /file:C:\time.log /size:10485760 /entries:0-300

and then invoke a resync and review the output log file:
w32tm /resync

Disable debugging:
w32tm /debug /disable

Querying the time source in Windows Server 2012

Using w32tm we can query the time source of a server:

w32tm /query /source

or we can dump the time configuration from the registry:

w32tm /dumpreg /subkey:parameters

The latter command will output the NTP server and the type of time configuration - which should be one of the following:

NoSync = No synchronization is performed by the server.

NTP = Server is configured for NTP.

NT5DS = Server uses the domain hierarchy for its time information - in other words it finds the DC holding the PDC Emulator FSMO role and uses it as its time source. If there is a parent domain, the PDC Emulator in the child domain will query the PDC Emulator in the parent domain, and so on. (To find the PDC Emulator for the domain you can use: netdom query fsmo)

AllSync = Attempts to synchronize with all available time sources - e.g. both NTP and NT5DS.

And to confirm the time source has been working we can issue: w32tm /query /status
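For completeness, pointing a standalone server at an explicit NTP source looks roughly like this (the peer list is just an example):

```shell
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /update
net stop w32time && net start w32time
w32tm /resync
```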

Setup and configure ASDM on an ASA 5510

Firstly we will hook up the serial port on the management machine to the console port on the ASA. If you are using Linux, 'screen' will work; on Windows PuTTY should do the trick.

Consult the ASA documentation for serial settings such as the baud rate - but as a general rule of thumb 9600 should usually work.

Once in we might want to delete any existing configurations:

write erase

or to delete everything in the flash memory:

erase flash

We should then set up and configure ASDM:

configure terminal
hostname devasa
enable password mypassword

* Interestingly, "enable secret" / "service password-encryption" are already in place (meaning that your password is hashed) and are not configurable *

interface Management0/0
nameif management
security-level 0

* Note: Security levels define the trust associated with an interface on a scale of 0 to 100 - a security level of 0 specifies that no traffic on this interface should be trusted (an implicit 'deny ip any any'), whereas a level of 100 implies that all traffic may pass through the interface - this might be applied to a green zone / inner network interface. *

ip address
no shutdown

We must now activate ASDM (the .bin file should be uploaded to the flash beforehand) and enable the HTTP server:

asdm image flash:/asdm.bin
http server enable
copy run start

We create an access rule for the HTTP server so that we can reach it:
http management

We will also create a username and password for ourselves - assigning the highest privilege level of 15:

username <my-username> password <my-password> privilege 15

On your computer (with your interface configured on the same subnet) open up your web browser and visit:

Select "Run Cisco ASDM as a local application" and enter the username and password we created before.