Wednesday 30 December 2015

MDT / Windows Error: The computer restarted unexpectedly or encountered an unexpected error.

Error: The computer restarted unexpectedly or encountered an unexpected error. Windows installation cannot proceed...

The other day I came across an issue with MDT when attempting to deploy an image to a new machine - although funnily enough it worked on several other machines, so I could only imagine that it was related to the hardware.

When this error occurs you will want to refer to the Windows Setup logs, which can be found here:
C:\Windows\Panther
or
C:\Windows\System32\Sysprep\Panther

Setting up centralized logging for deployment logs with MDT

For convenience I like to store all deployment logs in a central location so I don't need to mess around grabbing the logs off the local systems.

Fortunately MDT provides us with the ability to do this with the 'SLShareDynamicLogging' parameter (which will write the logs on the fly to a specific directory).

We simply need to modify our deployment share's CustomSettings.ini file, e.g.

SLShareDynamicLogging=\\mdtserver\DeploymentShare$\Logs

Taken from TechNet - Microsoft gives us a quick overview of which logs pertain to what:

Before the Image is applied to the machine:

X:\MININT\SMSOSD\OSDLOGS

After the system drive has been formatted:

C:\MININT\SMSOSD\OSDLOGS

After Deployment:

%WINDIR%\TEMP\DeploymentLogs

The logs of most interest for troubleshooting a failed install will be:

BDD.LOG – This is an aggregated log of all the MDT Logs.

SMSTS.LOG – This log would be used to troubleshoot Task Sequence errors.


Setting up automatic deployment of Windows 7 with MDT and WDS

(Note: For this tutorial I will be doing all of the work on a Windows 10 box)

There are a few pre-requisites for this:

Windows 10 ADK: https://msdn.microsoft.com/en-us/windows/hardware/dn913721.aspx#deploy
Windows Server 2008 / 2012 with the WDS role installed.
Microsoft Deployment Toolkit: https://www.microsoft.com/en-us/download/details.aspx?id=48595

We should then launch the 'Deployment Workbench' and create a new Deployment Share by right-hand clicking on the 'Deployment Shares' node.

Once created - expand our new deployment share node and right-hand click on the 'Operating Systems' node and select 'Import OS' and proceed to either specify pre-created WIM images or simply point the wizard to a mounted ISO image of Windows 7.
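If you prefer to script the import, the Workbench actions are also exposed through MDT's PowerShell module (the 'View Script' button in the wizards shows the equivalent commands). A rough sketch - assuming the default MDT install path, that the share lives at \\mdtserver\DeploymentShare$ and that the Windows 7 source files sit in D:\Win7Source:
# Assumes the default MDT install path and hypothetical share / source paths - adjust to suit
Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "\\mdtserver\DeploymentShare$"
Import-MDTOperatingSystem -Path "DS001:\Operating Systems" -SourcePath "D:\Win7Source" -DestinationFolder "Windows 7 x64" -Verbose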

Since there will also likely be many different computer models we will also need to inject drivers into our image - fortunately the 'Deployment Workbench' utility allows us to perform this pretty easily. It is also worth noting that a lot of computer manufacturers (like Dell, Toshiba, Lenovo etc.) will provide 'driver packs' for specific model / product lines so that you can easily import all of the relevant drivers at once.

So to do this we right-hand click on the 'Out-of-Box Drivers' node and select 'Import Drivers'. The beauty of the driver integration with MDT is that drivers are injected offline into the relevant WIM images - so you do not have to have thousands of drivers in one WIM image.

NOTE: A lot of driver packs are labelled as for use with SCCM - although in my experience I have found these driver packs seem to work fine with MDT too!
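Driver imports can be scripted in the same way with the MDT module loaded - a sketch, assuming an extracted driver pack sitting under a hypothetical D:\Drivers\Latitude-E7450 folder:
# Assumes the DS001: MDT drive from earlier and a hypothetical extracted driver pack path
Import-MDTDriver -Path "DS001:\Out-of-Box Drivers" -SourcePath "D:\Drivers\Latitude-E7450" -Verbose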

We should now right-hand click on our MDT deployment share and go to >> Properties >> Rules and add / modify with something like the following:
[Settings]
Priority=Default
Properties=MyCustomProperty
[Default] 
OSInstall=Y
SkipCapture=NO
SkipAdminPassword=YES
SkipProductKey=YES
SkipComputerBackup=YES
SkipBitLocker=YES
SkipLocaleSelection=YES
SkipTimezone=YES
Timezone=1200
TimeZoneName=GMT Standard Time
UILanguage=en-GB
UserLocale=en-US
Systemlocale=en-GB
KeyboardLocale=2057:00000409
Note: For working out timezone ID's and keyboard locales see the following links:
http://powershell.org/wp/2013/04/18/determining-language-locales-and-values-for-mdt-2012-with-powershell/
https://msdn.microsoft.com/en-us/library/gg154758.aspx

We now want to capture an image of the build we are creating - so to do this we need to create a new 'task sequence' by right-hand clicking the 'Task Sequences' node >> 'New Task Sequence' >> Fill in a task sequence ID and name e.g. 'Capture Windows 7 Image' >> The template should be set to 'Sysprep and Capture' and finally follow the rest of the wizard.

Important: Ensure you now right-hand click on the deployment share and select "Update Deployment Share"
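(The same can be done from PowerShell with the MDT module and the DS001: drive from earlier - a minimal sketch:)
# Assumes the DS001: drive mapped to the deployment share as shown above
Update-MDTDeploymentShare -Path "DS001:" -Verbose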

Now in order to capture an image we should install Windows 7 (ideally on a VM) - customise it as you wish e.g. registry settings, tweaks to the UI etc. and then browse to the MDT share (e.g. \\MDTHost\DeploymentShare$) which contains your deployment share and run the file called LiteTouch.vbs, which is located in the 'Scripts' subfolder.

This will launch the capture wizard - which will usually take a few minutes to launch. Once the task sequence menu appears proceed by selecting 'Capture Image' and then ensure 'Capture an image of this reference computer' is selected (make a note of the .wim filename and location as we will need this for WDS!) and hit Next - enter the required credentials to connect to the network share (ensuring that user can read / write to that share!) and complete the wizard.

Now proceed - the system should then reboot into the WinPE environment - although I got an invalid credential error message appearing in the WinPE environment preventing me from proceeding! This was due to the fact that I had not updated my deployment share (right-hand click on it and select 'Update') and hence the WinPE images had not been generated!

Follow through the wizard and capture your images. Now reboot again and verify new .wim image is available.

If any errors occur during this process the log files can be found in: C:\Windows\Temp\DeploymentLogs. The specific file we are interested in is called BDD.log (contains all log entries) and LiteTouch.log.

We can open these files with notepad - although using something like SMSTrace / CMTrace makes everything a lot easier...

We should now import the captured image into MDT: Right-hand click 'Operating Systems' and select 'Import Operating System' >> select the custom image file option and point it to the image file you captured earlier.

Now proceed by right-hand clicking 'Task Sequences' >> 'New Task Sequence' >> Name: Deploy Image >> Template: Standard Client Task Sequence >> and complete the wizard with the relevant settings e.g. product key etc.

Once the new task sequence has been completed we should right-hand click it and go to properties >> 'Task Sequence' tab >> Install >> 'Install Operating System' and select your custom .wim image.

We should refer to the following website in order to automate the deployment task sequence:

https://scriptimus.wordpress.com/2013/02/18/lti-deployments-skipping-deployment-wizard-panes/

For reference I added something like:
[Settings]
Priority=Default
Properties=MyCustomProperty 
[Default]
OSInstall=Y
SkipBDDWelcome=YES
DeployRoot=\\deploymenthost\DeploymentShare$
UserDomain=my.domain
UserID=administrator
UserPassword=mysecurepassword
SLShareDynamicLogging=\\deploymenthost\DeploymentShare$\Logs
SkipApplications=NO
SkipCapture=NO
SkipAdminPassword=YES
SkipProductKey=YES
SkipComputerBackup=YES
SkipBitLocker=YES
SkipLocaleSelection=YES
SkipTimezone=YES
Timezone=1200
TimeZoneName=GMT Standard Time
UILanguage=en-GB
UserLocale=en-US
Systemlocale=en-GB
KeyboardLocale=2057:00000409
BitsPerPel=32
VRefresh=60
XResolution=1
YResolution=1
TaskSequenceID=50
SkipTaskSequence=YES
JoinDomain=my.domain
DomainAdmin=administrator
DomainAdminDomain=my.domain
DomainAdminPassword=mysecurepassword
SkipUserData=YES
SkipComputerBackup=YES
SkipPackageDisplay=YES
SkipSummary=YES
SkipFinalSummary=YES

** Note: You MUST add the following to Bootstrap.ini as well:
SkipBDDWelcome=YES
DeployRoot=\\deploymentserver\DeploymentShare$
UserDomain=my.domain
UserID=admin
UserPassword=mypassword
** As a word of caution you should ensure that the accounts you use e.g. to connect to deployment share, join computer to domain etc. are limited as they are contained in plain-text in the bootstrap files. For example I limited access for my user to only the deployment share and delegated access permissions to allow the account to join computers to the domain. **

Because of this we will need to regenerate the boot image! So right-hand click the deployment share and click 'Update Deployment Share' **

We will now use these with WDS - so in the WDS console we go to: Right-hand click 'Boot Images' >> 'Add Boot Image...' >> Select the LiteTouchPE_x64.wim located within our MDT deployment share (it's in the 'boot' folder). We should also add an 'Install Image' - selecting the .wim / capture image we created earlier.
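If your WDS server is running Server 2012 or later the same can be scripted with the WDS PowerShell module - a rough sketch, assuming the share path from earlier and a hypothetical capture file of D:\Captures\Win7-Reference.wim:
# Hypothetical paths / image group - adjust to match your deployment share and captured WIM
Import-WdsBootImage -Path "\\mdtserver\DeploymentShare$\Boot\LiteTouchPE_x64.wim" -NewImageName "MDT LiteTouch (x64)"
Import-WdsInstallImage -Path "D:\Captures\Win7-Reference.wim" -ImageGroup "Windows 7"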

Now simply boot from the WDS server and the task sequence should automatically be started.

If you are using a Cisco router - something like the following would get you up and running for PXE boot:
ip dhcp pool vlanXX
next-server 1.2.3.4
option 67 ascii boot\x64\wdsnbp.com


Tuesday 22 December 2015

Converting in-guest iSCSI LUNs to VMWare native VMDK disks

To give some context for this tutorial: in these specific circumstances vSphere Essentials Plus was being used - hence Storage vMotion was not available.

** Note: For this technique to work you are required to have at least two ESXI hosts in your environment **

Firstly unmount the disks and then disconnect the targets from the Windows iSCSI Initiator tool.

Now in order to connect to iSCSI targets directly from the vSphere host we will need a VMKernel adapter associated with the relevant physical interface (i.e. the one connected directly to the storage network.)

So we go to ESXI Host >> Configuration >> Networking >> 'Add Networking...' >> Connection Type = VMKernel >> Select the relevant (existing) vSwitch that is attached to your storage network >> Set a network label e.g. VMKernel iSCSI >> And give it an IP address >> Finish wizard.

We should proceed by adding a new storage adapter for the ESXI host within the vSphere Client: ESXI Host >> Configuration >> Storage Adapters >> iSCSI Software Adapter >> Right hand click the adapter and select 'Properties' >> Network Configuration >> Add >> Select the VMKernel adapter we created earlier >> Hit OK >> Go to 'Dynamic Discovery' tab >> Add >> Enter the iSCSI server and port and close the window.

We can then right-hand click on the storage adapter again and hit 'Rescan'. If we then go to ESXI Host >> Configuration >> Storage >> View by 'Devices' and we should be presented the iSCSI target device.

Now we should go to our VM Settings >> Add New Disk >> Raw Device Mapping and select the iSCSI disk and ENSURE that the compatibility mode is set to Virtual otherwise the vMotion process will not work.

RDMs effectively proxy the block data from the physical disk to the virtual machine and hence the mapping files are in actuality only a few kilobytes in size - although this is not directly clear when looking at file sizes within the vSphere datastore file browser.

Now boot up the virtual machine and double check the disk is initialized / functioning correctly. We can then proceed by right-hand clicking on the virtual machine in the vSphere client >> Migrate >> 'Change host and datastore' etc. etc. and finish the migration wizard.

** Note: You might also have to enable vMotion on the ESXI host:
https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-64D11223-C6CF-4F64-88D3-436D17C9C93E.html **

Once the vMotion is complete we should now see that the RDM no longer exists and the VMDKs are now effectively holding all of the block data on the actual datastore.

Creating a bootable deployment ISO for Windows Deployment Services

We should firstly install WAIK  for Windows 7 and WAIK for Windows 7 SP1 Supplement on the server that has WDS currently installed:

https://www.microsoft.com/en-gb/download/details.aspx?id=5753
https://www.microsoft.com/en-us/download/details.aspx?id=5188

Now open the WDS snapin and right-hand click the relevant boot image and select 'Create Capture Wizard' >> and follow through the wizard ensuring that 'Enter the name of the Windows Deployment Services server that you want to respond when you boot...' is set to your WDS server!

Now open the 'Deployment Tools Command Prompt' from the start menu and then run something like the following (where 'D:\DiscoverBootImage' was where my discovery image resided):
CopyPE amd64 D:\DiscoverBootImage\WinPE
Now leave this prompt open.

Proceed by copying the discovery boot image into the WinPE folder - and RENAMING it to boot.wim

Now to create the bootable ISO we can issue something like:
oscdimg -n -bD:\DiscoverBootImage\Winpe\ISO\Boot\etfsboot.com D:\DiscoverBootImage\Winpe\ISO D:\DiscoverBootImage\boot.iso
Finally burn the ISO to CD / DVD and test it!

Thursday 17 December 2015

How to setup port forwarding with iptables / Netfilter (properly)

The first command tells the host that it is allowed to forward IPv4 packets (effectively turning it into a network router):
echo "1" > /proc/sys/net/ipv4/conf/ppp0/forwarding
echo "1" > /proc/sys/net/ipv4/conf/eth0/forwarding
or better yet ensure that the ip forwarding persists after reboot:
sudo vi /etc/sysctl.conf
and add / amend:
net.ipv4.ip_forward = 1
and to apply changes we should run:
sudo sysctl -p
Simple port forwarding: This is often applied when you have a service running on the local machine that uses an obscure port - for example Tomcat on tcp/8080 - you  might want to provide external access to the service from port 80 - so we would do something like:
sudo iptables -t filter -I INPUT 1 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
OR (to be more restrictive)
sudo iptables -t filter -I INPUT 1 -i eth0 -s 8.9.10.11/24 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
and then:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
The first command permits inbound traffic on our 'eth0' interface to tcp/80.

The second command instructs traffic bound for tcp/80 to be redirected (effectively DNAT'ing it) to port tcp/8080.

Advanced port forwarding: Sometimes you might have the need to forward traffic on a local port to an external port of another machine (note: 'REDIRECT' will not work in this scenario - it only works locally) - this could be achieved as follows:
sudo iptables -t filter -I INPUT 1 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
OR (to be more restrictive)
sudo iptables -t filter -I INPUT 1 -i eth0 -s 8.9.10.11/24 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
and then:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.11.12.13:8080
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
The first command again permits inbound traffic on 'eth0' to tcp/80.

The second command performs a DNAT on traffic hitting port 80 to 10.11.12.13 on tcp/8080.

* The masquerade action is used so that traffic can be passed back from the destination host to the requesting host - otherwise the traffic will be dropped. *

You can also use SNAT to achieve the same thing:
iptables -t nat -A POSTROUTING -d 66.77.88.99 -s 11.12.13.14 -o eth0 -j SNAT --to-source 192.168.0.100
(where 192.168.0.100 is the interface you wish to NAT the traffic to, 11.12.13.14 is the requesting client and 66.77.88.99 is the destination address that should match for SNAT to be performed - which is the remote server in our case.)

If you have a default drop-all rule in your forward chain you should also include something like:
sudo iptables -t filter -A FORWARD -i eth0 -s <destination-ip> -d <client-ip> -p tcp -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
sudo iptables -t filter -A FORWARD -i eth0 -s <client-ip> -d <destination-ip> -p tcp -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
Client IP = The IP address (or subnet) from which you want to use the port forwarding.
Destination IP = The end IP address you wish the client to hit after the port redirection process.

The first command permits the firewall to forward traffic from the destination ip (our Tomcat server) to the requesting client.

The second command permits the firewall to forward traffic from the client ip to the destination server.

Sometimes (although I strongly discourage it personally) you might want to put a blanket rule in to allow everything to be forwarded between interfaces on the host - this can be achieved with something like:
sudo iptables -t filter -A FORWARD -j ACCEPT
Finally to ensure they persist after a restart of the system we can use:
iptables-save > /etc/sysconfig/iptables

Controlling VPN traffic with VPN Filters on Cisco ASA

Typically (or by default rather) VPN traffic is NOT controlled by the normal access controls on the interfaces and is instead controlled by VPN filters.

They are fairly straightforward to apply - for example...

We firstly create an ACL:
access-list EU-VPN-FILTER permit ip 10.1.0.0 255.255.0.0 10.2.0.0 255.255.255.0
Then proceed by defining a group policy:
group-policy MYSITE internal
group-policy MYSITE attributes
  vpn-filter value EU-VPN-FILTER
And finally creating / amending the tunnel group so it uses the default policy we have created:
tunnel-group 176.177.178.179 general-attributes
  default-group-policy MYSITE

Wednesday 16 December 2015

The directory service is missing mandatory configuration information, and is unable to determine the ownership of floating single-master operation roles

The operation failed because: Active Directory Domain Services could not transfer the remaining data in directory partition DC=ForestDnsZones,DC=domain,DC=int to Active Directory Domain Controller \\dc01.domain.int.

"The directory service is missing mandatory configuration information, and is unable to determine the ownership of floating single-master operation roles."

I encountered this error while attempting to demote a Server 2008 server from a largely 2003 domain.

By reviewing the dcpromo log (found below):
%systemroot%\Debug\DCPROMO.LOG
The error indicates that the DC being demoted is unable to replicate changes back to the DC holding the infrastructure FSMO role. As dcdiag came back OK and replication appeared to be working fine, I ended up querying the Infrastructure master from the server in question and surprisingly it returned a DC that was no longer active / had been decommissioned a while back!
dsquery * CN=Infrastructure,DC=ForestDnsZones,DC=domain,DC=int -attr fSMORoleOwner
Although funnily enough running something like:
netdom query fsmo
would return the correct FSMO holder - so this looks like a bogus reference.
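The same bogus reference can also be pulled out with the ActiveDirectory PowerShell module (you may need to point -Server at a DC that actually hosts the partition) - a sketch using the naming context from above, with dc01.domain.int as a placeholder:
# dc01.domain.int is a placeholder - point this at a DC hosting the ForestDnsZones partition
Get-ADObject -Identity "CN=Infrastructure,DC=ForestDnsZones,DC=domain,DC=int" -Properties fSMORoleOwner -Server dc01.domain.int | Select-Object fSMORoleOwner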

So in order to resolve the problem we open up adsiedit and connect to the following naming context:

*Note* I would highly recommend taking a full backup of AD before editing anything with ADSI edit! *
CN=Infrastructure,DC=ForestDnsZones,DC=domain,DC=int
Right hand click on the new 'Infrastructure' node and hit 'Properties' >> Find the fSMORoleOwner attribute and change the value to match your actual DC that holds the FSMO role!

For example:
CN=NTDS Settings\0ADEL:64d1703f-1111-4323-1111-84604d6aa111,CN=BADDC\0ADEL:93585ae2-cb28-4f36-85c2-7b3fea8737bb,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=domain,DC=int
would become:
CN=NTDS Settings\0ADEL:64d1703f-1111-4323-1111-84604d6aa111,CN=GOODDC\0ADEL:93585ae2-cb28-4f36-85c2-7b3fea8737bb,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=domain,DC=int

Unfortunately I got the following error message when attempting to apply the changes:

"The role owner attribute could not be read"

Finally I stumbled across a script provided by Microsoft that will do this for us. Simply put you run it against a specific naming context and it will automatically choose a valid infrastructure master DC for you.

Save the following script:

'-------fixfsmo.vbs------------------
const ADS_NAME_INITTYPE_GC = 3
const ADS_NAME_TYPE_1779 = 1
const ADS_NAME_TYPE_CANONICAL = 2
set inArgs = WScript.Arguments
if (inArgs.Count = 1) then
    ' Assume the command line argument is the NDNC (in DN form) to use.
    NdncDN = inArgs(0)
Else
    Wscript.StdOut.Write "usage: cscript fixfsmo.vbs NdncDN"
End if
if (NdncDN <> "") then
    ' Convert the DN form of the NDNC into DNS dotted form.
    Set objTranslator = CreateObject("NameTranslate")
    objTranslator.Init ADS_NAME_INITTYPE_GC, ""
    objTranslator.Set ADS_NAME_TYPE_1779, NdncDN
    strDomainDNS = objTranslator.Get(ADS_NAME_TYPE_CANONICAL)
    strDomainDNS = Left(strDomainDNS, len(strDomainDNS)-1)
   
    Wscript.Echo "DNS name: " & strDomainDNS
    ' Find a domain controller that hosts this NDNC and that is online.
    set objRootDSE = GetObject("LDAP://" & strDomainDNS & "/RootDSE")
    strDnsHostName = objRootDSE.Get("dnsHostName")
    strDsServiceName = objRootDSE.Get("dsServiceName")
    Wscript.Echo "Using DC " & strDnsHostName
    ' Get the current infrastructure fsmo.
    strInfraDN = "CN=Infrastructure," & NdncDN
    set objInfra = GetObject("LDAP://" & strInfraDN)
    Wscript.Echo "infra fsmo is " & objInfra.fsmoroleowner
    ' If the current fsmo holder is deleted, set the fsmo holder to this domain controller.
    if (InStr(objInfra.fsmoroleowner, "\0ADEL:") > 0) then
        ' Set the fsmo holder to this domain controller.
        objInfra.Put "fSMORoleOwner",  strDsServiceName
        objInfra.SetInfo
        ' Read the fsmo holder back.
        set objInfra = GetObject("LDAP://" & strInfraDN)
        Wscript.Echo "infra fsmo changed to:" & objInfra.fsmoroleowner
    End if
End if
Now run the script as follows - so in our case we would run something like:

cscript fixfsmo.vbs DC=ForestDnsZones,DC=domain,DC=int
* Note: I also had to apply the above command on the DomainDnsZones partition too! *

You can then verify it has changed with something like:

dsquery * CN=Infrastructure,DC=ForestDnsZones,DC=domain,DC=int -attr fSMORoleOwner

And finally attempt the demotion again! (You might also want to ensure that replication is up to date on all DCs first!)

Wednesday 2 December 2015

Trusting a self-signed certificate on Debian Jessie

I came across a number of how-to's on this subject although the vast majority were not that accurate for Debian Jessie.

So we have a scenario where we have a local application on our machine that  uses a self-signed certificate that we would like to trust.

Firstly download the ca-certificates package:
apt-get install ca-certificates
Proceed by obtaining our certificate we would like to import e.g.:
openssl s_client -connect mywebsite.com:443
Extract the certificate from the output (the block between the 'BEGIN CERTIFICATE' and 'END CERTIFICATE' markers) and place it in a file called something like:

mydomain.crt

** Important: You MUST ensure that the file extension is '.crt' or the ca-certificates tool will not pickup the certificate! **
sudo cp mydomain.crt /usr/local/share/ca-certificates
or
sudo cp mydomain.crt /usr/share/ca-certificates

Then simply run the following command to update the certificate store:
 sudo update-ca-certificates
And if (or when) you wish to remove the certificate we can issue the following command:
 dpkg-reconfigure ca-certificates

Script to identify and drop all orphaned logins for SQL Server 2008+

The script can be found below (credit to: https://www.mssqltips.com):

Use master
Go
Create Table #Orphans
(
RowID int not null primary key identity(1,1) ,
TDBName varchar (100),
UserName varchar (100),
UserSid varbinary(85)
)
SET NOCOUNT ON
DECLARE @DBName sysname, @Qry nvarchar(4000)
SET @Qry = ''
SET @DBName = ''
WHILE @DBName IS NOT NULL
BEGIN
SET @DBName =
(
SELECT MIN(name)
FROM master..sysdatabases
WHERE
/** to exclude named databases add them to the Not In clause **/
name NOT IN
(
'model', 'msdb',
'distribution'
) And
DATABASEPROPERTY(name, 'IsOffline') = 0
AND DATABASEPROPERTY(name, 'IsSuspect') = 0
AND name > @DBName
)
IF @DBName IS NULL BREAK
Set @Qry = 'select ''' + @DBName + ''' as DBName, name AS UserName,
sid AS UserSID from [' + @DBName + ']..sysusers
where issqluser = 1 and (sid is not null and sid <> 0x0)
and suser_sname(sid) is null order by name'
Insert into #Orphans Exec (@Qry)
End
Select * from #Orphans
/** To drop orphans uncomment this section
Declare @SQL as varchar (200)
Declare @DDBName varchar (100)
Declare @Orphanname varchar (100)
Declare @DBSysSchema varchar (100)
Declare @From int
Declare @To int
Select @From = 0, @To = @@ROWCOUNT
from #Orphans
--Print @From
--Print @To
While @From < @To
Begin
Set @From = @From + 1
Select @DDBName = TDBName, @Orphanname = UserName from #Orphans
Where RowID = @From
Set @DBSysSchema = '[' + @DDBName + ']' + '.[sys].[schemas]'
print @DBsysSchema
Print @DDBname
Print @Orphanname
set @SQL = 'If Exists (Select * from ' + @DBSysSchema
+ ' where name = ''' + @Orphanname + ''')
Begin
Use ' + @DDBName
+ ' Drop Schema [' + @Orphanname + ']
End'
print @SQL
Exec (@SQL)
Begin Try
Set @SQL = 'Use ' + @DDBName
+ ' Drop User [' + @Orphanname + ']'
Exec (@SQL)
End Try
Begin Catch
End Catch
End
**/
Drop table #Orphans

Tuesday 1 December 2015

Creating and applying retention tags for mailboxes / mailbox items

** Note: You must have set up in-place archiving before the below will take effect **

Using both a retention policy and tags we have the ability to allocate individual retention periods for objects (i.e. folders) within a mailbox.

For example if we wished to create a root level retention policy on a mailbox so all mail items would be retained for 30 days - but we have a folder which holds archived email (let's call it 'Archived Items') - we could assign it a retention period of 365 days.

We would typically use retention tags to set a retention period for a default folder (e.g. the Deleted Items folder):
New-RetentionPolicyTag "Corp-Exec-DeletedItems" -Type DeletedItems -Comment "Deleted Items are purged after 30 days" -RetentionEnabled $true -AgeLimitForRetention 30 -RetentionAction PermanentlyDelete
We would then proceed by creating a retention policy and attaching the relevant retention tags:
New-RetentionPolicy "Auditors Policy" -RetentionPolicyTagLinks "General Business","Legal","Sales"
We would then assign the new retention policy to a user:

** There is a default retention policy which will be applied to users who enable in-place archiving and are not assigned a specific policy - you can view it with:
Get-RetentionPolicy -Identity "Default Archive and Retention Policy"
We can then assign this to a specific user with:
Set-Mailbox -Identity "user@domain.com" -RetentionPolicy "My Retention Policy"
Although in order to use a custom (user-created) folder (i.e. not the standard Inbox, Sent Items, Deleted Items etc.) we can configure AutoArchive settings in the Outlook client - by clicking on the desired folder >> selecting the 'FOLDER' tab >> 'AutoArchive Settings' >> selecting 'Archive this folder using these settings' and choosing the desired timeframe and action.

Monday 23 November 2015

Determining the cause of an ESXI host power failure / restart

Firstly ensure that there are no warning / error lights on the physical host.

Check the event log for the specific ESXI host by going to:

Host >> Tasks and Events >> Tasks

We should then proceed by enabling SSH from the vSphere Client:

Host >> Configuration >> Security Profile >> Services >> Properties and enable SSH.

SSH into the host and run:
cat /var/log/vmksummary.log
You should typically see a regular heart-beat message - although around the time in question we encountered the following event:
2013-01-01T12:30:04Z bootstop: Host has booted
To determine if it was a deliberate reboot we should check for the following line:
localhost vmkhalt: (1268148282) Rebooting system...
* This line would indicate that the boot was initiated by a user.

We should also be able to tell whether it was initiated by a user from the vCenter logs accessible via the vSphere client:

vCenter (Root node) >> Tasks and Events.

Since this line was absent from the vmksummary.log log file it appears that there might have been a power failure at this point.

Wednesday 18 November 2015

TCP / UDP Ports Required for Active Directory in an off-premise environment like AWS or Azure

Below are the required ports to get a new domain controller (Server 2008 and above) up and running:

TCP 389
UDP 389
TCP 636
TCP 3268
TCP 3269
TCP 88
UDP 88
TCP 53
UDP 53
TCP 445
UDP 445
TCP 25
TCP 135
TCP 5722
UDP 123
TCP 464
UDP 464
UDP 138
TCP 9389
UDP 67
UDP 2535
UDP 137
TCP 139

Dynamic Ports:

TCP 49152-65535
UDP 49152-65535
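To quickly sanity-check that the key TCP ports are reachable from the remote site, something like the following can be run from a Windows 8.1 / Server 2012 R2 (or later) machine - a rough sketch assuming a DC called dc01.my.domain (Test-NetConnection only covers the TCP ports above):
# dc01.my.domain is a placeholder for your new DC; only the TCP ports are tested here
53,88,135,389,445,464,636,3268,3269,9389 | ForEach-Object { Test-NetConnection -ComputerName dc01.my.domain -Port $_ | Select-Object RemotePort, TcpTestSucceeded }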

Manually configuring DC replication with Active Directory

Firstly we should ensure that all firewall ports are as they should be if the replication will be between two different sites. So we go to Sites and Services >> Select our site >> Select our server >> Right-hand click on NTDS Settings >> 'New Active Directory Connection' and select the DC you wish to replicate to.

We then proceed to open up the newly created connection and on the General tab ensure that 'IP' for transport is selected and that the relevant naming contexts are being replicated.

We can then do a repadmin /syncall on the target host to ensure that replication finishes correctly.
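If your DCs are running Server 2012 or later you can also check the replication state with the ActiveDirectory PowerShell module - a sketch assuming the target DC is called dc02:
# dc02 is a placeholder for the target DC
Get-ADReplicationPartnerMetadata -Target dc02 | Select-Object Partner, LastReplicationSuccess, LastReplicationResult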

Forcing replication of the SYSVOL share

The other day I identified a newly installed domain controller that had not created the SYSVOL share - in order to initiate this I did the following:

Open regedit and go to the following key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters and set the value of 'SysvolReady' to 0 and then set it again to '1'.
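The same registry flip can be scripted with PowerShell if you prefer - a minimal sketch:
# Toggle SysvolReady from 0 back to 1 to make Netlogon re-share SYSVOL
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" -Name SysvolReady -Value 0
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" -Name SysvolReady -Value 1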

Failure to replicate the SYSVOL folder will cause some pretty serious problems with features such as group policy and the like.

To perform a manual / force replication we run:
ntfrsutl.exe forcerepl "sourcedc" /r "Domain System Volume" /p targetdc
** The "Domain System Volume" is AKA "SYSVOL" **

If you wanted to replicate a specific folder you would run something like:
ntfrsutl.exe forcerepl "sourcedc" /r "my.domain\mysharedfolder" /p targetdc

Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set

The other day I came across an error while troubleshooting a problem I had from a run of dcdiag:

Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have
   Replicating Directory Changes In Filtered Set
access rights for the naming context:
DC=ForestDnsZones,DC=my,DC=domain
Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have
   Replicating Directory Changes In Filtered Set
access rights for the naming context:
DC=DomainDnsZones,DC=my,DC=domain

This indicates a permission problem with the ENTERPRISE DOMAIN CONTROLLERS security group and its ability to replicate directory changes in a filtered set.

 To resolve this issue we go to adsiedit on our PDC >> Action >> "Connect to..." >> "Select a type or a Distinguished Name or Naming Context" and enter (replacing the obvious):
DC=ForestDnsZones,DC=my,DC=domain
Expand the new tree node and right hand-click on "DC=ForestDnsZones,DC=my,DC=domain" >> Properties >> Security

and identify the security group "ENTERPRISE DOMAIN CONTROLLERS" and ensure that the "Replicating Directory Changes In Filtered Set" is ticked / set to allowed.

We should then do exactly the same for the "DC=DomainDnsZones,DC=my,DC=domain" partition.

Ensure dcdiag now returns OK.

We then proceed by going onto the DC with the permission issues and syncing the changes while specifying the source server as our PDC:
repadmin /syncall myPDC /APed

Tuesday 17 November 2015

Using a vmdk / virtual disk from a VMWare Workstation / Player product in ESXI

I attempted to use a vmdk from a virtual machine hosted on a PC running VMWare Player with an ESXI instance by simply copying the vmdk over SFTP directly to the ESXI datastore and then attaching the disk to a newly created VM on the ESXI host.

Although unfortunately it wasn't as simple as that - when attempting to power on the VM I received the following error message in the vSphere client:

"An unexpected error was received from the ESX host while powering on VM XXXXX. Reason: Failed to lock the file. Cannot open the disk '/vmfs/volumes/11fed2c5-81a6f17c-558h-553f/VM01/DISK01.vmdk' or one of the snapshot disks it depends on."

I learnt that the vmdk files that ESXI uses are slightly different from the ones used with the Workstation/Player products and hence have to be converted.

Fortunately VMWare make this pretty easy to do - simply log in to the ESXI host, cd to the directory containing your VMDK and run the following command:
vmkfstools -i <original-vmdk> <vmdk-for-esxi>

Monday 16 November 2015

Introducing the first Windows Server 2008 R2 DC into a Server 2003 domain.

1. Firstly ensure that all DCs are 2003 and decommission any older versions e.g. NT 4.0, 2000 etc.

2. Raise the domain functional level to 'Windows Server 2003' by going to 'AD Domains and Trusts' MMC snapin and right-hand clicking the domain node and select "Raise Domain Functional Level..."

3. Find out which DC holds the schema and infrastructure FSMO roles:

http://blog.manton.im/2015/02/how-to-query-and-move-fsmo-roles-with.html

4. Ensure that there are no outstanding issues with the domain / forest with dcdiag e.g.:

dcdiag /v

and ensure that replication is happening successfully with:

repadmin /showrepl /all /verbose

5. Run the adprep tool on the DC holding the above two FSMO roles - the adprep tool can be found within the 'support\adprep' folder in the root of the Server 2008 R2 disc.

There are two versions - adprep.exe (for 64-bit OSs) and adprep32.exe (for 32-bit OSs).

** NOTE: You should ensure that the user context launching the adprep tool is a member of the 'Schema Admins', 'Enterprise Admins' and 'Domain Admins' security groups in AD **

*** WARNING: Before performing something like this it is important that (if possible) you test the change in a similar, ideally mirrored, development environment before making changes to the schema - or at least make a backup of AD first! ***

So we shall copy the adprep folder directly onto the Server 2003 host, log in with the user who holds the schema admin privileges and run the following:

adprep32 /forestprep

or

adprep /forestprep

We can now OPTIONALLY run the 'adprep32 /rodcprep' statement that will prepare the domain / forest for read-only DCs (a feature introduced in Server 2008) with:

adprep32 /rodcprep

And then proceed by preparing the domain with:

adprep32 /domainprep /gpprep

Once this has completed we can then promote our Windows Server 2008 DC's successfully!

Friday 13 November 2015

Delete specific email meesage from a server / mailbox database with Exchange shell

We should firstly ensure that the user has the necessary permissions by assigning their security group the relevant role:
New-ManagementRoleAssignment -Name "Import Export Mailbox Admins" -SecurityGroup "*SecurityGroupName*" -Role "Mailbox Import Export"
To find an email sent by a user to a number of users on a specific date / with a specific subject we can use:
Get-Mailbox -Server  ExchangeServer | Search-Mailbox -SearchQuery 'Subject:"*My Subject*" AND From:"Joe Bloggs" AND Sent:"11/13/2015"' -targetfolder "Inbox" -targetMailbox "Admin Email" -logonly -loglevel full > C:\temp\results.txt
* The 'targetMailbox' parameter simply states where the results will be sent to.

Once you have verified from the output that the relevant (and only relevant!) mail items are there we can then use the '-DeleteContent' switch to delete the messages:

** Note: Within the output you should be looking for nodes with something like 'ResultItemsCount' : >0 **
Get-Mailbox -Server  ExchangeServer | Search-Mailbox -SearchQuery 'Subject:"*My Subject*" AND From:"Joe Bloggs" AND Sent:"11/13/2015"' -targetfolder "Inbox" -loglevel full -deletecontent > C:\temp\results.txt
OR alternatively we can leave the 'targetMailbox' switch in which will save all deleted messages to a mailbox before deleting them:
Get-Mailbox -Server  ExchangeServer | Search-Mailbox -SearchQuery 'Subject:"*My Subject*" AND From:"Joe Bloggs" AND Sent:"11/13/2015"' -targetfolder "Inbox" -targetMailbox "Admin Email" -loglevel full -deletecontent > C:\temp\results.txt

Wednesday 11 November 2015

Moving and removing public folder database replication with Exchange 2010

If you have recently upgraded from an earlier version of Exchange to Exchange 2010 and you have now decided to decommission the older version of Exchange, you might be required to move all of your existing public folders to the newer server.

We should firstly add our Exchange 2010 server as a replica to ensure the migration goes smoothly by making use of the following script:
.\AddReplicaToPFRecursive.ps1 -TopPublicFolder "\" -ServerToAdd "Exchange 2010 Server"
and also ensuring the 'SYSTEM' public folders are added as well:
.\AddReplicaToPFRecursive.ps1 -TopPublicFolder "\NON_IPM_SUBTREE" -ServerToAdd "Exchange 2010 Server"
We should ensure, after adding the replica server, that all of the hierarchy and content has been replicated and is up to date with:
Update-PublicFolderHierarchy -Server "<Exchange 2010 Server>"
Then confirm that the appropriate public folders are listed as being replicated to the new Exchange 2010 server:

Get-PublicFolder -recurse \ | fl name, replicas

and

Get-PublicFolder -recurse \non_ipm_subtree | fl name, replicas

We can then run the following script with Exchange 2010 to move all public folders to the 2010 instance:
.\MoveAllReplicas.ps1 -Server Exchange2003 -NewServer Exchange2010

* Note: Do not include the source server if it is Exchange 2003, as the script will throw a tantrum complaining: "Server 'Exchange 2003' is running Microsoft Exchange 2003 or earlier." *

Tuesday 10 November 2015

Checking for bad sectors with badblocks and fsck

Badblocks is a Linux utility that scans storage media for bad blocks.

It can be operated in several modes:

- Destructive mode: Where block data will be wiped, as each sector is overwritten by random data and read. This mode is potentially very dangerous and should typically be only applied on disks that are brand new or you are not worried about losing the data on them!

- Non-destructive mode: Where block data is checked, although rather than overwriting the original block data (effectively wiping it) the block data is firstly backed up. This mode is useful if you have data on the disks you are testing which you don't want to lose - although it takes slightly longer than the destructive mode.

!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*
READ CAREFULLY!
!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*


To perform a DESTRUCTIVE block data test we can issue the following:
badblocks -wsv -t random /dev/<device>
!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*
READ CAREFULLY!
!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*


We can perform a NON-DESTRUCTIVE test with the following:
badblocks -nsv /dev/<device>
(where the 'n' indicates it's a non-destructive test)

We can also tell our filesystem not to include any badblocks with the fsck utility:
fsck -vcck /dev/<device-PARTITION>
This command ends up calling 'badblocks', although it will perform a non-destructive read-write test as we have included the '-cc' option.

Friday 6 November 2015

Performing an off-site backup with AWS using Veeam Backup and Replication

It is now possible to backup from Veeam to AWS through the use of the AWS Storage Gateway service.

To explain how it works: AWS Storage Gateway allows you to create a Virtual Tape Library Gateway - which, simply speaking, is a way of creating a virtual tape library in the cloud that can hook up to other AWS services such as S3 and Glacier.

You are required to download the AWS Storage Gateway virtual appliance to act as an intermediary that effectively proxies the data between the Veeam server and AWS. In order to hook Veeam up to the virtual appliance you are required to install specific tape drive drivers that emulate a physical tape drive - but actually hook into the AWS Storage Gateway appliance.

Please refer to the below article for help deploying / provisioning the Storage Gateway appliance:

http://docs.aws.amazon.com/storagegateway/latest/userguide/GettingStartedSetupVMware-common.html

and the following article on how to install the appropriate drivers for your backup solution:

http://docs.aws.amazon.com/storagegateway/latest/userguide/resource_vtl-devices.html#update-vtl-device-driver

Create the relevant tape drives and select your cache and upload buffer disks:

http://docs.aws.amazon.com/storagegateway/latest/userguide/GettingStartedCreateTapes.html

Configure the Veeam server's drivers / iSCSI settings:

http://docs.aws.amazon.com/storagegateway/latest/userguide/GettingStartedAccessTapesVTL.html

Veeam makes use of the STK-L700 device - which I had issues getting to work under Server 2008 - the medium changer showed up as 'Unknown Medium Changer' - so I ended up downloading the xxxxxxx driver from the Microsoft Update Catalog via IE:

http://catalog.update.microsoft.com/v7/site/Home.aspx

The driver is called: StorageTek - Storage - Sun/StorageTek Library

I then re-scanned the tape server and only then I was able to right-hand click on the medium changer (now called STK L700 0103) and click 'Import Tapes'.


For convenience you can download the x64 version of the driver here (labelled as supporting Server 2003 and 2008 R2)

and finally on how to configure Veeam:

http://docs.aws.amazon.com/storagegateway/latest/userguide/backup-Veeam.html#veeam-configure-software

Thursday 5 November 2015

How to enable an AD security group for use with Exchange

By default, when you want to use an AD security group within Exchange - let's say for example within a transport rule - you will notice that it is not available.

So in order to make the security groups accessible we need to 'mail-enable' them - AKA mail-enabled security groups. In order to do this we should firstly ensure that the security group's scope is 'Universal', NOT 'Global' (as it is by default).

We can then proceed to go to the Exchange Management Console >> Recipient Configuration >> right-hand click 'Distribution Group' >> New Distribution Group >> Select 'Existing Group' >> next ensure 'Security' is selected for the group type and select the relevant security group.
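For reference, the same can be done from the shell (the scope change needs the ActiveDirectory module and the mail-enabling is done from the Exchange Management Shell) - a rough sketch assuming a group called 'My Security Group':
# 'My Security Group' is a placeholder - set the scope to Universal, then mail-enable the group
Set-ADGroup -Identity "My Security Group" -GroupScope Universal
Enable-DistributionGroup -Identity "My Security Group"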

You should now be able to specify your mail enabled security group within Exchange e.g. when creating transport rules.

Wednesday 4 November 2015

Deleting old backups from Windows Backup Sets

Although you can do this via the Control Panel >> Windows Backups - if you are using a third-party product that utilizes the Windows Backup engine you will need to use the wbadmin tool.

I was recently required to clear out several older windows backups to free some space on the disk.

Firstly we can  view all backups within a backup set with something like:
wbadmin get versions -backupTarget:"B:\"
(where B: is the root of the backup.)

We can use the vssadmin tool to list all of our VSS backups with:
vssadmin list shadows /for=b:
and delete the oldest VSS backup using diskshadow in interactive mode:
diskshadow
delete shadows oldest b:

Monday 2 November 2015

Exchange Routing Groups

Routing groups are used to provide communication between two Exchange servers - typically between two different versions of Exchange e.g. Exchange 2010 and Exchange 2003.

The two servers that form the source and destination of the routing are referred to as 'bridgehead servers'.

In order to view information about current routing groups we can use:
Get-RoutingGroupConnector | FL
Routing group connectors are unidirectional routes between two bridgehead servers, i.e. a separate routing group connector has to be defined for incoming and outgoing mail.

In the event of a post-migration clean-up, the task of removing the old Exchange server would require removing any redundant routing group connectors with something like:
Remove-RoutingGroupConnector "My Routing Group"
Just like send connectors routing group connectors also have a cost associated with them allowing you to specify preferred routes.

Monday 26 October 2015

Installing the missing Intel I217-V driver on Debian

This driver was not included as part of a readily available package within Debian and so needed to be installed from Intel's website:

https://downloadcenter.intel.com/download/15817/Network-Adapter-Driver-for-PCI-E-Gigabit-Network-Connections-under-Linux-

Firstly ensure we have the appropriate kernel headers with:
sudo apt-get install linux-headers-$(uname -r)
So we simply unzip the gzip file:
tar zxvf e1000e-3.2.*

cd e1000e-3.2.4.2

make install 
and then activate the module with:
sudo modprobe e1000e
 Finally confirm the module is installed / in use with lsmod:
lsmod | grep e1000e

log_reuse_wait: 'Replication' status appearing

After being unable to shrink a specific database log file to zero (or something near that) I became slightly puzzled as to why a 30GB log file would only truncate to about 15GB - after some research I found out that sometimes after replication has been turned on and then stopped, the log_reuse_wait value has not changed back to its default and is still set at '6' - which tells us that the database withholds some transactions in the log for use with replication.

I ran the following command to retrieve log_reuse_wait information for each database:
SELECT name, log_reuse_wait_desc FROM sys.databases

As replication was not currently turned on (although it was at one point) it looks like the termination of the replication went wrong somewhere, and so we need to attempt to remove the 'Replication' status:
EXEC sp_removedbreplication my_database;
and then run the following command again to verify the log_reuse_wait is back at '0':
SELECT name, log_reuse_wait_desc FROM sys.databases

Monday 19 October 2015

Microsoft Exchange: The properties on this object have invalid data.

Firstly we need to identify what is causing the issue - so we can review the distribution group with something like:
Get-DistributionGroup "Testing(123)" | FL
and ensure there are no invalid objects within that group with something like:
Get-DistributionGroupMember "Testing(123)" | FL
In my event after issuing the first command I was presented with the following warning at the bottom of the output:

WARNING: The object mydomain.int/Users/Alerts has been corrupted, and it's in an inconsistent state. The following validation errors happened: WARNING: Property expression "Testing(123456)" isn't valid. Valid values are: Strings formed with characters from A to Z (uppercase or lowercase), digits from 0 to 9, !, #, $, %, &, ', *, +, -, /, =, ?, ^, _, `, {, |, } or ~. One or more periods may be embedded in an alias, but each period should be preceded and followed by at least one of the other characters. Unicode characters from U+00A1 to U+00FF are also valid in an alias, but they will be mapped to a best-fit US-ASCII string in the e-mail address, which is generated from such an alias.
It looks like the object had not been upgraded and was originally created on an older version that accepted parentheses in alias names, although Exchange 2010 did not like this.

So to resolve the issue I simply reset the alias with something like:
Set-DistributionGroup "Testing(123)" -Alias "Testing123456"

Thursday 15 October 2015

How to shrink / truncate a database log file within SQL Server

Firstly ensure that the database recovery model is set to 'Simple':

Right hand click on the database >> Properties >> Options >> Recovery Mode = Simple.

Then right-hand click on the database again and select 'Tasks' >> 'Shrink' >> Files - from here you should ensure that the file type is set to 'Log' and the shrink action 'Reorganize pages before releasing unused space' is selected, enter a value in MB to shrink the log file to and finally hit OK.

Change the recovery model back to 'Full' (if applicable) and take a full backup of the database.

You can also do all of this via commands as follows:
 ALTER DATABASE MYDATABASE SET RECOVERY SIMPLE

 DBCC SHRINKFILE (MYDATABASE_Log, 1)

 ALTER DATABASE MYDATABASE SET RECOVERY FULL

Checking / repairing a database or table for consistency errors / corruption with MSSQL

If you encounter consistency errors such as:

The Database ID 5, Page (1:4835927), slot 7 for LOB data type node does not exist. This is usually caused by transactions that can read uncommitted data on a data page. Run DBCC CHECKTABLE.

We should firstly identify the database with ID 5 by running:
SELECT DB_NAME(5) AS Database_Name;
(where '5' is the database ID in question.)
We can then run a consistency check against the database itself with:
DBCC CHECKDB ('MYDATABASE') WITH ALL_ERRORMSGS,NO_INFOMSGS
We can also check an individual table with:
USE MYDATABASE
DBCC CHECKTABLE ('YourTable') WITH ALL_ERRORMSGS,NO_INFOMSGS
If you are not lucky enough to have an available backup to hand you can attempt to repair the database with the DBCC CHECKDB command.

Although before we perform this you should ensure that the database is put into single user mode - along with either the 'ROLLBACK IMMEDIATE' switch, which will roll back any user transactions immediately so that the DB can drop into single user mode (i.e. all user connections have disconnected), or the 'WITH ROLLBACK AFTER 30' option which allows the users a number of seconds (30 in this case) to complete their transactions.
ALTER DATABASE MYDATABASE SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
BEGIN TRANSACTION;
or
ALTER DATABASE MYDATABASE SET SINGLE_USER WITH ROLLBACK AFTER 30;
BEGIN TRANSACTION;
Now there are two repair levels: 'REPAIR_REBUILD' which is used when the corruption is confined to non-clustered indexes (which hold redundant data and hence repairing them causes no data loss) and 'REPAIR_ALLOW_DATA_LOSS' which can potentially cause data loss.

To repair a single table we can use:
USE MYDATABASE
DBCC CHECKTABLE ('MYTABLE', REPAIR_ALLOW_DATA_LOSS);
or to attempt to repair the whole database:
DBCC CHECKDB ('MYDATABASE', REPAIR_ALLOW_DATA_LOSS);
And finally change the database back to multi-user mode with:
ALTER DATABASE MYDATABASE SET MULTI_USER;
BEGIN TRANSACTION

Tuesday 13 October 2015

Re-building database log files that have been corrupted for MSSQL

Firstly check the database for corruption - but bear in mind that DBCC CHECKDB WITH ALL_ERRORMSGS,NO_INFOMSGS will NOT check the logs and hence will not identify any corruption in the logs!

You should also check the event log for any hardware or MSSQL related errors such as:
"Description: Executing the query "BACKUP LOG [PCDB_PROD] TO  DISK = E'\\MyDatabase2..." failed with the following error: "BACKUP detected corruption in the database log. Check the errorlog for more information."
and you could also run chkdsk to identify any file system related errors.

The easiest way to remedy this situation is to firstly ensure you have a FULL backup of the database.

There are two methods to remove the corrupt logs:

Firstly run a DBCC CHECKDB WITH ALL_ERRORMSGS,NO_INFOMSGS on the database to look for any errors.

Method 1 (which will incur downtime!) is as follows:

*** You should also ensure there are no transactions pending on the database by using the activity monitor on the SQL server - as if there are, the operation of setting the database into emergency mode will hang! ***

Put the database into emergency mode:

ALTER DATABASE <dbname> SET EMERGENCY, SINGLE_USER
Proceed by re-building the log file as follows:


 ALTER DATABASE <dbname> REBUILD LOG ON (NAME=<logical file name>, FILENAME='<full path to new file>')

 Now we can simply bring the database back online, putting it into multi-user mode:

ALTER DATABASE MYDATABASE SET ONLINE;
ALTER DATABASE MYDATABASE SET MULTI_USER;

Method 2: requires database downtime and is as follows:

*** You should also be aware that any transactions occurring between the backup above and detaching the database below will be lost! ***

Proceed by detaching the database and then renaming (or deleting) the log file associated with the database.

We should then attach the database to the SQL server and re-build the logs with:
CREATE DATABASE [mydatabase]
ON (FILENAME = 'E:\Path\to\database.mdf')
FOR ATTACH_REBUILD_LOG;
Now ensure that the recovery model is set to the appropriate mode!

and then run an integrity check on the database as follows:
DBCC CHECKDB WITH ALL_ERRORMSGS,NO_INFOMSGS
Finally make a full backup of the database.

Sources

http://blogs.msdn.com/b/glsmall/archive/2013/11/14/recovering-from-log-file-corruption.aspx
http://blogs.msdn.com/b/suhde/archive/2009/07/11/database-corruption-part-5-dealing-with-log-file-corruption.aspx


Monday 5 October 2015

Checking VM and datastore performance issues with ESXTOP

SSH into the ESXI host and launch ESXTOP:

esxtop

Hit 'v' on the keyboard to display the VM view.

Now hit 'f' and ensure that the 'F', 'G' and 'H' are selected so that the latency stats are displayed.

Hit enter and review LAT/rd and LAT/wr stats.

As a rough baseline typically anything above 20ms is considered poor performance, anything below this should be acceptable in most cases.

You can also view the overall performance of a whole datastore by pressing the 'u' key (ensuring the appropriate latency fields are included).

Again, as a guideline, anything above 20ms should be considered poor performance.

Thursday 10 September 2015

Throttling a VM's IOPS - vSphere 6

By default there is no disk I/O throttling setup within vSphere - in order to get an idea of how many IOPS the machine is hitting we should firstly use esxtop to provide the information:

SSH into the ESXI host >> run the 'esxtop' command >> press the 'v' key to go into the VM view.

You can then confirm the IOPS by observing the CMDS/s column.

Depending on your disk setup you could also make use of an IOPS calculator to give you an estimate of what kind of IOPS you should be expecting:

http://www.thecloudcalculator.com/calculators/disk-raid-and-iops.html

Once we have a figure in mind we should proceed to the vSphere Web Client >> VMs >> right-hand click on the VM in question and select 'Edit Settings' >> expand the relevant virtual hard drive(s) and enter your desired figure in the 'Limit - IOPs' textbox.
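
If you would rather script this, something along the following lines should work with PowerCLI (a rough sketch - the VM name, disk name and limit value are placeholders):

# Connect-VIServer myvcenter.my.domain   # connect to vCenter first
$vm = Get-VM -Name "MyVM"
$disk = Get-HardDisk -VM $vm -Name "Hard disk 1"
# Apply an IOPS limit of 500 to the selected disk
Get-VMResourceConfiguration -VM $vm | Set-VMResourceConfiguration -Disk $disk -DiskLimitIOPerSecond 500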

Wednesday 9 September 2015

How to check the queue depth of a storage controller using ESXTOP

 From http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1027901

To identify the storage adapter queue depth:
  1. Run the esxtop command in the service console of the ESX host or the ESXi shell (Tech Support mode). For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910) or Tech Support Mode for Emergency Support (1003677) .
  2. Press d.
  3. Press f and select Queue Stats.
  4. The value listed under AQLEN is the queue depth of the storage adapter. This is the maximum number of ESX VMKernel active commands that the adapter driver is configured to support.
To identify the storage device queue depth:
  1. Run the esxtop command in the service console of the ESX host or the ESXi shell (Tech Support mode). For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.0 (1017910) or Tech Support Mode for Emergency Support (1003677).
  2. Press u.
  3. Press f and select Queue Stats.
  4. The value listed under DQLEN is the queue depth of the storage device. This is the maximum number of ESX VMKernel active commands that the device is configured to support.

My collection of useful Powershell commands

The below is (or rather will be) a compilation of PowerShell commands that have come in handy for me during day-to-day operations.

Ensure all users in a specific OU have their accounts set to "Password Never Expires":
Get-AdUser -Filter * -SearchBase "OU=myou,DC=my,DC=domain" -Server dc01 | Set-ADUser -PasswordNeverExpires $true -Credential administrator

Wednesday 2 September 2015

Error: A binding for this client already exists.

When attempting to add a reservation into a Cisco 1841 I encountered the following error message:

A binding for this client already exists.

when running:

client-identifier 0100.0111.001e.48
To resolve this, firstly identify which IP address the MAC address is associated with - this can be performed with:

show ip dhcp binding
Identify the associated IP and then simply run the following command to remove the binding:
clear ip dhcp binding <ip-address>
You should now be able to run the client-identifier command again.
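
For reference, a typical reservation in IOS looks something like the below (the pool name, address and mask are examples only):

ip dhcp pool RESERVED-PC01
 host 10.0.0.50 255.255.255.0
 client-identifier 0100.0111.001e.48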

Setting up a reverse proxy for several websites with IIS and TMG 2010

Firstly launch the Forefront TMG Console, go to the "Firewall Policy" node and select "Publish Web Sites" in the 'Tasks' navigation pane.

In the wizard give the rule a name, something like "Reverse Proxy" >> Next >> Allow >> Select "Publish multiple Web sites" >> Add your desired site.

The wizard will also ask you to create a new 'Listener' - assign or add an additional IP address to your 'External' adapter.

Make the appropriate DNS entries in your DNS system and proceed by going to IIS and setting up our reverse proxy:

We will need to download the 'URL Rewrite' module for IIS (supported on IIS 7.0+) from the following URL:

http://www.iis.net/downloads/microsoft/url-rewrite

and also the 'Application Request Routing' extension available from:

http://www.iis.net/downloads/microsoft/application-request-routing

Once installed go to IIS Manager >> Server >> 'Application Request Routing' >> 'Server Proxy Settings...' >> Ensure 'Enable Proxy' is ticked.

Then, within IIS Manager, create a new website >> URL Rewrite >> Reverse Proxy and enter the relevant information.
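
For reference, the reverse proxy wizard essentially drops a rewrite rule into the site's web.config along these lines (the rule name and internal host below are just examples):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="ReverseProxyInboundRule1" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://internal-server/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>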

Finally restart your site and test.

Tuesday 1 September 2015

Performing a test DR recovery of a virtual machine with vSphere Replication

Unless you are using vSphere Replication with Site Recovery Manager you will not be able to perform a 'test recovery' (i.e. recover the VM while the source site/VM is still active/online). The process you could follow in case you would like to keep your primary site online is as follows:

1. Perform Recovery using the second option (Recover with latest available data). In this way you would not need to power off source VMs and your primary site will be online.

2. After recovery is complete, stop the replications (which will be in Recovered state)

3. Power off recovered VMs, unregister them from VC inventory, but keep the disk files intact.

4. Manually configure all replications using the disks that were left over as initial seeds. This will cause only changes to be synced.


Source: https://communities.vmware.com/message/2409693

Friday 28 August 2015

Using the Active Directory Powershell module with Windows Server 2003

In order to perform this you will need to firstly install Powershell 2.0 on the server 2003 instance:

http://www.microsoft.com/en-us/download/details.aspx?id=4045

Unfortunately we have to issue our PowerShell commands from a Windows 7 (or Server 2008 R2+) machine if we wish to use the 'ActiveDirectory' module.

Ensure that the following are installed on the Server 2003 instance:

*** Firstly ensure that the LATEST version of the .NET Framework is installed before proceeding! ***

You should also install the hotfix for .NET Framework 3.5.1 (KB969166), which can be downloaded below:

http://thehotfixshare.net/board/index.php?autocom=downloads&showfile=20161

and install the following on the Server 2003 instance:

https://www.microsoft.com/en-us/download/details.aspx?id=2852

In order to perform this on the Windows 7 machine we should firstly download and install the Remote Server Administration Tools for Windows 7 from:

https://www.microsoft.com/en-us/download/details.aspx?id=7887

Proceed by activating the feature from: Control Panel >> Programs and Features >> 'Turn Windows features on or off' >> Remote Server Administration Tools >> Role Administration Tools >> AD DS and AD LDS Tools >> and ensure that the 'Active Directory Module for Windows Powershell' node is ticked.

We proceed by launching powershell from cmd on our Windows 7 box:
powershell.exe
Import our AD module with:
Import-Module ActiveDirectory
You should then be able to run your AD related commands e.g.:
Search-AdAccount -LockedOut -SearchBase "OU=myou,DC=my,DC=domain" -Server mydomaincontroller | Unlock-AdAccount
This will search an OU for locked accounts on a specific server, display them and then automatically unlock them for you.

Remoting into Windows Server 2003/2008/2012 using Powershell / PSSession

On the server you wish to remote to, we should ensure that it is set up for PowerShell remoting (e.g. firewall rules etc.) by running the following from PowerShell:
Enable-PSRemoting
and on the connecting machine (the Windows 7 box) add the remote server to the WSMan TrustedHosts list so it is allowed to connect:
Set-Item wsman:\localhost\Client\TrustedHosts Remote2003Server.my.domain -Concatenate -Force
We then initiate the session with:
$securePassword = ConvertTo-SecureString "Password" -AsPlainText -force
$credential = New-Object System.Management.Automation.PsCredential("domain\username",$securePassword)
$session = New-PSSession -ComputerName Remote2003Server -authentication default -credential $credential
Enter-PSSession $session
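
Once the session is established you can either work interactively (as above) or fire one-off commands at it with Invoke-Command, e.g.:

Invoke-Command -Session $session -ScriptBlock { Get-Service | Where-Object { $_.Status -eq 'Stopped' } }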

Thursday 27 August 2015

Setup LinOTP with FreeRadius

We shall firstly install and configure LinOTP from their repositories (I will be using Debian for this tutorial).

Add the following line to your /etc/apt/sources.list:
deb http://www.linotp.org/apt/debian jessie linotp
and then install the linotp packages:
apt-get update && apt-get install linotp linotp-useridresolver linotp-smsprovider linotp-adminclient-cli linotp-adminclient-gui libpam-linotp
Install mysql server and client:
apt-get install mysql-server mysql-client
Set up a user account called 'linotp2' and a database named 'LinOTP2' with a password.

Go to LinOTP management panel: https://10.0.3.128/manage/

Add LDAP user directory: LinOTP Config >> Useridresolvers >> New >> LDAP and fill in as below:
Resolver Name: MyDomain
Server-URI: <domaincontroller-hostname>
BaseDN: OU=Users,DC=my,DC=domain
BindDN: OU=Administrator,OU=Users,DC=my,DC=domain
Install FreeRADIUS and the LinOTP RADIUS perl module:
apt-get install freeradius linotp-freeradius-perl
We now need to configure FreeRADIUS:
cp -a /etc/freeradius /etc/freeradius_original
rm /etc/freeradius/{clients.conf,users}

nano /etc/freeradius/clients.conf

#arbitrary name of the client requesting authentication (i.e. the VPN server)
client vpn {
        ipaddr  = 10.0.0.0 #IP of the client
        netmask = 8           
        secret  = 'mysecret' #shared secret, the client has to provide
}
Set the default module:
nano /etc/freeradius/users

DEFAULT Auth-type := perl
Insert:
module = /usr/lib/linotp/radius_linotp.pm

into /etc/freeradius/modules/perl (inside the perl { } block).
Configure the linotp module:
nano /etc/linotp2/rlm_perl.ini

#IP of the linotp server
URL=https://10.1.2.3:443/validate/simplecheck
#optional: limits search for user to this realm
REALM=my-realm
#optional: only use this UserIdResolver
#RESCONF=flat_file
#optional: comment out if everything seems to work fine
Debug=True
#optional: use this, if you have selfsigned certificates, otherwise comment out
SSL_CHECK=False
 Create the virtual server for linotp:
nano /etc/freeradius/sites-available/linotp

authorize {

#normalizes malformed client requests before they are handed on to other modules (see '/etc/freeradius/modules/preprocess')
        preprocess
       
        #  If you are using multiple kinds of realms, you probably
        #  want to set "ignore_null = yes" for all of them.
        #  Otherwise, when the first style of realm doesn't match,
        #  the other styles won't be checked.

#allows a list of realm (see '/etc/freeradius/modules/realm')
        IPASS

#understands something like USER@REALM and can tell the components apart (see '/etc/freeradius/modules/realm')
        suffix

#understands USER\REALM and can tell the components apart (see '/etc/freeradius/modules/realm')
        ntdomain
      
        #  Read the 'users' file to learn about special configuration which should be applied for
        # certain users (see '/etc/freeradius/modules/files')
        files
      
        # allows authentication to expire (see '/etc/freeradius/modules/expiration')
        expiration

        # allows valid service times to be defined (see '/etc/freeradius/modules/logintime')
        logintime

        # We got no radius_shortname_map!
        pap
}

#here the linotp perl module is called for further processing
authenticate {
        perl
}

Activate the virtual server:

ln -s ../sites-available/linotp /etc/freeradius/sites-enabled
You should now ensure you DELETE the inner-tunnel and default configuration within the sites-enabled folder to get this working properly.
service freeradius restart
** Note: If you get an error like the following when starting freeradius:

freeradius  Unknown value perl for attribute Auth-Type

try commenting out the default auth type in /etc/freeradius/users **

Test FreeRADIUS:

apt-get install freeradius-utils

radtest USERNAME PIN+OTP RADIUS_SERVER_IP NAS_PORT_NUMBER SECRET

e.g.: radtest username 1234151100 10.1.2.3 0 56w55Rge0m1p4qj

You can also test with https://<linotp-server>/validate/check?user=myuser&pass=<pin><access-code>

Checking VMFS for file system errors with VOMA

The need to check VMFS for errors might arise when you are unable to modify or erase files on a VMFS datastore, or when you have problems accessing specific files.

Typically using VOMA should be done when one of the following occurs:

SAN outage
Rebuilt RAID
Disk replacement

** Important ** Before running voma ENSURE that all VMs on the datastore are powered off, or ideally migrated onto a completely different datastore.

You should also ensure that the datastore is unmounted on ** ALL ** ESXI hosts (you can do this through vSphere)!
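
If you prefer the command line, the datastore can also be unmounted with esxcli (replace the label below with your datastore's name):

esxcli storage filesystem list
esxcli storage filesystem unmount -l datastore1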

SSH into the ESXI host and run the following:

voma -m vmfs -d /vmfs/devices/disks/naa.00000000000000000000000000:1 -s /tmp/analysis.txt

(Replacing 'naa.00000000000000000000000000:1' with the LUN NAA ID and partition to be checked.)

Wednesday 26 August 2015

Installing a self-signed certificate as a trusted root CA in Debian

Firstly copy your certificate to /usr/share/ca-certificates (note that update-ca-certificates only picks up files with a .crt extension):

cp /path/to/certificate.pem /usr/share/ca-certificates/certificate.crt

and then ensure the ca-certificates package is installed with:

apt-get install ca-certificates

and finally install the certificate with:

dpkg-reconfigure ca-certificates

Select 'Yes' at the dialog prompt and ensure that your certificate is checked.

Windows Server Enterprise 2003 x86 on VMWare

Windows Server Enterprise 2003 x86 can support up to 64GB of RAM (with PAE enabled) - although be aware that when running under the ESXI hypervisor / vSphere, having 'Memory Hot Plug' enabled under the Memory section of the VM settings can cause issues if you are using over 4GB of RAM. You should disable it, as it does not appear to be compatible with ESXI 6.0 (at least in my testing).

If you have it enabled you get all sorts of memory errors, services failing to start at boot and a partially working OS!

This scenario can arise when importing Server 2003 VMs from VMware Player with the use of an OVA package.

Enabling isakmp and ipsec debugging on Cisco ASA and IOS Router

On the ASA / router run:

config t
logging monitor 7 // This allows you to see the output on vty lines e.g. telnet / SSH sessions

debug crypto isakmp 127
debug crypto ipsec 127

We can also filter the logging to a specific VPN peer e.g.:

debug crypto condition peer 1.1.1.1

If you are not seeing any expected output verify whether syslog is turned on with:

show logging

If it is, you can use ASDM under Monitoring >> Logging to view / filter the logs.

To help debug any VPN issues you can also use the following command to troubleshoot ISAKMP:

show isakmp sa

show ipsec sa

and

show isakmp sa detail
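
Once you have finished troubleshooting, remember to turn the debugging off again, e.g.:

undebug all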

Tuesday 25 August 2015

Setting up log shipping with MSSQL

Log shipping is a process that allows you to create a secondary copy of a database for failover / backup purposes by transporting logs from one database to the other by means of backing up and restoring logs between the primary and secondary (AKA standby) database.

Typically log shipping should be performed between two instances of the same MSSQL version; although it is possible to perform it between different versions, not all combinations are supported (so check first!)

The SQL Server Agent handles and processes the log shipping and is typically setup on the primary source.

You should ensure the database you wish to ship logs for has its recovery model set to either 'Full' or 'Bulk Logged'.
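
You can quickly confirm the current recovery model of your databases with something like:

SELECT name, recovery_model_desc FROM sys.databases;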

To set up log shipping you should firstly go to the database's properties >> Transaction Log Shipping >> and tick 'Enable this as a primary database in a transaction log shipping configuration'. Proceed by hitting the 'Backup Settings' button to specify when you wish to back up the database logs e.g. every 15 minutes.

Specify the backup location, job name, compression settings and so on and hit OK.

Specify the secondary database in the 'Secondary Databases' section by clicking on 'Add'. Specify the secondary server and database. Within the 'Initialize Secondary Database' tab we will select "Yes, generate a full backup of the database...". Within the 'Copy Job' tab specify the destination folder where the copied logs from the primary server will be stored on the secondary server, and give the copy job a name. Within the 'Restore Transaction Log' tab select whether the (secondary) database can be read during a restore operation ('No Recovery' or 'Standby' mode).

Hit OK, then go back into the database properties >> 'Transaction Log Shipping' tab and ensure 'Use a monitor server instance' is ticked (under the Monitor Server Instance section) - this will provide you with details of the transaction log history and help you keep track of operations.

Monday 24 August 2015

Full vs Simple Recovery Model in MSSQL

MSSQL offers a number of recovery models that can be employed - of which 'Full' and 'Simple' are the most common types.

The simple recovery model provides you with a complete backup of the database that you can restore - although it does not provide point-in-time recovery.

Typically it should be used for transient databases, or very hot databases, where data loss is not critical.

Whereas the 'full' recovery model keeps all of the transaction logs for the database until a log backup occurs or the logs are truncated. This provides you with the ability to perform point-in-time recovery and is typically used for databases where data loss can't be afforded - although the disk space consumed by the retained logs comes at a premium with this model.
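
As a rough illustration of the point-in-time recovery the full model gives you (the database name, backup paths and timestamp below are placeholders):

RESTORE DATABASE MYDATABASE FROM DISK = 'E:\Backups\MYDATABASE_Full.bak' WITH NORECOVERY;
RESTORE LOG MYDATABASE FROM DISK = 'E:\Backups\MYDATABASE_Log.trn' WITH STOPAT = '2015-08-24 09:30:00', RECOVERY;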

Resolving the CID mismatch error: The parent virtual disk has been modified since the child was created.

When performing snapshotting operations within VMware a delta disk is created for all compatible disks. The delta vmdk file holds all of the changes made from the point-in-time that the snapshot was performed.

Typically you would have a base disk e.g. CDRIVE.VMDK - the delta disk would look something like CDRIVE-0000000001.vmdk, incrementing the integer with each snapshot taken.

I came across a situation the other day where VMware was complaining that the parent disk had a different CID (a unique identifier for a disk / VMDK that changes whenever the VM is powered on):

"Resolving the CID mismatch error: The parent virtual disk has been modified since the child was created"

I was unable to consolidate the snapshots with the VMware client - in fact they weren't even visible within the client - I had to browse the datastore to find them.

This can be caused by problems with vSphere Replication, adding snapshotted disks to an existing VM, expanding disks with existing snapshots and so on.

In order to remedy this we should firstly identify the CID of each disk - the CID is located in the disk's descriptor file. Let's say we have the following parent disk:

CDRIVE.VMDK

and the following delta (snapshot) disks:

CDRIVE-00000001-DELTA.VMDK
CDRIVE-00000002-DELTA.VMDK

* Warning: Do not run 'cat' on any files with *flat* or *delta* in their title - these are NOT descriptor files *

*** BEFORE PERFORMING ANY OF THE BELOW ENSURE YOU BACKUP ALL OF THE FILES AFFECTED AS THIS OPERATION HAS THE POTENTIAL OF CORRUPTING THE DISKS! ***

We will firstly identify the CID of all of the VMDKs:

SSH into the ESXI host and cd to the vm's directory e.g.:

cd /vmfs/datastore1/myvm

There will be an assortment of files - but it is the descriptor files (CDRIVE.VMDK, CDRIVE-00000001.VMDK and CDRIVE-00000002.VMDK) that we are interested in here.

To view the descriptor file we should run something like:

cat CDRIVE.VMDK

We might get something like:

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=2776c5f6
parentCID=ffffffff

The parentCID for this disk is 'ffffffff', informing us that this is the highest disk in the hierarchy.

The CID '2776c5f6' is what the delta disk CDRIVE-00000001.VMDK should have for its 'parentCID' attribute, so we will now check the descriptor file for 'CDRIVE-00000001.VMDK':

cat CDRIVE-00000001.VMDK

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=45784545j
parentCID=2776c5f6

As expected the parentCID is correct - now CDRIVE-00000002.VMDK's parentCID attribute should be that of CDRIVE-00000001.VMDK's CID (45784545j).

So we proceed by checking CDRIVE-00000002.VMDK:

cat CDRIVE-00000002.VMDK

# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=KKF88944
parentCID=3HHF343F

However, we find that CDRIVE-00000002.VMDK's parentCID does not match CDRIVE-00000001.VMDK's CID - and hence this is where the break in the chain is. So we should change CDRIVE-00000002.VMDK's parentCID to '45784545j'.
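
To make the change you can edit the descriptor directly on the host, e.g. with vi (taking a copy of the file first):

cp CDRIVE-00000002.VMDK CDRIVE-00000002.VMDK.bak
vi CDRIVE-00000002.VMDK   (change the parentCID line to read parentCID=45784545j)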

We can now ensure that the snapshots are valid with the following command against the highest level snapshot e.g.:

vmkfstools -e CDRIVE-00000001.VMDK

Thursday 20 August 2015

SBC (Session Border Controller) vs Traditional Firewall

To summarise: a firewall typically works on layers 3 and 4, and sometimes partially on layer 7 (e.g. FTP / limited SIP awareness).

An SBC works on layer 7 and can fully understand VoIP traffic, meaning that it can:

- Block denial of service attacks
- Detect spoofed / malicious SIP packets
- Close and open RTP ports as necessary
- Transcode between different media protocols

The major selling point of an SBC is that it provides interoperability between different SIP solutions, since SIP implementations quite often deviate from each other.

SIP utilizes RTP (Real-time Transport Protocol), which unfortunately typically requires a large range of UDP ports (e.g. 6000 - 40000) to be available (much like the downfalls of FTP) - a traditional firewall will not be able to properly protect this traffic.

An SBC can be placed behind a firewall (one that handles NAT and SIP properly) or simply presented directly to the router.

Throttling the I/O of Windows Server Backup

I noticed that by default Windows Backup / Windows Server Backup does not have any built-in way to throttle disk I/O during backups. When backups overrun this can cause a pretty severe impact on services / users relying on the machine.

Fortunately Windows provides a native API to control the I/O priority of processes - and luckily for us Process Hacker can do this for us...

Firstly select the process, right-hand click it, select I/O Priority and choose a specific level e.g. 'Low' - there is also a 'save for myprocess.exe' option in the save dialog window allowing you to make the change permanent for future instances of the process:


The specific process you should throttle is called wbengine.exe. I usually set the I/O priority to 'Low' which seems to do the job for me at least - but you might need to experiment a little first :)

You can read further about the different levels here.

Wednesday 19 August 2015

Restore a Windows Server Backup job

There was an occasion where for some reason a Windows Server Backup job had mysteriously disappeared from the Windows Server Backup GUI and was not visible from wbadmin.

However, we need not worry, as we can simply use the wbadmin utility to import the job's catalog file and voila - we are back on track again:

wbadmin restore catalog -backupTarget:C:\path\to\backupdestination

Tuesday 11 August 2015

Fiber Optics and Switches 101

Typically when dealing with fiber connections you will have either an LC or SC cable that will go over a patch panel and terminate on both sides on a switch with a transceiver module, such as the MGBSX1 transceiver commonly used on Cisco switches.

There are two main types of fiber - single mode and multimode:

Single Mode - Used for longer distance runs (10KM or more is achievable, depending on the optics)

Multimode - Offers plenty of bandwidth, but at a reduced maximum distance (approximately 300 - 600 metres)

Two common connector types:

SC - The older, larger connector with a 2.5mm ferrule

LC - The newer, smaller form-factor connector with a 1.25mm ferrule

Typically, as good practice, the transit cables (the cables connecting the two patch panels together) should be straight through (e.g. Fiber 1 on Patch Panel A should be Fiber 1 on Patch Panel B and so on).

One of the patch cables should be Fiber 1 = A and Fiber 2 = B. The other patch cable, at the other end, should be crossed e.g. Fiber 1 = B and Fiber 2 = A.

** The fibers must be crossed at one end for a connection to be established on the switches. **

Tuesday 4 August 2015

Importing your physical and virtual machines into AWS

AWS provides you with the ability to import on-premises machines into their cloud.

Firstly, if your existing machine is physical, you should download the vCenter Converter from below:

https://www.vmware.com/products/converter

Once you have converted your physical machine into a virtualized format you should download and install the AWS Command Line Interface from:

http://aws.amazon.com/cli/

There are also some prerequisites for importing / exporting machines to and from AWS - including operating system support:

- Microsoft Windows Server 2003 (with at least SP1)
- Microsoft Windows Server 2003 R2
- Microsoft Windows Server 2008
- Microsoft Windows Server 2008 R2
- Microsoft Windows Server 2012
- Microsoft Windows Server 2012 R2
- Windows 7
- Windows 8
- Windows 8.1
- Various Linux Versions

Disk images must be in either VHD, VMDK or OVA containers.

For a more detailed list please see:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/VMImportPrerequisites.html

We will proceed by uploading our VMDK image to AWS via the AWS Command Line Interface by opening up a command prompt:

cd C:\Program Files\Amazon\AWSCLI
aws --version

Run the following to configure your AWS command line client:

aws configure

We will also need to create a specific role that will allow us to perform the import process - so we should create a file named "role.json" containing the following:

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"",
         "Effect":"Allow",
         "Principal":{
            "Service":"vmie.amazonaws.com"
         },
         "Action":"sts:AssumeRole",
         "Condition":{
            "StringEquals":{
               "sts:ExternalId":"vmimport"
            }
         }
      }
   ]
}

and create the role, specifying the file we just created:

aws iam create-role --role-name vmimport --assume-role-policy-document file://role.json

We will also have to create a role policy - so create another file called "policy.json" and insert the following (replacing <disk-image-file-bucket> with the bucket where your VMDK file is stored):

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":[
            "arn:aws:s3:::<disk-image-file-bucket>"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:GetObject"
         ],
         "Resource":[
            "arn:aws:s3:::<disk-image-file-bucket>/*"
         ]
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource":"*"
      }
   ]
}
And then run the following command to apply the policy:

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://policy.json

Proceed by creating a new S3 bucket (if needed) for the VMDK file:

aws s3 mb s3://bucket-name

And copy the VMDK on your local file system to the newly created bucket:

aws s3 cp C:\path-to-vmdk\vm.vmdk s3://bucket-name/vm.vmdk

Finally verify it has uploaded successfully:

aws s3 ls s3://bucket-name

We can now use the import-image command to import the image into AWS.

For importing multiple VMDKs we can use:

$ aws ec2 import-image --cli-input-json "{  \"Description\": \"Windows 2008 VMDKs\", \"DiskContainers\": [ { \"Description\": \"Second CLI task\", \"UserBucket\": { \"S3Bucket\": \"my-import-bucket\", \"S3Key\" : \"my-windows-2008-vm-disk1.vmdk\" } }, { \"Description\": \"First CLI task\", \"UserBucket\": { \"S3Bucket\": \"my-import-bucket\", \"S3Key\" : \"my-windows-2008-vm-disk2.vmdk\" } } ] }"

or for importing a single OVA file we can use:

$ aws ec2 import-image --cli-input-json "{  \"Description\": \"Windows 2008 OVA\", \"DiskContainers\": [ { \"Description\": \"First CLI task\", \"UserBucket\": { \"S3Bucket\": \"my-import-bucket\", \"S3Key\" : \"my-windows-2008-vm.ova\" } } ]}"

For more detailed information please refer to:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ImportVMImportImage.html

We can monitor the import process with:

aws ec2 describe-import-image-tasks

We are then able to launch an instance of the VM we imported as an AMI:

aws ec2 run-instances --image-id ami-1111111 --key-name my-key-pair --placement AvailabilityZone=us-east-1a