Pages

Wednesday, 30 December 2015

MDT / Windows Error: The computer restarted unexpectedly or encountered an unexpected error.

Error: The computer restarted unexpectedly or encountered an unexpected error. Windows installation cannot proceed...

The other day I came across an issue with MDT when attempting to deploy an image to a new machine - although funnily enough it worked on several other machines, so I could only imagine that it was related to the hardware.

When this error occurs you will want to refer to the Windows Setup logs (chiefly setupact.log and setuperr.log), which can be found in:
C:\Windows\Panther
or
C:\Windows\System32\Sysprep\Panther

Setting up centralized logging for deployment logs with MDT

For convenience I like to store all deployment logs in a central location so I don't need to mess around grabbing the logs off the local systems.

Fortunately MDT provides us with the ability to do this via the 'SLShareDynamicLogging' property (which writes the logs on the fly to a specified directory).

We simply need to modify our deployment share's CustomSettings.ini file, e.g.

SLShareDynamicLogging=\\mdtserver\DeploymentShare$\Logs

Taken from TechNet - Microsoft gives us a quick overview of which logs pertain to what:

Before the Image is applied to the machine:

X:\MININT\SMSOSD\OSDLOGS

After the system drive has been formatted:

C:\MININT\SMSOSD\OSDLOGS

After Deployment:

%WINDIR%\TEMP\DeploymentLogs

The logs of most interest for troubleshooting a failed install will be:

BDD.LOG – This is an aggregated log of all the MDT Logs.

SMSTS.LOG – This log would be used to troubleshoot Task Sequence errors.
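If you pull BDD.log back to an admin box, a quick grep is often enough to find the failing step before reaching for a log viewer. The sample log lines below are hypothetical stand-ins for a real aggregated log:

```shell
# Hypothetical extract of an aggregated BDD.log copied from the central
# log share - in practice grab the real file from the SLShareDynamicLogging target.
cat > /tmp/BDD.log <<'EOF'
<![LOG[Validating connection to \\mdtserver\DeploymentShare$]LOG]!>
<![LOG[FAILURE ( 5616 ): 1: Verify BCDBootEx]LOG]!>
<![LOG[Litetouch deployment failed, Return Code = -2147467259]LOG]!>
EOF

# Surface only the failure lines for quick triage:
grep -i -E "failure|error|return code" /tmp/BDD.log
```

CMTrace gives a far richer view (severity highlighting, timestamps), but a plain text search like this is often enough to locate the failing task sequence step.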


Setting up automatic deployment of Windows 7 with MDT and WDS

(Note: For this tutorial I will be doing all of the work on a Windows 10 box)

There are a few pre-requisites for this:

Windows 10 ADK: https://msdn.microsoft.com/en-us/windows/hardware/dn913721.aspx#deploy
Windows Server 2008 / 2012 with the WDS role installed.
Microsoft Deployment Toolkit: https://www.microsoft.com/en-us/download/details.aspx?id=48595

We should then launch the 'Deployment Workbench' and create a new Deployment Share by right-clicking on the 'Deployment Shares' node.

Once created - expand our new deployment share node, right-click on the 'Operating Systems' node, select 'Import OS' and proceed to either specify pre-created WIM images or simply point the wizard to a mounted ISO image of Windows 7.

Since there will likely be many different computer models, we will also need to inject drivers into our image - fortunately the 'Deployment Workbench' utility allows us to perform this pretty easily. It is also worth noting that a lot of computer manufacturers (like Dell, Toshiba, Lenovo etc.) provide 'driver packs' for specific model / product lines so that you can easily import all of the relevant drivers at once.

So to do this we right-click on the 'Out-of-box Drivers' node and select 'Import Drivers'. The beauty of the driver integration with MDT is that drivers are injected offline into the relevant WIM images - so you do not have to have thousands of drivers in one WIM image.

NOTE: A lot of driver packs are labelled for use with SCCM - although in my experience these driver packs seem to work fine with MDT too!

We should now right-click on our MDT deployment share and go to >> Properties >> Rules and add / modify with something like the following:
[Settings]
Priority=Default
Properties=MyCustomProperty
[Default] 
OSInstall=Y
SkipCapture=NO
SkipAdminPassword=YES
SkipProductKey=YES
SkipComputerBackup=YES
SkipBitLocker=YES
SkipLocaleSelection=YES
SkipTimezone=YES
Timezone=1200
TimeZoneName=GMT Standard Time
UILanguage=en-GB
UserLocale=en-US
Systemlocale=en-GB
KeyboardLocale=2057:00000409
Note: For working out timezone IDs and keyboard locales see the following links:
http://powershell.org/wp/2013/04/18/determining-language-locales-and-values-for-mdt-2012-with-powershell/
https://msdn.microsoft.com/en-us/library/gg154758.aspx

We now want to capture an image of the build we are creating - so to do this we need to create a new 'task sequence' by right-clicking the 'Task Sequences' node >> 'New Task Sequence' >> Fill in a task sequence ID and name e.g. 'Capture Windows 7 Image' >> The template should be set to 'Sysprep and Capture' and finally follow the rest of the wizard.

Important: Ensure you now right-click on the deployment share and select "Update Deployment Share"

Now in order to capture an image we should install Windows 7 (ideally on a VM) - customise it as you wish e.g. registry settings, tweaks to the UI etc. - and then browse to the MDT share (e.g. \\MDTHost\DeploymentShare$) which contains your deployment share and run the file called LiteTouch.vbs, which is located in the 'Scripts' subfolder.

This will launch the capture wizard - which will usually take a few minutes to launch. Once the task sequence menu appears proceed by selecting 'Capture Image', ensure 'Capture an image of this reference computer' is selected (make a note of the .wim filename and location as we will need this for WDS!) and hit Next - enter the required credentials to connect to the network share (ensuring that the user can read / write to that share!) and complete the wizard.

Now proceed - the system should then reboot into the WinPE environment - although I got an invalid credential error message in the WinPE environment preventing me from proceeding! This was because I had not updated my deployment share (right-click on it and select 'Update') and hence the WinPE images had not been generated!

Follow through the wizard and capture your image. Now reboot again and verify the new .wim image is available.

If any errors occur during this process the log files can be found in: C:\Windows\Temp\DeploymentLogs. The specific files we are interested in are BDD.log (which contains all log entries) and LiteTouch.log.

We can open these files with Notepad - although using something like SMSTrace / CMTrace makes everything a LOT easier...

We should now import the captured image into MDT: Right-click 'Operating Systems' and select 'Import' >> Select 'From custom capture' and point it to the image file you captured earlier.

Now proceed by right-clicking 'Task Sequences' >> 'New Task Sequence' >> Name: Deploy Image >> Template: Standard Client Task Sequence >> and complete the wizard with the relevant settings e.g. product key etc.

Once the new task sequence has been created we should right-click it and go to Properties >> 'Task Sequence' tab >> Install >> 'Install Operating System' and select your custom .wim image.

We should refer to the following website in order to automate the deployment task sequence:

https://scriptimus.wordpress.com/2013/02/18/lti-deployments-skipping-deployment-wizard-panes/

For reference I added something like:
[Settings]
Priority=Default
Properties=MyCustomProperty 
[Default]
OSInstall=Y
SkipBDDWelcome=YES
DeployRoot=\\deploymenthost\DeploymentShare$
UserDomain=my.domain
UserID=administrator
UserPassword=mysecurepassword
SLShareDynamicLogging=\\deploymenthost\DeploymentShare$\Logs
SkipApplications=NO
SkipCapture=NO
SkipAdminPassword=YES
SkipProductKey=YES
SkipComputerBackup=YES
SkipBitLocker=YES
SkipLocaleSelection=YES
SkipTimezone=YES
Timezone=1200
TimeZoneName=GMT Standard Time
UILanguage=en-GB
UserLocale=en-US
Systemlocale=en-GB
KeyboardLocale=2057:00000409
BitsPerPel=32
VRefresh=60
XResolution=1
YResolution=1
TaskSequenceID=50
SkipTaskSequence=YES
JoinDomain=my.domain
DomainAdmin=administrator
DomainAdminDomain=my.domain
DomainAdminPassword=mysecurepassword
SkipUserData=YES
SkipComputerBackup=YES
SkipPackageDisplay=YES
SkipSummary=YES
SkipFinalSummary=YES

** Note: You MUST add the following to Bootstrap.ini as well:
SkipBDDWelcome=YES
DeployRoot=\\deploymentserver\DeploymentShare$
UserDomain=my.domain
UserID=admin
UserPassword=mypassword
** As a word of caution you should ensure that the accounts you use e.g. to connect to the deployment share, join the computer to the domain etc. are limited, as they are stored in plain text in the bootstrap files. For example I limited access for my user to only the deployment share and delegated access permissions to allow the account to join computers to the domain. **

Because of this we will need to regenerate the boot image! So right-click the deployment share and click 'Update Deployment Share' **

We will now use these with WDS - so in the WDS console: Right-click 'Boot Images' >> 'Add Boot Image...' >> Select the LiteTouchPE_x64.wim located within our MDT deployment share (it's in the 'Boot' folder). We should also add an 'Install Image' - selecting the .wim / capture image we created earlier.

Now simply boot from the WDS server and the task sequence should automatically be started.

If you are using a Cisco router - something like the following would get you up and running for PXE boot (note the 'ascii' keyword, which IOS requires for the string-valued option 67):
ip dhcp pool vlanXX
next-server 1.2.3.4
option 67 ascii boot\x64\wdsnbp.com


Tuesday, 22 December 2015

Converting in-guest iSCSI LUNs to VMWare native VMDK disks

To give some context for this tutorial: in these specific circumstances vSphere Essentials Plus was being used - hence Storage vMotion was not available.

** Note: For this technique to work you are required to have at least 2 ESXi hosts in your environment **

Firstly unmount the disks and then disconnect the targets from the Windows iSCSI connector tool.

Now in order to connect to iSCSI targets directly from the vSphere host we will need a VMKernel adapter associated with the relevant physical interface (i.e. the one connected directly to the storage network.)

So we go to ESXi Host >> Configuration >> Networking >> 'Add Networking...' >> Connection Type = VMkernel >> Select the relevant (existing) vSwitch that is attached to your storage network >> Set a network label e.g. 'VMkernel iSCSI' >> And give it an IP address >> Finish the wizard.

We should proceed by adding a new storage adapter for the ESXi host within the vSphere Client: ESXi Host >> Configuration >> Storage Adapters >> iSCSI Software Adapter >> Right-click the adapter and select 'Properties' >> Network Configuration >> Add >> Select the VMkernel adapter we created earlier >> Hit OK >> Go to the 'Dynamic Discovery' tab >> Add >> Enter the iSCSI server and port and close the window.

We can then right-click on the storage adapter again and hit 'Rescan'. If we then go to ESXi Host >> Configuration >> Storage >> View by 'Devices', we should be presented with the iSCSI target device.

Now we should go to our VM Settings >> Add New Disk >> Raw Device Mapping and select the iSCSI disk, ENSURING that the compatibility mode is set to Virtual - otherwise the vMotion process will not work.

RDMs effectively proxy the block data from the physical disk to the virtual machine, and hence in actuality are only a few kilobytes in size - although this is not directly clear when looking at file sizes within the vSphere datastore file browser.

Now boot up the virtual machine and double check the disk is initialized / functioning correctly. We can then proceed by right-clicking on the virtual machine in the vSphere client >> Migrate >> 'Change host and datastore' etc. and finish the migration wizard.

** Note: You might also have to enable vMotion on the ESXi host:
https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.solutions.doc%2FGUID-64D11223-C6CF-4F64-88D3-436D17C9C93E.html **

Once the vMotion is complete we should see that the RDM no longer exists and the VMDKs are now effectively holding all of the block data on the actual datastore.

Creating a bootable deployment ISO for Windows Deployment Services

We should firstly install the WAIK for Windows 7 and the WAIK for Windows 7 SP1 supplement on the server that currently has WDS installed:

https://www.microsoft.com/en-gb/download/details.aspx?id=5753
https://www.microsoft.com/en-us/download/details.aspx?id=5188

Now open the WDS snap-in, right-click the relevant boot image and select 'Create Capture Image' >> and follow through the wizard, ensuring that 'Enter the name of the Windows Deployment Services server that you want to respond when you boot...' is set to your WDS server!

Now open the 'Deployment Tools Command Prompt' from the Start menu and run something like the following (where 'D:\DiscoverBootImage' is where my discovery image resided):
copype.cmd amd64 D:\DiscoverBootImage\WinPE
Now leave this prompt open.

Proceed by copying the discovery boot image into the WinPE folder - and RENAMING it to boot.wim

Now to create the bootable ISO we can issue something like:
oscdimg -n -bD:\DiscoverBootImage\Winpe\ISO\Boot\etfsboot.com D:\DiscoverBootImage\Winpe\ISO D:\DiscoverBootImage\boot.iso
Finally burn the ISO to CD / DVD and test it!

Thursday, 17 December 2015

How to view all IPTables rules quickly

iptables -vL -t filter
iptables -vL -t nat
iptables -vL -t mangle
iptables -vL -t raw
iptables -vL -t security
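The five tables above are the complete set, so the same listing can be expressed as a single loop. Sketched here with echo so it runs without root - drop the echo to actually query the kernel:

```shell
# Print (rather than run) the listing command for every netfilter table;
# remove the echo to execute for real (which requires root).
for t in filter nat mangle raw security; do
  echo "iptables -vL -t $t"
done
```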

How to setup port forwarding with iptables / Netfilter (properly)

The first command tells the host that it is allowed to forward IPv4 packets (effectively turning it into a network router):
echo "1" > /proc/sys/net/ipv4/conf/ppp0/forwarding
echo "1" > /proc/sys/net/ipv4/conf/eth0/forwarding
or better yet ensure that the ip forwarding persists after reboot:
sudo vi /etc/sysctl.conf
and add / amend:
net.ipv4.ip_forward = 1
and to apply changes we should run:
sudo sysctl -p
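You can confirm the current forwarding state at any time by reading the flag straight out of procfs (0 = forwarding off, 1 = on):

```shell
# Read the live IPv4 forwarding flag; no root needed for a read.
cat /proc/sys/net/ipv4/ip_forward
```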
Simple port forwarding: This is often applied when you have a service running on the local machine that uses an obscure port - for example Tomcat on tcp/8080 - and you might want to provide external access to the service on port 80 - so we would do something like:
sudo iptables -t filter -I INPUT 1 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
OR (to be more restrictive)
sudo iptables -t filter -I INPUT 1 -i eth0 -s 8.9.10.11/24 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
and then:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
The first command permits inbound traffic on our 'eth0' interface to tcp/80.

The second command instructs traffic bound for tcp/80 to be redirected (effectively DNAT'ing it) to port tcp/8080.

Advanced port forwarding: Sometimes you might have the need to forward traffic on a local port to an external port of another machine (note: 'REDIRECT' will not work in this scenario - it only works locally) - this could be achieved as follows:
sudo iptables -t filter -I INPUT 1 -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
OR (to be more restrictive)
sudo iptables -t filter -I INPUT 1 -i eth0 -s 8.9.10.11/24 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
and then:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.11.12.13:8080
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
The first command again permits inbound traffic on 'eth0' to tcp/80.

The second command performs a DNAT on traffic hitting port 80 to 10.11.12.13 on tcp/8080.

* The masquerade action is used so that traffic can be passed back from the destination host to the requesting host - otherwise the traffic will be dropped. *

You can also use SNAT to achieve the same thing:
iptables -t nat -A POSTROUTING -d 66.77.88.99 -s 11.12.13.14 -o eth0 -j SNAT --to-source 192.168.0.100
(where 192.168.0.100 is the interface you wish to NAT the traffic to, 11.12.13.14 is the requesting client and 66.77.88.99 is the destination address that should match for SNAT to be performed - which is the remote server in our case.)

If you have a default drop-all rule in your forward chain you should also include something like:
sudo iptables -t filter -A FORWARD -i eth0 -s <destination-ip> -d <client-ip> -p tcp -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
sudo iptables -t filter -A FORWARD -i eth0 -s <client-ip> -d <destination-ip> -p tcp -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
Client IP = The IP address (or subnet) from which you want to use the port forwarding.
Destination IP = The end IP address you wish the client to hit after the port redirection process.

The first command permits the firewall to forward traffic from the destination ip (our Tomcat server) to the requesting client.

The second command permits the firewall to forward traffic from the client ip to the destination server.

Sometimes (although I strongly discourage it personally) you might want to put a blanket rule in to allow everything to be forwarded between interfaces on the host - this can be achieved with something like:
sudo iptables -t filter -A FORWARD -j ACCEPT
Finally, to ensure the rules persist after a restart of the system we can use:
iptables-save > /etc/sysconfig/iptables
and load them back with:
iptables-restore < /etc/sysconfig/iptables


Controlling VPN traffic with VPN Filters on Cisco ASA

Typically (or rather, by default) VPN traffic is NOT controlled by the normal access controls on the interfaces - it is instead controlled by VPN filters.

They are fairly straightforward to apply - for example...

We firstly create an ACL:
access-list EU-VPN-FILTER permit ip 10.1.0.0 255.255.0.0 10.2.0.0 255.255.255.0
Then proceed by defining a group policy:
group-policy MYSITE internal
group-policy MYSITE attributes
  vpn-filter value EU-VPN-FILTER
And finally creating / amending the tunnel group so it uses the policy we have created as its default group policy:
tunnel-group 176.177.178.179 general-attributes
  default-group-policy MYSITE

Wednesday, 16 December 2015

The directory service is missing mandatory configuration information, and is unable to determine the ownership of floating single-master operation roles

The operation failed because: Active Directory Domain Services could not transfer the remaining data in directory partition DC=ForestDnsZones,DC=domain,DC=int to Active Directory Domain Controller \\dc01.domain.int.

"The directory service is missing mandatory configuration information, and is unable to determine the ownership of floating single-master operation roles."

I encountered this error while attempting to demote a Server 2008 server from a largely 2003 domain.

By reviewing the dcpromo log (found below):
%systemroot%\Debug\DCPROMO.LOG
The error indicates that the DC being demoted is unable to replicate changes back to the DC holding the infrastructure FSMO role. As dcdiag came back OK and replication appeared to be working fine, I ended up querying the infrastructure master from the server in question and surprisingly it returned a DC that was no longer active / had been decommissioned a while back!
dsquery * CN=Infrastructure,DC=ForestDnsZones,DC=domain,DC=int -attr fSMORoleOwner
Although funnily enough running something like:
netdom query fsmo
would return the correct FSMO holder - so this looks like a bogus reference.

So in order to resolve the problem we open up ADSI Edit and connect to the following naming context:

*Note* I would highly recommend taking a full backup of AD before editing anything with ADSI edit! *
CN=Infrastructure,DC=ForestDnsZones,DC=domain,DC=int
Right-click on the 'Infrastructure' node and hit 'Properties' >> Find the fSMORoleOwner attribute and change the value to match the actual DC that holds the FSMO role!

For example:
CN=NTDS Settings\0ADEL:64d1703f-1111-4323-1111-84604d6aa111,CN=BADDC\0ADEL:93585ae2-cb28-4f36-85c2-7b3fea8737bb,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=domain,DC=int
would become:
CN=NTDS Settings\0ADEL:64d1703f-1111-4323-1111-84604d6aa111,CN=GOODDC\0ADEL:93585ae2-cb28-4f36-85c2-7b3fea8737bb,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=domain,DC=int

Unfortunately I got the following error message when attempting to apply the changes:

"The role owner attribute could not be read"

Finally I stumbled across a script provided by Microsoft that will do this for us. Simply put you run it against a specific naming context and it will automatically choose a valid infrastructure master DC for you.

Save the following script:

'-------fixfsmo.vbs------------------
const ADS_NAME_INITTYPE_GC = 3
const ADS_NAME_TYPE_1779 = 1
const ADS_NAME_TYPE_CANONICAL = 2
set inArgs = WScript.Arguments
if (inArgs.Count = 1) then
    ' Assume the command line argument is the NDNC (in DN form) to use.
    NdncDN = inArgs(0)
Else
    Wscript.StdOut.Write "usage: cscript fixfsmo.vbs NdncDN"
End if
if (NdncDN <> "") then
    ' Convert the DN form of the NDNC into DNS dotted form.
    Set objTranslator = CreateObject("NameTranslate")
    objTranslator.Init ADS_NAME_INITTYPE_GC, ""
    objTranslator.Set ADS_NAME_TYPE_1779, NdncDN
    strDomainDNS = objTranslator.Get(ADS_NAME_TYPE_CANONICAL)
    strDomainDNS = Left(strDomainDNS, len(strDomainDNS)-1)
   
    Wscript.Echo "DNS name: " & strDomainDNS
    ' Find a domain controller that hosts this NDNC and that is online.
    set objRootDSE = GetObject("LDAP://" & strDomainDNS & "/RootDSE")
    strDnsHostName = objRootDSE.Get("dnsHostName")
    strDsServiceName = objRootDSE.Get("dsServiceName")
    Wscript.Echo "Using DC " & strDnsHostName
    ' Get the current infrastructure fsmo.
    strInfraDN = "CN=Infrastructure," & NdncDN
    set objInfra = GetObject("LDAP://" & strInfraDN)
    Wscript.Echo "infra fsmo is " & objInfra.fsmoroleowner
    ' If the current fsmo holder is deleted, set the fsmo holder to this domain controller.
    if (InStr(objInfra.fsmoroleowner, "\0ADEL:") > 0) then
        ' Set the fsmo holder to this domain controller.
        objInfra.Put "fSMORoleOwner",  strDsServiceName
        objInfra.SetInfo
        ' Read the fsmo holder back.
        set objInfra = GetObject("LDAP://" & strInfraDN)
        Wscript.Echo "infra fsmo changed to:" & objInfra.fsmoroleowner
    End if
End if
Now run the script as follows - so in our case we would run something like:

cscript fixfsmo.vbs DC=ForestDnsZones,DC=domain,DC=int
* Note: I also had to apply the above command to the DomainDnsZones partition too! *

You can then verify it has changed with something like:

dsquery * CN=Infrastructure,DC=ForestDnsZones,DC=domain,DC=int -attr fSMORoleOwner

And finally attempt the demotion again! (You might also want to ensure that replication is up to date on all DCs first!)

Wednesday, 2 December 2015

Trusting a self-signed certificate on Debian Jessie

I came across a number of how-tos on this subject, although the vast majority were not accurate for Debian Jessie.

So we have a scenario where we have a local application on our machine that uses a self-signed certificate that we would like to trust.

Firstly download the ca-certificates package:
apt-get install ca-certificates
Proceed by obtaining the certificate we would like to import, e.g.:
openssl s_client -connect mywebsite.com:443
Extract the certificate block from the output and place it in a file called something like:

yourhost.crt
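The extraction can be scripted end-to-end. The snippet below generates a throwaway self-signed certificate to stand in for the remote service (in the real flow the input would come from the `openssl s_client` output above; the CN is a hypothetical placeholder), then cuts out just the PEM block:

```shell
# Stand-in for the remote service's certificate: a throwaway self-signed cert
# (hypothetical CN; in the real flow this file is the openssl s_client output).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=mywebsite.com" \
  -keyout /tmp/demo.key -out /tmp/s_client_output.txt

# Keep only the certificate block, exactly as you would from s_client output:
sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' \
  /tmp/s_client_output.txt > /tmp/yourhost.crt

# Sanity-check that the result parses as a certificate:
openssl x509 -in /tmp/yourhost.crt -noout -subject
```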

** Important: You MUST ensure that the file extension is '.crt' or the ca-certificates tool will not pickup the certificate! **
sudo cp mydomain.crt /usr/local/share/ca-certificates
or
sudo cp mydomain.crt /usr/share/ca-certificates

Then simply run the following command to update the certificate store:
 sudo update-ca-certificates
And if (or when) you wish to remove the certificate we can issue the following command:
 dpkg-reconfigure ca-certificates

Script to identify and drop all orphaned logins for SQL Server 2008+

The script can be found below (credit to: https://www.mssqltips.com):

Use master
Go
Create Table #Orphans
(
RowID int not null primary key identity(1,1) ,
TDBName varchar (100),
UserName varchar (100),
UserSid varbinary(85)
)
SET NOCOUNT ON
DECLARE @DBName sysname, @Qry nvarchar(4000)
SET @Qry = ''
SET @DBName = ''
WHILE @DBName IS NOT NULL
BEGIN
SET @DBName =
(
SELECT MIN(name)
FROM master..sysdatabases
WHERE
/** to exclude named databases add them to the Not In clause **/
name NOT IN
(
'model', 'msdb',
'distribution'
) And
DATABASEPROPERTY(name, 'IsOffline') = 0
AND DATABASEPROPERTY(name, 'IsSuspect') = 0
AND name > @DBName
)
IF @DBName IS NULL BREAK
Set @Qry = 'select ''' + @DBName + ''' as DBName, name AS UserName,
sid AS UserSID from [' + @DBName + ']..sysusers
where issqluser = 1 and (sid is not null and sid <> 0x0)
and suser_sname(sid) is null order by name'
Insert into #Orphans Exec (@Qry)
End
Select * from #Orphans
/** To drop orphans uncomment this section
Declare @SQL as varchar (200)
Declare @DDBName varchar (100)
Declare @Orphanname varchar (100)
Declare @DBSysSchema varchar (100)
Declare @From int
Declare @To int
Select @From = 0, @To = @@ROWCOUNT
from #Orphans
--Print @From
--Print @To
While @From < @To
Begin
Set @From = @From + 1
Select @DDBName = TDBName, @Orphanname = UserName from #Orphans
Where RowID = @From
Set @DBSysSchema = '[' + @DDBName + ']' + '.[sys].[schemas]'
print @DBsysSchema
Print @DDBname
Print @Orphanname
set @SQL = 'If Exists (Select * from ' + @DBSysSchema
+ ' where name = ''' + @Orphanname + ''')
Begin
Use ' + @DDBName
+ ' Drop Schema [' + @Orphanname + ']
End'
print @SQL
Exec (@SQL)
Begin Try
Set @SQL = 'Use ' + @DDBName
+ ' Drop User [' + @Orphanname + ']'
Exec (@SQL)
End Try
Begin Catch
End Catch
End
**/
Drop table #Orphans

Tuesday, 1 December 2015

Creating and applying retention tags for a mailbox / mailbox items

** Note: You must have set up in-place archiving before the below will take effect **

Using both a retention policy and tags we have the ability to allocate individual retention periods for objects (i.e. folders) within a mailbox.

For example, if we wished to create a root-level retention policy on a mailbox so all mail items would be retained for 30 days - but we have a folder which holds archived email (let's call it 'Archived Items') - we could assign that folder a retention period of 365 days.

We would typically use retention tags to set a retention period for a default folder (e.g. Deleted Items):
New-RetentionPolicyTag "Corp-Exec-DeletedItems" -Type DeletedItems -Comment "Deleted Items are purged after 30 days" -RetentionEnabled $true -AgeLimitForRetention 30 -RetentionAction PermanentlyDelete
We would then proceed by creating a retention policy and attaching the relevant retention tags:
New-RetentionPolicy "Auditors Policy" -RetentionPolicyTagLinks "General Business","Legal","Sales"
We would then assign the new retention policy to a user:

** There is a default retention policy which will be applied to users who enable in-place archiving and are not assigned a specific policy - you can view this with:
Get-RetentionPolicy -Identity "Default Archive and Retention Policy"
We can then assign this to a specific user with:
Set-Mailbox -Identity "[email protected]" -RetentionPolicy "My Retention Policy"
Although in order to use a custom (user-created) folder (i.e. not the standard Inbox, Sent Items, Deleted Items etc.) we can configure AutoArchive settings in the Outlook client - click on the desired folder >> select the 'FOLDER' tab >> 'AutoArchive Settings' >> select 'Archive this folder using these settings' and choose the desired timeframe and action.