Friday, 27 May 2016

Setting up custom metrics with Cloudwatch and Windows Server 2008/2012

Just some quick notes on how to monitor custom Performance Monitor metrics with CloudWatch.

We will firstly need to download the EC2Config service (if not already installed) from:

http://aws.amazon.com/developertools/5562082477397515

We need to create an inline IAM policy (IAM >> Users >> Select User >> 'Permissions' >> Inline Policies):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UZ1000",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "UZ1001",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
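If you prefer the CLI, the same inline policy can be attached with 'aws iam put-user-policy'. A sketch below, assuming the AWS CLI is installed and configured - the user name 'ec2-monitoring-user' and policy name are illustrative:

```shell
# Save the policy document to a file (note: statement IDs must be unique within a policy)
cat > /tmp/cloudwatch-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "UZ1000",
      "Effect": "Allow",
      "Action": ["cloudwatch:PutMetricData"],
      "Resource": ["*"]
    },
    {
      "Sid": "UZ1001",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": ["*"]
    }
  ]
}
EOF

# Sanity-check the JSON before uploading
python3 -m json.tool /tmp/cloudwatch-policy.json > /dev/null && echo "policy OK"

# Attach it as an inline policy (hypothetical user and policy names):
# aws iam put-user-policy --user-name ec2-monitoring-user \
#     --policy-name CloudWatch-Custom-Metrics \
#     --policy-document file:///tmp/cloudwatch-policy.json
```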

We will now need to enable CloudWatch integration by going to:
C:\Program Files\Amazon\Ec2ConfigService\Ec2ConfigServiceSettings.exe
and ensure 'Enable CloudWatch logs integration' is ticked.

The CloudWatch configuration can be found in:
%PROGRAMFILES%\Amazon\Ec2ConfigService\Settings\AWS.EC2.Windows.CloudWatch.json
In my case I wish to monitor a performance counter that tracks active RDP / RDS sessions:

{
    "EngineConfiguration": {
        "PollInterval": "00:00:15",
        "Components": [
            {
                "Id": "ApplicationEventLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogName": "Application",
                    "Levels": "1"
                }
            },
            {
                "Id": "SystemEventLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogName": "System",
                    "Levels": "7"
                }
            },
            {
                "Id": "SecurityEventLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogName": "Security",
                    "Levels": "7"
                }
            },
            {
                "Id": "ETW",
                "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogName": "Microsoft-Windows-WinINet/Analytic",
                    "Levels": "7"
                }
            },
            {
                "Id": "IISLog",
                "FullName": "AWS.EC2.Windows.CloudWatch.IisLog.IisLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogDirectoryPath": "C:\\inetpub\\logs\\LogFiles\\W3SVC1"
                }
            },
            {
                "Id": "CustomLogs",
                "FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "LogDirectoryPath": "C:\\CustomLogs\\",
                    "TimestampFormat": "MM/dd/yyyy HH:mm:ss",
                    "Encoding": "UTF-8",
                    "Filter": "",
                    "CultureName": "en-US",
                    "TimeZoneKind": "Local"
                }
            },
            {
                "Id": "PerformanceCounter",
                "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "CategoryName": "Terminal Services",
                    "CounterName": "Active Sessions",
                    "InstanceName": "",
                    "MetricName": "Active Sessions",
                    "Unit": "Count",
                    "DimensionName": "",
                    "DimensionValue": ""
                }
            },
            {
                "Id": "CloudWatchLogs",
                "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
                "Parameters": {
                    "AccessKey": "",
                    "SecretKey": "",
                    "Region": "eu-west-1",
                    "LogGroup": "Default-Log-Group",
                    "LogStream": "{instance_id}"
                }
            },
            {
                "Id": "CloudWatch",
                "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
                "Parameters":
                {
                    "AccessKey": "",
                    "SecretKey": "",
                    "Region": "eu-west-1",
                    "NameSpace": "Windows/RDS"
                }
            }
        ],
        "Flows": {
            "Flows":
            [
                "PerformanceCounter,CloudWatch"
            ]
        }
    }
}

We should then restart the ec2config service with:

net stop ec2config
net start ec2config

We can now review the metric by going to AWS Console >> CloudWatch >> Metrics >> 'Windows/RDS' (the custom namespace we configured above).

Thursday, 19 May 2016

ip command usage basics

Get detailed information about the network interfaces attached to your computer:

ip addr

Get a list of your layer 3 addresses:

ip addr | grep inet
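For scripting, 'ip' also has an '-o' flag (one record per line) that is easier to parse than grepping the multi-line output - for example, to list each interface with its IPv4 address:

```shell
# -o prints one record per line; -4 restricts output to IPv4
# field 2 is the interface name, field 4 the CIDR address
ip -o -4 addr show | awk '{print $2, $4}'
```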

Bring an interface down:

ip link set dev eth0 down

and to bring a device up we can issue:

ip link set dev eth0 up

To add a static route:

ip route add 10.0.100.0/24 via 172.16.0.1 dev eth1

and to delete a static route:

ip route del 10.0.100.0/24

Add an ip to an interface:

ip addr add 10.11.12.13/16 dev eth0

Remove an ip from an interface:

ip addr del 10.11.12.13/16 dev eth0

Tuesday, 17 May 2016

Setting up a keypair for SSH authentication in CentOS 7

We should firstly generate our RSA key pair for the server we wish to remote from:

ssh-keygen -t rsa -b 2048

We should end up with a public key here:

~/.ssh/id_rsa.pub

and a private key in here:

~/.ssh/id_rsa

We now need to place our public key on the remote server so it will allow us to login from the origin server:

vi ~/.ssh/authorized_keys

and copy the relevant output of ~/.ssh/id_rsa.pub (from the master server.)
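The copy step can also be scripted. A sketch below using a throwaway key pair under /tmp (the paths and the 'remote-server' host name are illustrative) - 'ssh-copy-id' is usually the easiest way to append the key with the correct permissions:

```shell
# Generate a key pair non-interactively (-N "" = empty passphrase, -q = quiet)
ssh-keygen -t rsa -b 2048 -f /tmp/demo_id_rsa -N "" -q

# Easiest: let ssh-copy-id append the key and fix permissions on the remote side
# ssh-copy-id -i /tmp/demo_id_rsa.pub root@remote-server

# Manual equivalent - note each key must sit on its own line in authorized_keys:
# cat /tmp/demo_id_rsa.pub | ssh root@remote-server \
#     'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
```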

and then attempt to login to the remote server:

ssh root@remote-server

I received the following error message after attempting to login on CentOS 7:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

So to debug this issue we can run the SSH daemon in debug mode:

vi /etc/sysconfig/sshd

and add: OPTIONS="-ddd"

and then restart the SSH daemon:

systemctl restart sshd

and observe the output with tail or something similar:

tail -f /var/log/messages

After reviewing the output I noticed 'key_read missing keytype'. On inspecting the 'authorized_keys' file on the remote server it was immediately obvious what was wrong: the two keys were missing a line break between them - lesson learnt!
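Incorrect file permissions are another common cause of 'Permission denied (publickey...)' - with StrictModes enabled (the default), sshd refuses keys if the directory or file is group or world writable. These are the permissions it expects:

```shell
# sshd expects ~/.ssh to be 700 and authorized_keys to be 600 (or at most 644)
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```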

Thursday, 12 May 2016

Backup a user's mailbox and calendar permissions with Exchange

This is fortunately pretty simple to do.

To backup a user's calendar permissions we should issue:

Get-MailboxFolderPermission <USERNAME>:\calendar > usercalperms.txt

and to backup a user's mailbox permissions we should issue:

Get-MailboxPermission <USERNAME> | FL > C:\usermailboxperms.txt

Wednesday, 11 May 2016

Performing fsck on a file system within a logical volume / LVM

I would usually use the Debian live rescue CD for this kind of job - but since it is no longer around (and the standard version is missing LVM support), I will be using the CentOS live CD from:

http://buildlogs.centos.org/rolling/7/isos/x86_64/CentOS-7-x86_64-LiveGNOME.iso

Boot into the live cd and grab a list of the logical volumes:

sudo lvdisplay

and to view the filesystem type of the unmounted disks we can run:

parted -l

Which should return all of your logical volumes, along with the filesystem type.

In my case it was xfs - so we should run a TEST with something like the following:

xfs_repair -n /dev/mapper/centos-root

The '-n' parameter instructs xfs_repair to only check (and not fix) any errors.

And then fix any problems with:

xfs_repair /dev/mapper/centos-root

Although I got the following message when attempting to do so:

'Error: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair...'

So I simply mounted and then unmounted the filesystem in order to get the log to replay:

mkdir /mnt/tmp
mount /dev/mapper/centos-root /mnt/tmp
umount /mnt/tmp

and then re-ran the command:

xfs_repair /dev/mapper/centos-root

If you are UNABLE to replay the metadata log, you can issue the xfs_repair command with the '-L' parameter - but this should only be used as a last resort, as you will likely lose some changes to the filesystem and potentially cause corruption of files!

xfs_repair -L /dev/mapper/centos-root

Monday, 9 May 2016

Routing all external email via your on-premise environment from Exchange Online

In order for you to route all external email from Exchange Online to your on-premise Exchange server(s) you must firstly ensure 'RouteAllMessagesViaOnPremises' is set to true.

This has to be performed via the shell - so we should firstly connect to our Exchange Online tenant:

$session = New-PSSession -ConfigurationName:Microsoft.Exchange -Authentication:Basic -ConnectionUri:https://ps.outlook.com/powershell -AllowRedirection:$true -Credential:(Get-Credential)

Import-PSSession $session

and set it with:

Get-OutboundConnector | Set-OutboundConnector -RouteAllMessagesViaOnPremises $True

and re-configure the send connector on the Exchange Online tenant so that it includes '*'.

Wednesday, 4 May 2016

[Exchange Migration] Local autodiscover for domain.com failed.

This error occurred after moving an on-premise user to Exchange Online / Office 365.

Let's say we have an on-premise user: [email protected] who we then move to our cloud Exchange Online tenant: [email protected].

When we attempt to use autodiscover on the primary email address ([email protected]), the autodiscover service assumes the mailbox still resides in the domain.com domain - whereas the user now actually resides on the domain.onmicrosoft.com domain. So we need to associate the mailbox (joe.bloggs) with a remote routing address on 'domain.onmicrosoft.com' (this remote domain should have been created automatically when going through the Microsoft Exchange Hybrid Wizard).

So to associate the mailbox with our Exchange Online domain we issue:

Set-RemoteMailbox -Identity joe.bloggs -RemoteRoutingAddress [email protected]

And then retry the autodiscover process.

Microsoft Azure AD Sync: Management agent warnings

When reviewing the event logs after installing the Microsoft Azure AD Sync tool and performing a full / initial sync I noticed the following warning in the event log:

The management agent "mydomain.onmicrosoft.com - AAD" step execution completed on run profile "Full Import" but some objects had exported changes that were not confirmed on import.
 Additional Information
 Discovery Errors       : "0"
 Synchronization Errors : "0"
 Metaverse Retry Errors : "0"
 Export Errors          : "0"
 Warnings               : "5"
 User Action
 View the management agent run history for details.
In order to view the management agent's run history we need to review the synchronization status using the FIM (Forefront Identity Manager) client:

C:\Program Files\Microsoft Azure AD Sync\UIShell\miisclient.exe

Identify the sync run that is causing the warnings.



Tuesday, 3 May 2016

Packaging python scripts into RPMs

Firstly install the relevant dependencies:

sudo dnf install rpm-build rpmdevtools

cd to a directory within your home folder e.g.:

cd ~/packages

and then run 'rpmdev-setuptree':

rpmdev-setuptree

The above will generate the folder tree structure for our rpm creation.

Proceed by checking the .rpmmacros file is correct:

cat ~/.rpmmacros

*Note* The "_topdir" macro in the '.rpmmacros' file defines the root of the RPM build tree - i.e. where the SPECS, SOURCES, BUILD, RPMS etc. directories live.

We should bundle our sources up:

tar czvf myscript_0.1.tar.gz myscript/

and place it in the 'SOURCES' folder:

mv myscript_0.1.tar.gz /home/limited/packages/myscript/SOURCES
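The bundling step can be sketched end-to-end as below (using /tmp and an illustrative script name):

```shell
# Create a dummy source tree and bundle it up
mkdir -p /tmp/pkgdemo/myscript
echo 'print("hello")' > /tmp/pkgdemo/myscript/myscript.py

# -C changes directory first, so the archive contains 'myscript/' at its root
tar -C /tmp/pkgdemo -czf /tmp/pkgdemo/myscript_0.1.tar.gz myscript/

# List the archive to confirm the layout %setup will unpack
tar -tzf /tmp/pkgdemo/myscript_0.1.tar.gz
```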

We should now create our 'spec' file - which contains the meat of the configuration, defining dependencies, build instructions and so on:

vi ~/packages/myscript/SPECS/myscript.spec

An example .spec file is as follows:

Name: pywinusb
Version: 0.1
Release: 1
Summary: Allows you to install Windows on a USB drive.
Source0: pywinusb_0.1.tar.gz
License: GPL
BuildArch: noarch
BuildRoot: %{_tmppath}/%{name}-buildroot

%description
Allows you to install Windows on a USB drive.

%prep

%setup -q

%build

%install
mkdir -p $RPM_BUILD_ROOT/usr/share/pywinusb/bin
install -m 0755 pywinusb.py $RPM_BUILD_ROOT/usr/share/pywinusb/bin/pywinusb

%clean
rm -rf $RPM_BUILD_ROOT

%post
ln -sf /usr/share/pywinusb/bin/pywinusb /usr/bin/pywinusb
echo pywinusb has been installed successfully!

%files

%dir /usr/share/pywinusb/bin
/usr/share/pywinusb/bin/pywinusb

Finally we can build the rpm with:

cd ~/rpmbuild
rpmbuild -ba SPECS/myscript.spec

Ensure interface ip address(es) are not registered in DNS / Active Directory

The most common way to perform this is:

'Right-hand click on the relevant interface' >> Properties >> 'Double click Internet Version Protocol 4' >> Advanced >> DNS >> Untick 'Register this connection's addresses in DNS.'

Although in addition to this, on Windows Server 2008 R2 you should also perform the following if the interface has secondary IPs:

netsh int ipv4 add address "Local Area Connection" 10.0.2.79 255.255.255.0 skipassource=true

or if you wish to ensure DNS is not registered interface wide you can issue:

netsh int ipv4 add address "Local Area Connection" skipassource=true

You can then confirm with:

netsh int ipv4 show ipaddresses level=verbose


Sunday, 1 May 2016

Installing the official NVIDIA driver on Fedora 23

We should firstly install some dependencies:

sudo dnf install gcc kernel-devel

** Note: Ensure that all of your packages are up to date! I have had issues where my kernel version was different from my kernel-headers version, and hence the installer complained that no headers were available! **

sudo dnf update

We will then need to blacklist the generic 'nouveau' driver - so it will not be loaded up with the kernel on boot:

echo "blacklist nouveau" | sudo tee -a /etc/modprobe.d/blacklist.conf

(Note: 'sudo echo ... >>' would fail here, as the redirection is performed by the unprivileged shell - 'tee -a' performs the append itself under sudo.)

and then regenerate the initramfs:

dracut -f /boot/your-fedora-initramfs.img

And reboot into run level 3 / multi-user.target and run the NVIDIA installer.

Download the official installer, make it executable and run it:

chmod +x NVIDIA-Linux-x86_64-361.42.run
./NVIDIA-Linux-x86_64-361.42.run


Restoring files with the EXT3/4 filesystem

Fortunately EXT3 and EXT4 provide the ability to recover deleted files (unlike XFS - which I found out the hard way!)

To perform the file recovery we should firstly install extundelete:

dnf install extundelete

and we can then restore a singular file with:

extundelete --restore-file myuser/Documents/important_document.txt /dev/mapper/fedora-home

(where '/dev/mapper/fedora-home' is the block device.)