Archive for the ‘IT’ Category

Getting the cluster size of CSV disks in Hyper-V

Saturday, April 23rd, 2016

If you want to check the cluster size of NTFS formatted disks used for Cluster Shared Volumes, here's some handy code below. Just change the cluster names in the $Clusters array to the Hyper-V clusters you want to check and run it from an administrative-level PowerShell prompt.

If you want a different row of information from the fsutil output, just change the index where it says $arr[9].

Import-Module FailoverClusters

# Cluster names to check - replace with your own Hyper-V cluster names
$Clusters = ("My-CLUSTERNAME01", "My-CLUSTERNAME02", "My-CLUSTERNAME03")

# Collect the CSVs from each cluster
$csvs = @()
foreach ($Cluster in $Clusters) {
    $c = Get-ClusterSharedVolume -Cluster $Cluster
    $csvs += $c
}

# Query each CSV on its owner node and report the line of fsutil output we want
foreach ($csv in $csvs) {
    Invoke-Command -ComputerName $csv.OwnerNode -ScriptBlock {
        param ($name, $node)
        $Clustersize = fsutil fsinfo ntfsinfo "C:\ClusterStorage\$name"
        $arr = $Clustersize -split "`n"
        Write-Host $name " on " $node " has " $arr[9]
    } -ArgumentList $csv.Name, $csv.OwnerNode
}

 

The output reports against the owner node for the CSV. As the underlying disk for the CSV is the same on all nodes, I report against the owner node to limit the output to one row per CSV.
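If you would rather not parse the fsutil output, the allocation unit (cluster) size is also exposed as the BlockSize property of the Win32_Volume WMI class. This is just an alternative sketch, run on (or remoted to) the owner node, and is not part of the original script:

# List the cluster (allocation unit) size for every CSV mount point on this node
Get-CimInstance -ClassName Win32_Volume |
    Where-Object { $_.Name -like 'C:\ClusterStorage\*' } |
    Select-Object Name, Label, FileSystem, BlockSize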

Powershell launch options

Tuesday, December 29th, 2015

There are a few things that we can do when launching powershell.

1) We can simply start PowerShell by typing "powershell" at the command prompt.
2) We can launch PowerShell faster by not displaying the banner - we do this using the command "powershell -nologo"
3) We can launch an older version of PowerShell by stating which version to launch e.g. "powershell -version 3"
4) We can launch PowerShell using a specific configuration file e.g. "powershell -psconsolefile myconsole.psc1"
5) We can launch PowerShell and tell it to run a command on launch e.g. powershell -command "& {get-service}"
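These options can also be combined. As a quick illustration (a sketch only, not from the original post), the following starts without the banner and runs a command straight away:

powershell -nologo -command "& {Get-Service | Where-Object Status -eq 'Running'}"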

Common PowerShell commands for Hyper-V

Sunday, November 9th, 2014

This blog post will list some simple things you can do with PowerShell to make Hyper-V easier to manage – the details and commands for this post are taken from TechEd Australia 2014 – you can watch the original session at http://channel9.msdn.com/Events/TechEd/Australia/2014/DCI313.

Want to find a command to use ?

e.g. Get-Command -Module Hyper-V -Name *vhd*

Swap the *vhd* text for the type of command you want to look for. If you are not sure whether it's a Hyper-V command then don't limit it to a module but search all loaded modules.

Get-command *Adapter*

If you want to list all of the available commands in a module you need to know the module name.

Get-Command -Module Hyper-V

If you are using Get-Help to understand how to use individual commands, use

Get-Help Update-Help

to learn how to update the help files to their latest versions.

If you just want to see how to construct an individual command type

Show-Command

This will pop up a screen showing all the available commands. You can then search for a command and the GUI will allow you to browse the options for that command.

Creating new VMs is a cinch – enter a command similar to the below:

New-VM -Name "VM Name" -MemoryStartupBytes 512MB -Path "C:\ClusterVolumes\Volume1\MyVMs"

If you want to create 10 new VMs then this can be done with a single command line by creating an array holding 10 values and then ForEach of those items in the array, create a new VM. An example of the code required is:

$CreateVM = @(); (1..10) | %{ $CreateVM += New-VM -Name "MyVM$_" }

This will create 10 new virtual machines named MyVM1, MyVM2 through to MyVM10.

Want to know which virtual machines are powered down ?

Get-VM | Where-Object {$_.State -eq 'Off'}

The above is for PowerShell 3 – in PowerShell 4 it’s been made even easier !

Get-VM | Where-Object State -eq Off

As you can see, we no longer need to tell PowerShell that State is an attribute of our object; it understands the context. It also doesn't need to be told that "Off" is a value rather than a variable, it just works it out - amazing!

If you want to be able to review the results in something other than the PowerShell command line interface, just pipe the results to “Out-Gridview” to open them in a grid in a new interface

Get-VM | Out-GridView

This is very useful in very large environments as it allows searching of the results.

Want to create a new disk ?

New-VHD C:\CustomerVolumes\Volume1\MyVHD\MyDisk.VHD -SizeBytes 60GB

This will create a new 60GB disk. If you want a dynamic disk, add the –Dynamic switch.

Want to convert a VHD to the newer VHDx format (to expand the disk beyond 2TB) ?

Convert-VHD C:\CustomerVolumes\Volume1\MyVHD\MyDisk.VHD C:\CustomerVolumes\Volume1\MyVHD\MyDisk.VHDX -Passthru

The -Passthru switch ensures that the results of the command are displayed on the screen (it says to pass the output through to the pipeline; as no commands follow, it simply writes to the screen) - without this the command simply executes. Don't forget, you can use a single command line to get all VMs, get all of their disks and convert all disks to the new format.
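As a sketch of what that one-liner might look like (an illustration only, not from the original session; it assumes the VMs are shut down, since Convert-VHD cannot convert a disk that is in use, and it leaves the new VHDX alongside the old file without re-attaching it):

Get-VM | Get-VMHardDiskDrive |
    Where-Object { $_.Path -like '*.vhd' } |
    ForEach-Object {
        # Convert each VHD to a VHDX next to the original file
        Convert-VHD -Path $_.Path -DestinationPath ($_.Path -replace '\.vhd$', '.vhdx') -Passthru
    }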

If you want to connect the disk you created to a virtual machine then simply

Add-VMHardDiskDrive -VMName MyVM -Path C:\CustomerVolumes\Volume1\MyVHD\MyDisk.VHDX -Passthru

Want to move all of your VMs from one cluster to another? Well, you can do a live migration or a shared nothing move, but to do this using PowerShell we can simply export and import.

Get-VM | Export-VM -Path C:\MyExportedVMs -Passthru
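Only the export half is shown above. On the destination host the exported configuration can then be imported; a hedged sketch (the path is illustrative, and on 2012 R2 the configuration files under the "Virtual Machines" folder are XML):

Get-ChildItem 'C:\MyExportedVMs\MyVM\Virtual Machines\*.xml' |
    ForEach-Object {
        # Import the VM as a copy with a new ID so it can coexist with the original
        Import-VM -Path $_.FullName -Copy -GenerateNewId
    }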

Now that we can create disks and VMs, we can also create networking within Hyper-V. To create a new virtual switch we can simply run

New-VMSwitch "MyvSwitch" -SwitchType Internal

This will create a new vSwitch to enable communications only within the node itself – To create a switch available for external use we can use the command

New-VMSwitch "QoS Switch" -NetAdapterName "Wired Ethernet Connection 3" -MinimumBandwidthMode Weight

i.e. External is assumed by default. The switch above is enabled for QoS, which is especially useful where NICs have also been presented to the management OS.
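Once a switch has been created with -MinimumBandwidthMode Weight, relative bandwidth weights can be assigned to the virtual NICs attached to it. A small sketch (the adapter name and weight are illustrative, and it assumes a management OS vNIC like the one created further down):

# Reserve a relative share of the switch bandwidth for the management OS vNIC
Set-VMNetworkAdapter -ManagementOS -Name "NewNIC" -MinimumBandwidthWeight 10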

If we want to add a network adapter to our virtual machine we use a command similar to the below

Add-VMNetworkAdapter -VMName "MyVM" -Name "NewNIC" -Passthru

If we want to present a NIC to the management OS (for converged networking) then, instead of using the -VMName switch, we use the -ManagementOS switch.

Add-VMNetworkAdapter -ManagementOS -Name "NewNIC" -SwitchName "MyvSwitch" -Passthru

As you can see, we connect to the switch at the same time. For our "standard" VM NICs, we can connect the network adapter we created to our vSwitch as follows

Connect-VMNetworkAdapter -VMName "MyVM" -SwitchName "MyvSwitch"

Once our VM is created we can migrate it between nodes

Move-VM -Name MyVM -DestinationHost Server2

If we don’t have a cluster and want to perform a “shared nothing” migration between stand alone nodes or even between clusters then we can do this

Move-VM MyVM Server2 -IncludeStorage -DestinationStoragePath C:\ClusterVolumes\Volume2\MyVM

Of course, we can use wildcards to move multiple machines one after the other.

Get-VM -Name * | % { Move-VMStorage $_.Name "C:\ClusterStorage\Volume2\$($_.Name)" }

The above will move the storage only for all of our virtual machines to a new disk – great for rebalancing our storage.

We can set QoS for our virtual machines - this is bandwidth based only and does not prioritise traffic by type.

Set-VMNetworkAdapter -VMName MyVM -Name MyNIC -MaximumBandwidth 100MB -MinimumBandwidthAbsolute 20MB -Passthru

We can also set Access Control Lists on our VM (essentially at the virtual switch level rather than in the OS) using PowerShell. We do this to ensure that our environment is safe. For example, this should be done to prevent access to the management layer from tenant VMs.

Add-VMNetworkAdapterAcl -VMName MyVM -RemoteIPAddress 10.1.1.1 -Direction Outbound -Action Deny

As you can see, the ACL is not fully featured: it is based on IP address (though it can also be filtered by VM NIC) and it secures traffic between IP addresses in a given direction (Inbound, Outbound or Both).

You can view any ACLs applied to a VM by using the command

Get-VMNetworkAdapterAcl -VMName MyVM

We can set memory within a VM as follows

Set-VMMemory -VMName MyVM -DynamicMemoryEnabled $True -MaximumBytes 2GB -MinimumBytes 1GB -StartupBytes 2GB

Again, we can apply to multiple machines either by listing them

Set-VMMemory -VMName MyVM1, MyVM2, MyVM3

Or by tokenizing the name

Set-VMMemory -VMName MyVM*

We can also start multiple VMs in the same way

Get-VM MyVM* | Start-VM -Passthru

If we want to monitor our VMs, we can use VM Resource Metering to record what resources each VM is consuming – this can only be set via PowerShell

Get-VM | Enable-VMResourceMetering

Will enable resource metering for all VMs

Get-VM -Name MyVM | Measure-VM

Will return the performance statistics for an individual VM that is being measured / metered.
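If you want to review every metered VM at once rather than one at a time, the two commands can be combined; a minimal sketch (it uses the ResourceMeteringEnabled property to skip VMs that are not being metered):

# Report on every VM that currently has resource metering enabled
Get-VM | Where-Object ResourceMeteringEnabled | Measure-VM | Format-Table -AutoSize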

 

 

PowerShell History Viewer

Friday, March 21st, 2014

Here’s a nice feature of the Active Directory Administrative Centre – the PowerShell History Viewer. Simply undertake your daily tasks as usual and the PowerShell History Viewer will capture what you do and allow you to see the commands it used to perform those tasks. You’ve then taken the first steps needed to automate your most common tasks as you will:

  1. Know which tasks you carry out most regularly and
  2. Know the PowerShell commands to perform those tasks.

To expose the PowerShell History Viewer in the Active Directory Administrative Centre, click on the up arrow at the bottom right of the Admin Centre.

 

You can now perform your usual tasks and the PowerShell for those tasks will be captured there.

The PowerShell used is displayed below – checking the “Show All” box will show all PowerShell commands issued to run the Admin Centre, not just those related to you entering instructions.

 

For the “full” command, right click it and select “Copy”

 

You can then paste it into your favourite editor such as the PowerShell Integrated Scripting Environment (ISE). If you would prefer to just read the string in the GUI then clicking the + sign to the left of the command renders it readable.

 

 

Shutting down the Admin Centre will clear the history so do take a note of any commands you need first.

 

 

 

 

 

XenApp and XenDesktop recommended optimisations and settings

Friday, March 21st, 2014

Excellent blog post here detailing recommended optimisations to make when presenting Windows 8 or XenApp desktops using Citrix products. It applies just as well to RDS installs.

The Windows 7 optimization guide can be found at http://support.citrix.com/article/CTX127050.

How to set XenDesktop Delivery Controllers to use SSL

Monday, March 17th, 2014

Good Citrix article at http://support.citrix.com/article/CTX130213.

Note the need to add dashes to the GUID or you get a “Parameter is incorrect” message.
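If you only have the GUID without dashes (for example as copied from the registry), PowerShell will reformat it for you; a quick sketch (the GUID shown is purely illustrative):

# Casting the 32-character string to [guid] gives back the dashed form
([guid]'446d1367a8114fd2b22a6d77e5573f71').ToString()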

Listing group membership and extracting into a CSV file for multiple groups

Friday, February 21st, 2014

I've been hunting around the web for a PowerShell script that will list the members of multiple groups and haven't been able to find one, so I've written my own.

This script isn't intended to be perfect but it will give you the bare bones of how to write your own. For example, this script works on the basis of entering all or part of a group name and then reporting on that group. If you enter a blank or * for the group name then it will export user membership of all groups as direct members (add the -Recursive switch if you need membership of nested groups). This is useful if you have a group naming convention as you can easily drill down into the groups you want.

It also doesn't filter out computer accounts, so consider whether that matters to you, and it reports against the whole of AD, but you can always filter the Get-ADGroup command to scope it to an individual OU or area of AD.

In any event, like I said I wasn’t able to find anything around to do this so hopefully, if you need to do this, this script will give you a good head start on exporting these values from your directory.

Here's the script - if copying and pasting into Notepad, remember to correct any characters such as ' and " that get converted to smart quotes.

 

Import-Module ActiveDirectory

Write-Host "********************************************************"
Write-Host "* This script will dump out users in named groups, all *"
Write-Host "* groups or a range of groups. You will be guided      *"
Write-Host "* through the process                                  *"
Write-Host "*                                                      *"
Write-Host "* All output will be saved to C:\Support\ScriptOutput\ *"
Write-Host "********************************************************"
Write-Host

# Ask for an output file name, defaulting to GroupMembership.csv
$strFileName = $(
    $selection = Read-Host 'Enter the name of file to save results to. Include an extension. (Default = GroupMembership.csv)'
    if ($selection) { $selection } else { 'GroupMembership.csv' }
)

# Make sure the output folder exists, then remove any previous output file
If (-not (Test-Path 'C:\Support\ScriptOutput')) {
    New-Item -Path 'C:\Support\ScriptOutput' -ItemType Directory | Out-Null
}
$strFileName = "C:\Support\ScriptOutput\" + $strFileName
If (Test-Path $strFileName) {
    Remove-Item $strFileName
}

Write-Host
Write-Host 'Enter name of group you would like to export'
Write-Host 'The script will look for matching groups'
Write-Host
Write-Host 'Entering the first part of the group name will return all matching groups'
Write-Host 'For example, entering "LG-APP-" without the quotation marks will return all application groups'
Write-Host
Write-Host 'Pressing return will list membership of ALL groups'
Write-Host
Write-Host '***** WARNING *****'
Write-Host
Write-Host 'Exporting all group memberships will take some time as it will'
Write-Host 'include all built in groups and distribution lists - use with caution'
Write-Host

# Ask for a group name (or part of one); a blank entry matches every group
$strGroupNames = $(
    $selection = Read-Host 'Enter name of group you would like to export (no value will return all groups)'
    if ($selection) { $selection + '*' } else { '*' }
)

Write-Host
Write-Host 'Exporting groups with names like ' $strGroupNames ' to ' $strFileName

# Write the CSV header, then one row per group member
$data = 'Group,UserName,UserID'
Write-Output $data | Out-File -FilePath $strFileName -Append

$groups = Get-ADGroup -Filter {name -like $strGroupNames}
foreach ($group in $groups)
{
    $usernames = Get-ADGroupMember $($group.name)

    foreach ($user in $usernames)
    {
        $data = $group.name + "," + $user.name + "," + $user.SamAccountName
        Write-Output $data | Out-File -FilePath $strFileName -Append
    }
}
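As noted above, the script lists direct members only. If nested membership is needed, Get-ADGroupMember has a -Recursive switch; a minimal example (the group name is illustrative):

Get-ADGroupMember 'LG-APP-ExampleApp' -Recursive | Select-Object Name, SamAccountName, objectClass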

 

Performing Exchange 2010 datacentre failovers

Friday, February 21st, 2014

A nice wizard driven process is available to walk you through your particular scenario here.

Install and Configure MDT 2013 (Part 5)

Saturday, January 4th, 2014

In the last part of this series we looked at creation of media for remote deployments and centralised monitoring of deployments. In this part of the series we will look at replacing media with the use of Linked Deployment Shares for remote offices.

As with Media, the contents of a Linked Deployment Share are dictated by the Selection Profile associated with the Linked Deployment Share. If the “Everything” Selection Profile is used then the linked share will be an exact replica of the central Deployment Share at the time that the Linked Deployment Share is created. That’s important to note as any changes made to the centralised Deployment Share will not be replicated to the Linked Deployment Share unless and until you force the Linked Deployment Share to be updated. The contents of the Linked Deployment Share can be updated manually (by running the update command) or more regularly by use of either DFS or a scheduled task running robocopy or a PowerShell command.

Creation of the Linked Deployment Share is relatively simple and is achieved by right clicking the Linked Deployment Shares node in the Advanced Configuration section of the Deployment Workbench GUI and selecting "New Linked Deployment Share".

 

However, the wizard that is launched requires that the share that will become the Linked Deployment Share already exists. For the purposes of this post I have created a new share on the same server. This is what you would need to do if you were to use DFS as a replication mechanism. If a scripted replication is used then I would recommend the use of a PowerShell command (shown later in this post) or remembering to manually update the share as and when the centralised Deployment Share is modified. What you do in your environment will in large part depend on how often the centralised Deployment Share is updated and whether you will remember to update any Linked Deployment Shares. If you have several remote shares then you may want to use a series of PowerShell commands in a single script to update all remote shares at the same time.

As with the centralised share, I have created the remote share as an administrative share by appending a $ sign to the name. The share permissions are "Everyone | Full Control" in line with Microsoft best practice. I have removed the "Users" NTFS permissions and replaced them with Read / Execute permissions for an MDT-specific account.

 

 

We can then create our Linked Deployment Share. If you have been following this series then the wizard should be familiar to you.

 

 

Note that we DON'T select a local file path for our Linked Deployment Share (as by design it should be on a remote server). Instead, we enter a UNC path to the share, including the $ sign as it's a hidden share, and select an appropriate Selection Profile to determine which content should be replicated to the remote share. We also select to either merge the contents or replace the contents of the existing share. It's not immediately obvious (as we have created an empty share) but this allows you to pre-stage the content in the remote share (by using an external drive to manually transfer the data). This works in the same way as for creation of media in that the Linked Deployment Share object is created in the GUI but no data is copied into the share itself.

 

 

If we right click the Linked Deployment Share object created we can inspect its properties.

 

 

 

As you can see, there are no additional tabs (unlike for the Deployment Share). If you want to configure the WinPE settings for the Linked Deployment Share you need to do that through its root Deployment Share.

We can then replicate the content by right clicking the deployment share and selecting “Replicate Content”.

 

The replication will immediately start and the amount of time taken will depend on the amount of content you have, speed of link and hardware resources available. The boot images will be recreated for the Linked Deployment Share, specifically to ensure that the bootstrap.ini file contains the correct value for DeployRoot (the location of the Linked Deployment Share).

 

 

 

Once replication has completed, the summary screen presents you with a "View Script" button that allows you to access the PowerShell commands used to replicate the share.

 

 

Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"

New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare"

Update-MDTLinkedDS -Path "DS001:\Linked Deployment Shares\LINKED001" -Verbose

 

This script can be used to automate replication of the Linked Deployment Share, but it will also recreate the WinPE boot images each time. This can be overcome by clearing the below checkbox in the Linked Deployment Share properties.

 

 

You should note that the customsettings.ini file is NOT replicated between servers with the default settings being created in the Linked Deployment Share. Using this method of replication you will then need to manually update the customsettings.ini file.

 

 

As well as either manually replicating the data as above or using a Scheduled task running the PowerShell script, you can simply set up a DFS-R share and replicate either the Linked Deployment Share (so that only a subset of data replicates) or the original Deployment Share (so that all data replicates).
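If you opt for the scheduled task route, one way to register it is sketched below. This is an illustration only: the script path, task name and schedule are all assumptions, and it presumes the three Update-MDTLinkedDS lines shown earlier have been saved as a .ps1 file. It relies on the ScheduledTasks module available on Windows 8 / Server 2012 and later.

# Run the linked deployment share update script every night at 01:00
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File D:\Scripts\Update-LinkedDS.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 1am
Register-ScheduledTask -TaskName 'Update MDT Linked Deployment Share' `
    -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest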

If replicating the original Deployment Share, some changes need to be made to the bootstrap.ini file so that, when booting, WinPE sets the DeployRoot value based on the client's default gateway. An example configuration is shown below.

[Settings]
Priority=DefaultGateway, Default

[Default]
OSInstall=Y
SkipBDDWelcome=Yes

[DefaultGateway]
10.1.1.1=London
10.2.2.1=Tokyo
10.3.3.1=NYC

[London]
Deployroot=\\LondonMDT\DFSRoot\DeploymentShare$

[Tokyo]
Deployroot=\\TokyoMDT\DFSRoot\DeploymentShare$

[NYC]
Deployroot=\\NYCMDT\DFSRoot\DeploymentShare$

 

I think you can see that your choices are between using the built-in, non-automated solution or creating your own automated solution and configuring MDT to function between sites. I would suggest that the latter, while needing more set-up (especially when you factor in deploying and configuring DFS-R), is the more functional and robust solution.

In the next post we’ll go through how to configure a database for MDT to centralise the functionality provided by the customsettings.ini file.

 

Install and Configure MDT 2013 (Part 4)

Saturday, January 4th, 2014

In Part 3 of this series we looked at Selection Profiles and how to target the injection of drivers as part of a task sequence. As promised this post will show you how to enable monitoring to track installations as they occur and also creating media so that computers can be deployed offline when the MDT server cannot be contacted or where the link to the MDT server is small or unreliable.

Monitoring is enabled per Deployment Share by accessing the properties of the deployment share in the deployment workbench. To enable monitoring we simply tick the check box and apply.

 

 

The necessary firewall ports (by default 9800 for events and 9801 for data) are opened for you.

 

 

If you change the port numbers, an additional firewall rule will be created leaving the original ports exposed.

 

 

As well as creating the firewall rule, enabling monitoring also does two other things.

  1. A new service (the Microsoft Deployment Toolkit Monitor Service) is created. This service receives events from the computers being monitored and displays them in the monitoring node of the deployment workbench.
  2. The CustomSettings.ini file is also modified to add a new entry specifying the URL to be used for monitoring (an example of the entry is shown below).
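The entry added to CustomSettings.ini looks like the line below (the server name is illustrative; 9800 is the default event port):

[Default]
EventService=http://MDT01:9800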

 

 

As the customsettings.ini file is updated there is no need to update the deployment share (nor the WinPE boot images) when enabling monitoring as this setting is read in post boot as part of the deployment process.

When deploying machines you will now be able to track the build process within the workbench GUI.

 

 

Right clicking the status and selecting properties provides further details so that you can see which step the deployment has reached.

 

 

Once the installation has completed the GUI is updated.

 

If you access the properties of the monitoring report you can then connect to the machine by RDP (if remote desktop has been enabled) or using VMConnect.exe if the Hyper-V tools have been installed on the machine running the deployment workbench.

 

 

Monitoring can definitely make your life easier as you will know when a machine has completed building. In that way you can work on something else and only return to the machine when everything is ready.

Another thing that can make your life easier is being able to build machines while disconnected from the network - perhaps in secure areas of the network, in remote sites with a low number of users that don't warrant a local server, or where the remote site has a small or unreliable connection.

Media can be created as an ISO or placed on a USB thumb drive that can be booted from. In larger deployments, this may mean that there is a large amount of files, installers, drivers and other items to include in the build media. To reduce the amount of data placed in any media created, media creation leverages Selection Profiles to select which items should be included. For example, we can include just the Windows 8.1 operating system, HP drivers, general applications and any task sequences required to drive the installation.

We therefore create a new Selection Profile to select the items to be included in the media. The process for this is detailed in Part 3 of this series.

 

 

As you can see, it is not possible to select individual items and so creation of your folder structure is paramount, especially regarding items which may consume large amounts of space in the media image. For example, when we imported the Windows Server 2012 R2 images, it imported all 4 images into a single folder. While these will not take any more room than a single image (because of the way in which Windows 2012 R2 is packaged) I use this as a device to demonstrate how adding multiple items to a single folder can lead to large media sets being created.

Once we have a Selection Profile created specifically for our Media we can create the Media. To create our media we right click the Media node under Advanced Configuration in the deployment workbench and select “New Media”.

 

 

We specify a location to create our media in and also the Selection Profile created to state which items to include.

 

 

NOTE: Do NOT use a path under the deployment share. If we choose to replicate our share then this will mean the data being shipped twice.

The media creation process is very quick taking a few seconds. A Media object is created under our media node.

 

 

And a folder structure is created in the path we specified.

 

 

Just as with our Deployment Share, the media created can be configured to dictate how the installation process will run. By right clicking the media and selecting “Properties” we can access an interface similar to that used for the Deployment Share.

 

 

Above you can see that both an x86 and an x64 boot image have been selected to be created. The size of the created media can be reduced by only creating one type of boot image. The important thing to remember is that any build process started using this media will NOT be automated unless the rules section (media specific customsettings.ini and bootstrap.ini files) are updated to configure that automation.

 

 

Note: The bootstrap.ini file should NOT contain a DeployRoot value as all required content should be contained in the created media rather than being accessed from a Deployment Share.

 

 

Once the customsettings.ini and bootstrap.ini files have been modified to suit requirements, the media folders can be populated with data and the boot files created. To write the items included in the Selection Profile to disk we need to update the media by right clicking the media object created and selecting “Update Media Content”.

 

 

This process will take much longer; the time required depends on the specific items included in the Selection Profile.

 

 

 

Once complete, two sets of media will have been created. An ISO file (LiteTouchMedia.iso) and a content folder containing all the files needing to be written to a bootable USB drive.

 

 

In my example media, the ISO file has grown beyond the 4.7GB that can be held on a standard DVD. While it can still be used to build virtual machines, you may need to use a USB thumb drive to build physical machines.

To create a bootable thumb drive you will need a physical machine (to plug the USB drive into) or a solution that supports USB over IP. My personal preference is to create the bootable USB drive on a Windows 7 or 8 workstation or laptop. The steps to create bootable MDT media on a USB drive are as follows (a PowerShell alternative is sketched after the list):

  1. Open a Command Prompt with Administrator privileges in either Windows 7 Pro or Windows 8 Pro.
  2. Insert the target USB boot media device into an available USB port.
  3. Type “DiskPart” in the command prompt.
  4. Type “List Disk” (make note of the disk number of the target USB drive).
  5. Type “Select Disk X”, where X is the target USB drive noted in step 4.
  6. Type “Clean”.
  7. Type “Create Partition Primary”.
  8. Type “Select Partition 1”.
  9. Type “format FS=fat32 quick”.
  10. Type “Active”.
  11. Type “Exit”.
  12. Copy the contents of the “Content” folder from the media location specified above to the USB drive.
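On Windows 8 / Server 2012 or later, the Storage module cmdlets can do the same job as DiskPart. The sketch below is an illustration only - the disk number is an assumption you must confirm with Get-Disk first, because these commands wipe the selected disk:

# WARNING: this erases the selected disk - confirm the number with Get-Disk first
$diskNumber = 1
Clear-Disk -Number $diskNumber -RemoveData -Confirm:$false
Initialize-Disk -Number $diskNumber -PartitionStyle MBR
New-Partition -DiskNumber $diskNumber -UseMaximumSize -IsActive -AssignDriveLetter |
    Format-Volume -FileSystem FAT32 -NewFileSystemLabel 'MDTMEDIA'
# Then copy the contents of the media "Content" folder to the new drive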

 

 

Note: The above commands set the file system to FAT32. Be aware that FAT32 has a maximum file size of 4 GB and that Windows will not format a FAT32 volume larger than 32 GB.

You can then test your bootable media on your central site by powering down your MDT server or disconnecting it from the network and ensuring that clients can build to completion before sending the media to remote sites.

Note: Neither the ISO nor the USB thumb drive will be password protected, meaning anyone with access to the media will be able to read any usernames or passwords used in the customsettings.ini and bootstrap.ini files. In addition, use of media does not allow for versioning, meaning that, as MDT is updated, your old media may still be available and in use around the estate.

That brings us to the end of this post, which has demonstrated how to enable monitoring and also how to deploy machines in more remote locations. In the next part of this series we'll cover Linked Deployment Shares to enable deployment in remote sites where there is sufficient requirement to place a localised deployment share.