Archive for the ‘IT’ Category

Powershell launch options

Tuesday, December 29th, 2015

There are a few things that we can do when launching PowerShell.

1) We can simply start PowerShell by typing "powershell" at the command prompt.
2) We can launch PowerShell faster by not displaying the banner – we do this using the command "powershell -nologo".
3) We can launch an older version of PowerShell by stating which version to launch e.g. "powershell -version 3".
4) We can launch PowerShell using a specific configuration file (e.g. "powershell -psconsolefile myconsole.psc1").
5) We can launch PowerShell and tell it to run a command on launch – powershell -command "& {get-service}"
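
These switches can also be combined. A quick sketch (the command in the braces is only an example; -noprofile, which skips loading profile scripts, is another switch worth knowing for faster start-up):

powershell -nologo -noprofile -command "& {Get-Service | Where-Object Status -eq 'Running'}"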

Common PowerShell commands for Hyper-V

Sunday, November 9th, 2014

This blog post will list some simple things you can do with PowerShell to make Hyper-V easier to manage – the details and commands for this post are taken from TechEd Australia 2014 – you can watch the original session at http://channel9.msdn.com/Events/TechEd/Australia/2014/DCI313.

Want to find a command to use ?

e.g. Get-Command –Module Hyper-V –Name *vhd*

Swap the *vhd* text for the type of command you want to look for. If you are not sure if it’s a Hyper-V command then don’t limit it to a module but search all loaded modules.

Get-command *Adapter*

If you want to list all of the available commands in a module you need to know the module name.

Get-Command –Module Hyper-V

If you are using Get-Help to understand how to use individual commands, use

Get-Help Update-Help

to understand how to update the help files to the latest versions.

If you just want to see how to construct an individual command type

Show-Command

This will pop up a screen showing all the available commands. You can then search for a command and the GUI will allow you to browse the options for that command.

Creating new VMs is a cinch – enter a command similar to the below:

New-VM –Name “VM Name” –MemoryStartupBytes 512MB –Path “C:\ClusterVolumes\Volume1\MyVMs”

If you want to create 10 new VMs then this can be done with a single command line by creating an array holding 10 values and then ForEach of those items in the array, create a new VM. An example of the code required is:

$CreateVM = @(); (1..10) | %{ $CreateVM += New-VM –Name “MyVM$_” }

This will create 10 new virtual machines named MyVM1, MyVM2 through to MyVM10.

Want to know which virtual machines are powered down ?

Get-VM | Where-Object {$_.State –eq ‘Off’}

The above is for PowerShell 3 – in PowerShell 4 it’s been made even easier !

Get-VM | Where-Object State –eq Off

As you can see, we no longer need to tell PowerShell that State is an attribute of our object, it understands the context. It also doesn’t need to be told that “Off” is a value rather than a variable, it just works it out – amazing !

If you want to be able to review the results in something other than the PowerShell command line interface, just pipe the results to “Out-GridView” to open them in a grid in a new interface.

Get-VM | Out-GridView

This is very useful in very large environments as it allows searching of the results.

Want to create a new disk ?

New-VHD C:\CustomerVolumes\Volume1\MyVHD\MyDisk.VHD –SizeBytes 60GB

This will create a new 60GB disk. If you want a dynamic disk, add the –Dynamic switch.

Want to convert a VHD to the newer VHDx format (to expand the disk beyond 2TB) ?

Convert-VHD C:\CustomerVolumes\Volume1\MyVHD\MyDisk.VHD C:\CustomerVolumes\Volume1\MyVHD\MyDisk.VHDX -Passthru

The –Passthru switch ensures that the results of the command are displayed on the screen (it passes the output through to the pipeline; as no commands follow, it is simply written to the screen) – without this the command simply executes. Don’t forget, you can use a single command line to get all VMs, get all of their disks and convert all of those disks to the new format.
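
A rough sketch of that pipeline (not from the original session – it assumes the VMs are powered off and that the new VHDX files should sit alongside the originals):

Get-VM | Get-VMHardDiskDrive | Where-Object { $_.Path -like "*.vhd" } | ForEach-Object { Convert-VHD $_.Path ($_.Path + "x") }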

If you want to connect the disk you created to a virtual machine then simply

Add-VMHardDiskDrive –VMName MyVM –Path C:\CustomerVolumes\Volume1\MyVHD\MyDisk.VHDX -Passthru

Want to move all of your VMs from one cluster to another ? Well, you can do a live migration or a shared nothing move but to do this using PowerShell we can simply export and import.

Get-VM | Export-VM –Path C:\MyExportedVMs –Passthru
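
The matching import on the destination host isn’t shown above – a sketch, assuming the exported folder has been copied across and that each VM’s configuration XML sits in its “Virtual Machines” subfolder:

# Register each exported VM on this host (add -Copy -GenerateNewId to import a duplicate instead)
Get-ChildItem "C:\MyExportedVMs" -Recurse -Filter *.xml | Where-Object { $_.Directory.Name -eq "Virtual Machines" } | ForEach-Object { Import-VM -Path $_.FullName }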

Now that we can create disks and VMs, we can also create networking within Hyper-V. To create a new virtual switch we can simply

New-VMSwitch “MyvSwitch” –SwitchType Internal

This will create a new vSwitch to enable communications only within the node itself – To create a switch available for external use we can use the command

New-VMSwitch “QoS Switch” -NetAdapterName “Wired Ethernet Connection 3” -MinimumBandwidthMode Weight

i.e. External is assumed by default when a physical network adapter is specified – the above switch will be enabled for QoS (bandwidth weighting), which is especially useful where NICs have been presented to the Management OS also.

If we want to add a network adapter to our virtual machine we use a command similar to the below

Add-VMNetworkAdapter –VMName “MyVM” –Name “NewNIC” –Passthru

If we want to present a NIC to the ManagementOS (for converged networking) then, instead of using the –VMName switch, we use the –ManagementOS switch.

Add-VMNetworkAdapter –ManagementOS –Name “NewNIC” –SwitchName “MyvSwitch” –Passthru

As you can see, we connect to the switch at the time. For our “standard” VM NICs, we can connect the network adapter we created to our vSwitch as follows

Connect-VMNetworkAdapter –VMName “MyVM” –SwitchName “MyvSwitch”

Once our VM is created we can migrate it between nodes

Move-VM –Name MyVM -DestinationHost Server2

If we don’t have a cluster and want to perform a “shared nothing” migration between stand alone nodes or even between clusters then we can do this

Move-VM MyVM Server2 –IncludeStorage –DestinationStoragePath C:\ClusterVolumes\Volume2\MyVM

Of course, we can use tokens to move multiple machines one after the other.

Get-VM –Name * | % {Move-VMStorage $_.Name “C:\ClusterStorage\Volume2\$($_.Name)” }

The above will move the storage only for all of our virtual machines to a new disk – great for rebalancing our storage.

We can set QoS for our virtual machines – this is only bandwidth based and does not prioritise traffic by type.

Set-VMNetworkAdapter –VMName MyVM –Name MyNIC –MaximumBandwidth 100MB –MinimumBandwidthAbsolute 20MB –Passthru

We can also set Access Control Lists on our VM (essentially at the virtual switch level rather than in the OS) using PowerShell. We do this to ensure that our environment is safe. For example, this should be done to prevent access to the management layer from tenant VMs.

Add-VMNetworkAdapterACL –VMName myVM –RemoteIPAddress 10.1.1.1 –Direction Outbound –Action Deny

As you can see, the ACL is not fully featured: it is based on IP address (though it can be filtered by VM NIC also) and it secures traffic between IP addresses in a given direction (Inbound, Outbound or Both).

You can view any ACLs applied to a VM by using the command

Get-VMNetworkAdapterACL –VMName MyVM

We can set memory within a VM as follows

Set-VMMemory –VMName MyVM –DynamicMemoryEnabled $True –MaximumBytes 2GB –MinimumBytes 1GB –StartupBytes 2GB

Again, we can apply to multiple machines either by listing them

Set-VMMemory –VMName MyVM1, MyVM2, MyVM3

Or by tokenizing the name

Set-VMMemory –VMName MyVM*

We can also start multiple VMs in the same way

Get-VM MyVM* | Start-VM -Passthru

If we want to monitor our VMs, we can use VM Resource Metering to record what resources each VM is consuming – this can only be set via PowerShell

Get-VM | Enable-VMResourceMetering

Will enable resource metering for all VMs

Get-VM –Name MyVM | Measure-VM

Will return the performance statistics for an individual VM that is being measured / metered.

PowerShell History Viewer

Friday, March 21st, 2014

Here’s a nice feature of the Active Directory Administrative Centre – the PowerShell History Viewer. Simply undertake your daily tasks as usual and the PowerShell History Viewer will capture what you do and allow you to see the commands it used to perform those tasks. You’ve then taken the first steps needed to automate your most common tasks as you will:

  1. Know which tasks you carry out most regularly and
  2. Know the PowerShell commands to perform those tasks.

To expose the PowerShell History Viewer in the Active Directory Administrative Centre, click on the up arrow at the bottom right of the Admin Centre.

 

You can now perform your usual tasks and the PowerShell for those tasks will be captured there.

The PowerShell used is displayed below – checking the “Show All” box will show all PowerShell commands issued to run the Admin Centre, not just those related to you entering instructions.

 

For the “full” command, right click it and select “Copy”

 

You can then paste it into your favourite editor such as the PowerShell Integrated Scripting Environment (ISE). If you would prefer to just read the string in the GUI then clicking the + sign to the left of the command renders it readable.

 

 

Shutting down the Admin Centre will clear the history so do take a note of any commands you need first.


XenApp and XenDesktop recommended optimisations and settings

Friday, March 21st, 2014

Excellent blog post here detailing recommended optimisations to make when presenting Windows 8 or XenApp using Citrix products. Applies just as well for RDS installs.

The Windows 7 optimization guide can be found at http://support.citrix.com/article/CTX127050.

How to set XenDesktop Delivery Controllers to use SSL

Monday, March 17th, 2014

Good Citrix article at http://support.citrix.com/article/CTX130213.

Note the need to add dashes to the GUID or you get a “Parameter is incorrect” message.

Listing group membership and extracting into a CSV file for multiple groups

Friday, February 21st, 2014

I’ve been hunting around the web for a PowerShell script that will list the members of multiple groups and haven’t been able to find one so I’ve written my own.

This script isn’t intended to be perfect but it will give you the bare bones of how to write your own. For example, this script works on the basis of entering all or part of a group name and then reporting on that group. If you enter a blank or * for the group name then it will export user membership in all groups as direct members (add the -Recursive switch if you need membership of nested groups). This is useful if you have a group naming convention as you can easily drill down into the groups you want.

It also doesn’t filter out computer accounts, so it depends whether that’s an issue for you, and it reports against the whole of AD – but you can always filter the Get-ADGroup command to scope it to an individual OU or area of AD.
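
For example, two variations touched on above (the group and OU names are hypothetical; both assume the RSAT ActiveDirectory module):

# Include nested group members rather than direct members only
Get-ADGroupMember "LG-APP-Example" -Recursive

# Scope the group search to a single OU instead of the whole directory
Get-ADGroup -Filter {name -like "LG-APP-*"} -SearchBase "OU=Application Groups,DC=mydomain,DC=local"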

In any event, like I said I wasn’t able to find anything around to do this so hopefully, if you need to do this, this script will give you a good head start on exporting these values from your directory.

Here’s the script – if copying and pasting into notepad remember to correct some characters such as ‘ and “.

 

Import-Module ActiveDirectory

Write-Host "********************************************************"
Write-Host "* This script will dump out users in named groups, all *"
Write-Host "* groups or a range of groups. You will be guided      *"
Write-Host "* through the process                                  *"
Write-Host "*                                                      *"
Write-Host "* All output will be saved to C:\Support\ScriptOutput\ *"
Write-Host "********************************************************"
Write-Host

# Ask for an output file name, defaulting to GroupMembership.csv
# (the C:\Support\ScriptOutput folder must already exist)
$strFileName = $(
    $selection = Read-Host 'Enter the name of file to save results to. Include an extension. (Default = GroupMembership.csv)'
    if ($selection) {$selection} else {'GroupMembership.csv'}
)

$strFileName = "C:\Support\ScriptOutput\" + $strFileName
If (Test-Path $strFileName) {
    Remove-Item $strFileName
}

Write-Host
Write-Host 'Enter name of group you would like to export'
Write-Host 'The script will look for matching groups'
Write-Host
Write-Host 'Entering the first part of the group name will return all matching groups'
Write-Host 'For example, entering "LG-APP-" without the quotation marks will return all application groups'
Write-Host
Write-Host 'Pressing return will list membership of ALL groups'
Write-Host
Write-Host '***** WARNING *****'
Write-Host
Write-Host 'Exporting all group memberships will take some time as it will'
Write-Host 'include all built in groups and distribution lists - use with caution'
Write-Host

# Build the wildcard used to match group names (blank input means every group)
$strGroupNames = $(
    $selection = Read-Host 'Enter name of group you would like to export (no value will return all groups)'
    if ($selection) {$selection + '*'} else {'*'}
)

Write-Host
Write-Host 'Exporting groups with names like' $strGroupNames 'to' $strFileName

# Write the CSV header, then one row per group member
$data = 'Group,UserName,UserID'
Write-Output $data | Out-File -FilePath $strFileName -Append

$groups = Get-ADGroup -Filter {name -like $strGroupNames}
foreach ($group in $groups)
{
    $usernames = Get-ADGroupMember $($group.name)

    foreach ($user in $usernames)
    {
        $data = $group.name + "," + $user.name + "," + $user.sAMAccountName
        Write-Output $data | Out-File -FilePath $strFileName -Append
    }
}

 

Performing Exchange 2010 datacentre failovers

Friday, February 21st, 2014

A nice wizard driven process is available to walk you through your particular scenario here.

Install and Configure MDT 2013 (Part 5)

Saturday, January 4th, 2014

In the last part of this series we looked at creation of media for remote deployments and centralised monitoring of deployments. In this part of the series we will look at replacing media with the use of Linked Deployment Shares for remote offices.

As with Media, the contents of a Linked Deployment Share are dictated by the Selection Profile associated with the Linked Deployment Share. If the “Everything” Selection Profile is used then the linked share will be an exact replica of the central Deployment Share at the time that the Linked Deployment Share is created. That’s important to note as any changes made to the centralised Deployment Share will not be replicated to the Linked Deployment Share unless and until you force the Linked Deployment Share to be updated. The contents of the Linked Deployment Share can be updated manually (by running the update command) or more regularly by use of either DFS or a scheduled task running robocopy or a PowerShell command.

Creation of the Linked Deployment Share is relatively simple and is achieved by right clicking the Linked Deployment Shares node in the Advanced Configuration section of the Deployment Workbench GUI and selecting “New Linked Deployment Share”.

 

However, the wizard that is launched requires that the share that will be turned into a Linked Deployment Share already exists. For the purposes of this post I have created a new share on the same server. This is what you would need to do if you were to use DFS as a replication mechanism. If a scripted replication is used then I would recommend the use of a PowerShell command (shown later in this post) or remembering to manually update the share as and when the centralised Deployment Share is modified. What you will do in your environment will in large part depend on how often the centralised Deployment Share is updated and whether you will remember to update any Linked Deployment Shares. If you have several remote shares then you may want to use a series of PowerShell commands in a single script to update all remote shares at the same time.
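
If you go down the scheduled task route, a sketch of registering one is below (the script path, task name and schedule are all hypothetical – the script itself would contain the replication commands shown later in this post):

# Run the replication script every night at 2am as SYSTEM
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\Update-LinkedDS.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "Update Linked Deployment Shares" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest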

As with the centralised share, I have created the remote share as a hidden share by appending a $ sign to the name. The share permissions are “Everyone | Full Control” in line with Microsoft Best Practice. I have removed the “Users” NTFS permissions and replaced them with Read / Execute permissions for an MDT specific account.

 

 

We can then create our Linked Deployment Share. If you have been following this series then the wizard should be familiar to you.

 

 

Note that we DON’T select the file path for our Linked Deployment Share (as by design it should be on a remote server). Instead, we enter a UNC path to the share including the $ sign as it’s a hidden share and select an appropriate Selection Profile to determine which content should be replicated to the remote share. We also select to either merge the contents or replace the contents of the existing share. It’s not immediately obvious (as we have created an empty share) but this allows you to pre-stage the content in the remote share (by using an external drive to manually transfer the data). This works in the same way as for creation of media in that the Linked Deployment Share object is created in the GUI but no data is copied into the share itself.

 

 

If we right click the Linked Deployment Share object created we can inspect its properties.

 

 

 

As you can see, there are no additional tabs (unlike for the Deployment Share). If you want to configure the WinPE settings for the Linked Deployment Share you need to do that through its root Deployment Share.

We can then replicate the content by right clicking the deployment share and selecting “Replicate Content”.

 

The replication will immediately start and the amount of time taken will depend on the amount of content you have, speed of link and hardware resources available. The boot images will be recreated for the Linked Deployment Share, specifically to ensure that the bootstrap.ini file contains the correct value for DeployRoot (the location of the Linked Deployment Share).

 

 

 

Once replication has been completed the summary screen presents you with a “View Script” button that allows you to access the PowerShell command used to replicate the share.

 

 

Import-Module “C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1”

New-PSDrive -Name “DS001” -PSProvider MDTProvider -Root “D:\DeploymentShare”

Update-MDTLinkedDS -path “DS001:\Linked Deployment Shares\LINKED001” -Verbose

 

This script can be used to automate replication of the Linked Deployment share but this will recreate the WinPE boot disks each time also. This can be overcome by clearing the below checkbox in the Linked Deployment Share properties.
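
If you have several Linked Deployment Shares, the same commands can be wrapped in a loop – a sketch, assuming the MDT provider drive from the script above and that every linked share sits under the Linked Deployment Shares node:

Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare"
# Update every linked share in turn
Get-ChildItem "DS001:\Linked Deployment Shares" | ForEach-Object { Update-MDTLinkedDS -Path "DS001:\Linked Deployment Shares\$($_.Name)" -Verbose }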

 

 

You should note that the customsettings.ini file is NOT replicated between servers with the default settings being created in the Linked Deployment Share. Using this method of replication you will then need to manually update the customsettings.ini file.

 

 

As well as either manually replicating the data as above or using a Scheduled task running the PowerShell script, you can simply set up a DFS-R share and replicate either the Linked Deployment Share (so that only a subset of data replicates) or the original Deployment Share (so that all data replicates).

If replicating the original Deployment Share some changes need to be made to the bootstrap.ini file so that, when booting, WinPE sets the DeployRoot value based on the client’s default gateway. An example configuration is shown below.

[Settings]
Priority=DefaultGateway, Default

[Default]
OSInstall=Y
SkipBDDWelcome=Yes

[DefaultGateway]
10.1.1.1=London
10.2.2.1=Tokyo
10.3.3.1=NYC

[London]
Deployroot=\\LondonMDT\DFSRoot\DeploymentShare$

[Tokyo]
Deployroot=\\TokyoMDT\DFSRoot\DeploymentShare$

[NYC]
Deployroot=\\NYCMDT\DFSRoot\DeploymentShare$

 

I think that you can see that your choices are between using the built-in non-automated solution or creating your own automated solution and configuring MDT to function between sites. I would suggest that the latter, while needing more set-up (especially when you factor in deploying and configuring DFS-R), is the more functional and robust solution.

In the next post we’ll go through how to configure a database for MDT to centralise the functionality provided by the customsettings.ini file.

 

Install and Configure MDT 2013 (Part 4)

Saturday, January 4th, 2014

In Part 3 of this series we looked at Selection Profiles and how to target the injection of drivers as part of a task sequence. As promised this post will show you how to enable monitoring to track installations as they occur and also creating media so that computers can be deployed offline when the MDT server cannot be contacted or where the link to the MDT server is small or unreliable.

Monitoring is enabled per Deployment Share by accessing the properties of the deployment share in the deployment workbench. To enable monitoring we simply tick the check box and apply.

 

 

The necessary firewall ports are open by default.

 

 

If you change the port numbers, an additional firewall rule will be created leaving the original ports exposed.

 

 

As well as creating the firewall rule, enabling monitoring also does two other things.

  1. A new service (the Microsoft Deployment Toolkit Monitor Service) is created. This service receives events from the computers being monitored and displays them in the monitoring node of the deployment workbench.
  2. The CustomSettings.ini file is also modified to add a new entry specifying the URL to be used for monitoring (an example of the entry is shown below).
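
For reference, the entry added looks similar to the below (the server name is a placeholder for your own MDT server; 9800 is the default monitoring port):

EventService=http://MDT01:9800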

 

 

As the customsettings.ini file is updated there is no need to update the deployment share (nor the WinPE boot images) when enabling monitoring as this setting is read in post boot as part of the deployment process.

When deploying machines you will now be able to track the build process within the workbench GUI.

 

 

Right clicking the status and selecting properties provides further details so that you can see which step the deployment has reached.

 

 

Once the installation has completed the GUI is updated.

 

If you access the properties of the monitoring report you can then connect to the machine by RDP (if remote desktop has been enabled) or using VMConnect.exe if the Hyper-V tools have been installed on the machine running the deployment workbench.

 

 

Monitoring can definitely make your life easier as you will know when a machine has completed building. In that way you can work on something else and only return to the machine when everything is ready.

Another thing that can make your life easier is being able to build machines while disconnected from the network – perhaps in secure areas of the network or in remote sites with a low number of users which don’t warrant a local server and / or where remote sites have a small or unreliable connection.

Media can be created as an ISO or placed on a USB thumb drive that can be booted from. In larger deployments, this may mean that there is a large amount of files, installers, drivers and other items to include in the build media. To reduce the amount of data placed in any media created, media creation leverages Selection Profiles to select which items should be included. For example, we can include just the Windows 8.1 operating system, HP drivers, general applications and any task sequences required to drive the installation.

We therefore create a new Selection Profile to select the items to be included in the media. The process for this is detailed in Part 3 of this series.

 

 

As you can see, it is not possible to select individual items and so creation of your folder structure is paramount, especially regarding items which may consume large amounts of space in the media image. For example, when we imported the Windows Server 2012 R2 images, it imported all 4 images into a single folder. While these will not take any more room than a single image (because of the way in which Windows 2012 R2 is packaged) I use this as a device to demonstrate how adding multiple items to a single folder can lead to large media sets being created.

Once we have a Selection Profile created specifically for our Media we can create the Media. To create our media we right click the Media node under Advanced Configuration in the deployment workbench and select “New Media”.

 

 

We specify a location to create our media in and also the Selection Profile created to state which items to include.

 

 

NOTE: Do NOT use a path under the deployment share. If we choose to replicate our share then this will mean the data being shipped twice.

The media creation process is very quick taking a few seconds. A Media object is created under our media node.

 

 

And a folder structure is created in the path we specified.

 

 

Just as with our Deployment Share, the media created can be configured to dictate how the installation process will run. By right clicking the media and selecting “Properties” we can access an interface similar to that used for the Deployment Share.

 

 

Above you can see that both an x86 and an x64 boot image have been selected to be created. The size of the created media can be reduced by only creating one type of boot image. The important thing to remember is that any build process started using this media will NOT be automated unless the rules section (media specific customsettings.ini and bootstrap.ini files) are updated to configure that automation.

 

 

Note: The bootstrap.ini file should NOT contain a DeployRoot value as all required content should be contained in the created media rather than being accessed from a Deployment Share.

 

 

Once the customsettings.ini and bootstrap.ini files have been modified to suit requirements, the media folders can be populated with data and the boot files created. To write the items included in the Selection Profile to disk we need to update the media by right clicking the media object created and selecting “Update Media Content”.

 

 

This process will take much longer, with the length of time required depending upon the specific items included in the Selection Profile.

 

 

 

Once complete, two sets of media will have been created. An ISO file (LiteTouchMedia.iso) and a content folder containing all the files needing to be written to a bootable USB drive.

 

 

In my example media, the ISO file has grown beyond the 4.7GB that can be held on a standard DVD. While it can still be used to build virtual machines you may need to use a USB thumb drive to create physical machines.

To create a bootable thumb drive you will need a physical machine (to plug the USB drive into) or a solution that supports USB over IP. My personal preference is to create the bootable USB drive in a Windows 7 or 8 workstation or laptop. The steps to create bootable MDT media on a USB drive are as follows:

  1. Open a Command Prompt with Administrator privileges in either Windows 7 Pro or Windows 8 Pro.
  2. Insert the target USB boot media device into an available USB port.
  3. Type “DiskPart” in the command prompt.
  4. Type “List Disk” (make note of the disk number of the target USB drive).
  5. Type “Select Disk X”, where X is the target USB drive noted in step 4.
  6. Type “Clean”.
  7. Type “Create Partition Primary”.
  8. Type “Select Partition 1”.
  9. Type “format FS=fat32 quick”.
  10. Type “Active”.
  11. Type “Exit”.
  12. Copy the contents of the “Content” folder from the media location specified above to the USB drive.

 

 

Note: The above commands set the file system to be fat32. This supports a maximum disk size of 8 terabytes.
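
If you create this media often, the same DiskPart steps can be scripted and run with “diskpart /s” – a sketch, assuming the USB stick shows up as disk 1 on your machine (always confirm with List Disk first):

rem Save as usb-mdt.txt and run with: diskpart /s usb-mdt.txt
select disk 1
clean
create partition primary
select partition 1
format fs=fat32 quick
active
exit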

You can then test your bootable media on your central site by powering down your MDT server or disconnecting it from the network and ensuring that clients can build to completion before sending the media to remote sites.

Note: Neither the ISO nor the USB thumb drive will be password protected, meaning anyone having access to the media will be able to read any usernames or passwords used in the customsettings.ini and bootstrap.ini files. In addition, use of media does not allow for versioning, meaning that, as MDT is updated, your old media may still be available and in use around the estate.

That brings us to the end of this post which has demonstrated how to enable monitoring and also how to deploy machines in more remote locations. In the next part of this series we’ll cover off Linked Deployment Shares to enable deployment in remote sites where there is sufficient requirement to place a localised deployment share.

 

Windows 2012 R2 Tiered Storage and Storage Spaces

Friday, January 3rd, 2014

In this post I shall walk you through deploying Storage Spaces and tiered storage using Windows 2012 R2.

Windows, as an operating system, has long been used to host and present shared storage. However, its management of disks has been pretty limited, with Master Boot Record (MBR) disks being limited to 2TB in size and 4 primary partitions. This was improved by GUID Partition Table (GPT) disks allowing 128 partitions and 64TB disks (these limitations are imposed by Windows rather than the GPT standard). Those limits remain in place. Storage Pools are really a replacement for the dynamic disk feature of Windows. Dynamic disks allowed for RAID arrays to be created in software for people who could not afford hardware array controllers. So, no-one used them right? After all, why would you put additional stress on your server when hardware array controllers are relatively cheap or even ship on board for most badged servers?

Again, that’s still true here with Storage Spaces but the scalability, resiliency, and optimization are improved beyond simply offering software RAID. What Storage Spaces provide in addition to RAID are two key items:

  1. Different RAID types across the same physical disks – very akin to a SAN and its ability to virtualise its disks
  2. Tiered storage between differing disks types so that files accessed more often are moved to faster storage

Now, I can see why most enterprise organisations may not be interested in the first item. After all, shared storage is probably done on a SAN which offers these facilities. I’m assuming though that this is part of Microsoft looking forward to the point when expensive shared storage is no longer required as it will be replaced by resilient server based storage presented by SMB3.

The second item is very intriguing though, even for standard file servers or other servers. What if a server contains two or more types of disks – SSD, SAS and SATA? The data can then be tiered across the disks locally with less often accessed data being placed on the slower storage. True, it’s Windows so this will not be block based, only file based but, if you are a smaller shop, or even an enterprise, it does allow for improved performance for some files while less often accessed files can be placed on larger, slower disks with the files moved between disks as needed.

NOTE: It is only possible to mark the disks as being either HDD or SSD and so only two levels of tiering are possible for any given server.

The underlying “physical” disks can still be RAIDed using hardware RAID (item 1) with tiering provided by the operating system. Couple this with data deduplication and Windows Resilient File System (ReFS – http://msdn.microsoft.com/en-us/library/windows/desktop/hh848060(v=vs.85).aspx) and you now have a 21st Century method of storing files that maximises performance, reduces storage requirements and maximises the quantity of data that a single set of disks can hold.

In short, while it may seem that the changes to storage that Windows 2012 R2 brings are, once again, not for the Enterprise, the truth is that there is something here that everyone can benefit from.

To demonstrate Storage Spaces and Tiered Storage for you I have created a VM named FILE1 in my lab and have attached 10 virtual disks to that VM – each disk is a 10GB fixed size VHDX file.

 

These disks are exposed in the operating system as offline disks.

 

As I have said, in my lab these are just stand alone VHDs. In a production environment, these physical disks may have already been grouped together in one or more RAID sets for item 1 of the Storage Spaces functionality.

We can now work with these disks to create one or more Storage Spaces. Storage Spaces are accessed from the Server Manager console under “File and Storage Services”.

 

 

Where disks have not been initialised they will have an “unknown” partition table. Ones that have previously been used will be shown with however they have been configured.

 

 

All disks that are currently unused will be placed into a “Primordial” Storage Pool. This is a default built-in pool that exists to represent unused disks within the GUI and PowerShell – i.e. all unused disks will appear here and get moved to a different pool when you assign them to that other pool.

To create a storage pool, simply click on the Storage Pools link and select “New Storage Pool” from the Tasks menu.

 

 

 

The wizard will start.

 

Give the Storage Pool a name.

 

 

And assign physical disks to the pool. We will later, when we create Virtual Disks within the pool, set which level of RAID should be used. We can set the disk to three types of allocation: “Automatic”, “Manual” or “Hot Spare”.

 

 

In my Pool I have two hot spares that can be brought online if one of the disks fails (an improvement over dynamic disks which did not provide for hot spares). Volumes will automatically be allocated to disks. If I had selected “Manual” for the Allocation type then volumes would need to be manually assigned to individual disks within the pool, providing greater control but increased administrative overhead.

Our storage space is then created.
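
For reference, a rough PowerShell equivalent of the wizard is below (a sketch only – the pool name is similar to the one used later in this post and the wildcarded subsystem name assumes a standalone server):

# Pool every disk that is currently sitting in the Primordial pool
New-StoragePool -FriendlyName "User Data" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $True)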

 

 

Note the choice to automatically launch the virtual disk creation wizard. This creates what you and I would traditionally think of as a volume on the disk. It is a virtual disk but gets presented within the operating system as a physical disk. At this point, we have allocated 8 of our 10 spare disks to the storage pool. Looking at Disk Manager you can see that these are now no longer available for use by the operating system.

 

 

Our Primordial storage space also has only two disks still available for allocation with the balance of disks participating in our new storage pool.

 

 

In my lab the disks are of media type “Unknown” (right click the disk and select properties).

 

 

We can issue a PowerShell command to instruct the operating system that some of our disks will be standard hard drives and some will be SSD drives.

Set-PhysicalDisk –FriendlyName <MyDiskName> -MediaType <MediaType>

Where MediaType is either SSD or HDD. For example:
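
A sketch of what that looks like (the friendly names are placeholders – use the names reported by Get-PhysicalDisk):

# Check the current names and media types, then tag the disks
Get-PhysicalDisk | Select-Object FriendlyName, MediaType
Set-PhysicalDisk -FriendlyName PhysicalDisk3 -MediaType SSD
Set-PhysicalDisk -FriendlyName PhysicalDisk7 -MediaType HDD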

 

 

We can repeat this for each disk in the pool. To demonstrate Storage Tiering I have marked 4 of the drives as SSD and 4 of the drives as standard HDD.

 

 

NOTE: Running the above command before the disks have been added to a storage pool results in an error. Adding the disk to a storage pool allows the media type to be set.

 

 

After assigning media types to the disks, close and reopen Server Manager to refresh the disk information. With our storage pool created we can now run our “New Virtual Disk” task.

 

 

 

This starts a wizard.

 

 

We select the pool of storage we want to create the virtual disk on.

 

 

We provide a meaningful name for the VHD to be created. This could indicate the type of data the drive will hold or could, as I have done, indicate which disk is allocated to which drive letter. I have NOT chosen to use tiering for this disk but will for the disk I create next so that you can see the difference in subsequent screens.

 

 

 

We then get to assign a RAID level to the VHD to be used across the portions of the physical disks that we will be using.

 

 

The layout values are roughly similar to

  • Simple = RAID0
  • Mirror=RAID10 + JBOD
  • Parity=RAID50 + JBOD

Here, we have not used a hardware RAID controller as there would be little benefit in RAIDing already RAIDed drives in software. The above is merely to demonstrate the power of storage virtualisation provided within Windows.

Next we select to use a thick or thin provisioned virtual disk.

 

 

If we use a fixed size VHD we consume the space on the disks now, whether we will use it or not. Thin provisioning, while more economical with our disk space, does come at a price though, and that is the overhead associated with expanding the disk out as more data is added. As this overhead is only lessened once files begin to be deleted (are they ever on a file server?) it will affect the performance of the whole. Whether this is noticeable to users will depend on a number of factors such as network speeds, server speeds, physical disk speeds etc. but it will be slower than using a fixed disk.

Here I have selected to use a thinly provisioned image. Our pool contains 8 disks each of 10GB. 2 of those were marked as Hot Spares leaving a maximum of 60GB available for allocation. After overhead, we have as little as 55.5GB available for use (6 x 9.25GB).

NOTE: The Server Manager GUI reports ALL disk space available for use even though some of the disks are allocated as Hot Spares

 

 

We can then specify a size for our virtual disk. Note that I have set mine to be 500GB in size, even though our Storage Pool only has 72GB of free space.

 

 

We can increase the free space simply by adding more disks to the Storage Pool. Disks can also be of different sizes and different speeds (unlike with hardware RAID) as they are seen as JBOD with the storage being carved up. We can then follow the wizard to the end at which point our virtual disk will be created.

NOTE: Additional space will be consumed as storage will be allocated for use as a write cache. The quantity of storage used for write cache can be set when creating the disk using PowerShell. For example:

$HDD = Get-StorageTier -FriendlyName HDDTier

Get-StoragePool “user Data” | New-VirtualDisk -FriendlyName “F Drive” -ResiliencySettingName Parity –StorageTiers $HDD -StorageTierSizes 8GB -WriteCacheSize 1GB
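
The $HDD tier referenced above has to exist before it can be retrieved with Get-StorageTier; tiers are created per pool – a sketch, assuming the pool name used above (the SSDTier name is my own):

New-StorageTier -StoragePoolFriendlyName "user Data" -FriendlyName SSDTier -MediaType SSD
New-StorageTier -StoragePoolFriendlyName "user Data" -FriendlyName HDDTier -MediaType HDD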

 

Once our virtual disk has been created then, by default, we will be asked to create a volume on that disk.

 

 

Again, this is a wizard driven exercise.

 

 

The disk available to use is the one we created above. Note that the disk number is 11, the next available disk number along. Even though the physical disks are not surfaced within Disk Management the disk numbers are still allocated and in use.

 

 

The same as when we are working within Disk Management, we can allocate a size to the volume created.

 

 

And present it as a drive letter or mount point.

 

 

We then select the file system to be used, NTFS or ReFS together with a block size. ReFS only allows for 64K blocks whereas NTFS can take advantage of the 4K block size.

 

 

Note also that short file name generation is disabled by default to speed up writing to disk. This is also disabled for Windows 8 which may cause issues for you if you have software that relies on the short file name format being present. For example, legacy versions of the Citrix Access Gateway Secure Client rely on this and will not work on volumes created with Windows 8 (for example, Windows 7 images created using SCCM 2012 SP1). Note also that, while I have named the drive “Slow User Data” this is not totally true as we have used automatic allocation for the space used and so, inevitably, some will be allocated on the faster drives in the storage space.

Following the wizard to the end now creates the drive for us which is exposed in Windows Explorer and in Disk Management.

 

 

 

As you can see, the operating system believes that it has 499GB of storage available (of the 500GB allocated to the drive) after overhead – far more than the physical space in the pool, demonstrating that the drive is thin provisioned.

We can follow the same process once more but, this time, choose to create tiered storage. Tiered storage is only supported with fixed disks.

 

 

In addition, Parity striping is not supported. Only simple volumes or striped volumes can be used.

 

 

As discussed above, if hardware array controllers are in use managing RAID within hardware, it may be worth considering deploying the storage as “Simple” to maximise the space on the basis that hardware failure at the disk level is already catered for by the array controller.

As tiered storage cannot be thinly provisioned, this choice is greyed out.

 

 

Note that we are now presented with a different screen to configure the tiered storage.

 

 

The virtual disk will be split between the two types of storage in these quantities which also sets the fixed disk size. Note also that the creation of our thinly provisioned disk from the previous step consumed ~ 15GB of storage even without data being written to the disk. This overhead should be taken into account for any sizing exercise for new deployments.

Again we can follow the wizard to the end at which point we will be asked to create a volume on the disk as previously. We can then continue to create additional disks, either fixed size or thinly provisioned, until our disk space in the Storage Pool is fully consumed. As with any disk, monitoring should be enabled to alert when disk space is getting low but this is especially true for Storage Pools where thinly provisioned disks can rapidly consume any spare space leaving no writable area available.
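
A quick way to keep an eye on this from PowerShell (a sketch – the pool name is the one used earlier in this post):

# Compare allocated space against the pool size, and see each virtual disk's footprint on the pool
Get-StoragePool "user Data" | Select-Object FriendlyName, Size, AllocatedSize
Get-VirtualDisk | Select-Object FriendlyName, Size, FootprintOnPool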

When creating our pool, we can also set the disk allocation to Manual.

 

This does prevent the use of Storage Tiering though.

Microsoft do not recommend mixing and matching automatic and manual allocation of disks in the same pool.

 

 

If we do set all disks to be manually allocated then two new screens are added to the virtual disk creation wizard. The first asks which disks should be used to store the data.

 

 

The second sets the performance settings for the virtual disk.

 

 

As you can see, Storage Spaces are an exciting addition to the Windows Operating System which now allows you to virtualise and optimise your storage whatever your budget.