Listing group membership and extracting into a CSV file for multiple groups

February 21st, 2014  / Author: Philip Flint

I've been hunting around the web for a PowerShell script that will list the members of multiple groups and haven't been able to find one, so I've written my own.

This script isn't intended to be perfect but it will give you the bare bones of how to write your own. For example, this script works on the basis of entering all or part of a group name and then reporting on the matching groups. If you enter a blank or * for the group name then it will export the direct membership of all groups (add a recursive switch if you need membership of nested groups). This is useful if you have a group naming convention as you can easily drill down into the groups you want.

It also doesn't filter out computer accounts, so it depends whether that's an issue for you, and it reports against the whole of AD, but you can always filter the Get-ADGroup command to scope it to an individual OU or area of AD.

In any event, as I said, I wasn't able to find anything around to do this, so if you need to export these values from your directory this script should hopefully give you a good head start.

Here's the script – if copying and pasting into Notepad remember to correct any smart-quote characters such as ' and ".

 

Import-Module ActiveDirectory

Write-Host "********************************************************"
Write-Host "* This script will dump out users in named groups, all *"
Write-Host "* groups or a range of groups. You will be guided      *"
Write-Host "* through the process.                                 *"
Write-Host "*                                                      *"
Write-Host "* All output will be saved to C:\Support\ScriptOutput\ *"
Write-Host "********************************************************"
Write-Host

$strFileName = $(
    $selection = Read-Host 'Enter the name of the file to save results to. Include an extension. (Default = GroupMembership.csv)'
    if ($selection) {$selection} else {'GroupMembership.csv'}
)

# Make sure the output folder exists, then remove any previous results file
$strFolder = 'C:\Support\ScriptOutput\'
If (-not (Test-Path $strFolder)) {
    New-Item -ItemType Directory -Path $strFolder | Out-Null
}
$strFileName = $strFolder + $strFileName
If (Test-Path $strFileName) {
    Remove-Item $strFileName
}

Write-Host
Write-Host 'Enter the name of the group you would like to export'
Write-Host 'The script will look for matching groups'
Write-Host
Write-Host 'Entering the first part of the group name will return all matching groups'
Write-Host 'For example, entering LG-APP- will return all application groups'
Write-Host
Write-Host 'Pressing return will list membership of ALL groups'
Write-Host
Write-Host '***** WARNING *****'
Write-Host
Write-Host 'Exporting all group memberships will take some time as it will'
Write-Host 'include all built-in groups and distribution lists - use with caution'
Write-Host

$strGroupNames = $(
    $selection = Read-Host 'Enter the name of the group you would like to export (no value will return all groups)'
    if ($selection) {$selection + '*'} else {'*'}
)

Write-Host
Write-Host "Exporting groups with names like $strGroupNames to $strFileName"

# Write the CSV header, then one line per group member
$data = 'Group,UserName,UserID'
Write-Output $data | Out-File -FilePath $strFileName -Append

$groups = Get-ADGroup -Filter "Name -like '$strGroupNames'"
foreach ($group in $groups)
{
    $usernames = Get-ADGroupMember $group.Name

    foreach ($user in $usernames)
    {
        $data = $group.Name + ',' + $user.Name + ',' + $user.SamAccountName
        Write-Output $data | Out-File -FilePath $strFileName -Append
    }
}
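If you do need nested group membership, Get-ADGroupMember supports a -Recursive switch; a minimal change to the inner lookup would be:

```powershell
# -Recursive flattens nested groups so only the end members are returned;
# the nested group objects themselves are not listed.
$usernames = Get-ADGroupMember $group.Name -Recursive
```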

 

Performing Exchange 2010 datacentre failovers

February 21st, 2014  / Author: Philip Flint

A nice wizard-driven process to walk you through your particular scenario is available here.

Install and Configure MDT 2013 (Part 5)

January 4th, 2014  / Author: Philip Flint

In the last part of this series we looked at creation of media for remote deployments and centralised monitoring of deployments. In this part of the series we will look at replacing media with the use of Linked Deployment Shares for remote offices.

As with Media, the contents of a Linked Deployment Share are dictated by the Selection Profile associated with the Linked Deployment Share. If the “Everything” Selection Profile is used then the linked share will be an exact replica of the central Deployment Share at the time that the Linked Deployment Share is created. That’s important to note as any changes made to the centralised Deployment Share will not be replicated to the Linked Deployment Share unless and until you force the Linked Deployment Share to be updated. The contents of the Linked Deployment Share can be updated manually (by running the update command) or more regularly by use of either DFS or a scheduled task running robocopy or a PowerShell command.

Creation of the Linked Deployment Share is relatively simple and is achieved by right clicking the Linked Deployment Shares node in the Advanced Configuration section of the Deployment Workbench GUI and selecting "New Linked Deployment Share".

 

However, the wizard that is launched requires that the share that will be turned into a Linked Deployment Share already exists. For the purposes of this post I have created a new share on the same server. This is what you would need to do if you were to use DFS as a replication mechanism. If scripted replication is used then I would recommend the use of a PowerShell command (shown later in this post) or remembering to manually update the share as and when the centralised Deployment Share is modified. What you do in your environment will in large part depend on how often the centralised Deployment Share is updated and whether you will remember to update any Linked Deployment Shares. If you have several remote shares then you may want to use a series of PowerShell commands in a single script to update all remote shares at the same time.
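As a sketch of that scripted approach (the drive name, share path and Linked Deployment Share IDs are examples and will differ in your environment; the Update-MDTLinkedDS command itself is shown later in this post):

```powershell
Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare"

# Update each Linked Deployment Share in turn
"LINKED001", "LINKED002", "LINKED003" | ForEach-Object {
    Update-MDTLinkedDS -Path "DS001:\Linked Deployment Shares\$_" -Verbose
}
```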

As for the centralised share, I have created the remote share as an administrative share by appending a $ sign to the name. The share permissions are “Everyone | Full Control” in line with Microsoft Best Practice. I have removed the “Users” NTFS permissions and replaced them with Read / Execute permissions for an MDT specific account.

 

 

We can then create our Linked Deployment Share. If you have been following this series then the wizard should be familiar to you.

 

 

Note that we DON'T select the file path for our Linked Deployment Share (as by design it should be on a remote server). Instead, we enter a UNC path to the share, including the $ sign as it's a hidden share, and select an appropriate Selection Profile to determine which content should be replicated to the remote share. We also select to either merge the contents with, or replace the contents of, the existing share. It's not immediately obvious (as we have created an empty share) but this allows you to pre-stage the content in the remote share (by using an external drive to manually transfer the data). This works in the same way as for the creation of media, in that the Linked Deployment Share object is created in the GUI but no data is copied into the share itself.

 

 

If we right click the Linked Deployment Share object created we can inspect its properties.

 

 

 

As you can see, there are no additional tabs (unlike for the Deployment Share). If you want to configure the WinPE settings for the Linked Deployment Share you need to do that through its root Deployment Share.

We can then replicate the content by right clicking the deployment share and selecting “Replicate Content”.

 

The replication will start immediately and the time taken will depend on the amount of content you have, the speed of the link and the hardware resources available. The boot images will be recreated for the Linked Deployment Share, specifically to ensure that the bootstrap.ini file contains the correct value for DeployRoot (the location of the Linked Deployment Share).

 

 

 

Once replication has completed, the summary screen presents you with a "View Script" button that allows you to access the PowerShell command used to replicate the share.

 

 

Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"

New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare"

Update-MDTLinkedDS -Path "DS001:\Linked Deployment Shares\LINKED001" -Verbose

 

This script can be used to automate replication of the Linked Deployment Share, but it will also recreate the WinPE boot images each time. This can be overcome by clearing the checkbox below in the Linked Deployment Share properties.
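One way to schedule the saved script is with the Scheduled Tasks cmdlets; a sketch only, in which the script path, task name and run time are examples:

```powershell
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Bypass -File C:\Scripts\Update-LinkedDS.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "Update MDT Linked Shares" -Action $action -Trigger $trigger
```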

 

 

You should note that the customsettings.ini file is NOT replicated between servers; default settings are created in the Linked Deployment Share instead. Using this method of replication you will then need to manually update the customsettings.ini file.

 

 

As well as either manually replicating the data as above or using a Scheduled task running the PowerShell script, you can simply set up a DFS-R share and replicate either the Linked Deployment Share (so that only a subset of data replicates) or the original Deployment Share (so that all data replicates).

If replicating the original Deployment Share, some changes need to be made to the bootstrap.ini file so that, when booting, WinPE sets the DeployRoot value based on the client's default gateway. An example configuration is shown below.

[Settings]
Priority=DefaultGateway, Default

[Default]
OSInstall=Y
SkipBDDWelcome=Yes

[DefaultGateway]
10.1.1.1=London
10.2.2.1=Tokyo
10.3.3.1=NYC

[London]
Deployroot=\\LondonMDT\DFSRoot\DeploymentShare$

[Tokyo]
Deployroot=\\TokyoMDT\DFSRoot\DeploymentShare$

[NYC]
Deployroot=\\NYCMDT\DFSRoot\DeploymentShare$

 

I think you can see that your choice is between using the built-in non-automated solution or creating your own automated solution and configuring MDT to function between sites. I would suggest that the latter, while needing more set-up (especially when you factor in deploying and configuring DFS-R), is the more functional and robust solution.

In the next post we’ll go through how to configure a database for MDT to centralise the functionality provided by the customsettings.ini file.

 

Install and Configure MDT 2013 (Part 4)

January 4th, 2014  / Author: Philip Flint

In Part 3 of this series we looked at Selection Profiles and how to target the injection of drivers as part of a task sequence. As promised, this post will show you how to enable monitoring to track installations as they occur, and also how to create media so that computers can be deployed offline when the MDT server cannot be contacted or where the link to the MDT server is slow or unreliable.

Monitoring is enabled per Deployment Share by accessing the properties of the deployment share in the deployment workbench. To enable monitoring we simply tick the check box and apply.

 

 

The necessary firewall ports are open by default.

 

 

If you change the port numbers, an additional firewall rule will be created leaving the original ports exposed.

 

 

As well as creating the firewall rule, enabling monitoring also does two other things.

  1. A new service (the Microsoft Deployment Toolkit Monitor Service) is created. This service receives events from the computers being monitored and displays them in the monitoring node of the deployment workbench.
  2. The CustomSettings.ini file is also modified to add a new entry specifying the URL to be used for monitoring.

 

 

As the customsettings.ini file is updated, there is no need to update the deployment share (nor the WinPE boot images) when enabling monitoring, as this setting is read post-boot as part of the deployment process.
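The entry that monitoring adds is the EventService property pointing at the monitoring URL; a representative fragment (the server name is an example – yours will match your MDT server and the port you configured):

```ini
[Settings]
Priority=Default

[Default]
EventService=http://MDT01:9800
```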

When deploying machines you will now be able to track the build process within the workbench GUI.

 

 

Right clicking the status and selecting properties provides further details so that you can see which step the deployment has reached.

 

 

Once the installation has completed the GUI is updated.

 

If you access the properties of the monitoring report you can then connect to the machine by RDP (if remote desktop has been enabled) or using VMConnect.exe if the Hyper-V tools have been installed on the machine running the deployment workbench.

 

 

Monitoring can definitely make your life easier as you will know when a machine has completed building. In that way you can work on something else and only return to the machine when everything is ready.

Another thing that can make your life easier is being able to build machines while disconnected from the network – perhaps in secure areas of the network, or in remote sites with a small number of users that don't warrant a local server and/or where the site has a slow or unreliable connection.

Media can be created as an ISO or placed on a USB thumb drive to be booted from. In larger deployments there may be a large number of files, installers, drivers and other items to include in the build media. To reduce the amount of data placed in any media created, media creation leverages Selection Profiles to select which items should be included. For example, we can include just the Windows 8.1 operating system, HP drivers, general applications and any task sequences required to drive the installation.

We therefore create a new Selection Profile to select the items to be included in the media. The process for this is detailed in Part 3 of this series.

 

 

As you can see, it is not possible to select individual items, so the design of your folder structure is paramount, especially regarding items which may consume large amounts of space in the media image. For example, when we imported the Windows Server 2012 R2 images, all 4 images were imported into a single folder. While these will not take any more room than a single image (because of the way in which Windows Server 2012 R2 is packaged), I use this to demonstrate how adding multiple items to a single folder can lead to large media sets being created.

Once we have a Selection Profile created specifically for our Media we can create the Media. To create our media we right click the Media node under Advanced Configuration in the deployment workbench and select “New Media”.

 

 

We specify a location to create our media in and also the Selection Profile created to state which items to include.

 

 

NOTE: Do NOT use a path under the deployment share. If we choose to replicate our share then this will mean the data being shipped twice.

The media creation process is very quick taking a few seconds. A Media object is created under our media node.

 

 

And a folder structure is created in the path we specified.

 

 

Just as with our Deployment Share, the media created can be configured to dictate how the installation process will run. By right clicking the media and selecting “Properties” we can access an interface similar to that used for the Deployment Share.

 

 

Above you can see that both an x86 and an x64 boot image have been selected to be created. The size of the created media can be reduced by only creating one type of boot image. The important thing to remember is that any build process started using this media will NOT be automated unless the rules section (the media-specific customsettings.ini and bootstrap.ini files) is updated to configure that automation.

 

 

Note: The bootstrap.ini file should NOT contain a DeployRoot value as all required content should be contained in the created media rather than being accessed from a Deployment Share.

 

 

Once the customsettings.ini and bootstrap.ini files have been modified to suit requirements, the media folders can be populated with data and the boot files created. To write the items included in the Selection Profile to disk we need to update the media by right clicking the media object created and selecting “Update Media Content”.

 

 

This process will take much longer; the time required depends upon the specific items included in the Selection Profile.

 

 

 

Once complete, two sets of media will have been created: an ISO file (LiteTouchMedia.iso) and a Content folder containing all the files that need to be written to a bootable USB drive.

 

 

In my example media, the ISO file has grown beyond the 4.7GB that can be held on a standard DVD. While it can still be used to build virtual machines, you may need to use a USB thumb drive to build physical machines.

To create a bootable thumb drive you will need a physical machine (to plug the USB drive into) or a solution that supports USB over IP. My personal preference is to create the bootable USB drive in a Windows 7 or 8 workstation or laptop. The steps to create bootable MDT media on a USB drive are as follows:

  1. Open a Command Prompt with Administrator privileges in either Windows 7 Pro or Windows 8 Pro.
  2. Insert the target USB boot media device into an available USB port.
  3. Type “DiskPart” in the command prompt.
  4. Type “List Disk” (make note of the disk number of the target USB drive).
  5. Type “Select Disk X”, where X is the target USB drive noted in step 4.
  6. Type “Clean”.
  7. Type “Create Partition Primary”.
  8. Type “Select Partition 1”.
  9. Type “format FS=fat32 quick”.
  10. Type “Active”.
  11. Type “Exit”.
  12. Copy the contents of the “Content” folder from the media location specified above to the USB drive.
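On Windows 8 or Server 2012 the same preparation can be sketched with the Storage module cmdlets instead of DiskPart. The disk number below is an example – verify it with Get-Disk first, as Clear-Disk is destructive:

```powershell
# WARNING: wipes the selected disk. Confirm the disk number with Get-Disk first.
Clear-Disk -Number 1 -RemoveData -Confirm:$false
New-Partition -DiskNumber 1 -UseMaximumSize -IsActive -AssignDriveLetter |
    Format-Volume -FileSystem FAT32 -NewFileSystemLabel "MDTBOOT"
# Then copy the contents of the media "Content" folder to the new volume
```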

 

 

Note: The above commands set the file system to FAT32. Be aware that Windows will only format a FAT32 volume up to 32GB in size and that FAT32 cannot hold individual files larger than 4GB, which may matter for large image files.

You can then test your bootable media on your central site by powering down your MDT server or disconnecting it from the network and ensuring that clients can build to completion before sending the media to remote sites.

Note: Neither the ISO nor the USB thumb drive will be password protected, meaning anyone with access to the media will be able to read any usernames or passwords used in the customsettings.ini and bootstrap.ini files. In addition, use of media does not allow for versioning, meaning that, as MDT is updated, your old media may still be available and in use around the estate.

That brings us to the end of this post, which has demonstrated how to enable monitoring and also how to deploy machines in more remote locations. In the next part of this series we'll cover off Linked Deployment Shares to enable deployment in remote sites where there is sufficient requirement to place a localised deployment share.

 

Windows 2012 R2 Tiered Storage and Storage Spaces

January 3rd, 2014  / Author: Philip Flint

In this post I shall walk you through deploying Storage Spaces and tiered storage using Windows 2012 R2.

Windows, as an operating system, has long been used to host and present shared storage. However, its management of disks has been fairly limited, with Master Boot Record (MBR) disks being limited to 2TB in size and 4 primary partitions. This was improved by GUID Partition Table (GPT) disks, allowing 128 partitions and 64TB disks (these limitations are imposed by Windows rather than the GPT standard). Those limits remain in place. Storage Pools are really a replacement for the dynamic disk feature of Windows. Dynamic disks allowed RAID arrays to be created in software for people who could not afford hardware array controllers. So, no-one used them, right? After all, why would you put additional stress on your server when hardware array controllers are relatively cheap or even ship on board with most badged servers?

Again, that’s still true here with Storage Spaces but the scalability, resiliency, and optimization are improved beyond simply offering software RAID. What Storage Spaces provide in addition to RAID are two key items:

  1. Different RAID types across the same physical disks – very akin to a SAN and its ability to virtualise its disks
  2. Tiered storage between differing disks types so that files accessed more often are moved to faster storage

Now, I can see why most enterprise organisations may not be interested in the first item. After all, shared storage is probably done on a SAN which offers these facilities. I’m assuming though that this is part of Microsoft looking forward to the point when expensive shared storage is no longer required as it will be replaced by resilient server based storage presented by SMB3.

The second item is very intriguing though, even for standard file servers. What if a server contains two or more types of disk – SSD, SAS and SATA? The data can then be tiered across the disks locally, with less often accessed data being placed on the slower storage. True, it's Windows so this will be file based rather than block based but, whether you are a smaller shop or an enterprise, it does allow for improved performance for some files, while less often accessed files can be placed on larger, slower disks, with files moved between disks as needed.

NOTE: It is only possible to mark the disks as being either HDD or SSD and so only two levels of tiering are possible for any given server.

The underlying “physical” disks can still be RAIDed using hardware RAID (item 1) with tiering provided by the operating system. Couple this with data deduplication and Windows Resilient File System (ReFS – http://msdn.microsoft.com/en-us/library/windows/desktop/hh848060(v=vs.85).aspx) and you now have a 21st Century method of storing files that maximises performance, reduces storage requirements and maximises the quantity of data that a single set of disks can hold.

In short, while it may seem that the changes to storage that Windows 2012 R2 brings are, once again, not for the Enterprise, the truth is that there is something here that everyone can benefit from.

To demonstrate Storage Spaces and Tiered Storage for you I have created a VM named FILE1 in my lab and have attached 10 virtual disks to that VM – each disk is a 10GB fixed size VHDX file.

 

These disks are exposed in the operating system as offline disks.

 

As I have said, in my lab these are just standalone VHDs. In a production environment, these physical disks may already have been grouped together in one or more RAID sets for item 1 of the Storage Spaces functionality.

We can now work with these disks to create one or more Storage Spaces. Storage Spaces are accessed from the Server Manager console under “File and Storage Services”.

 

 

Where disks have not been initialised they will have an "unknown" partition table. Ones that have previously been used will be shown with whatever configuration they currently have.

 

 

All disks that are currently unused will be placed into a “Primordial” Storage Pool. This is a default built-in pool that exists to represent unused disks within the GUI and PowerShell – i.e. all unused disks will appear here and get moved to a different pool when you assign them to that other pool.

To create a storage pool, simply click on the Storage Pools link and select “New Storage Pool” from the Tasks menu.

 

 

 

The wizard will start.

 

Give the Storage Pool a name.

 

 

And assign physical disks to the pool. Later, when we create Virtual Disks within the pool, we will set which level of RAID should be used. We can set each disk to one of three types of allocation: "Automatic", "Manual" or "Hot Spare".

 

 

In my pool I have two hot spares that can be brought online if one of the disks fails (an improvement over dynamic disks, which did not provide for hot spares). Volumes will automatically be allocated to disks. If I had selected "Manual" for the allocation type then volumes would need to be manually assigned to individual disks within the pool, providing greater control but increased administrative overhead.
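The same pool can be sketched in PowerShell with the Storage cmdlets. The pool and disk names below are examples; hot spares are set per disk via the Usage parameter:

```powershell
# Gather all disks eligible for pooling (the "Primordial" disks)
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "UserData" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Mark a disk in the pool as a hot spare (disk name is illustrative)
Set-PhysicalDisk -FriendlyName "PhysicalDisk9" -Usage HotSpare
```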

Our storage space is then created.

 

 

Note the choice to automatically launch the virtual disk creation wizard. This creates what you and I would think of as a volume. It is a virtual disk, but it is presented within the operating system as a physical disk. At this point, we have allocated 8 of our 10 spare disks to the storage pool. Looking at Disk Manager you can see that these are now no longer available for use by the operating system.

 

 

Our Primordial storage pool also has only two disks still available for allocation, with the balance of disks participating in our new storage pool.

 

 

In my lab the disks are of media type “Unknown” (right click the disk and select properties).

 

 

We can issue a PowerShell command to instruct the operating system that some of our disks will be standard hard drives and some will be SSD drives.

Set-PhysicalDisk -FriendlyName <MyDiskName> -MediaType <MediaType>

Where MediaType is either SSD or HDD. For example:
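For example (the disk names below are illustrative – check the current values first):

```powershell
# List the current media types
Get-PhysicalDisk | Select-Object FriendlyName, MediaType

# Mark one disk as SSD and one as HDD
Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -MediaType SSD
Set-PhysicalDisk -FriendlyName "PhysicalDisk4" -MediaType HDD
```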

 

 

We can repeat this for each disk in the pool. To demonstrate Storage Tiering I have marked 4 of the drives as SSD and 4 of the drives as standard HDD.

 

 

NOTE: Running the above command before the disks have been added to a storage pool results in an error. Adding the disk to a storage pool allows the media type to be set.

 

 

After assigning media types to the disk, close and reopen Server Manager to refresh the disk information. With our storage pool created we can now run our “New Virtual Disk” task.

 

 

 

This starts a wizard.

 

 

We select the pool of storage we want to create the virtual disk on.

 

 

We provide a meaningful name for the virtual disk to be created. This could indicate the type of data the drive will hold or could, as I have done, indicate which disk is allocated to which drive letter. I have NOT chosen to use tiering for this disk but will for the next disk I create, so that you can see the difference in subsequent screens.

 

 

 

We then get to assign a RAID level to the VHD to be used across the portions of the physical disks that we will be using.

 

 

The layout values are roughly equivalent to:

  • Simple = RAID0
  • Mirror = RAID10 + JBOD
  • Parity = RAID50 + JBOD

Here, we have not used a hardware RAID controller, as there would be little benefit in RAIDing already-RAIDed drives in software. The above is merely to demonstrate the power of the storage virtualisation provided within Windows.

Next we select whether to use a thick (fixed) or thin provisioned virtual disk.

 

 

If we use a fixed size disk we consume the space on the disks now, whether we will use it or not. Thin provisioning, while more economical with our disk space, does come at a price, and that is the overhead associated with expanding the disk as more data is added. As this effect is not lessened once files begin to be deleted (are they ever on a file server?), it will affect the performance of the whole. Whether this is noticeable to users will depend on a number of factors such as network speeds, server speeds, physical disk speeds etc., but it will be slower than using a fixed disk.

Here I have selected to use a thinly provisioned image. Our pool contains 8 disks, each of 10GB. 2 of those were marked as hot spares, leaving a maximum of 60GB available for allocation. After overhead, we have as little as 55.5GB available for use (6 x 9.25GB).

NOTE: The Server Manager GUI reports ALL disk space available for use even though some of the disks are allocated as Hot Spares

 

 

We can then specify a size for our virtual disk. Note that I have set mine to be 500GB in size, even though our Storage Pool only has 72GB of free space.

 

 

We can increase the free space simply by adding more disks to the Storage Pool. Disks can be of different sizes and different speeds (unlike with hardware RAID) as they are treated as JBOD, with the storage carved up across them. We can then follow the wizard to the end, at which point our virtual disk will be created.

NOTE: Additional space will be consumed as storage will be allocated for use as a write cache. The quantity of storage used for write cache can be set when creating the disk using PowerShell. For example:

$HDD = Get-StorageTier -FriendlyName HDDTier

Get-StoragePool "user Data" | New-VirtualDisk -FriendlyName "F Drive" -ResiliencySettingName Parity -StorageTiers $HDD -StorageTierSizes 8GB -WriteCacheSize 1GB

 

Once our virtual disk has been created then, by default, we will be asked to create a volume on that disk.

 

 

Again, this is a wizard driven exercise.

 

 

The disk available to us is the one we created above. Note that the disk number is 11, the next available disk number. Even though the physical disks are not surfaced within Disk Management, the disk numbers are still allocated and in use.

 

 

The same as when we are working within Disk Management, we can allocate a size to the volume created.

 

 

And present it as a drive letter or mount point.

 

 

We then select the file system to be used, NTFS or ReFS, together with a block size. ReFS only allows 64K blocks, whereas NTFS can take advantage of the 4K block size.

 

 

Note also that short file name generation is disabled by default to speed up writing to disk. This is also disabled in Windows 8, which may cause issues for you if you have software that relies on the short file name format being present. For example, legacy versions of the Citrix Access Gateway Secure Client rely on this and will not work on volumes created with Windows 8 (for example, Windows 7 images created using SCCM 2012 SP1). Note also that, while I have named the drive "Slow User Data", this is not totally true, as we have used automatic allocation for the space used and so, inevitably, some will be allocated on the faster drives in the storage space.

Following the wizard to the end now creates the drive for us which is exposed in Windows Explorer and in Disk Management.

 

 

 

As you can see, the operating system believes that it has 499GB of storage (of the 500GB allocated to the drive) after overhead demonstrating that the drive is thin provisioned.

We can follow the same process once more but, this time, choose to create tiered storage. Tiered storage is only supported with fixed disks.

 

 

In addition, Parity striping is not supported. Only simple volumes or striped volumes can be used.

 

 

As discussed above, if hardware array controllers are in use managing RAID within hardware, it may be worth considering deploying the storage as “Simple” to maximise the space on the basis that hardware failure at the disk level is already catered for by the array controller.

As tiered storage cannot be thinly provisioned, this choice is greyed out.

 

 

Note that we are now presented with a different screen to configure the tiered storage.

 

 

The virtual disk will be split between the two types of storage in these quantities, which also sets the fixed disk size. Note also that the creation of our thinly provisioned disk in the previous step consumed ~15GB of storage even without data being written to the disk. This overhead should be taken into account in any sizing exercise for new deployments.

Again we can follow the wizard to the end at which point we will be asked to create a volume on the disk as previously. We can then continue to create additional disks, either fixed size or thinly provisioned, until our disk space in the Storage Pool is fully consumed. As with any disk, monitoring should be enabled to alert when disk space is getting low but this is especially true for Storage Pools where thinly provisioned disks can rapidly consume any spare space leaving no writable area available.
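The tiered equivalent can also be created in PowerShell; a sketch only, with the pool name, tier names and sizes as examples:

```powershell
# Define the two tiers within the pool (only SSD and HDD media types exist)
$ssd = New-StorageTier -StoragePoolFriendlyName "UserData" -FriendlyName SSDTier -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "UserData" -FriendlyName HDDTier -MediaType HDD

# Tiered disks must be fixed size; Parity is not supported with tiers
New-VirtualDisk -StoragePoolFriendlyName "UserData" -FriendlyName "FastUserData" `
    -ResiliencySettingName Mirror -StorageTiers $ssd, $hdd -StorageTierSizes 8GB, 32GB
```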

When creating our pool, we can also set the disk allocation to Manual.

 

This does prevent the use of Storage Tiering though.

Microsoft do not recommend mixing and matching automatic and manual allocation of disks in the same pool.

 

 

If we do set all disks to be manually allocated then two new screens are added to the virtual disk creation wizard. The first asks which disks should be used to store the data.

 

 

The second sets performance settings for the virtual disk.

 

 

As you can see, Storage Spaces are an exciting addition to the Windows Operating System which now allows you to virtualise and optimise your storage whatever your budget.

 

Delaying DHCP Offer (80/20 rule)

January 3rd, 2014  / Author: Philip Flint

When setting up a split-scope DHCP solution you may want to delay the offer of IP addresses from the 20% scope so that it acts as a true standby. To do this, just click on the Advanced tab on the scope and set the subnet delay value.
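On Windows Server 2012 and later the same delay can also be set from PowerShell using the DhcpServer module. The scope ID and delay below are example values only:

```powershell
# Delay DHCPOFFER responses from the standby (20%) server by 1000 ms so
# that the primary (80%) server normally answers first.
# Scope ID 10.0.0.0 is a placeholder - substitute your own scope.
Set-DhcpServerv4Scope -ScopeId 10.0.0.0 -Delay 1000

# Confirm the value has been applied.
Get-DhcpServerv4Scope -ScopeId 10.0.0.0 | Select-Object ScopeId, Delay
```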

 

Documenting Group Policy Objects

January 1st, 2014  / Author: Philip Flint

We all know that there is a simple way to document GPOs: just right-click the GPO and select “Save Report”, which will create an HTML file. However, those reports can be a little hard to understand and don’t include all of the data, such as what individual settings do. An alternative is to use the Microsoft Security Compliance Manager (SCM). Version 3.0 is now available for download from http://technet.microsoft.com/en-gb/solutionaccelerators/cc835245.aspx.
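For completeness, the “Save Report” output can also be produced from PowerShell with the GroupPolicy module; the GPO name and output path below are examples:

```powershell
# Scripted equivalent of right-click > Save Report.
# GPO name and output path are examples only.
Import-Module GroupPolicy
Get-GPOReport -Name "Default Domain Policy" -ReportType Html -Path "C:\Reports\DefaultDomainPolicy.html"
```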

By default, SCM imports baselines for the following products:

 

 

Hopefully Microsoft will release the baseline packs for 2012 R2, Windows 8.1, Exchange 2013 and SQL at some point but that doesn’t necessarily prevent the tool being used for documenting most standard settings.

To document the settings, backup your GPO in the usual way and then use the “Import a Group Policy Backup” link in the “Get Information” section.
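The backup itself can also be scripted rather than taken through the GPMC GUI; the GPO name and backup path here are examples only:

```powershell
# Back up a single GPO to a folder, ready for import into SCM.
# GPO name and path are placeholders for your own environment.
Import-Module GroupPolicy
Backup-GPO -Name "Workstation Security" -Path "C:\GPOBackups"
```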

 

 

Browse to where you have your backup GPO

 

 

Select a name for the “baseline” or GPO settings and click on OK

 

 

Your settings will now be shown as an imported GPO

 

 

You can click on the Excel link to export the settings to Excel

 

 

Choose to enable the content in the Excel spreadsheet created

 

 

You will now have your settings in Excel format together with an explanation of each setting. Where covered by the built-in security baseline information, you will also see details of any vulnerabilities that the setting may address, countermeasures that can be deployed to overcome that vulnerability, and any impact that setting the GPO value may cause.

NOTE: Click on the image below to see the level of detail provided for each setting.

 

 

Obviously this is a bit more long-winded than simply exporting a report, but I hope you can also see how it provides far more information about what has been configured and, as it is in Excel, enables you to add a further column explaining why each setting has been configured.

 

Install and Configure MDT 2013 (Part 3)

January 1st, 2014  / Author: Philip Flint

In Part 2 of this series we looked at automating installations further using the customsettings.ini file and bootstrap.ini file. In this post we will look at Selection Profiles, what they are used for and automating the assignment of drivers to boot images and within a task sequence.

Selection profiles allow us to:

  • Group drivers and packages together to inject into the WinPE boot disks so that the drivers and updates are there for when we need to boot to different sets of hardware
  • Group drivers together so that we can install (inject) them during a task sequence
  • Group items together to control what items are included in media we create for offline installations
  • Group items together to replicate linked deployment shares

We’ll look at the first two of these items in this post, the other two I’ll reference in later posts where I show you how and why you would want to create media and linked deployment shares.

Selection Profiles are found under the “Advanced Configuration” section of the MDT workbench. Six Profiles exist by default but you are free to create your own. This is simply a matter of right clicking “Selection Profiles” and clicking on “New Selection Profile”.


We give the profile a name.

 

 

And then select the folders that should form part of that profile.

 

We then click through to the end and our selection profile is created. Updating the selection profiles updates the SelectionProfiles.xml file held in the “Control” folder of the deployment share.

You can see from the above image that I have selected the M4500 folder so only drivers from that folder will be held in the selection profile I created. If I create a new folder under the M4500 folder in my “Out-of-Box Drivers” section then that new folder will automatically be included in the selection profile.

 

This is important to remember as if I had created a selection profile at the “Dell” level, any new Dell drivers would be added to my selection profile. The above is all there really is to creating selection profiles. The real beauty of them comes in how we use them. The two places we will look at in this post are placing them in our WinPE images and using them in task sequences.

Using Selection Profiles in WinPE Boot Images

To use a selection profile in our WinPE boot images we access the properties of our deployment share, select the “Windows PE” tab and then the “Drivers and Patches” tab.

 

We can simply choose the set of drivers to use by updating the “Selection Profile” selection box. However, note that the Drivers and Patches tab is specific to a single platform, either the 32-bit x86 boot images or the 64-bit x64 boot images, controlled by the “Platform” selection box.

When applying a selection profile to our boot images we therefore have to select the platform as well as the selection profile as illustrated below.

 

 

As you can see, a selection profile scoped to a single machine type makes little sense for a boot image, which is platform-specific rather than model-specific. For this reason, many people create additional folders within the “Out-of-Box Drivers” section to split the 64-bit and 32-bit drivers, and in this way limit the size of the drivers injected into the image.

 

By doing this it is then possible to create selection profiles for 32 bit and 64 bit drivers and assign those to the individual boot images.

 

 

NOTE: Any changes made here will not be deployed until the deployment share has been updated by right clicking and selecting “Update Deployment Share” and any updated image deployed to DVDs / USB drives or WDS servers.

While updating your WinPE images you may also want to take a moment to increase the scratch space size to 128MB and possibly change the background image.

The default location for the background image is %installdir%\Samples\background.bmp where %installdir% refers to the installation location for MDT. This is usually c:\Program Files\Microsoft Deployment Toolkit.

 

Finally, you can also install roles inside the WinPE disk which may be important if you have a secure network (802.1x) or perhaps want to run PowerShell scripts from within WinPE.

Again, any changes here will only be activated once you update the deployment share and redistribute the boot files created.

 

Using Selection Profiles in Task Sequences

 

As well as using selection profiles to inject drivers into WinPE images, they can also be used to install or inject drivers into endpoint builds by updating the task sequence used. By default the task sequence includes an item named “Inject Drivers” in the Postinstall section of the task sequence (this can be accessed by right-clicking the task sequence and selecting “Properties”).

 

This item is actually a “Run Command Line” task that runs the “ZTIdrivers.wsf” script which installs what it believes are the best available drivers from all of the Out-of-Box Drivers into the operating system. This behaviour can be overridden as follows:

  1. Select the Options tab of the “Inject Drivers” task and choose to “Disable this step”.
  2. Next, choose to add a new task and navigate to the “Inject Drivers” section.
  3. This will create a new task labelled “Inject Drivers”, which can be confusing, so rename it to state the type of drivers being injected.
  4. Select the Selection Profile containing the drivers to be injected. You can also choose to install only matching drivers, or to install all drivers if you believe there may be an issue with driver matching.
  5. To ensure that these drivers are only injected for the type of computer concerned, click on the Options tab and insert the WMI query below, changing the model of machine to the correct value as discovered using the ZTIGather.wsf script. On the Options tab click on Add | Query WMI, enter the query below (swapping Precision M4500 for the model of your endpoint), click on OK and then Apply.

    Select * from Win32_ComputerSystem where Model like “%Precision M4500%”

    The model name for the computer must EXACTLY match the value found using the ZTIGather.wsf script including any full stops (periods) or spaces. If you do not want to copy this script and associated files across to an endpoint then you can discover the model by running the following command from an administrative command prompt.

    wmic computersystem get model

  6. Click away from the Inject Drivers task and click on it once more and its values will be updated in the GUI.
  7. Your drivers will now be injected for that model type.
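If you prefer PowerShell to wmic, the same model string can be read as below; whichever command you use, the output must still exactly match the value placed in the WMI query condition.

```powershell
# Returns the model string exactly as it must appear in the WMI query
# condition on the task sequence step.
(Get-WmiObject -Class Win32_ComputerSystem).Model
```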

 

You can simply rinse and repeat the above steps to use the same task sequence to build multiple types of hardware from a single task sequence. If you want to have a “default” set of drivers applied if no specific drivers exist for a model then we need to track whether or not drivers have been installed. To do this we create a “Property” within our customsettings.ini file and use that to track whether or not we have installed the drivers.

In the customsettings.ini file make the following changes:

[Settings]
Priority=Default
Properties=DriversApplied

[Default]
DriversApplied=NO

Here we have created our own custom variable named “DriversApplied” and set its initial value to NO. We can then update that value to YES when we install drivers in the task sequence. If it still remains at NO at the end of the task sequence we can then run the built in ZTIdrivers.wsf script.

To do this, we create a folder (New Group) for each type of driver within our task sequence and place the step to inject the drivers using a selection profile within that folder.

 

We use the Up | Down buttons to place the group (folder) in the correct place and also to indent the inject task underneath (inside) the folder.

We then associate our WMI query with the folder (rather than the inject task) so that all steps in that folder are run if the WMI filter is matched.

 

We then add a new task to set the task sequence variable “DriversApplied” to be “YES” if any drivers have been injected.

 

 

We repeat this step for each of the driver injection stages. The final step is to set the Unknown Computers folder to only be used if DriversApplied still equals “NO”. We do this by applying a filter (the same as for the WMI query above) to check if the task sequence variable is still set to NO.

 

 

 

 

As you can see, we can now use the same task sequence (and associated base operating system) to install multiple models of machine. We can therefore use our customsettings.ini file to make other decisions around automating our build, for example still using different task sequences, or perhaps the same task sequence with different sets of applications.

In the next part of this blog series I’ll discuss enabling monitoring to track installations as they occur and also creating media so that computers can be deployed offline when the MDT server cannot be contacted or where the link to the MDT server is small or unreliable.

 

Install and Configure MDT 2013 (Part 2)

December 30th, 2013  / Author: Philip Flint

In Part 1 of this series we walked through installing and configuring MDT 2013 for the first time. In this part I’ll show you how to automate the deployment. There are two types of automation: global settings that are applied by default to all machines, and those directed at individual machines. First I will explain and demonstrate the global settings, and then I will run through how to centrally apply individual settings to different makes and models of machines.

If we right-click our deployment share we can access the configuration settings (properties) used for that share. Under the Rules tab we can access two sets of settings: customsettings.ini (the settings you see in the pane) and Bootstrap.ini (the settings accessed by clicking the button).

 

These settings are held on the hard drive in the deployment share.

 

 

BootStrap.ini is used at boot time to provide basic configuration to the machine and allow it to connect to the deployment share. Because of this, changes to bootstrap.ini have to be placed inside the WinPE image by updating the deployment share whenever changes are made to bootstrap.ini.

Once the machine is booted customSettings.ini takes over and controls the rest of the deployment process typically automating entries in the MDT GUI to drive the deployment.

Let’s deal with bootstrap.ini first. If we click on the button above to open the ini file we see that, by default, its settings look like the below:

The [Settings] section is read in first and the process then follows the value in the priority field. Here it loads up the section [Default] which only has one value, the location of the deployment share to read data from. By default, all installations will use the same deployment share in this location.
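As a sketch, the default file amounts to little more than the below; the server and share names are placeholders for your own deployment share:

```ini
[Settings]
Priority=Default

[Default]
DeployRoot=\\MDTSERVER\DeploymentShare$
```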

We can add additional items to bootstrap.ini. The “typical” values we may want to add are:

UserID=Administrator
UserDomain=Example
UserPassword=Pa$$w0rd
KeyboardLocale=en-GB
SkipBDDWelcome=YES

 

I have highlighted the values to be changed in RED. A word of caution though: the username and password are held in plain text in the bootstrap.ini file and, as you can see, this is held in the deployment share which, by default, provides all users with read access. If you were to use your domain administrator account in a production environment, users could potentially gain access to this highly privileged account!

Instead, I prefer to create a new account and grant that account access to the share at the NTFS level. I also remove the account from the Domain Users group and add it to a “No Access” group. This prevents the account being used to access information shared to domain users. It does not prevent it being used to access data shared to Everyone or Authenticated Users, though, so the amount of protection this strategy supplies will depend on your internal environment. For those shares, access can be denied to the “No Access” group, securing the environment once more.

 

The account is granted read access to the share and the built-in Users group removed. In this way admins still have access to the share but standard users do not, so the likelihood of someone “seeing” any user names or passwords in the customsettings.ini or bootstrap.ini files is limited.

 

Applying these settings, my bootstrap.ini file therefore looks as below:
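Assembled from the values discussed above, an example file would look like this; the server name, account name and password are placeholders:

```ini
[Settings]
Priority=Default

[Default]
DeployRoot=\\MDTSERVER\DeploymentShare$
UserID=MDTAccess
UserDomain=Example
UserPassword=Pa$$w0rd
KeyboardLocale=en-GB
SkipBDDWelcome=YES
```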

 

Once the bootstrap.ini file has been updated, we need to update the deployment share (to regenerate the WinPE boot .iso and .wim files) and then redeploy those, either to DVD / USB sticks or to WDS as needed. That is, we run “Update Deployment Share” as below and, once that is completed (after 15 to 20 minutes), we copy the files created from the “D:\DeploymentShare\Boot” location to wherever we will be using them to boot our endpoints.

 

Once the bootstrap.ini file has been updated, we can re-test our deployment and ensure that the keyboard is set to the correct language (in my case, British English) and that we are not asked for a user name and password to connect to the deployment share. If you prefer, you can leave out the line that passes the password through. In that case anyone accessing the setup screen will not be able to reuse your image without first entering the password. However, if you would like remote users to self-service their rebuilds then you would need to tell them the password, so it’s a little bit of “swings and roundabouts” and how you configure this will, to an extent, depend on your circumstances.

Assuming that the test passes, you can now move on to customizing your customsettings.ini file. By default, the customsettings.ini file looks as below:

 

The Information Center provides detailed information on which settings can be applied to the customsettings.ini file. This information can be found by clicking on the “Planning MDT Deployments” link.

 

 

This will open a PDF file and under the “Toolkit Reference | Properties | Providing Properties for Skipped Deployment Wizard Pages” node you will find listed the values you need to enter into customsettings.ini to skip these items in the wizard.

 

 

Accessing the Properties Definition item (1 level up) allows you to get a fuller explanation of each property together with examples of how to configure it.

So, for example, if we want to automatically select the task sequence to be run and just build 2012 R2 servers, we enter the two lines below.

SkipTaskSequence=YES
TaskSequenceID=WIN2K12R2-001

To automate the complete process you may want to consider using the settings below as well as setting the pre-set items to Yes:

SkipComputerName=YES
SkipDomainMembership=YES
SkipUserData=YES
SkipCapture=YES
DoCapture=NO
SkipLocaleSelection=YES
SkipTaskSequence=NO
SkipTimeZone=YES
SkipApplications=YES
SkipSummary=YES
TimeZone=085
TimeZoneName=GMT Standard Time

 

The Time Zone values for all regions are listed at http://msdn.microsoft.com/en-us/library/ms912391(v=winembedded.11).aspx.

The above settings will allow you to select a task sequence to be run, but will assign a random name to the computer and, unless your task sequence adds the computer to the domain, will add the computer to a workgroup. If you want to pre-set any of the values, you add the corresponding entry from the right-hand column of the above table and assign a value to it, as demonstrated by the task sequence example above.

A completed “basic” customsettings.ini file which would allow you to select the task sequence to apply would therefore look like the below:
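As an illustration, reconstructed from the settings listed above, such a file might look like the below:

```ini
[Settings]
Priority=Default
Properties=MyCustomProperty

[Default]
OSInstall=Y
SkipCapture=YES
SkipAdminPassword=YES
SkipProductKey=YES
SkipComputerBackup=YES
SkipBitLocker=YES
SkipComputerName=YES
SkipDomainMembership=YES
SkipUserData=YES
DoCapture=NO
SkipLocaleSelection=YES
SkipTaskSequence=NO
SkipTimeZone=YES
SkipApplications=YES
SkipSummary=YES
TimeZone=085
TimeZoneName=GMT Standard Time
```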

Booting using our boot media now results in a single screen asking us which task sequence we would like to run.

 

 

The only catch is that configuring MDT as above means that ALL of your deployments must be identical other than the contents of the task sequence. Now, this may or may not be OK. For example, if we want to use MDT to refresh workstations and laptops as well as build new servers, it could be that when we are booting a client machine we want to preserve (or be asked to preserve) user data, whereas if we are booting a server then we will not want to be given this option. This can be achieved by detecting (gathering) information about the endpoint at boot time and then applying different settings based on characteristics of the endpoint.

Luckily, this isn’t as difficult as it may sound. When the boot process runs, MDT runs a gather script named ZTIGather.wsf. This script is located in the Scripts folder of the deployment share, meaning we can run it ourselves against an existing client and inspect its output quite easily.
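For example, the script can be run from an administrative command prompt as below; the server and share names are placeholders for your own deployment share:

```powershell
# Run the MDT gather script manually with verbose output.
# The UNC path is a placeholder - point it at your own deployment share.
cscript.exe "\\MDTSERVER\DeploymentShare$\Scripts\ZTIGather.wsf" /debug:true
```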

 

We can then use those discovered values to drive our deployment process using customsettings.ini. If we run the script it creates a folder called MININT in the root of the operating system drive.

 

 

Drilling down inside that folder we eventually get to the ZTIGather.log file created.

 

This can be opened in notepad or you can download the SCCM 2007 log reader as part of the System Center Configuration Manager 2007 Toolkit V2 from http://www.microsoft.com/en-us/download/details.aspx?id=9257. This will allow you to install the “Trace32” application (the log file reader) by selecting to install “Common Tools” only.

 

 

An example of the type of information gathered is displayed below.

 

 

As can be seen, we can tell whether the device is a laptop, desktop or server architecture, the make and model of the machine, the amount of RAM, the processor speed and so on. We can use these values to determine the endpoint type, which in turn allows us to determine what happens in our deployment based on the endpoint itself. For example, we can auto-select the task sequence to run based on the endpoint type. We can also read items such as the endpoint’s IP address or gateway address. If we know the gateway address we can determine which site the endpoint is in. If we know which site it is in, we may know which department it is in (if the LAN is subnetted in that way) or, alternatively, which country it is in, allowing us to determine which language packs should be installed.

A sample deployment decision tree may look like the below:

 

 

To accommodate this, we create additional sections within our customsettings.ini file. The first thing we do is create a new section which we will call [ClientType]. We then give that the highest priority of sections to be parsed.

 

[Settings]
Priority=ClientType,Default
Properties=MyCustomProperty

[ClientType]
SubSection=Server-%IsServer%

 

In the [ClientType] section we have instructed the file to go to a subsection named either Server-True or Server-False based on the value of IsServer in the ZTIGather.log file. This leads us to have 2 additional sections – [Server-True] and [Server-False].

 

[Settings]
Priority=ClientType,Default
Properties=MyCustomProperty

[ClientType]
SubSection=Server-%IsServer%

[Server-True]
SkipTaskSequence=YES
TaskSequenceID=WIN2K12R2-001

[Server-False]
SubSection=Desktop-%IsDesktop%

 

From the above we can see that if the IsServer value is true then we will build using the WIN2K12R2-001 task sequence. If the endpoint is not a server then it must either be a desktop or a laptop so we then query the IsDesktop value. Based on that value we will either be directed to the [Desktop-True] or the [Desktop-False] section.

 

[Settings]
Priority=ClientType,Default
Properties=MyCustomProperty

[ClientType]
SubSection=Server-%IsServer%

[Server-True]
SkipTaskSequence=YES
TaskSequenceID=WIN2K12R2-001

[Server-False]
SubSection=Desktop-%IsDesktop%

[Desktop-True]
SkipTaskSequence=YES
TaskSequenceID=HPDesktop-001

[Desktop-False]
SubSection=%Make%

 

If the device is a desktop, the HP task sequence will be run; if not, we will progress to a subsection based on the make of the device, either [Dell] or [Lenovo], the two types of laptop we have in use.

[Settings]
Priority=ClientType,Default
Properties=MyCustomProperty

[ClientType]
SubSection=Server-%IsServer%

[Server-True]
SkipTaskSequence=YES
TaskSequenceID=WIN2K12R2-001

[Server-False]
SubSection=Desktop-%IsDesktop%

[Desktop-True]
SkipTaskSequence=YES
TaskSequenceID=HPDesktop-001

[Desktop-False]
SubSection=%Make%

[Dell]
SkipTaskSequence=YES
TaskSequenceID=DellLaptop-001

[Lenovo]
SkipTaskSequence=YES
TaskSequenceID=LenovoLaptop-001

 

These settings are also accompanied by our [Default] section to cover those items which are not picked up by the above and also to supply the balance of settings. So, our complete customsettings.ini file will look as below:

 

[Settings]
Priority=ClientType,Default
Properties=MyCustomProperty

[ClientType]
SubSection=Server-%IsServer%

[Server-True]
SkipTaskSequence=YES
TaskSequenceID=WIN2K12R2-001

[Server-False]
SubSection=Desktop-%IsDesktop%

[Desktop-True]
SkipTaskSequence=YES
TaskSequenceID=HPDesktop-001

[Desktop-False]
SubSection=%Make%

[Dell]
SkipTaskSequence=YES
TaskSequenceID=DellLaptop-001

[Lenovo]
SkipTaskSequence=YES
TaskSequenceID=LenovoLaptop-001

[Default]
OSInstall=Y
SkipCapture=YES
SkipAdminPassword=YES
SkipProductKey=YES
SkipComputerBackup=YES
SkipBitLocker=YES
SkipComputerName=YES
SkipDomainMembership=YES
SkipUserData=YES
DoCapture=NO
SkipLocaleSelection=YES
SkipTaskSequence=NO
SkipTimeZone=YES
SkipApplications=YES
SkipSummary=YES
TimeZone=085
TimeZoneName=GMT Standard Time

 

In our [Default] section we have set SkipTaskSequence=NO so that the full list of task sequences will be displayed if an unknown endpoint type is used.

NOTE: The above may or may not work for you; it is merely shown as an example of how to automate the deployment process. The best advice is to start the boot process for MDT, press F8 to access a command prompt and then read the ZTIGather.log file. You will then be able to see exactly what values and settings will be detected by the boot process (information gathering process) for each endpoint type. The above process flow can be simplified for very basic deployments by just creating section names based on endpoint attributes such as make, model, serial number, MAC address etc. In this way an appropriate task sequence can be applied to each machine type with separate settings applied to individual machines. For example:

 

[Settings]
Priority=SerialNumber,Make,Default

[AFD1258E]
MandatoryApplications001={GUID}

[Dell]
SkipTaskSequence=YES
TaskSequenceID=DellLaptop-001

[Lenovo]
SkipTaskSequence=YES
TaskSequenceID=LenovoLaptop-001

 

Here, we apply the task sequence based on the make of machine. However, on the CEO’s laptop we want to always install a piece of software whenever it is rebuilt and so we record the serial number of his machine and assign the application to that serial number. The GUID is the GUID displayed in the application we have imported into MDT (example below). The GUID needs to be enclosed by the braces, { and }, when being entered in the customsettings.ini file.

 

Indeed, if this application is ONLY to be installed by particular people (and hence automatically) we can tick the “Hide this application in the Deployment Wizard” check box to ensure that it is not offered mistakenly to unknown machines.

As well as using discovered values in this way, we can also use items such as the endpoint’s default gateway to determine what to install, for example language packs. Below is an example of how to configure the customsettings.ini file based on default gateways.

 

[Settings]
Priority=DefaultGateway,Default

[DefaultGateway]
10.1.1.1=LONDON
10.2.2.1=TOKYO
10.3.3.1=NEWYORK

[LONDON]
Packages001=XXX00004:Program4
Packages002=XXX00005:Program5

[TOKYO]
Packages001=XXX00006:Program6
Packages002=XXX00007:Program7
Packages003=XXX00008:Program8

[NEWYORK]
Packages001=XXX00006:Program4

I’d just like to finish up this post by showing you the settings used to join the endpoint to the domain. Any account used must have the right to join computers to the domain delegated to it, and any other rights removed. Remember, anyone with access to the MDT server hard drive or deployment share will be able to see the user name and password used, as they are in clear text. This is why I create a separate account for accessing that share and restrict the permissions to only that account.

 

[NEWYORK]
SkipDomainMembership=NO
JoinDomain=EXAMPLE.COM
DomainAdmin=svc_DomainJoin
DomainAdminDomain=example
DomainAdminPassword=Password01
MachineObjectOU=OU=Computers,OU=NYC,DC=Example,DC=com

 

Here, we have detected from the machine’s default gateway that it is based in New York, so it will join our domain and the computer object will be placed in the Computers OU under the NYC OU. JoinDomain is the domain we want to join, and DomainAdmin, DomainAdminDomain and DomainAdminPassword are the credentials of the account that will be used to join the computer to the domain.

If you want some good examples of customsettings.ini configuration I would refer you to http://scriptimus.wordpress.com/2011/06/23/mdt-2010-sample-customsettings-ini-for-fully-automated-deployments/. While this refers to MDT 2010, the settings should still work with MDT 2013 and, as above, you can confirm the correct values from the built-in documentation and the examples provided there.

In the next part of this blog series we will be covering off Selection Profiles and application of drivers to endpoints and to the WinPE images used to boot client machines for MDT.

 

 

Install and Configure MDT 2013 (Part 1)

December 29th, 2013  / Author: Philip Flint

This post is just a quick walk through on how to install and configure the Microsoft Deployment Toolkit for the first time. MDT helps you automate the installation of Microsoft Operating Systems including associated drivers, patches and software. It can be extended by using Windows Deployment Services to allow PXE booting to the boot image and automated to minimise user input to allow very Lite Touch installations.

In short, it’s a “free” version of those bits of SCCM that allow you to perform operating system deployments.

MDT 2013 requires two pieces of software to be installed:

  1. The Windows Assessment and Deployment Kit for Windows 8.1
  2. The Microsoft Deployment Toolkit 2013

These can be downloaded from here and here respectively but you may want to search for updated versions.

The 2013 version of the toolkit supports deployment of Windows 8.1, Windows 8, Windows 7, Windows Server 2012 R2, Windows Server 2012, and Windows Server 2008 R2.

First, we install the Windows ADK for Windows 8.1. When asked, select to install the Deployment Tools, the Windows PE environment and the User State Migration Tool, following the prompts to complete the install.

 

When installation has completed, we then install MDT 2013 selecting the default settings.

 

Installation of the ADK and MDT is supported on Windows Server 2012 / 2012 R2 as well as 2008 R2. Once installed, we start the workbench application, which can be found by searching for “Deployment Workbench”.

 

The workbench contains two nodes: the Information Centre, which contains documentation, and Deployment Shares, which is the engine of MDT, with each share containing a location from which operating systems can be deployed.

 

The first task is to create a new deployment share. This can be done by right clicking the “Deployment Share” node and selecting “New Deployment Share”.

 

Set the share location

 

A name for the deployment share

 

 

And a description

 

 

Next, we select options for the “deployment share”.

 

 

What these options will actually do is begin to make choices for the installation GUI experience. That is, which pages are shown in the installation GUI. In a subsequent post I shall show you how to amend these choices. For now the defaults can be accepted or amended as you see fit.

We can then accept the summary of information and proceed to create the deployment share.

Once the deployment share is created a folder shared with the above details will be created. This folder on the hard drive will contain a selection of sub folders:

This folder structure matches closely what you see within the Deployment Workbench but contains a few additional specialist folders.

MDT is now ready to configure. To make any configuration easier to understand, I first create a folder structure.

 

With our folder structure in place we are now ready to import an operating system. Below I show how to import Windows Server 2012 R2 operating systems ready for deployment but we can just as easily import desktop operating systems.

First, we mount our ISO image and copy the contents to the hard drive. I have created a folder called D:\Sources to hold any source installation files.
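On Windows 8 / Server 2012 and later the mount-and-copy step can be scripted with the built-in disk image cmdlets; the ISO and destination paths below are examples only:

```powershell
# Mount the ISO, copy its contents to a local source folder, then
# dismount. Paths are placeholders for your own files.
$iso = "D:\ISO\WindowsServer2012R2.iso"
Mount-DiskImage -ImagePath $iso

# Discover the drive letter the ISO was mounted as.
$drive = (Get-DiskImage -ImagePath $iso | Get-Volume).DriveLetter
Copy-Item -Path ($drive + ":\*") -Destination "D:\Sources\Win2012R2" -Recurse

Dismount-DiskImage -ImagePath $iso
```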

With the files extracted, we right click the appropriate sub-folder under Operating Systems (in my case, the Servers sub folder) and select “Import Operating System”.

 

As I have a full set of source files I can select to import a full set of source files.

If we had previously created a reference image, we could then capture it as a master image at which point we would select “Custom image file” instead. If we had an existing WDS infrastructure, we could leverage our work to date and import those existing images using “Windows Deployment Services images”.

NOTE: If importing WDS images, we have to import ALL images. We do not get to select which ones to import though they can be subsequently deleted.

We then browse to the folder we extracted from the DVD. As the source files exist on the same volume as my deployment share, I have selected to move the files instead of copying them to speed up the process. If it’s important that the original files are preserved then you can deselect this option and copy them instead. If you set the source location as the original DVD then you will have no choice other than to copy the files.

 

 

We can then give the folder to which the images will be copied a meaningful name and proceed with the import.
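This wizard can also be scripted. A hedged sketch using the Import-MDTOperatingSystem cmdlet: the paths and folder names are examples, and it assumes the deployment share has been mapped as an MDT provider drive named DS001:.

```powershell
# Assumes the MDT module is loaded and the deployment share is mapped as DS001:
# -Move mirrors the wizard option to move (rather than copy) the source files
Import-MDTOperatingSystem -Path "DS001:\Operating Systems\Servers" `
    -SourcePath "D:\Sources\WindowsServer2012R2" `
    -DestinationFolder "Windows Server 2012 R2 x64" `
    -Move -Verbose
```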

 

 

My source contains four editions of Windows Server 2012.

 

My source folder is emptied.

 

And the source files are moved to the folder name I selected in my deployment share.

 

The operating systems now appear in the MDT interface and are available for installation.

 

 

Next, we import any necessary drivers. To create a reference image I shall be using a virtualised environment, negating the need for drivers. However, drivers are imported in much the same way as operating system files. First, download and extract the driver files from any executables to obtain the necessary .inf files and other associated files. Where drivers cannot be extracted, they may need to be installed as complete applications. As a minimum, you should endeavour to extract any network and mass storage drivers to enable the operating system to install and connect to the network.

To import the drivers we right click the appropriate node and choose to “Import Drivers”.

 

We browse to the location containing our extracted drivers (multiple sub folders can exist in this location, one for each driver or set of drivers).

 

 

We can choose to import drivers even if they exist elsewhere. This is useful if we have a structure as shown here (one folder for each model of machine) where multiple models may utilise the same driver. By allowing the driver to be reimported we can ensure that the driver exists in that folder. However, care should be taken when creating any task sequences that only the appropriate copy or version of the driver is installed.

 

 

Our drivers will then be imported.
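Driver imports can be scripted in the same way. A sketch with example paths, again assuming the deployment share is mapped as DS001:; the -ImportDuplicates switch corresponds to the option to import drivers even if they exist elsewhere.

```powershell
# Assumes the MDT module is loaded and the deployment share is mapped as DS001:
Import-MDTDriver -Path "DS001:\Out-of-Box Drivers\Dell Latitude E6420" `
    -SourcePath "D:\Sources\Drivers\Dell Latitude E6420" `
    -ImportDuplicates -Verbose
```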

 

 

We can also import software using the same process. I personally create at least two folders, one named Global (to contain software to be deployed to all endpoints) and one named Departments or similar, which then contains a sub folder for each department holding the software appropriate to that department. Other folder structures can be used; you should use whichever method makes it easiest to navigate the structure, especially when it comes to installing end client software.

To install an application we simply right click the folder where we would like to locate the application and then select “New application”.

NOTE: An application is different from a package. In MDT, an application is a piece of software you install; a package is an operating system patch or language pack.

We are then presented with three choices:

 

An application with source files will allow you to install a “standard” application. An application without source files, or one elsewhere on the network, is an “application” that is run directly from a share, while an application bundle is merely a placeholder to group together multiple applications that must be installed as a single item.

Here I will install Foxit Enterprise Reader. I quite like this as a PDF reader as it’s faster and, I think, more functional than Adobe.

We complete the details about the application.

 

 

Browse to the location of the application, again selecting to move the application install files.

 

Again, we select a meaningful name for the folder in which the application source files will be placed.

 

 

We then enter the command line to silently install the software. As with all MSI packages, it may be that a transform file is required to install the software. Foxit has a number of switches which can be obtained by emailing the support department at Foxit. The ones we shall use are:

MAKEDEFAULT (When enabled, this setting will make Foxit Reader the default PDF application and will associate all .PDF files with Foxit)

LAUNCHCHECKDEFAULT (When application starts up it will verify that Foxit is the default browser)

VIEW_IN_BROWSER (This will allow PDF files to be read in your web browser)

STARTMENU_SHORTCUT (Place Foxit shortcut in Start menu)

DESKTOP_SHORTCUT (Place Foxit shortcut on the desktop)

The installation command line we shall use is therefore:

msiexec /i “EnterpriseFoxitReader612.1224_enu.msi” /qn /norestart MAKEDEFAULT=1 LAUNCHCHECKDEFAULT=0 VIEW_IN_BROWSER=1 STARTMENU_SHORTCUT=1 DESKTOP_SHORTCUT=0

A log file of the installation can be taken using the standard msiexec switch /l*v if needed.

NOTE: Note the use of the /norestart switch. As an alternative for Windows Installer files, we can use REBOOT=ReallySuppress. The application being installed must NOT reboot the computer as part of its installation routine. Any reboots should be handled by MDT so that WinPE can prepare for shutdown and recover from restarts.
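The whole application import, including this command line, can also be performed with the Import-MDTApplication cmdlet. A sketch: the folder names and source path are examples, and it assumes the deployment share is mapped as DS001:.

```powershell
# Assumes the MDT module is loaded and the deployment share is mapped as DS001:
Import-MDTApplication -Path "DS001:\Applications\Global" `
    -Enable "True" `
    -Name "Foxit Enterprise Reader" `
    -ShortName "Foxit Enterprise Reader" `
    -CommandLine 'msiexec /i "EnterpriseFoxitReader612.1224_enu.msi" /qn /norestart MAKEDEFAULT=1 LAUNCHCHECKDEFAULT=0 VIEW_IN_BROWSER=1 STARTMENU_SHORTCUT=1 DESKTOP_SHORTCUT=0' `
    -WorkingDirectory ".\Applications\Foxit Enterprise Reader" `
    -ApplicationSourcePath "D:\Sources\Foxit" `
    -DestinationFolder "Foxit Enterprise Reader" -Verbose
```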

 

 

We can then follow the prompts to complete the import of the application. Once imported, it appears in our list of applications.

 

Accessing the properties of the application (by right clicking) exposes the settings we just provided. Here, we can also instruct MDT to reboot the computer after application installation if required.

 

Once we have our operating systems, applications and drivers imported we can move on to using MDT to deploy computers or, alternatively, to creating a reference image containing all our base software, which is then captured and imported back into MDT. The reasons for creating a base image containing the software are:

  1. It reduces installation times as the installation (including all software) will be from an image (rather than using installers) and so a fully configured computer can be deployed in ~ 30 minutes rather than the 2 – 3 hours required when using installers (assuming multiple base applications).
  2. It’s a trivial exercise to rebuild or recreate the reference image as the operating system files, drivers and application files already exist within MDT so if we need to update an item of software this is not onerous. This is compared to removing applications from the base build and using a task sequence to install all applications which can take far longer.

To prepare for installing operating systems with MDT we next create a boot image from which the computers will boot. This boot image will contain the executables for the MDT GUI and the network and mass storage drivers that we have imported. It also contains some command / configuration files which I shall detail in a later post.

To create the boot media we “update” the deployment share by right clicking the deployment share and selecting “Update Deployment Share”.

 

The WinPE image that ships with the ADK is the one used to create the boot image. A number of boot images will be created, in both x86 and x64 variants. You are asked whether you would like to update the existing image, to optimise the creation process, or completely regenerate the image.

 

As this is the first image being created both options result in the same thing. Working through the process creates the boot image and this process takes around 15 – 20 minutes.
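Updating the share can also be scripted, which is useful if you rebuild boot images regularly. A sketch, assuming the deployment share is mapped as DS001:.

```powershell
# Optimise the existing boot image (the faster, default behaviour)
Update-MDTDeploymentShare -Path "DS001:" -Verbose

# Or force a complete regeneration of the boot images
Update-MDTDeploymentShare -Path "DS001:" -Force -Verbose
```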

Once complete a number of files will be created in the Boot folder of the deployment share.

If integrating MDT with WDS, the .wim file can be booted from PXE; otherwise the .iso file should either be attached (when building a VM) or burned to a CD / USB thumb drive for direct booting.
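For the WDS route, the generated LiteTouch boot image can be added to the WDS boot image store from an elevated prompt on the WDS server. A sketch, with the deployment share path as an example:

```powershell
# Add the LiteTouch boot image to WDS so clients can PXE boot into MDT
wdsutil /Verbose /Progress /Add-Image /ImageFile:"D:\DeploymentShare\Boot\LiteTouchPE_x64.wim" /ImageType:Boot
```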

The final thing we need to do is to create a task sequence for the MDT GUI to follow. This lists the steps that will be carried out to create a base image. For our purposes, a default task sequence can be used. I shall try and detail how to customise the task sequence in later posts.

To create a new task sequence we right click the appropriate folder and select “New Task Sequence”.

We give the task sequence a name.

We next select a template to build the task sequence from.

 

 

If we have a pre-existing computer to use as a reference image we can just select “Sysprep and Capture”. If we have a pre-prepared VHD and don’t own SCVMM, we can select one of the later task sequences.

As I have a server O/S to deploy I shall choose “Standard Server Task Sequence”. If we were installing a desktop O/S we would simply select the very similar “Standard Client Task Sequence”.

We then select the operating system that we would like to install using this task sequence.

 

If using a KMS enabled copy of the operating system disk we can simply select not to use a key at this time.

 

If we are using a retail disk (such as those from MSDN) then we can enter a retail key or if we have MAK keys then we can select the second option.

We next provide the standard Windows information – here I have set the IE home page to the internal intranet, but for a server you may want to leave it at the default “about:blank”.

 

Next, we can either specify a local administrator password (the default option) or choose to supply one at the time of deployment.

 

Once completed, the task sequence appears in the MDT GUI.
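Task sequence creation can likewise be scripted with Import-MDTTaskSequence. A sketch only: the ID, names, password and the operating system path (which must match an OS already imported into the share) are all examples, and the share is assumed to be mapped as DS001:.

```powershell
# Assumes the MDT module is loaded and the deployment share is mapped as DS001:
# Server.xml is the template behind "Standard Server Task Sequence"
Import-MDTTaskSequence -Path "DS001:\Task Sequences" `
    -Name "Deploy Windows Server 2012 R2" `
    -Template "Server.xml" `
    -ID "REF001" -Version "1.0" `
    -OperatingSystemPath "DS001:\Operating Systems\Servers\Windows Server 2012 R2 SERVERSTANDARD in Windows Server 2012 R2 x64 install.wim" `
    -FullName "Windows User" -OrgName "Contoso" -HomePage "about:blank" `
    -AdminPassword "P@ssw0rd" -Verbose
```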

 

 

 

The physical files constituting the task sequence can be found in the Control folder of the deployment share.

 

 

 

We can now build and capture our reference machine. I do this in a virtualised environment to remove any reliance on network or mass storage drivers. We simply mount the LiteTouch ISO to the VM and boot it.

 

The VM will boot to the WinPE / LiteTouch ISO created earlier and the GUI starts to enable installation.

 

You are presented with various options such as being able to set a static IP address if your virtualised environment does not connect to DHCP (as some production based server environments do not) or running the windows recovery wizard. If you need access to a command prompt, simply press F8. This will allow you to inspect the local hard drive or any installation logs.

NOTE: The computer will NOT reboot and the process will NOT complete while the command prompt is open. It should be closed when not in use to allow the build to complete.

If we choose to run the deployment wizard we are next asked for credentials to use to connect to the deployment share.

 

The wizard then asks which task sequence should be used to deploy the machine.

 

We are then asked if we would like to join a domain or a workgroup – for a reference image I would recommend joining a workgroup to ensure that no remnants of the domain membership are “left behind” once the image is taken.

 

 

We then set the language and time zone for the server.

 

 

Select any applications to be installed. Here I am selecting Foxit, as a demonstration, even though this is unlikely to be installed in a server reference image. For a client reference image you may want to consider installing items such as C++ redistributables, Silverlight etc.

 

 

As we are creating a master image (from the reference image we are deploying) I have selected to capture an image. Once this captured image has been imported as a reference image for deployment, we would simply select “Do not capture an image of this computer” to deploy additional machines.

 

 

As we are deploying to a virtual machine, the choice to deploy BitLocker is skipped over. We can now click on “Begin” to start the automated installation and capture routine.

 

 

And that’s it – a basic installation and configuration of WDS so that you can deploy, capture and redeploy desktops free gratis and for nothing (well, you will need a server to run the software but most organisations have capacity for this in their virtualised environment).

In part 2 of this series I run through automating the environment even more and show you how to cope when you have a variety of endpoints to deploy to.