Windows Server 2012 R2 Data Deduplication in Action

I must say, I’m most impressed with Windows Server 2012 R2 data deduplication. I am using it in my home lab and can now keep far more VMs on standby, ready to go, with technology-specific labs switched off but consuming very little space. Below are the results I am getting, so this is “real world”, with most of the VMs running the same operating system.



That’s a total of 54 VMs, ISOs and templates taking up only 133GB of space!

To enable this feature, open Server Manager and navigate to the File and Storage Services node, then select the disk on which you want to enable deduplication (dedupe is only supported for flat files and VHDs, so don’t try to dedupe the disk your operating system is on).
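If you prefer scripting it, the same thing can be done from PowerShell (a minimal sketch; the drive letter D: is just an example for your data volume):

```powershell
# Install the deduplication feature (one-off per server)
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedupe on the data volume - never the OS volume
Enable-DedupVolume -Volume "D:" -UsageType Default

# Confirm it is enabled and see the current settings
Get-DedupVolume -Volume "D:"
```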

Right-click the disk and select “Configure Data Deduplication”.



The single pane of settings is then fairly self-explanatory.



As you can see, mine is set to deduplicate VMs, so I have excluded non-VM files such as anything in the ISOs folder. I can’t say that I have seen performance suffer with this enabled, although I am just running a lab rather than production. It seems to me that, with shared storage being so expensive and performance so critical, it now makes sense to place SSDs in your SAN and put your deduped VMs on those for maximum performance at reduced cost.
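Folder exclusions and an on-demand optimisation pass can be scripted too (again just a sketch; D:\ISOs is my example exclusion path):

```powershell
# Exclude non-VM content such as the ISO store
Set-DedupVolume -Volume "D:" -ExcludeFolder "D:\ISOs"

# Run an optimisation job now instead of waiting for the schedule
Start-DedupJob -Volume "D:" -Type Optimization

# Report the space savings once the job has run
Get-DedupStatus -Volume "D:"
```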

Preventing access to specific applications in XenClient 2

XenClient is fabulous. You have a locked-down work image and an “open” image in which you can run whatever you like. You can then “share” applications between running VMs, so you can have your “ticker” or “game” or whatever available on your work machine, or have your work application viewable from your personal machine. But what if you don’t want certain applications to be available?

This is controlled from an XML file which instructs the agent where to collect icons from (essentially the All Users Start Menu and any logged-on user’s menu) but to exclude anything on the blacklist (Outlook Express, anything in C:\Windows, etc.) unless it’s included on the list of whitelisted applications (Internet Explorer, for example). So, very easy to configure IF you know where the configuration file is…. which isn’t at all obvious.

The XML file can be found in the following locations:

Windows 7


C:\Documents and Settings\All Users\Application Data\Citrix\Xci\Applications\XciDiscoveryConfig.xml
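To give a feel for the blacklist/whitelist idea, here is an illustrative sketch of what such a file might contain. The element names below are hypothetical, not taken from the real XciDiscoveryConfig.xml schema, so check the actual file before editing:

```xml
<!-- Hypothetical structure for illustration only -->
<AppDiscovery>
  <Blacklist>
    <App>Outlook Express</App>
    <Path>C:\Windows</Path>
  </Blacklist>
  <Whitelist>
    <App>Internet Explorer</App>
  </Whitelist>
</AppDiscovery>
```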

Installing Windows 8 on XenServer

Having trouble installing the recently released Developer Preview on XenServer?

Thomas Koetzing has the fix on his blog

Create a VM from a Windows 7 template and copy the virtual machine UUID from the General tab in XenCenter, or use xe vm-list in the CLI. Next, run the command xe vm-param-set uuid=<VMUUID> platform:viridian=false

The installation can then proceed as normal.
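Put together, the whole fix is just two commands from the XenServer console (the name-label below is an example; substitute whatever you called your VM):

```
# Find the UUID of the new VM
xe vm-list name-label="Windows 8 Preview" params=uuid

# Disable the viridian enlightenments before starting the install
xe vm-param-set uuid=<VMUUID> platform:viridian=false
```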

Build a private cloud

Want to know how to build a private cloud? Think it’s difficult? Maybe you’re a small organisation and think that private cloud is too expensive for you?

Don’t be fooled – private cloud can be easy and you may even find that you have lots of the pieces in place already. Just follow the Microsoft guidelines here =>

If you want to extend your private cloud solution then they even have a pre-approved list of vendors and what they can do for you to make your solution complete.

Virtual Server Sprawl

You’re planning on going virtual with your servers. Everything is going well. You have your hypervisor deployed and you’ve converted your servers from physical to virtual. You’ve “lived the dream” and put your dev and test environment on there as soon as you could and allowed other people to create the virtualised servers they needed. Life is good.

But wait (cue the sound of screeching tyres). Is this so good, and what can it mean for the business? After all, this is why you virtualised, right? To save costs and deploy servers quickly. Only, without the financial constraints that stopped additional servers being provisioned in the physical world, there was nothing to hold anyone back in the virtual world. If things have got out of hand then, let’s face it, you trusted these people to retire their servers and they’ve let you down. If this sounds familiar then you are no doubt a victim of “Virtual Server Sprawl“.

One of the benefits of virtualisation was the ability not just to load multiple services onto one instance of an operating system (a physical server) but to run multiple instances of the operating system and dedicate each of those instances to specific services. Server virtualisation also promised to make those services more highly available with vMotion, LiveMotion, XenMotion, etc.

Virtual Server Sprawl is not the creation of additional servers; that was always expected and planned for in any virtual server migration. Virtual Server Sprawl results from “the uncontrolled creation, administration and lifecycle management of virtual servers“. The important word here is uncontrolled. Out of hand, Virtual Server Sprawl can become a nightmare for the server team, with issues arising around licensing, maintenance, backups and security as well as environment stability. All of this translates into cost that erodes the very savings server virtualisation was meant to realise. As Thomas Bittman of Gartner put it, “Virtualisation without good management is more dangerous than not using virtualisation in the first place”.

I use the three-stage definition as it highlights the areas that need to be controlled to prevent Virtual Server Sprawl and, as always, these issues reflect those in the physical server world. It’s just that with physical servers, IT departments have had physical constraints placed on them to prevent physical server sprawl (finance, the physical number of servers, limited power and cooling in the data centre, separation of the dev/test LAN from the main network) and frequently these limitations have not been re-imposed in the virtual world.

Let’s look at each of the three areas in turn:


Creation

Uncontrolled creation of servers arises when servers can be deployed at will with no consequences. Generally this can be handled by process and, if an automated provisioning process is used, by assigning users “credits” that reflect the number of machines they can create. This can also help with lifecycle management, in that users will be more willing to retire servers which are no longer needed in order to reclaim credits. It’s not enough just to assign credits to users, though. Internal IT will not be constrained by credits and, as above, will create additional servers to provision additional services. As there is no capital expenditure (CapEx) procedure to follow, they are more likely to add servers, and if the virtualised environment supports memory overcommit then “that’s OK, we have almost unlimited memory”. The truth is more likely to be that the virtualised environment will page out RAM more often, shared disk sub-systems may hit performance issues and shared network connections may be overwhelmed. By having some sort of process that requests, authorises, provisions and reports on virtual server provisioning, these issues can at least be minimised.
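As a toy illustration of the credit idea (everything here is invented, sketched in PowerShell since that is the natural tool on a Windows virtualisation estate):

```powershell
# Toy credit-based provisioning gate - the names and numbers are hypothetical
$credits = @{ alice = 3; bob = 0 }

function Request-LabVM {
    param([string]$User)
    if ($credits[$User] -gt 0) {
        # Spend a credit to provision a VM
        $credits[$User] = $credits[$User] - 1
        "VM approved for $User ($($credits[$User]) credits left)"
    }
    else {
        "Request denied: $User must retire a VM to reclaim a credit"
    }
}

Request-LabVM alice   # approved while credits remain
Request-LabVM bob     # denied until bob retires something
```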


Administration

Once a server is provisioned, who maintains it? As the provisioning process is accelerated, is the server deployed in a secure way, and what effect will it have in a production environment? If server deployment is delegated outside of the core server team, will firewalls be turned off “because it’s easier”? Will the servers be placed in the correct OU in Active Directory so the right policies are applied? If not, will the anti-virus server deploy the engine and patterns, and will appropriate exceptions be applied for the role of the server? Indeed, will anyone have made arrangements to back up the server, or will all VMs be backed up by default even if they have no data on them (web servers) or are only used for testing? Who will patch the servers? These issues can be mitigated by the use of virtual machine templates: templates that include all the latest kernel updates, service packs and patches for the operating system; that add the provisioned machine automatically to an OU appropriate to its role (messaging, database, file, web, etc.) so the correct GPOs can be applied; that assign an appropriate amount of hard disk space, CPU, RAM and so on; and that have the basics of the anti-virus solution correctly pre-installed so that engines and patterns can be downloaded and updated on first boot.

Lifecycle Management

Let’s assume that the server was authorised for creation and provisioned fine. However, it was only meant to be used for 30 days to test out a scenario. If the server is never retired it continues to consume resources on our physical host: not only memory but possibly also expensive shared storage space. If the virtual server has software installed on it (extremely likely, even if it is just an operating system) then it may be running unlicensed if its licence has been transferred or re-assigned to another machine (either its replacement in production or another test machine). In the physical world, servers would be retired to be repurposed, or retired totally to reclaim space in the server rack, regain network ports or return scarce power sockets to use. These constraints, which self-correct IT teams’ use of physical servers, don’t exist in our virtualised environment. Basic change control and reporting processes can limit the effect of virtual machines being provisioned beyond their useful lifetime.

The issues around Virtual Server Sprawl are readily identifiable and easy to anticipate.

  • Increased paging of RAM due to overcommitment / over usage of physical RAM in host
  • Reduction in available storage space which may be expensive in shared storage environments
  • Additional network traffic and possibly incorrect assignment of servers to appropriate VLANs
  • Security vulnerabilities with machines incorrectly configured or patched
  • Incorrect policies assigned due to mis-placement in AD
  • Lack of backup, DR or business continuity
  • Possibility of licensing issues with applications on redundant servers

There are many software packages you can buy to help, but they are only as good as their configuration and how rigorously they are used. For those on a budget, Virtual Server Sprawl is reasonably easy to control with forward planning around three things:

  • Process & Authorisation
  • Configuration & Templates
  • Monitoring & Reporting

Should Virtual Server Sprawl stop you from virtualising your environment? In my opinion, absolutely not, but you definitely need to be aware of its existence and plan accordingly.

Interested in VDI but think it’s too expensive for you?

Then head on over to the joint Microsoft and Citrix site and see how you can save a whopping 70% on the cost of implementing your first solution.

Read all about it there. There’s even a training lab that walks you through installing the whole solution.

Service Pack 1 announced for Windows 2008 R2

Great news. Microsoft have started to release news about SP1 for Windows 2008 R2. Still slated for release in Q4, there are two major announcements for anyone interested in virtualisation. The first is RemoteFX, which essentially supercharges the video experience for end users of Remote Desktop Services. So powerful is this that, for once, Citrix will be licensing the Microsoft solution for graphics acceleration rather than the other way round. Read more about it here.

The other big announcement is dynamic memory allocation in Hyper-V. You can read about that here. VMware’s “killer” feature has always been memory overcommit. Essentially it just pages unused memory to the hard drive, so in highly virtualised environments where VMs need to use their RAM this can lead to excessive paging and poorly performing infrastructures. However, it is still the number one reason why people choose VMware over other virtualisation vendors, so even though, in my opinion, it’s not as great as it’s cracked up to be, if you want to do virtualisation then you have to offer this functionality. The good news is, that’s one less reason to spend a fortune on VMware if you are on a budget.