Virtual Server Sprawl

You’re planning on going virtual with your servers. Everything is going well. You have your hypervisor deployed and you’ve converted your servers from physical to virtual. You’ve “lived the dream” and put your dev and test environments on there as soon as you could, and allowed other people to create the virtualised servers they needed. Life is good.

But wait (enter the sound of screeching tyres). Is this really so good, and what can it mean for the business? After all, this is why you virtualised, right? To save costs and deploy servers quickly. Only, without the financial constraints that stopped additional servers being provisioned in the physical world, there was nothing to hold anyone back in the virtual world. If things have got out of hand then, let’s face it, you trusted these people to retire their servers and they’ve let you down. If this sounds familiar then you are no doubt a victim of “Virtual Server Sprawl”.

One of the benefits of virtualisation was the ability not just to load multiple services onto one instance of an operating system (a physical server) but to run multiple instances of the operating system and dedicate each of those instances to specific services. Server virtualisation also promised to make those services more highly available with vMotion, LiveMotion, XenMotion, etc. Virtual Server Sprawl is not the creation of additional servers; that was always expected and planned for in any virtual server migration. Virtual Server Sprawl results from “the uncontrolled creation, administration and lifecycle management of virtual servers”. The important word here is uncontrolled. Out of hand, virtual server sprawl can become a nightmare for the server team, with issues arising around licensing, maintenance, backups and security as well as environment stability. All of this can translate into cost that erodes the very savings server virtualisation was meant to realise. As Thomas Bittman of Gartner put it, “Virtualisation without good management is more dangerous than not using virtualisation in the first place”.

I use the three-stage definition as it highlights the areas that need to be controlled to prevent Virtual Server Sprawl and, as always, these issues reflect those in the physical server world. It’s just that with physical servers IT departments have had physical constraints placed on them to prevent physical server sprawl (finance, the physical number of servers, limited power and cooling in the data centre, separation of the dev / test LAN from the main network) and frequently these limitations have not been re-imposed in the virtual world.

Let’s look at each of the three areas in turn:

Creation

Uncontrolled creation of servers arises when servers can be deployed at will with no consequences. Generally this can be handled by process and, if an automated provisioning process is used, by assigning users “credits” that reflect the number of machines they can create. This also helps with lifecycle management, in that users will be more willing to retire servers which are no longer needed in order to reclaim credits. It’s not enough just to assign credits to users, though. Internal IT will not be constrained by credits and, as above, will create additional servers to provision additional services. As there is no capital expenditure (CapEx) procedure to follow they are more likely to add additional servers, and if the virtualised environment supports memory overcommit the attitude becomes “That’s OK, we have almost unlimited memory”. The truth is more likely to be that the virtualised environment will page out RAM more often, shared disk sub-systems may hit performance issues and shared network connections may be overwhelmed. At the very least, having some sort of process that requests, authorises, provisions and reports on virtual server provisioning will help minimise these issues.
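As a rough illustration of the credit idea, here is a toy Python sketch of how a provisioning request might be gated and how retiring a server hands the credit back. The team names, credit allocations and the one-credit cost are entirely made up:

# Illustrative only: a toy credit ledger for gating VM provisioning requests.
# Team names, credit allocations and the VM cost below are hypothetical.

class CreditLedger:
    def __init__(self, allocations):
        # allocations: dict of team name -> number of VM credits granted
        self.credits = dict(allocations)

    def request_vm(self, team, cost=1):
        """Approve a new VM only if the team still has credits available."""
        if self.credits.get(team, 0) < cost:
            raise PermissionError(f"{team} has no credits left - request must be authorised separately")
        self.credits[team] -= cost
        return f"VM approved for {team}; {self.credits[team]} credit(s) remaining"

    def retire_vm(self, team, refund=1):
        """Retiring a VM hands the credit back, encouraging lifecycle management."""
        self.credits[team] = self.credits.get(team, 0) + refund
        return f"VM retired by {team}; {self.credits[team]} credit(s) available"


ledger = CreditLedger({"dev": 3, "test": 2})
print(ledger.request_vm("dev"))   # dev drops to 2 credits
print(ledger.retire_vm("dev"))    # dev is back to 3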

Administration

Once a server is provisioned, who maintains that server? As the server provisioning process is accelerated, is the server deployed in a secure way and what effect will it have in a production environment? If server deployment is delegated outside of the core server team, will firewalls be turned off “because it’s easier”? Will the servers be placed in the correct OU in Active Directory to have the right policies applied? If not, will the anti-virus server deploy the engine and patterns, and will appropriate exceptions be applied for the role of the server? Indeed, will anyone have made arrangements to back up the server, or will all VMs be backed up by default even if they hold no data (web servers, for example) or are only used for testing? Who will patch the servers? These issues can be mitigated by the use of virtual machine templates that include all the latest kernel updates, service packs and patches for the operating system; that add the provisioned machine automatically to an OU appropriate to its role (messaging, database, file, web etc) so that the correct GPOs can be applied; that assign an appropriate amount of hard disk space, CPU, RAM and so on; and that have the basics of the anti-virus solution correctly pre-installed so that engines and patterns can be downloaded and updated on first boot.
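To make that concrete, here is a minimal Python sketch of what such role-based templates might capture. The role names, OU paths and resource figures are purely illustrative examples, not values from any real template library:

# Illustrative role-based VM templates: OU placement, resources and AV settings
# per role. All names, OU paths and figures below are hypothetical examples.

VM_TEMPLATES = {
    "web": {
        "ou": "OU=Web Servers,OU=Servers,DC=example,DC=local",
        "vcpu": 2, "ram_gb": 4, "disk_gb": 40,
        "antivirus_exclusions": [r"C:\inetpub\logs"],
        "backup_required": False,   # typically no unique data on the web tier
    },
    "database": {
        "ou": "OU=Database Servers,OU=Servers,DC=example,DC=local",
        "vcpu": 4, "ram_gb": 16, "disk_gb": 200,
        "antivirus_exclusions": [r"D:\SQLData", r"E:\SQLLogs"],
        "backup_required": True,
    },
}

def provisioning_spec(role, name):
    """Return the settings a provisioning job would apply for a given role."""
    template = VM_TEMPLATES[role]
    return {"name": name, **template}

print(provisioning_spec("web", "web01"))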

Lifecycle Management

Let’s assume that the server was authorised for creation and it was provisioned fine. However, it was only meant to be used for 30 days to test out a scenario. If the server is never retired it continues to consume resources on our physical host, not only memory but possibly also expensive shared storage space. If the virtual server has software installed on it (extremely likely, even if it is just an operating system) then it may be running unlicensed if its license has been transferred or re-assigned to another machine (either its replacement in production or another test machine). In the physical world, physical servers would be retired to be repurposed, or retired entirely to reclaim space in the server rack, regain network ports or return scarce power sockets to use. These constraints, which naturally self-correct IT teams’ use of physical servers, don’t exist in our virtualised environment. Basic change control and reporting processes can limit the effect of virtual machines persisting beyond their useful lifetime.
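A very small reporting sketch along these lines can flag machines that have outlived their agreed retirement date. The inventory here is hard-coded sample data; in practice it would come from whatever tool manages your virtual estate:

# Illustrative lifecycle report: flag VMs that have outlived their agreed
# retirement date. The inventory below is hard-coded sample data.

from datetime import date

inventory = [
    {"name": "test-sql01", "owner": "dev team", "retire_by": date(2010, 5, 1)},
    {"name": "poc-web02",  "owner": "projects", "retire_by": date(2010, 8, 30)},
]

def overdue_vms(vms, today=None):
    """Return the VMs whose retirement date has already passed."""
    today = today or date.today()
    return [vm for vm in vms if vm["retire_by"] < today]

for vm in overdue_vms(inventory):
    print(f"{vm['name']} (owner: {vm['owner']}) should have been retired by {vm['retire_by']}")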

The issues around Virtual Server Sprawl are readily identifiable and easy to anticipate.

  • Increased paging of RAM due to overcommitment / over usage of physical RAM in host
  • Reduction in available storage space which may be expensive in shared storage environments
  • Additional network traffic and possibly incorrect assignment of servers to appropriate VLANs
  • Security vulnerabilities with machines incorrectly configured or patched
  • Incorrect policies assigned due to mis-placement in AD
  • Lack of backup, DR or business continuity
  • Possibility of licensing issues with applications on redundant servers

There are many software packages you can buy to help, but they are only as good as their configuration and how rigorously they are used. For those on a budget, Virtual Server Sprawl is reasonably easy to control with forward planning around:

  • Process & Authorisation
  • Configuration & Templates
  • Monitoring & Reporting

Should Virtual Server Sprawl stop you from virtualising your environment? In my opinion absolutely not, but you definitely need to be aware of its existence and plan accordingly.

Interested in VDI but think it’s too expensive for you?

Then head on over to the joint Microsoft and Citrix site and see how you can save a whopping 70% on the cost of implementing your first solution.

Read all about it at http://www.citrixandmicrosoft.com/. There’s even a training lab at https://cmg.vlabcenter.com/default.aspx?moduleid=281742e3-2613-42da-bd58-2c3578f039b4 that walks you through installing the whole solution.

Service Pack 1 announced for Windows 2008 R2

Great news. Microsoft have started to release news about SP1 for Windows Server 2008 R2. Still slated for release in Q4, there are two major announcements for anyone interested in virtualisation. The first is RemoteFX, which essentially supercharges the video experience for end users of Remote Desktop Services. So powerful is this that, for once, Citrix will be licensing the Microsoft solution for graphics acceleration rather than the other way round. Read more about it here.

The other big announcement is dynamic memory allocation in Hyper-V. You can read about that here. VMware’s “killer” feature has always been memory overcommit. Essentially it just pages unused memory to the hard drive, so in highly virtualised environments where VMs need to use their RAM this can lead to excessive paging and poorly performing infrastructures. However, it is still the number one reason why people choose VMware over other virtualisation vendors, so even though, in my opinion, it’s not as great as it’s cracked up to be, if you want to do virtualisation then you have to offer this functionality. The good news is, that’s one less reason to spend a fortune on VMware if you are on a budget.

HP Sizing and Configuration Tool for Microsoft Hyper-V

The HP Sizing and Configuration Tool for Microsoft Hyper-V is a downloadable, automated tool that provides a quick and consistent methodology to determine a “best-fit” server configuration for your virtualized Hyper-V environment. This tool enables you to quickly compare different solution configurations and obtain a highly detailed, customizable server and storage solution complete with a detailed bill of materials.

This sizer allows users to create new Hyper-V solutions, open already saved solutions, and use data compiled from other tools like Microsoft’s Assessment and Planning (MAP) toolkit to build rich Hyper-V configurations built on HP ProLiant server and storage technologies.

The sizer allows rapid comparisons of various Hyper-V characterizations and server platform choices. You can select and customize configurations for your particular environment by adding or substituting server types, number of servers, and server components.

The sizer was developed from knowledge gained during performance characterization testing of Microsoft Hyper-V in the HP Solutions Engineering lab in Houston, Texas.

HP Sizing and Configuration Tool for Microsoft Hyper-V

Should I virtualise my Domain Controllers?

Now that’s a difficult question. If you asked me “Can I virtualise my Domain Controllers?” then that’s a different question, to which the answer is “Of course, it’s fully supported depending on your virtualisation platform and the version of Windows being used, but if you’re on the latest Hyper-V and the latest Windows then it’s fine”. The question “Should I virtualise my Domain Controllers?” recognises that you can, but that you have a choice as to whether you do or not and, as with any IT decision, you should research, size and plan. What I’d like to talk about today are two items to consider when thinking of virtualising domain controllers.

The first is around synchronisation of system clocks. As mentioned in a previous article, Windows servers use time synchronisation to protect against replay attacks and thus increase the security of Kerberos authentication within an Active Directory environment. However, virtual platforms such as VMware or Hyper-V also allow you to synchronise a virtual machine’s clock with the physical host. What this means, though, is that if the server host is showing a different time from the root PDC Emulator then any virtualised domain member server or domain controller will set its clock against the domain, then against the physical host, then against the domain, then against the physical host, and so on ad nauseam. This can cause several issues:

  1. If the difference between the DC’s clock and the clocks of the other domain controllers exceeds the allowed tolerance (five minutes by default for Kerberos), the server will not be able to synchronise
  2. Similarly, as the DC’s clock will differ from those of clients, clients will fail authentication against this domain controller.
  3. This constant re-synchronisation will cause clock “flapping”, so that any events or logs written will have entries recorded in an incorrect order. This is an issue not only for domain controllers but also for other servers such as SQL or Exchange, which record the time of records being changed or messages arriving.
  4. If you run an environment where accurate times are important then this will not be possible with “flapping” clocks. For example, if you require staff to “clock in” and penalise them for late arrival, then your solution will be at risk if your clock cannot keep accurate time.

So, by all means virtualise your domain controllers, but don’t allow them to synchronise their clocks with the physical host. In Hyper-V this behaviour can be disabled by opening the Hyper-V Manager Console, selecting the virtual machine and clicking on Settings in the Actions pane for that virtual machine. Under the Management node select Integration Services and clear the Time Synchronization check box.

 


Click on Apply and that virtual machine will now synchronise its clock solely based on the settings within its operating system.
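If you then want to verify that a guest is tracking the domain rather than the host, one option is to compare its clock against the PDC Emulator over NTP. Here is a rough Python sketch using the third-party ntplib package; the server name and the one-second threshold are just examples:

# Rough check of this machine's clock offset against the domain's PDC Emulator
# over NTP/SNTP. Requires the third-party "ntplib" package (pip install ntplib).
# The server name and alert threshold below are examples only.

import ntplib

PDC_EMULATOR = "dc1.example.local"   # hypothetical PDC Emulator FQDN
THRESHOLD_SECONDS = 1.0

def check_offset(server=PDC_EMULATOR, threshold=THRESHOLD_SECONDS):
    response = ntplib.NTPClient().request(server, version=3)
    offset = response.offset  # local clock minus server clock, in seconds
    if abs(offset) > threshold:
        print(f"WARNING: clock is {offset:+.2f}s adrift from {server}")
    else:
        print(f"Clock within {threshold}s of {server} (offset {offset:+.3f}s)")

if __name__ == "__main__":
    check_offset()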

The second item to consider before virtualising your domain controllers concerns “snapshotting”. Snapshots allow you to take a point-in-time view of a server and then record differences to the virtual disk of that server over time. In this way you can “roll back” a virtual machine to the point the snapshot was taken by removing the changes made since. However, this creates an issue when we consider domain controllers.

When a change is made on a Domain Controller it updates its own Update Sequence Number (USN) and, when a synchronisation is due with other domain controllers, issues the update to them. These USNs are maintained per Domain Controller, so a given change may register on DC1 as 12345 and hold a USN of 7657622 on the far older DC2. You can see the USN on a particular Domain Controller by looking at the highestCommittedUSN value, using ADSIEdit to connect to RootDSE.

In ADSIEdit, DC1 might show a comparatively low highestCommittedUSN while the far older DC2 shows a much larger one, for example.
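If you would rather query the value programmatically than through ADSIEdit, a small sketch using the third-party ldap3 Python package will read highestCommittedUSN from the RootDSE. The server names and credentials below are placeholders:

# Read highestCommittedUSN from a domain controller's RootDSE using the
# third-party "ldap3" package (pip install ldap3). Server names and
# credentials are placeholders for illustration.

from ldap3 import Server, Connection, BASE

def highest_committed_usn(dc, user, password):
    # Bind to the DC and read highestCommittedUSN from the RootDSE (empty search base)
    conn = Connection(Server(dc), user=user, password=password, auto_bind=True)
    conn.search(search_base="", search_filter="(objectClass=*)",
                search_scope=BASE, attributes=["highestCommittedUSN"])
    return int(conn.entries[0].highestCommittedUSN.value)

for dc in ("dc1.example.local", "dc2.example.local"):
    print(dc, highest_committed_usn(dc, "administrator@example.local", "Password1"))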

Now, it’s a basic premise that the USN on a domain controller should only ever get bigger, never smaller. After all, transactions can’t just disappear. Indeed, domain controllers use this USN to keep track of the updates they have received from each other. The last USN received from each replication partner is stored in a High Watermark Vector Table on each DC. In this way, the receiving domain controller knows which was the last change it received from a replication partner. When it next wants to replicate, it sends its high watermark value to the DC it wants to replicate from (the source domain controller). The source DC then uses the high watermark value to determine which objects to replicate back to the target Domain Controller. This can be represented by the following table:

Step  DC   USN  High Watermark Value  Action
1     DC1  100  200                   Initial value
      DC2  200  100
2     DC1  108  200                   Changes made on DC1 (a new user created, for example)
      DC2  200  100
3     DC1  108  200                   DC2 requests changes, synchronises and updates its high watermark value for DC1
      DC2  200  108
4     DC1  127  200                   Further changes are made on DC1
      DC2  200  108
5     DC1  127  200                   Only changes 109 to 127 are synchronised to DC2
      DC2  200  127

 

So far so good. So, what’s the issue? The issue is that if we had taken a snapshot of DC1 at, say, step 3, and later rolled back to it, the following would happen.

Step  DC   USN  High Watermark Value  Action
1     DC1  100  200                   Initial value
      DC2  200  100
2     DC1  108  200                   Changes made on DC1 (a new user created, for example)
      DC2  200  100
3     DC1  108  200                   DC2 requests changes, synchronises and updates its high watermark value for DC1
      DC2  200  108
4     DC1  127  200                   Further changes are made on DC1
      DC2  200  108
5     DC1  127  200                   Only changes 109 to 127 are synchronised to DC2
      DC2  200  127
6     DC1  108  200                   Active Directory database “restored” on DC1 from the snapshot
      DC2  200  127
7     DC1  119  200                   Further, different updates are made on DC1, re-using USNs 109 to 119
      DC2  200  127
8     DC1  147  200                   More updates take DC1 past the old value of 127. DC2 requests changes past 127, so DC1 sends only 128 to 147 – the “new” changes in the range 109 to 127 are lost and never synchronised
      DC2  200  127
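To make the failure mode concrete, here is a toy Python simulation of the high watermark logic in the tables above. It illustrates the concept only, using the DC names and USN values from the example; it is not a representation of how Active Directory is actually implemented:

# Toy simulation of the high-watermark replication logic from the tables above.
# Conceptual illustration only, not how AD is implemented internally.

class ToyDC:
    def __init__(self, name, usn):
        self.name = name
        self.usn = usn
        self.changes = {}          # usn -> change description
        self.high_watermark = {}   # partner name -> last USN seen from that partner

    def make_change(self, description):
        self.usn += 1
        self.changes[self.usn] = description

    def pull_from(self, source):
        """Request everything above our recorded high watermark for 'source'."""
        last_seen = self.high_watermark.get(source.name, 0)
        for usn in sorted(u for u in source.changes if u > last_seen):
            print(f"{self.name} receives {source.name} USN {usn}: {source.changes[usn]}")
        self.high_watermark[source.name] = source.usn


dc1, dc2 = ToyDC("DC1", 100), ToyDC("DC2", 200)
dc2.high_watermark["DC1"] = 100            # DC2 has already seen everything up to USN 100

for i in range(27):                        # DC1 makes changes, reaching USN 127
    dc1.make_change(f"change {i + 1}")
dc2.pull_from(dc1)                         # DC2's watermark for DC1 is now 127

# Snapshot rollback: DC1 reverts to USN 108 and forgets changes 109 to 127
dc1.usn = 108
dc1.changes = {u: c for u, c in dc1.changes.items() if u <= 108}

for i in range(39):                        # new, different changes re-use USNs 109 to 147
    dc1.make_change(f"post-rollback change {i + 1}")
dc2.pull_from(dc1)                         # only USNs 128 to 147 replicate; 109 to 127 are lost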

 

So, by restoring Active Directory from a snapshot we would run the risk of losing updates IF Active Directory allowed us to do this. Fortunately the clever guys at Microsoft have worked this out and, from Windows 2003 SP1 onwards, this is not likely to happen, because AD will recognise that the USNs have become out of sequence and will refuse to allow DC1 to synchronise. You will know if this has happened to you not only because your domain will not synchronise properly, but also because a USN rollback event (Event ID 2095 from the NTDS Replication source) will be logged in the event viewer on the “restored” Domain Controller.


When this happens, the only solution is to forcibly demote the domain controller and start again. Of course, the situation is even worse if ALL domain controllers are snapshotted and then restored. It’s perfectly possible that you can end up without an operating Active Directory environment! So, the original question was “Should I virtualise my Domain Controllers?” and I say that this is a decision that you have to make yourself, based on the risk you want to assume. However, I would suggest that best practice is to:

  • Never synchronise Domain Controller clocks with the virtualisation host
  • Never snapshot domain controllers
  • Always have at least one (and preferably two) physical domain controllers in case you have to force demote all virtualised domain controllers

If you follow the above advice I believe the risks in virtualising DCs are relatively low.

Virtualisation and Exchange 2010 DAG

Just a quick note – are you supported if you virtualise Exchange 2010 mailbox servers hosting Database Availability Groups? The short answer is “yes”. The long answer is “Yes ….. unless you want to use Live Migration / XenMotion / VMotion … then you are NOT supported”.

To explain, DAG is a high availability strategy in and of itself. If you want to virtualise servers hosting a DAG then you may, but those servers should run on stand-alone virtualisation hosts. Layering any sort of hardware high availability on top of DAG high availability will invalidate your support. In truth, this shouldn’t be an issue, as virtualisation only provides redundancy at the hardware level with Live Migration / XenMotion / VMotion, whereas Exchange DAG provides hardware, O/S, binary and data redundancy (i.e. full redundancy), but you should take account of this in any plans you have for virtualising Exchange 2010.

Installing System Center Virtual Machine Manager 2008 R2

If you are like me then you may be a little nervous when it comes to installing a piece of software for the first time, so it’s comforting, I think, if you can see a walkthrough with an explanation or demonstration of the decision points. Below I’ve provided some screenshots and instructions for getting SCVMM up and running. My installation was performed in a lab environment, with Windows Server 2008 R2 installed on my laptop providing the Hyper-V functionality so that the hypervisor has access to the virtualisation extensions of the processor (i.e. Hyper-V will work). I’ve then created two virtual machines: scvmmdc as a domain controller and scvmm to run SCVMM and control Hyper-V on my laptop as a host. True, it’s not what you would expect to see in production, but it does give you an idea of how to install the software.

The first thing you will note is that I’m not installing a full install to a clustered SQL server and, equally, I have not clustered SCVMM to make it highly available, both of these being best practice for a full multi-host production environment. You can, of course, get away without doing either of these things in production, as you can still control clustered Hyper-V from the built-in server administration tools; it’s just that you won’t have access to SCVMM if your single server is not up and running. So, the more physical hosts you have, the “better” practice it is to provide high availability for SCVMM.

The first thing to note after inserting the setup disc is that the very first link gives you access to the SCVMM help file, which gives excellent advice on sizing the solution, supported SQL versions, required software and so on (the Setup Overview link).

Straight from that guide, the software requirements are:

Software requirement – Notes

A supported operating system – Generally Windows Server 2003 SP2 and later; see the Setup Overview for the full list.
Windows Remote Management (WinRM) – This software is included in Windows Server 2008 and the WinRM service is set to start automatically. If the WinRM service is stopped, the Setup Wizard starts the service.
Microsoft .NET Framework 3.0 – This software is included in Windows Server 2008. If it has been removed, the Setup Wizard automatically adds it (i.e. no need to download unless you want the latest version – always patch afterwards though).
Windows Automated Installation Kit (WAIK) 1.1 – If this software has not been installed previously, the Setup Wizard automatically installs it (i.e. no need to download unless you want the latest version – always patch afterwards though).

If you use the same computer for your VMM server and your VMM database, you must install a supported version of Microsoft SQL Server.

The supported versions of SQL Server are:

  • SQL Server 2008 Express Edition
  • SQL Server 2008 (32-bit and 64-bit) Standard Edition
  • SQL Server 2008 (32-bit and 64-bit) Enterprise Edition
  • SQL Server 2005 Express Edition SP2
  • SQL Server 2005 (32-bit and 64-bit) Standard Edition SP2
  • SQL Server 2005 (32-bit and 64-bit) Enterprise Edition SP2

If you do not specify a local or remote instance of SQL then the Setup Wizard will install SQL Server 2005 Express Edition SP2 on the local computer. The Setup Wizard also installs SQL Server 2005 Tools and creates a SQL Server instance named MICROSOFT$VMM$ on the local computer. To use SQL Server 2008 for the VMM database, SQL Server Management Tools must be installed on the VMM server. If you use Express Edition then SCVMM will not allow reporting and the database size is limited to 4GB.
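If you do end up on the bundled Express instance, it is worth keeping an eye on that 4GB ceiling. The rough Python sketch below uses the third-party pyodbc package; the instance name comes from the text above, but the database name (VirtualManagerDB) and the ODBC driver string are assumptions you should verify against your own installation:

# Rough check of the VMM database size against SQL Express's 4 GB data-file
# limit, using the third-party "pyodbc" package (pip install pyodbc).
# The database name "VirtualManagerDB" and the ODBC driver string are
# assumptions - verify them against your own installation.

import pyodbc

CONN_STR = (
    r"DRIVER={SQL Server};"
    r"SERVER=.\MICROSOFT$VMM$;"      # local instance created by SCVMM setup
    r"Trusted_Connection=yes;"
)

def vmm_data_size_mb(database="VirtualManagerDB"):
    conn = pyodbc.connect(CONN_STR)
    try:
        row = conn.cursor().execute(
            "SELECT SUM(size) * 8 / 1024 FROM sys.master_files "
            "WHERE database_id = DB_ID(?) AND type_desc = 'ROWS'",
            database,
        ).fetchone()
        return row[0]
    finally:
        conn.close()

size_mb = vmm_data_size_mb()
print(f"VMM data files: {size_mb} MB ({size_mb / 4096:.0%} of the 4 GB Express limit)")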

After reading the pre-requisites we can prepare the server and domain to host SCVMM. The domain needs to be at Windows 2003 domain functional level as a minimum. Equally, if the SCVMM server is to host the self-service portal then IIS needs to be installed and configured. For Windows 2003 this is a simple matter of installing the Application Server role. For Windows 2008 and above, add the Web Server (IIS) role and ensure the following role services are selected:

  • Static Content
  • Default Document
  • Directory Browsing
  • HTTP Errors
  • ASP.NET
  • .Net Extensibility
  • ISAPI Extensions
  • ISAPI Filters
  • Request Filtering
  • IIS 6 Metabase Compatibility
  • IIS 6 WMI Compatibility

We can then check the server for suitability for hosting SCVMM. This can be done locally or from a remote machine, but whichever machine is used for this task needs to have the Microsoft Baseline Configuration Analyzer installed, which can be downloaded from http://go.microsoft.com/fwlink/?LinkId=97952. Once the MBCA has been installed we can then click the link for the VMM Configuration Analyzer. This will allow you to download the analyzer tool to your local machine and pre-check the machine for suitability for hosting SCVMM.

When starting the Analyzer tool from the start menu we have the following choices (SCVMM is the name of my lab machine). The tool should really be run in the context of an account that is a domain administrator in order that the tool can accurately check the domain level.

After clicking Scan and waiting a short while you will be presented with a report; you will need to correct any errors it identifies.

Once all errors are resolved we can move on to installing the SCVMM software. Simply click on “SCVMM Server” under the Setup section of the welcome screen. Setup will extract some temporary files and then begin the installation routine. Read and accept the license terms if you agree with them and wish to proceed.

I recommend that you participate in the customer experience program if you wish to see Microsoft improve their software for you and all other users.


Complete the User registration details according to your corporate standards.

Complete the prerequisites check and, if passed, click on Next.

Select where to install the software binaries.

As I don’t have a separate SQL server in my lab I chose to install SQL Express locally on my server.

I created a new folder called “Library” and changed the path for the library share to use that new location. In the normal course of events I would usually put this on a drive other than C to allow for growth.

While it is best practice to change the port numbers used (firstly for security, and secondly because you have to uninstall and reinstall SCVMM if you want to change the ports later), I have left them at their defaults for my lab. Similarly, it is a security best practice to leave the service account as the system account. A domain account should be used if SCVMM is being installed in a clustered environment where SCVMM is itself clustered.

At the summary of settings page click on Install to proceed.

The software and pre-requisite software will then be installed.

Finally, once the installation has completed select Close to check for any SCVMM updates.

This will have installed the SCVMM server. Next, we need to install the Administrator Console. After any required patching, reboot the server, start setup from the CD once more and select VMM Administrator Console in the Setup section. Once again temporary files will be extracted and the installation process will begin. As before, we first read and accept the license agreement if we want to proceed.

There is no need to join the Customer Experience Improvement Program, as this screen will pick up the choice made when installing the SCVMM server (the choice is available if you are installing the Administrator Console onto an administrator’s workstation).

Complete the prerequisites check and, if passed, click on Next.

Select the installation location and click on Next.

Next, we assign the port that we want the console to use to communicate with the SCVMM server. This is the port that you assigned when installing SCVMM Server above; the port setting for the VMM Administrator Console must match the port assigned during the installation of the VMM server or communication will not occur.

On the Summary of Settings page, if all settings are fine then click on Install.

The installation will then proceed.

Once again, click on Close and check for any updates to the software.

Once any updates have been installed and the server has been rebooted we can proceed to install the optional VMM Self-Service Portal. The Self-Service Portal allows identified users to create and manage virtual machines within a Hyper-V or VMware environment where SCVMM is managing the hosts. To begin the install simply click on the VMM Self-Service Portal link under the Setup section of the welcome page. Once again temporary files will be extracted and the installation process will begin. As before, we first read and accept the license agreement if we want to proceed.

Complete the prerequisites check and, if passed, click on Next (remember, IIS must have been installed to install this service).

We can then choose where to install the application binaries. Here, I have chosen the default location for my lab. In a production environment I would move these to a drive other than C.

Next we tell the installation which port we would like users to connect to the self-service portal over. Generally this is port 80, but if another web site is being hosted on the server then we can either select a different port or, more usually, set a different host / web address to be used by the solution by way of host headers. If port 80 is already in use (by the default web site, for example) then setup displays an error message.

I’ve used the hostname selfservice and registered this in my DNS servers as a host (A) record to enable clients to find the site. Additionally, we once again have to connect to our SCVMM server and need to enter the port number chosen earlier for connections. We can then click on Next to move to the next screen.
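Incidentally, before running this part of setup it can save a retry to confirm that the chosen port is actually free and that the host record resolves. A quick Python sketch; the hostname and port are the lab values used here, so adjust to taste:

# Quick pre-flight check for the self-service portal: does the chosen hostname
# resolve in DNS, and is the chosen TCP port still free on this server?
# Hostname and port below match the lab values in the text; adjust as needed.

import socket

HOSTNAME = "selfservice"   # the host (A) record used in this lab
PORT = 80                  # the port chosen for the portal

def hostname_resolves(name):
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

def port_is_free(port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        try:
            sock.bind(("", port))
            return True
        except OSError:
            return False   # something (the Default Web Site, perhaps) already owns it

print("DNS lookup:", hostname_resolves(HOSTNAME) or "no A record found")
print(f"Port {PORT}:", "free" if port_is_free(PORT) else "already in use")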

On the Summary page we can now select Install if we are happy with all of our settings.

Once again we click on Close and check for any updates to the software.

Once the installation is complete we can once again check for any updates and reboot the server to ensure that all services start cleanly (checking the event log for any issues on startup).

Once restarted you can take time if necessary to harden your self-service portal environment by deploying SSL (to encrypt traffic), using integrated logon (to prevent users having to enter passwords) and disabling unwanted ISAPI filters. The full guide on recommended hardening measures can be found at http://go.microsoft.com/fwlink/?LinkId=123617.

If you have followed these steps you should now have a fully functional SCVMM server which can be connected to your Hyper-V or VMware servers. Connecting to Hyper-V couldn’t be simpler. When you add a virtual machine host or library server that is in an Active Directory domain, SCVMM remotely installs an SCVMM agent on the Hyper-V host. The SCVMM agent deployment process uses the Server Message Block (SMB) ports, the Remote Procedure Call (RPC) port (TCP 135) and the DCOM port range. You can use either SMB packet signing or IPsec to help secure the agent deployment process. You can also install SCVMM agents locally on hosts, discover them in the SCVMM Administrator Console, and then control the host using only the WinRM port (default port 80) and BITS port (default port 443). Even though we do not need to install the Local Agent manually, as all of our servers reside in a domain, I run through the procedure for installation below. First we insert the SCVMM disc into our Hyper-V server (or map a drive to it), start the setup routine and then click on Local Agent under Setup. The installation of the Local Agent will then begin.
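If agent deployment fails, a quick way to rule out basic connectivity problems is to test the relevant ports from the SCVMM server to the host. The Python sketch below uses a placeholder host name, and the port list reflects the defaults described above (SMB on 445, the RPC endpoint mapper on 135, WinRM on 80 and BITS on 443):

# Minimal reachability check from the SCVMM server to a Hyper-V host for the
# ports discussed above. The host name is a placeholder; the port list reflects
# the defaults mentioned in the text.

import socket

HOST = "hyperv01.example.local"   # placeholder Hyper-V host name
PORTS = {
    445: "SMB",
    135: "RPC endpoint mapper",
    80: "WinRM (default)",
    443: "BITS (default)",
}

def tcp_reachable(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, label in PORTS.items():
    state = "open" if tcp_reachable(HOST, port) else "unreachable"
    print(f"{label:<22} TCP {port}: {state}")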

Accept the terms of the agreement to continue.

Select the installation path.

Change the ports the Hyper-V server will use to connect to the SCVMM server to those set earlier when SCVMM was installed.

Our server is not sitting in a DMZ – if it were we could encrypt traffic between the Hyper-V server and SCVMM.

You can then continue to install the Agent.

Click on Finish when completed.

Next we need to start the SCVMM Admin console on the SCVMM server by double clicking the link created on your desktop or by using the link in the Start menu.

From the Outlook-like interface we can select the Hosts section and from there we can create a new host group if we have a number of physical Hyper-V or VMware hosts we would like to control. For our purposes we’ll just use the All Hosts group. On the right-hand side (the Actions column) we can select Add Host.

In my lab the Hyper-V server is part of my domain as it would have to be if we were running a Hyper-V cluster and so we select the first choice and enter the domain administrator credentials to allow SCVMM access to the Hyper-V host.

Next, type in the name of the physical server running Hyper-V or browse for it in Active Directory. Note: Hyper-V does not need to be installed on the host at this point – if it is not then SCVMM will install and activate the role on the target server and reboot it.

Add the host machine to a host group.

Add a default path where virtual machines should be created on this host. When you add a stand-alone Windows Server-based virtual machine host to Virtual Machine Manager (SCVMM), as we are doing here, you can add one or more virtual machine default paths, which are paths to folders where SCVMM can store the files for virtual machines deployed on the host. For a Hyper-V or VMware cluster, however, the default path is a shared volume on the cluster that SCVMM automatically creates when you add the host cluster; when you are adding a host cluster you cannot specify additional default paths in the Add Host Wizard.
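When choosing that default path it is also worth confirming that the volume has enough headroom. A small Python sketch; the path and the 100 GB threshold are just examples:

# Check free space on a candidate default virtual machine path before adding
# the host. The path and threshold below are only examples.

import shutil

DEFAULT_VM_PATH = r"D:\VirtualMachines"   # hypothetical default path

usage = shutil.disk_usage(DEFAULT_VM_PATH)
free_gb = usage.free / (1024 ** 3)
total_gb = usage.total / (1024 ** 3)
print(f"{DEFAULT_VM_PATH}: {free_gb:.1f} GB free of {total_gb:.1f} GB")
if free_gb < 100:   # arbitrary threshold for this example
    print("Consider a different volume or expanding storage before adding the host.")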

We then get asked to confirm the settings and can select to add our host.

Once added, a job will auto-run to add the host, followed by a further series of jobs to bring any already configured virtual machines running on that host into SCVMM.

You should now be able to control your Hyper-V host using SCVMM and configure the self-service portal for your users.