How to Deploy Microsoft Forefront Unified Access Gateway (UAG) in a test or lab environment

This article walks through deploying the DirectAccess feature of Microsoft Forefront Unified Access Gateway (UAG) in a lab environment. My lab includes a domain controller (running Windows 2008 R2 Enterprise Edition), a file server running Windows 2003 R2 Enterprise Edition, an external DNS server running Windows 2003 R2 Enterprise Edition, a UAG server running Windows 2008 R2 Enterprise Edition and a machine to act as a client running Windows 7 Ultimate. All of the machines are virtualised. Below is a diagram of the network and a table showing machine names, roles and IP addresses.


Server / Workstation | Purpose / Role | O/S | IP Address
DC | Domain Controller, Certificate Authority, internal DNS | Windows 2008 R2 Enterprise |
FS | Internal file server to test DirectAccess functionality; hosts the NLS service | Windows 2003 R2 Enterprise Edition |
UAG | UAG server | Windows 2008 R2 Enterprise | internal / external
DNS | External / public DNS server (set up as a domain); "public" certificate CA | Windows 2003 R2 Enterprise Edition |
TEST-PC | Client to test transparent access when on the internal or external network using DirectAccess | Windows 7 Ultimate | internal / external

As can be seen from the above, the UAG server requires two network cards. The test PC requires only one, but its IP address will change depending on whether it is internal or external to the network. Modern operating systems assign link-local IPv6 addresses but, other than that, IPv6 addresses are not used; instead the UAG server performs 6to4 translation to allow DirectAccess to function. Moreover, as this is a lab it is not connected to the true internet. Instead we use one subnet as the internal network and another subnet as the external network, emulating the internet.
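The 6to4 translation mentioned above derives an IPv6 prefix mechanically from a public IPv4 address: the well-known 2002::/16 prefix followed by the 32 bits of the IPv4 address. A minimal sketch in Python, for illustration only (the function name is hypothetical):

```python
import ipaddress

def sixto4_prefix(public_ipv4: str) -> ipaddress.IPv6Network:
    """Derive the 2002::/48 6to4 prefix embedding a public IPv4 address."""
    v4 = ipaddress.IPv4Address(public_ipv4)
    # 6to4 prefix = 2002::/16 with the IPv4 address in bits 16-47
    prefix_int = (0x2002 << 112) | (int(v4) << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

# e.g. documentation address 192.0.2.1 -> 2002:c000:201::/48
print(sixto4_prefix("192.0.2.1"))
```

This is why the external interface must carry a genuinely public IPv4 address: the derived prefix only routes if the embedded address does.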

The first step in building the lab is to install all of the operating systems. Then fix the IP address on the server that will be the domain controller and promote it to be the first domain controller for the domain. This will also install and configure a basic DNS. We add a DNS record to this server for the host name ISATAP with the IP address of the internal interface of our UAG server.

We then add a record for the host name NLS with the IP address on our file server that will host the Network Location Server. This is simply a web site: when clients can connect to it they know they are internal to the network; if they cannot, they assume they are external and try to tunnel through the UAG server. In production this service should be made highly available through Network Load Balancing in case a server fails or needs to be rebooted. If you have configured reverse lookups then you may create an associated PTR record.
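The client-side decision is as simple as it sounds: probe the NLS URL, and fall back to tunnelling if the probe fails. A sketch of that logic (hypothetical function name, not the actual Windows implementation):

```python
def network_location(nls_reachable: bool) -> str:
    """Decide where the client thinks it is, based on an NLS probe.

    nls_reachable should be the result of an HTTPS GET against the NLS
    URL, with certificate validation, from the client machine.
    """
    # Reaching the NLS means we are on the corporate network:
    # use direct connectivity and skip the DirectAccess tunnel.
    if nls_reachable:
        return "inside"
    # Otherwise assume we are on the internet and tunnel via UAG.
    return "outside: tunnel via UAG (6to4 / Teredo / IP-HTTPS)"

print(network_location(True))   # inside
print(network_location(False))
```

This is also why the NLS host name must later be excluded from the DirectAccess tunnel: if external clients could reach it through the tunnel, they would wrongly conclude they were inside the network.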

Next, we install Certificate Services using Server Manager. In “Add Roles” select “Active Directory Certificate Services”.

All of the default settings are accepted. In a production environment different settings may well be selected; in particular, the enterprise root CA would typically be kept on a separate server. As this is a lab environment, and to simplify the installation, an Enterprise Certificate Authority installed directly on a domain controller offers the greatest ease of use and deployment.

Once the Certificate Authority has been installed we configure Active Directory to automatically issue computer certificates.

To configure computer certificate auto-enrolment

On a domain controller, click Start, type gpmc.msc, and then press ENTER.

Expand Group Policy Objects and click on the Default Domain Policy. Right click and select Edit.

In the console tree of the Group Policy Management Editor, open Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies.

In the details pane, right-click Automatic Certificate Request Settings, point to New, and then click Automatic Certificate Request.

In the Automatic Certificate Request Wizard, click Next.

On the Certificate Template page, click Computer, click Next, and then click Finish.

The client can now be added to Active Directory in the usual way. Once it has been rebooted, log on as the local administrator and open an MMC console: click on Start, type mmc and press Return.

Select File | Add/Remove Snap-in… to open the dialog. Select Certificates and click on Add.

Select Computer Account.

Select Local Computer and click on Finish.

Expand out the “Trusted Root Certificate Authorities | Certificates” node and check that the name of the internal certificate authority configured earlier appears in the list.

Expand the “Personal | Certificates” node and check that a certificate exists with the client computer name on it.

If these two tests pass then your domain is configured to automatically issue certificates to any computers that are members of the domain.

We can now add our UAG server to the domain in the usual manner. If you are going to move the server to its own OU you should do so as soon as it has been added to AD. Once the server has been rebooted ensure that you log on with a domain administrative account.

We next take our machine "dns", our external DNS server, and promote it to a new domain, which will automatically install DNS on it. We also install an Enterprise Certificate Authority on the server to serve as a "public" certificate authority whose CRL can be checked online when a client is external to the LAN – this will allow us to issue a certificate to the public interface of the UAG whose CRL can be checked by our client. Add the IIS Application Server role to the DNS server first and then add the Certificate Services role in the “Add / Remove Windows Components” area of “Add / Remove Programs”. We then patch the server to allow Windows 2008 R2 and Windows 7 to receive certificates from the server.

Now, set the IIS site on the external DNS server to be protected by SSL. Open “Internet Information Services (IIS) Manager”, browse to the Default Web Site, right click and select Properties. On the Directory Security tab click on “Server Certificate”.

After clicking on Next select to assign an existing certificate.

Assign the certificate bearing the name of the server ( in my case).

Select to use port 443 to protect the site. The CA will now be able to issue certificates through a web interface. Next we need to update the Web Server certificate template to allow any certificates issued to be exported, including their private keys. In the Run box enter mmc.exe and then click on File | Add / Remove Snap-In…. Click on Add and then add the Certificate Templates snap-in.

Select Web Server and choose to duplicate the template.

Name the certificate template UAG

On the Request Handling tab check “Allow private key to be exported”.

After accepting these values close the interface and open up the Certificate Authority interface from the Administrative Tools menu. Right click Certificate Templates and select New | Certificate Template to Issue

Select the UAG template just created.

This will allow us to issue certificates which can be exported later on.

Next, we add the file server "fs" to our internal domain. Log on to the server as a domain administrator and configure the network cards as below. A separate card is used for the NLS service to ensure that the IP address for NLS is not registered in DNS against the server's true name.

First, rename the network connections to identify their function. This is not a required step but makes understanding which link is which much easier.

The card assigned for the server has standard settings.

The card that will be used for the network location service has the following settings.

Create a share called “Share” to test access to internal file based resources. Make the share available with “Full Control” permissions at the share level solely to a user called "testuser" (a new user account will need to be created). Change “Everyone” to have “Modify” rights at the NTFS level.

Log on to the “test-pc” client using the “testuser” account and check access to the share by creating and deleting files. Map drive H: from the client to the share using the Fully Qualified Domain Name (FQDN) of the server.

Log back on to the FS server and install IIS to act as the Network Location Server. Create an SSL certificate for use by the NLS service. The walkthrough here is for servers running Windows 2003 R2 – the process is very different in 2008 and 2008 R2 but is documented elsewhere.

Access the IIS Manager.


Right click the Web Sites node and select to create a new web site.

Name the site NLS

Select to use the IP address (and hence network card) assigned to the NLS service.

Browse to create the site placeholder in an appropriate location. In a production environment it is not recommended to hold any files for web sites on the operating system partition.

Accept the default permissions and click on Finish to complete the creation of the site. Open the folder created and select File | New | Text Document to create a blank document. Open the document and enter some text such as "success!!!". Select the document and click on File | Rename to rename it to default.htm.

We can now “secure” the site using SSL. Right click the site created and choose Properties. Select the Directory Security tab and click on Server Certificate. Continue through the wizard and select to Create a new certificate. Choose to send the request immediately to an online certificate authority.

Enter a name for the certificate.

Enter appropriate details for organization and organisational unit.

At the Common Name screen enter the fully qualified domain name for the Network Location Service – this is the host name created earlier and entered onto the internal DNS server.

Enter appropriate values for the State and City.

Configure the site to use the default port of 443 for SSL.

Lastly, submit the request to our internal Certificate Authority.

Complete the wizard and the certificate will be automatically installed and associated with the NLS site.

Now test the site from the domain controller. Open up a browser and visit the site at its URL. The web page created previously should be displayed.

We can now begin to install UAG in earnest. As stated, this guide walks through an installation in a test or lab environment and doesn’t necessarily follow best practice for a full lockdown of UAG. At present we have our domain controller installed for internal use, a file server to act as an internal resource and Network Location Server for UAG, a client to access resources when internal or external to our network, an additional domain controller to provide DNS and Certificate Services as though it were a public certificate authority, and finally our UAG server, which has had its operating system installed and has been added to our domain.

The next task then is to configure networking on our UAG server after which we can install and configure the UAG software solution itself. Firstly, we configure the network connections. I suggest that you should name each of the network interfaces something along the lines of “Internal” and “External” or “DMZ” and “Dirty” to distinguish between the interfaces. In a production deployment the interfaces should sit in a DMZ. The external interfaces are protected by TMG technologies which are automatically installed when you install UAG.

The internal network card should be configured as below.

Note the lack of a gateway on the internal interface – this means that the UAG will only be able to access hosts on the same internal subnet as the one on which it sits. This is not an issue in our lab environment but will almost certainly be an issue in a production environment. To overcome this, appropriate static routes should be added to its routing table using the route add command. This is run from within an elevated command prompt (right click cmd.exe and select “Run As Administrator”) and takes the form:

route add <destination network> mask <subnet mask> <gateway> metric 1 -p

An example entry (with hypothetical addresses) would be:

route add 10.0.0.0 mask 255.0.0.0 192.168.1.254 metric 1 -p

The -p switch makes the route persistent so that it will still apply even if we reboot the server.

Note also that we add the internal domain name to the DNS suffix to be used for this connection. Finally, note that IPV6 is still checked in the properties of the internal connection.

We can now configure the external interface. DirectAccess requires that the external interface has 2 consecutive public IPv4 addresses. Even though we are in a lab environment we still need to enter true public addresses, as the UAG configuration wizard will check that genuinely public addresses (as opposed to private addresses) are being used. The external interface should therefore be configured as below.

Note that File and Printer Sharing and Client for Microsoft Networks have been unchecked. Uncheck Register the connection’s addresses in DNS. Disable “Enable LMHosts Lookup” and select “Disable NetBios over TCP/IP“.
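The wizard's check on the external addresses can be mimicked to validate a candidate pair before you commit to it: the two addresses must be consecutive and neither may be private. A sketch in Python (hypothetical helper name, using the standard library's notion of private ranges):

```python
import ipaddress

def valid_directaccess_pair(ip1: str, ip2: str) -> bool:
    """Check two external addresses are consecutive and both public.

    Mirrors the spirit of the UAG wizard's check that genuinely public
    (non-private) IPv4 addresses are used on the external interface.
    """
    a, b = ipaddress.IPv4Address(ip1), ipaddress.IPv4Address(ip2)
    consecutive = int(b) - int(a) == 1
    public = not a.is_private and not b.is_private
    return consecutive and public

print(valid_directaccess_pair("198.41.0.4", "198.41.0.5"))    # True
print(valid_directaccess_pair("192.168.1.1", "192.168.1.2"))  # False: private
```

Note that Python's is_private also flags documentation ranges such as 192.0.2.0/24, so use real routable addresses when testing.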

Next, to reduce the risk of timeouts, we configure the UAG server to try the internal network first when sending traffic. To do this, in the Network Connections folder, press the Alt key to expose the additional menus, then select Advanced and Advanced Settings.

From there, select the Internal network and move it to the top of the list.

Now that we have the network cards configured we can install the UAG software itself. This is a fairly trivial task. Simply insert the DVD and follow the prompts.

NOTE: If, from this point on, you have issues accessing resources such as the https version of the external certificate authority then you should note that the UAG server is being protected by Forefront Threat Management Gateway and you should make appropriate entries to allow the UAG server to access resources and then remove them once more as appropriate.

As this is a lab you will get a warning that states that the server contains less than the 4GB of recommended RAM – this can be ignored by clicking Continue in a lab environment.

Once rebooted you should patch to the latest level – at the time of writing this is Update 2 for UAG 2010, Service Pack 1 for TMG 2010 and Update 1 for SP1 for TMG 2010 (make sure you download the 64 bit versions as we will be installing on Windows 2008 R2).

Now that we have UAG installed we need to create an external DNS record (an A record) for users to connect to UAG if they want to use IP-HTTPS as their connection method. So we set up a zone on our external DNS server (DNS). In that zone we create a new A record and point it at the FIRST IP address on the external NIC of our UAG server.

Next, we need to update our internal DNS server to reply to ISATAP (Intra-Site Automatic Tunnel Addressing Protocol – a transition mechanism for IPv6) requests. First we inspect the current block list by running the command dnscmd /info /globalqueryblocklist from an elevated command prompt on our internal DNS server (DC).

By default both ISATAP and WPAD requests are blocked. To unblock items we have to feed through the whole blocklist once again – i.e. we replace it with a new list. We do this by issuing the command dnscmd /config /globalqueryblocklist wpad, which replaces the list with one containing only wpad, leaving ISATAP unblocked. If we need to keep more than one name on the list we simply separate them with spaces. Rerunning the query then confirms that the blocklist has been set correctly.
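Because the block list is replaced wholesale rather than edited in place, it is worth computing the replacement list before running the command. A sketch of these semantics in Python (for illustration; the actual change is made with dnscmd):

```python
def rebuild_blocklist(current, unblock):
    """Return the names to keep blocked plus the dnscmd command to apply.

    dnscmd /config /globalqueryblocklist replaces the entire list, so
    every name you still want blocked must be supplied again.
    """
    dropped = {name.lower() for name in unblock}
    keep = [name for name in current if name.lower() not in dropped]
    command = "dnscmd /config /globalqueryblocklist " + " ".join(keep)
    return keep, command

# Default list blocks both isatap and wpad; unblock isatap only.
keep, cmd = rebuild_blocklist(["isatap", "wpad"], unblock=["isatap"])
print(keep)  # ['wpad']
print(cmd)   # dnscmd /config /globalqueryblocklist wpad
```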

UAG allows computers to access the internal network using DirectAccess by applying policies to a group of computers through a GPO. Users' home computers and those in internet cafés will not be able to access the internal network by way of DirectAccess; only domain members will, and then only those that have had the appropriate policy applied. The UAG wizard therefore asks for the group to which the Group Policy should be scoped, so we now set up a global group in Active Directory called “DirectAccess Allowed”.

The computer account of our test computer (test-pc) is then added to this group. Do remember to click the “Object Types” button in order to change the search scope to include computers.

Next, we will want to produce a certificate to protect the outside edge of our UAG server. The common name MUST match the DNS name entered previously for users to use DirectAccess via IP-HTTPS. The first thing to do is add the root certificate of our “external” DNS server (dns) to our UAG server and to our client. To do this open a browser and browse to the certificate services web site on that server, changing the address for whatever names or IP address you have used in your deployment. You will be asked to log on to the site; simply enter the administrative credentials for the external domain.

Select “Download a CA certificate, certificate chain or CRL“.

Click on “Download CA certificate

Click on “Open” to install the certificate.

Select to “Install Certificate …

Select to place the certificate in the Trusted Root Certification Authorities

Now that the certificate issuer is trusted we can request a new certificate from the external CA. From the UAG server browse once again to the certificate services web site, select “Request a Certificate”, then “Advanced Certificate Request” and “Create and Submit a Request to this CA”.

Select to create a certificate using the UAG template created earlier and enter the FQDN that users will use to connect to direct access (configured in DNS previously) as the name for the certificate.

Scroll down and click on Submit. Select to Install this Certificate.

This will install the certificate, but into the user's certificate store. We now need to export that certificate, including its private key, from the user's certificate store and import it into the computer certificate store. To do this click on Start, enter mmc.exe in the search box and press Return. When the MMC console opens click on File | Add / Remove Snap-In… and select to add the Certificates snap-in. When asked, select “My User Account”.

Expand out the personal certificate store, right click on the certificate issued and click on Export.

Select to export the private key. This is a very important step and without it the certificate will not function when we assign it to the external interface in UAG later in the procedure.

Accept the defaults.

Secure the exported file with a password.

Save the exported certificate to a known location.

Close the MMC console and start a fresh MMC session. Add the Certificates snap-in once more, but this time select to add it for the local computer.

Expand the Personal node, right click on the Certificates folder and select All Tasks | Import….

Browse to where the exported certificate was saved. You will need to change the file type selection box to Personal Information Exchange to view the exported certificate.

After clicking on Open and advancing the dialog, enter the password used to encrypt the file and, if desired, check the option to mark the certificate as exportable.

Leave the certificate location at its default of Personal. This is the default as this is where we started the import process from.

The certificate to be used to secure IP-HTTPS traffic on the UAG external interface has now been imported to the computer store.

Now we can start the UAG GUI on the UAG server.

The Wizard will then walk us through configuring UAG for the first time.

Click in the interface to tell UAG which of your network cards should be assigned the internal and external roles. As mine are labeled “Internal” and “External” their allocation is straightforward.

Internal IP address ranges can then be added.

In a production environment, depending on your network configuration you may want to add all private network addresses as being accessible through the private interface with appropriate routing put in place.

Next we define our server topology.

As our lab consists of a single server and not an array of UAG servers we can simply accept the default setting.

In the normal course of events we can run Windows Update to apply the latest patches.

This will then show all of the basic setup choices as having been completed.

Upon closing the dialog we can set a password to protect the server's backup configuration and activate our configuration.

We can now, finally, move to configuring the UAG server itself for DirectAccess.

Click on the “Configure” button in the Clients section and add the Global Group provisioned earlier.

We can then click on “Configure” under the DirectAccess Server node. Select the appropriate IP addresses for both sides of the UAG.

Leave all methods of access enabled.

Select the root certificate of our internal Certificate Authority by clicking on Browse for the first certificate choice. Select the certificate used to secure our external interface by clicking on Browse for the second choice.

Click on Finish.

We can now click on Configure under the Infrastructure Servers section. This allows us to state which servers should be contactable by external clients before they have logged on (domain controllers, WSUS servers, anti-virus update points etc.) and which servers should not be contacted by clients (the NLS server, for example).

Enter the URL of the host name configured for the NLS service.

The next screen allows and denies specific servers from being accessed via the DirectAccess Tunnel.

Double click to add any patterns or specific machine names to be excluded. For example, it is likely that you will want to exclude the host name for Outlook Web Access, as this should be accessed over the internet rather than through the tunnel. Similarly you may want to exclude any host names used by OCS.

Once all required additions and exclusions have been added to the list click on Next. The next screen is where we add servers that the external user machine will need to know about before the user has logged on. At a minimum this will include domain controllers (to allow log on and apply group policy) but may also include A/V and WSUS update servers.

To add additional servers merely click on the appropriate node and select Add Server. (To add to Others right click Others and add a group first. Servers may not be added directly to Others but can be added to groups created under that node).

Click on Finish to complete this step. Now we can configure encryption through to application servers. UAG functions much like a traditional VPN with the VPN tunnel terminating on the external interface of the UAG server. Encryption may be maintained all the way through to specific internal end points (servers). For this lab we will terminate the tunnel directly on the UAG. Click on Configure under the Application Servers node.

Accept the default and click on Finish.

We now have a configuration planned for our UAG. As with most edge devices the update is not automatic, to allow for complex changes to be made without compromising security during the configuration stages. We now have to generate our UAG policies and activate them. To do this we click on the Generate Policies button at the bottom right of the UAG interface.

A dialog will open detailing our configuration. Click on Apply Now to complete the generation of the policies.

Click on OK and Finish when the script has executed. This step will have created the group policies in our domain at the root level and scoped them to the client group specified earlier (for client settings) and to the UAG server (for DirectAccess server settings).

In a production environment Active Directory should now be synchronised to ensure that all domain controllers are aware of the new policies. Even though the policies have been applied to client machines the configuration still needs to be applied to the UAG server. Once AD has been updated apply the policy to the UAG server either by rebooting or running the GPUPDATE /FORCE command from an elevated command prompt.

In the UAG interface click on File | Activate to activate the configuration on the UAG.

The UAG will back up its configuration – click on Activate to continue. The policy will then be applied and after some minutes will complete.

Once the policy has been applied we need to run GPUPDATE /FORCE on our test client while it is connected to the internal network or reboot it whilst connected in order to force the new group policies to be applied.

The client can then have its IP address changed and be placed on the external network so that DirectAccess can be tested. Reboot the client and log on with the "testuser" account. Open a command prompt and type set logonserver. The name of the server used for logon is returned; this should be the internal domain controller.

Ping the internal domain controller using its Fully Qualified Domain Name. You should receive replies from an IPv6 address.

Similarly, ping the internal file server. Even though it is running Windows 2003 R2, which has no usable IPv6 stack here, you receive a reply from an IPv6 address provided by the UAG.
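The IPv6 reply for the IPv4-only file server works because UAG performs DNS64/NAT64, synthesising an IPv6 address that embeds the server's IPv4 address in the low 32 bits of a /96 prefix. A sketch in Python using the well-known 64:ff9b::/96 prefix purely for illustration (UAG generates its own organisation-specific prefix):

```python
import ipaddress

def synthesise_nat64(ipv4: str, prefix: str = "64:ff9b::") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix."""
    base = int(ipaddress.IPv6Address(prefix))
    return ipaddress.IPv6Address(base | int(ipaddress.IPv4Address(ipv4)))

# e.g. an IPv4-only server at documentation address 192.0.2.33
print(synthesise_nat64("192.0.2.33"))  # 64:ff9b::c000:221
```

Packets the client sends to the synthesised address are translated back to IPv4 by the UAG before reaching the file server, which is why the reply appears to come from the UAG's IPv6 side.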

Now, open the mapped drive on the client machine (mapped to the server's FQDN). You should be able to open the file on the server and update it.

Save the file with its changes and access the same file from the file server console. Note that the time stamp has been updated for the file as well as the file contents from a computer external to the organisation with no need to establish a VPN tunnel.

You now have a functional DirectAccess solution in your lab environment to test and configure how you wish. If you have enough virtual resources then you can try adding other services internally, or if you want to make the solution more highly available there is an excellent guide to provisioning a farm of UAG servers. To follow it you will have to redeploy the UAG server, selecting to add the first node as an array member, but the article is fairly self-explanatory.

Configuring an internal Certificate Authority for lab environments

Sometimes people write really excellent articles on the web. This is one of those occasions where an article needs nothing adding to it. If you set up labs to learn new technologies, study for exams or just to pre-flight technologies before you put them live, and struggle to get certificates working “inside” and “outside” of your lab environment, the article walks you through publishing CRLs (to an “external” server, for example) or even turning off revocation checking so that it is no longer an issue (only advisable in lab environments).

Service Pack 1 announced for Windows 2008 R2

Great news. Microsoft have started to release news about SP1 for Windows 2008 R2. Still slated for release in Q4, there are two major announcements for anyone interested in virtualisation. The first is RemoteFX, which essentially supercharges the video experience for end users of Remote Desktop Services. So powerful is this that, for once, Citrix will be licensing the Microsoft solution for graphics acceleration rather than the other way round. Read more about it here.

The other big announcement is dynamic memory allocation in Hyper-V. You can read about that here. VMware's “killer” feature has always been memory overcommit. Essentially it just pages unused memory to the hard drive, so in highly virtualised environments where VMs need to use their RAM this can lead to excessive paging and poorly performing infrastructures. However, it is still the number 1 reason why people choose VMware over other virtualisation vendors, so even though, in my opinion, it is not as great as it is cracked up to be, if you want to do virtualisation then you have to offer this functionality. The good news is that that's one less reason to spend a fortune on VMware if you are on a budget.

How do I use the Windows 2008 R2 Recycle Bin feature ?

New in Windows 2008 R2 Active Directory is the concept of Active Directory Optional Features, and the first of these to be made available is the Recycle Bin feature. Ever since Active Directory was launched you have been able to recover individual deleted items by undertaking an authoritative restore of sections of the database, even down to an individual object. From 2003 onwards deleted objects have been tombstoned and you have been able to use the ADRestore tool to recover them. However, the issue with these methods has always been back links or, to put it another way, restoring these items with any group membership they had. Yes, it has been possible to do that with multiple authoritative restores of the database, but that is at best tiresome and at worst can be dangerous. What the Recycle Bin feature does for you is restore objects with these back links / group memberships in place.

However, to use this feature the first thing you need to do is have your Forest at the Windows 2008 R2 level. Whilst your schema may be at the R2 level (meaning your forest can play host to 2008 R2 Domain Controllers) your domains and forest may still be running Domain Controllers with previous operating systems such as 2008 RTM or 2003 R2. The easy way to check your domain level in Windows 2008 R2 is to start the new Active Directory Administrative Centre. If you select the domain node on the left hand side (the netbios name of my domain is philipflint) then you will be able to check and raise the domain / forest functional levels in the action pane on the right hand side.





If your forest level is not at Windows 2008 R2 you can raise it.




We can now install the Recycle Bin feature. Care should be taken before undertaking the next procedure: enabling the Recycle Bin feature for a domain / forest is a one-way process with no way back. In a typical environment the Recycle Bin feature will grow the Active Directory database by 10 – 20%, which may have an effect on performance, especially in larger environments with many thousands of users where servers have been sized to run the complete database in RAM.

You should also note that, even though the Recycle Bin is an optional feature, it cannot be added as a Role Service nor as a Feature.




Instead the feature is enabled by running a command in PowerShell. PowerShell is installed by default on Windows 2008 R2 servers. However, PowerShell itself has no knowledge of Active Directory; we need to load the module that gives PowerShell the cmdlets to connect to and control Active Directory. There are two ways to do this. The first, and simplest, is to click on Start | All Programs | Administrative Tools | Active Directory Module for Windows PowerShell.




The alternative is to start PowerShell from the taskbar and then run the command below to import the Active Directory module.


Import-Module ActiveDirectory





We can now enable the Recycle Bin Feature. Below is a piece of code that you can change to use in your environment.

Enable-ADOptionalFeature -Identity 'CN=Recycle Bin Feature,CN=Optional Features,CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=YourDomain,DC=ComOrNetOrLocal' -Scope ForestOrConfigurationSet -Target 'YourDomain.ComOrNetOrLocal' -confirm:$false

You have to change three pieces of information: the DC= components of the Identity DN and the Target domain name. If you have a multi-tier domain name (such as philipflint.co.uk) then you will have to add another DC= section. An example is given below for the domain philipflint.co.uk.

Enable-ADOptionalFeature -Identity 'CN=Recycle Bin Feature,CN=Optional Features,CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=philipflint,DC=co,DC=uk' -Scope ForestOrConfigurationSet -Target 'philipflint.co.uk' -confirm:$false

After amending the domain name variables as appropriate, this command can simply be cut and pasted into the PowerShell window.
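Since every label of the DNS name becomes one DC= component, the whole command can be generated mechanically from the domain name. A sketch in Python (hypothetical helper, shown for a forest-scope enable):

```python
def recycle_bin_command(dns_domain: str) -> str:
    """Build the Enable-ADOptionalFeature command for a given domain.

    Each label of the DNS name becomes one DC= component, which is why
    a two-tier name such as philipflint.co.uk needs an extra DC= section.
    """
    dc = ",".join("DC=" + label for label in dns_domain.split("."))
    identity = ("CN=Recycle Bin Feature,CN=Optional Features,"
                "CN=Directory Service,CN=Windows NT,CN=Services,"
                "CN=Configuration," + dc)
    return ("Enable-ADOptionalFeature -Identity '" + identity +
            "' -Scope ForestOrConfigurationSet -Target '" + dns_domain +
            "' -confirm:$false")

print(recycle_bin_command("philipflint.co.uk"))
```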


Click to Enlarge


I was not given a chance to back out of the addition of the feature because I used the PowerShell switch -Confirm:$false, which suppresses the confirmation prompt. If you do not include this switch then you will be asked to confirm the action.

NOTE: This command needs to be run for each domain in your forest for which the Recycle Bin should be installed.
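Once the command has completed you can check that the feature really is enabled by querying it from the same session. A minimal sketch, assuming the ActiveDirectory module is already loaded; the EnabledScopes property lists the partitions for which the feature has been enabled (it is empty if the Recycle Bin is not yet on):

```powershell
# Query the Recycle Bin optional feature and show where it is enabled
Get-ADOptionalFeature -Filter 'Name -like "Recycle Bin*"' |
    Select-Object Name, EnabledScopes
```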

After the change has replicated across the domain, the Recycle Bin will be active on all Domain Controllers and you can now test it out by creating test OUs and test users, deleting them and restoring them. I have created two users called 'William Shakespeare' and 'Enid Blyton' in an OU called 'Authors'.

They are both members of the Global group 'Famous' and the Domain Local group 'Published'.
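If you would rather script the creation of the same test objects than click through Active Directory Users and Computers, a sketch along these lines should work. The OU path and the domain components are assumptions; replace them to suit your own lab:

```powershell
# Create the test OU, two users and two groups used in this walkthrough
$domainDN = "DC=YourDomain,DC=ComOrNetOrLocal"   # assumption - change for your domain
New-ADOrganizationalUnit -Name "Authors" -Path $domainDN
$ouDN = "OU=Authors,$domainDN"

New-ADUser -Name "William Shakespeare" -SamAccountName "william.shakespeare" -Path $ouDN
New-ADUser -Name "Enid Blyton" -SamAccountName "enid.blyton" -Path $ouDN

New-ADGroup -Name "Famous" -GroupScope Global -Path $ouDN
New-ADGroup -Name "Published" -GroupScope DomainLocal -Path $ouDN

Add-ADGroupMember -Identity "Famous" -Members william.shakespeare, enid.blyton
Add-ADGroupMember -Identity "Published" -Members william.shakespeare, enid.blyton
```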


Click to Enlarge


We can now delete the William Shakespeare account.



Click to Enlarge
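The deletion can be done in Active Directory Users and Computers as shown above or, if you prefer, from the same PowerShell session. A one-line sketch; as earlier, -Confirm:$false suppresses the confirmation prompt:

```powershell
# Delete the test account without a confirmation prompt
Remove-ADUser -Identity "william.shakespeare" -Confirm:$false
```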


To restore a user that has been deleted, use the command below.

Get-ADObject -Filter {samAccountName -eq "UserLogonName"} -IncludeDeletedObjects | Restore-ADObject

As before, simply replace the UserLogonName placeholder with the logon name of the user you want to restore. I use the logon name as it's something you can ask the user for that they are likely to know, but if they don't know it ('It's always there, I just enter my password') then you can use another field which uniquely identifies them, their email address for example.

Get-ADObject -Filter {mail -eq "UsersEmailAddress"} -IncludeDeletedObjects | Restore-ADObject
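If the user cannot tell you anything unique about their account at all, you can also simply list everything currently sitting in the Deleted Objects container and pick the account out by eye. A sketch, using the same placeholder domain components as above:

```powershell
# List all deleted objects with their logon name and time of last change
Get-ADObject -SearchBase "CN=Deleted Objects,DC=YourDomain,DC=ComOrNetOrLocal" `
    -LDAPFilter "(isDeleted=TRUE)" -IncludeDeletedObjects `
    -Properties samAccountName, whenChanged |
    Select-Object Name, samAccountName, whenChanged
```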

To restore William's account we can just enter the following in the PowerShell window.

Get-ADObject -Filter {samAccountName -eq "william.shakespeare"} -IncludeDeletedObjects | Restore-ADObject


Click to Enlarge


The user account is now restored along with all group memberships.


Click to Enlarge

The group memberships are shown below.


Click to Enlarge


Now, of course, it's possible that a user may be deleted from an OU that has also been deleted. It is not possible to restore the user without first restoring the OU that contained them or, in extreme cases, the whole OU tree if multiple OUs have been deleted.


Unless your records are up to date there is a chance that you may not know what your exact OU structure was, so you need a method of finding out what the parent object of a deleted user was. The code to do this is below.

Get-ADObject -SearchBase "CN=Deleted Objects,DC=YourDomain,DC=ComOrNetOrLocal" -LDAPFilter "(msDS-LastKnownRDN=ObjectName)" -IncludeDeletedObjects -Properties lastKnownParent

For example, to find the last known parent of our deleted William Shakespeare account we would run the following.

Get-ADObject -SearchBase "CN=Deleted Objects,DC=philipflint,DC=co,DC=uk" -LDAPFilter "(msDS-LastKnownRDN=William Shakespeare)" -IncludeDeletedObjects -Properties lastKnownParent



Click to Enlarge


As can be seen from the output, the last known parent (i.e. the containing OU for this user) was the Authors OU directly under the domain node. Note that the Authors OU has not been deleted and so the user object may be directly restored. Below is a screenshot of the same command where the Authors OU has been deleted.



Click to Enlarge

In this case we can query the Authors OU for its last known parent, repeating the process until we find a containing object which has not been deleted.
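That walk up the tree can be scripted. The sketch below relies on the fact that the distinguished name of a deleted object contains the \0ADEL: marker, so we keep following lastKnownParent until we reach a DN without it; the starting object name is an assumption for this lab:

```powershell
# Start from the deleted object and walk up until we find a live ancestor
$dn = (Get-ADObject -LDAPFilter "(msDS-LastKnownRDN=William Shakespeare)" `
        -IncludeDeletedObjects -Properties lastKnownParent).lastKnownParent

while ($dn -match '0ADEL') {
    # The parent itself is deleted; look up its own last known parent
    $dn = (Get-ADObject -Identity $dn -IncludeDeletedObjects `
            -Properties lastKnownParent).lastKnownParent
}

$dn   # the first ancestor that still exists; restore from here downwards
```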

Once we know which object must be restored first we can begin the restoration process. Previously I gave you the code to restore a user. The command to restore an OU is slightly different, as shown below.

Get-ADObject -LDAPFilter "(msDS-LastKnownRDN=NameOfYourOU)" -IncludeDeletedObjects | Restore-ADObject

In our case we would therefore run the following three commands to restore the OU and the two deleted accounts (William Shakespeare and Enid Blyton).

Get-ADObject -LDAPFilter "(msDS-LastKnownRDN=Authors)" -IncludeDeletedObjects | Restore-ADObject

Get-ADObject -Filter {samAccountName -eq "william.shakespeare"} -IncludeDeletedObjects | Restore-ADObject

Get-ADObject -Filter {samAccountName -eq "enid.blyton"} -IncludeDeletedObjects | Restore-ADObject
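If an OU contained many users, restoring each account by hand quickly becomes tedious. As a sketch, once the OU itself is back you can restore every deleted object whose last known parent was that OU in a single pipeline; the distinguished names below are assumptions using the placeholder domain:

```powershell
# Restore the OU first, then everything that used to live directly inside it
Get-ADObject -LDAPFilter "(msDS-LastKnownRDN=Authors)" -IncludeDeletedObjects | Restore-ADObject

Get-ADObject -SearchBase "CN=Deleted Objects,DC=YourDomain,DC=ComOrNetOrLocal" `
    -Filter { lastKnownParent -eq "OU=Authors,DC=YourDomain,DC=ComOrNetOrLocal" } `
    -IncludeDeletedObjects | Restore-ADObject
```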


Click to Enlarge


Note that all objects are restored with the appropriate backlinks (such as group memberships) in place.


Click to Enlarge


I hope you have found this useful, can see why this is such a powerful feature of Windows 2008 R2, and that it gives you one more good reason to go for the upgrade.

What is the difference between a Role and a Feature?

Before Windows 2003, if you wanted to add functionality to a Windows server you would open "Add / Remove Programs" in Control Panel, then "Add / Remove Windows Components", and choose which components to install. You may or may not have chosen the right components for what you were trying to achieve, and you may have missed required dependencies (leading to a potentially unstable server) or installed too many, making your server less secure. This situation led to a high number of calls to Microsoft for "broken" software when, in reality, the solution had not been deployed correctly.

Because of this, from Windows 2003 onwards Microsoft introduced the "Configure Your Server" wizard, which allowed users to add core functionality to a server with a reduced set of configuration options. That is, the wizard only installed those items necessary for the server to do the chosen job. This led not only to more stable servers but also to more secure ones.

This philosophy has been extended for Windows 2008 onwards such that a whole raft of functionality is no longer deployed by default, leading to a more secure base server environment (secure by design). Instead, you have to add this functionality to Windows Server if you want to use it, and a wizard will then deploy it for you without introducing flaws due to mis-configuration of the base requirements for a solution. This functionality has been encapsulated in two areas under Server Manager: Roles and Features. So, now you know how we got here, what's the difference between the two?

Well, it's simple really: a role is something that the server offers to someone else (clients), such as logon (AD), IP addresses (DHCP) or name resolution (DNS). A feature is something the server consumes or uses itself, for example Network Load Balancing, Telnet Client or Failover Clustering. Now, if you need to find a certain "feature" of Windows Server, I hope this will help you know the most likely place to look for it.
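On Windows 2008 R2 you can also browse the full catalogue of roles and features, and see which are installed, from PowerShell via the ServerManager module. A quick sketch; the Telnet Client example is just an illustration:

```powershell
# List all roles and features and whether each is installed
Import-Module ServerManager
Get-WindowsFeature | Format-Table DisplayName, Name, Installed

# Features can also be added and removed from here, for example:
Add-WindowsFeature Telnet-Client
Remove-WindowsFeature Telnet-Client
```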