Windows Password Recovery and Reset Tool

It’s your first day on the job and you’re raring to go. The previous administrator left two weeks ago, so the servers have been running on their own with no administrative maintenance. Microsoft decides that today is also the day they are going to release a number of critical update patches for the Windows Server platform. You head into the server room ready to update the servers, but realize that you don’t know the administrative password to log on to the machines. To make matters even more interesting, it appears that no one else in the office does either, and the previous admin didn’t document them. Thankfully, you are a dedicated reader of the articles on the 2000 Trainers site and have a solution.

Note – The following utility is not supported by Microsoft and does pose the remote possibility of permanently damaging the registry. Use at your own risk and please read all the online material before attempting. In addition, while this utility can be used maliciously, it is meant to be a “save the day” tip for administrators. Please use it responsibly.

The “Offline NT Password and Registry Editor” is a free utility that can be used to reset the local administrator password on Windows platforms from Windows NT 3.51 to Windows Server 2003. The first thing you want to do is download either the floppy image or the ISO image for a CD-ROM, depending on your preference. If you download the floppy image, be sure to grab the SCSI drivers if your boot partition is located on SCSI drives. For this high-level walkthrough I used the floppy image.

Once you’ve unzipped the binaries, put a floppy in the drive and run the install.bat file. It will write the floppy image to the disk using the included disk-writing utility. Place the floppy in the server and restart the server. After the Linux kernel loads you will see the following screen:

In our example, we only have a single partition to select so we will choose device number one. The next prompt will be for the location of the registry. Just accept the default and press Enter. Since we want to reset the local administrator password, select option one at the next prompt.

At the next prompt, select option one again as we are editing user data and passwords. Notice how the local administrator account appears as an editable account at the next screen. Select the appropriate option for the administrator.

At the next screen we can change the password to whatever we want, or use the asterisk (*) wildcard to blank out the current password. Save your changes and write them back to the registry. Eject the floppy, restart the machine, and log on as the administrator using the password you selected when modifying the account.

Filtering Group Policy Settings

If you’re already familiar with using Group Policy Objects in Active Directory environments, then you no doubt already know that GPOs can only be applied to 3 types of objects – sites, domains, and organizational units. You cannot apply a GPO to a user account object, nor to a security group.

While the rules are the rules, there are ways to “filter” GPOs for a more granular level of control over whom (or what) they actually apply to. For example, the Security tab in the Properties of a GPO dictates the permissions that users and groups have to a GPO. Any objects to which the Read and Apply Group Policy permissions are allowed will have the policy settings applied to them.

So, let’s say that you have a GPO that you want to apply to all user accounts in a particular OU, except for 2 specific users. You would create the policy, apply it to the OU, and then set the permissions on the GPO such that these 2 users are denied the Apply Group Policy permission. The policy’s settings will then apply to all other users in the OU, but not impact your 2 special-case users. This permission filtering method can be used with both Windows 2000 and Windows Server 2003 forests.
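The deny-overrides-allow logic described above can be sketched in a few lines of Python. This is a hypothetical simulation only (real ACL evaluation is performed by Windows; the user and group names here are made up):

```python
# Hypothetical simulation of GPO security filtering: a GPO applies to a
# user only if both Read and Apply Group Policy are allowed (directly or
# via a group membership) and neither is explicitly denied. An explicit
# deny entry always takes precedence over any allow.

def gpo_applies(user, user_groups, allow_aces, deny_aces):
    """allow_aces/deny_aces map a trustee name to a set of permissions."""
    principals = {user} | set(user_groups)
    needed = {"Read", "Apply Group Policy"}
    granted = set()
    for p in principals:
        if needed & deny_aces.get(p, set()):
            return False          # an explicit deny always wins
        granted |= allow_aces.get(p, set())
    return needed <= granted

# "Authenticated Users" is allowed both permissions by default;
# one special-case user is denied Apply Group Policy.
allow = {"Authenticated Users": {"Read", "Apply Group Policy"}}
deny = {"jsmith": {"Apply Group Policy"}}

print(gpo_applies("jsmith", ["Authenticated Users"], allow, deny))  # False
print(gpo_applies("mjones", ["Authenticated Users"], allow, deny))  # True
```

The exempted user still holds Read, but without Apply Group Policy the settings are never processed for that account.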

While filtering by permissions certainly gives you more flexibility over how policies are applied, an even more powerful alternative exists. In Windows Server 2003, you can also filter the objects to which Group Policy settings apply by using what is known as a WMI Filter. If you open the properties of a GPO and click the WMI Filter tab, you can then select or define your filter settings.

Of course, you’re going to need to know a little about WMI to create your filters, but the documentation is out there. Here’s a great example of how WMI filters can be used, compliments of Alan Finn:

WMI filters are configured by defining both the namespace and a WMI query. The following examples are formatted with the appropriate namespace listed first, followed by the WMI query that defines the filter:

1. Applies only to machines with the KB890175 hotfix installed:

root\CIMV2; SELECT * FROM Win32_QuickFixEngineering WHERE HotFixID = 'KB890175'

2. Might be used to target machines that launch iTunesHelper at startup (for example, to deploy an iTunes removal package):

root\CIMV2; SELECT * FROM Win32_StartupCommand WHERE Name = 'iTunesHelper'

3. Can be used to filter machines with more than 256MB of RAM installed (TotalPhysicalMemory is reported in kilobytes):

root\CIMV2; SELECT * FROM Win32_LogicalMemoryConfiguration WHERE TotalPhysicalMemory > 256000

4. A filter for XP machines with SP 2 installed:

root\CIMV2; SELECT * FROM Win32_OperatingSystem WHERE Caption = 'Microsoft Windows XP Professional' AND CSDVersion = 'Service Pack 2'
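To make the mechanics concrete, here is a hedged Python sketch that mimics how a simple one-condition WHERE clause selects machines. Real WMI filters are evaluated by the WMI service on each client; this simulation only handles basic equality and greater-than comparisons against a dict standing in for a WMI instance:

```python
# Hypothetical sketch: evaluate a simple one-condition WQL WHERE clause
# against a dict standing in for a WMI instance. Real WMI filter
# evaluation is performed by each client's WMI service, not code like this.
import re

def matches(instance, where_clause):
    """Supports only 'Prop = value' and 'Prop > value' conditions."""
    m = re.match(r"(\w+)\s*(=|>)\s*'?([^']+)'?$", where_clause.strip())
    if not m:
        raise ValueError("unsupported WHERE clause")
    prop, op, value = m.groups()
    actual = instance.get(prop)
    if op == "=":
        return str(actual) == value
    return float(actual) > float(value)

# A fake Win32_LogicalMemoryConfiguration instance (values in KB).
machine = {"TotalPhysicalMemory": 524288}
print(matches(machine, "TotalPhysicalMemory > 256000"))  # True

# A fake Win32_OperatingSystem instance.
os_info = {"Caption": "Microsoft Windows XP Professional"}
print(matches(os_info, "Caption = 'Microsoft Windows XP Professional'"))  # True
```

A GPO carrying such a filter is simply skipped on any machine where the query returns no instances.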

So, the next time that someone tells you that “you can’t do that with Group Policy”, you might want to dig a little deeper for the truth. With even a basic knowledge of WMI, you can filter your policies on just about any settings imaginable.

Tip provided by Dan DiNicolo and Alan Finn

Installing VMWare GSX Server

In the first article in this series, you learned about some of the key concepts and theories of virtual computing. With the basics now sorted, it’s time to get on to the fun part – working with virtualization software. In this article, we’re going to install one of the most popular options out there – VMWare’s GSX Server.

As this is probably your first installation of GSX Server (and because in future articles we’ll be creating virtual machines and playing with them), I strongly suggest you install GSX Server on a test computer, not a production one.

GSX Server uses the second approach to virtualization, discussed in my last article. Specifically, this means that it is installed as a normal application on top of an underlying operating system referred to as the “host” system.

Therefore, the first step of the installation process is to decide which operating system you are going to use as your host system. You can install GSX Server on a host machine running any version of Windows 2000 or Windows Server 2003, and even on Linux systems. For the purpose of this article, we’ll install GSX Server on a computer running Windows Server 2003 Enterprise Edition.

The hardware requirements for the host machine are relatively easy to fulfill. You need an x86-based machine with up to 32 processors and between 512MB and 64GB of RAM. Each processor has to be at least a Pentium II (or an AMD Athlon processor), running at 733MHz or faster. You should also have at least 130MB (Windows) or 20MB (Linux) of free hard drive space for the server software, plus 1GB of space available for each virtual machine.
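The disk figures above lend themselves to a quick back-of-the-envelope calculation. This is only a sketch based on the minimums quoted in this article (130MB for the server on Windows, 1GB per virtual machine); real virtual disks grow as the guests use them:

```python
# Sketch: minimum disk space for a GSX Server host on Windows, using
# the figures quoted above (130MB for the server, 1GB per VM).
# These are floor values only - virtual disks grow with guest usage.
def min_disk_mb(num_vms, server_mb=130, per_vm_mb=1024):
    return server_mb + num_vms * per_vm_mb

print(min_disk_mb(4))  # 4226 (MB needed for a host running four VMs)
```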

Downloading the Appropriate Files from VMware

The next step is to download the GSX Server software from the VMWare Web site. On the download page, scroll down to VMWare GSX Server 3.1 and choose either “Evaluate” (to download the evaluation version) or “Download” (to purchase the program) (Figure 1).

According to VMWare, the only limitation of the evaluation version is time-related: it expires after one month.

Figure 1: VMware download page

Next, choose the appropriate version for your host operating system – for the sake of this article, Windows. The Windows download page offers three different installer files: the server, the Windows client package, and the Linux client package (Figure 2). The server installer will allow you to install GSX Server and to access it either interactively through the console, or (eventually) remotely through the Web Management Interface. The client packages will allow you to install remote administration consoles on other computers running either Windows or Linux.

Figure 2: VMware download page for Windows

Download the installer files you need, according to the operating system running on your host machine and on the workstations you want to use for remote access. For example, I intend to use exclusively Windows clients for remote access, so I would download the server installer (currently VMware-gsx-server-installer-3.1.0-9089.exe) and the corresponding Windows client package.

Starting the Installation

Double-click the Master installer to start the installation of the server. The Master installer extracts the installation files and starts the installation wizard (Figure 3).

Figure 3: GSX Server Installation wizard

Click Next, accept the End User License Agreement (EULA), and click Next again (Figure 4).

Figure 4: VMware End User License Agreement (EULA)

On the next screen, you are asked to choose between a Complete installation and a Custom one (Figure 5).

Figure 5: Choose installation type

For a first installation, it is usually safer to choose the “Complete” option. However, for the time being, we want to choose the Custom option because it will allow us to see the different features that will be installed by the wizard, their sizes, and their uses.

Exploring the Features of the Master Install

Select Custom instead of Complete, and click Next. The Custom Setup window shows the different features included in the GSX Server Master Installer: two server features and two client features. Clicking on a feature highlights its name in the left part of the window. At the same time, a brief description of the feature and the amount of storage required for its installation appear in the right part of the window (Figure 6).

Figure 6: Custom Setup Window

The first server feature is the VMWare GSX Server itself. This feature requires only 47MB of storage space, and is the only one really needed to create and configure virtual machines if you want to access the server exclusively through the local console.

The second server feature is the Web Management Interface, which allows users to manage the virtual machines from a browser. This feature needs 23MB of hard disk space, and requires that IIS, and either the Netscape Navigator 7.0 or Mozilla 1.x browser, be installed on the host machine in order to function.

The two client features are scripting tools that use either Perl or COM for remote management of the virtual machines. They require 14MB and 2932KB of hard disk space, respectively.

The Custom Setup Window also has two additional options to allow users to customize their installation. The Browse button allows the user to install GSX Server on any volume of the physical machine (Figure 7).

Figure 7: Install on any volume of the physical machine

The Space button (at the bottom of the window, between Help and Back) checks each volume on the physical machine to see whether it has enough disk space available to install the selected features, and shows the results of this check in a separate window (Figure 8).

Figure 8: Checking disk space

Finally, a red cross in the icon to the left of a feature indicates that it will not be installed. At the same time, the feature description in the right part of the window will indicate that the feature requires 0KB of storage (Figure 9).

Figure 9: Management Interface will not be installed

To include the feature in the Custom install, just click the icon, and choose the white rectangle with the picture of a disk. Clicking on the Help button will show you the icons corresponding to each available install state (Figure 10).

Figure 10: Custom Setup Help

Finishing the GSX Server Installation Process

Now that you have explored the four different features that will be installed by the wizard, their sizes, and their uses, click on Back. This will bring you back to the previous window, where you will select Complete instead of Custom, and click on Next to continue the installation.

If Internet Information Services (IIS) is not installed on your system at this point, the wizard will give you the choice between exiting the wizard, installing IIS and restarting the installation, or installing GSX Server without the Web-based Management Interface (Figure 11).

Figure 11: IIS is not installed on the physical machine

Remember, you are installing GSX Server for learning and testing purposes, not for production. So if you get this alert, I suggest you exit the setup, install IIS on the physical machine, restart the setup, and repeat the previous steps until you see the Setup Type window shown back in Figure 5.

If IIS is installed, clicking Next will take you to a window where you’ll be given the opportunity to change the directory in which GSX Server will be installed. The default directory is C:\Program Files\VMware, and each feature is installed in its own subdirectory:

Server feature: C:\Program Files\VMware\VMware GSX Server
Management Interface: C:\Program Files\VMware\VMware Management Interface
COM scripting tool: C:\Program Files\VMware\VMware VmCOM Scripting API
Perl scripting tool: C:\Program Files\VMware\VMware VmPerl Scripting API

Accept the default values, or pick the directory of your choice on a local drive and click Next. You’ll be given a last chance to change the choices you have made up to this point (Figure 12).

Figure 12: Last chance to make changes in the installation settings

If you want to make any changes, click the Back button until you reach the screen corresponding to the changes you want to make. Make your changes, and then redo all the steps until you come back to the window shown in Figure 12.

Once you’re satisfied with the selected settings, click Install. The installer generates the scripts, searches for previous installations of the application, and installs the different elements of the software.

If the CD-ROM Autorun feature is enabled on your host machine, the Installer will ask you if you want to disable it (Figure 13). Personally, I prefer to disable it because I find that it can cause problems when several virtual machines are running at the same time.

Figure 13: Disable the CD-ROM Autorun feature of the host machine

Note, however, that if you click Yes the feature will only be disabled after you reboot the host machine.

Whether you click Yes or No, the Installer will continue its work until it finishes the installation. At this point, you will see two new icons on your desktop: one named VMWare Virtual Machine Console, and one named VMWare GSX Server Console. You will also see the final window of the Master Installer (Figure 14).

Figure 14: Final window of the Master Installer

Click Finish to complete the installation. If prompted to reboot the host machine, do so. You have completed the installation of VMWare GSX Server on your host machine. Congratulations!

This concludes the second article in this series on virtualization. I hope you had more fun with this one than with the first. With the installation business out of the way, things will only get better from this point forward.

Welcome to the World of Virtualization

If you read technical magazines regularly, you’re bound to have seen at least one article about virtualization in the past six months. Virtual data centers, virtual disaster recovery facilities, virtual servers, virtual switches and virtual networks seem to have become almost omnipresent.

Even with the abundance of coverage that virtualization receives in the press it is often still difficult to learn the basics of its various purposes and benefits. What is virtualization? How can virtualization really help in business or academic environments? What are the benefits of going this route? These are all questions that need to be answered before you begin digging deeper into specific products or technologies.

The purpose of this series of articles is to approach virtualization via its most basic of components, namely virtual machine systems. Ultimately, we’ll use this introduction as a stepping-stone to work our way to some of the more complicated aspects of virtualization.

Definition and History

Virtualization – sometimes referred to as virtual machine systems technology – makes use of a software layer to enable multiple, diverse, and independent operating systems to run simultaneously on a single set of hardware.

IBM first developed virtual machines in the 60s. At the time, their main goal was to correct some of the limitations of the company’s OS/360 multiprogramming operating system. IBM’s virtual machines were basically fully protected, isolated copies of the underlying physical machine’s hardware. A software component ran directly on the “real” hardware. This software component could then be used to create multiple virtual machines, each of which could run its own operating system.

Popular during the 60s and 70s, virtual machines practically disappeared during the 80s and 90s. It was not until the end of the 90s that they truly came back on the scene, not only in the traditional area of servers, but also in many other areas of the computing world.

The Structure of Virtual Machine Systems

Current virtual machine systems are essentially built on the same theoretical grounds as their IBM ancestors. A thin layer of software – the virtual machine monitor (VMM) – is interposed between two of the layers of a computer (Figure 1) to create a virtual machine environment.

Figure 1

The VMM creates a layer of abstraction between the physical machine’s hardware and the virtual machine’s operating system. The VMM then manages the resources of the underlying physical machines (referred to as the host machine) in such a way that the user can create several virtual, “guest” machines on top of the physical host machine. The VMM also virtualizes the physical hardware of the host machine and presents to each virtual guest machine a hardware interface that is compatible with the operating system the user chose to install on it.

Each of the guest machines is composed of a combination of the host machine’s hardware and the VMM. The layer of abstraction created by the VMM gives each guest machine the illusion of being a complete physical machine, and fools each guest operating system into believing that it is running on the normal hardware environment it is used to.

How Virtual Machine Systems are built

There are currently two main approaches to the building of virtual machine systems. In the first approach, the VMM sits between the hardware of the real machine and the guest systems (Figure 2). This approach was used in the 60s by the original IBM virtual machine systems, and is also used nowadays by modern implementations like VMWare’s ESX Server.

Figure 2

In the second approach, the VMM is installed as a normal process between the underlying real operating system, called the host system, and the virtual machines created by the users (Figure 3). This approach is currently used by some of the most popular virtualization software, like VMWare’s Workstation and GSX Server, and Microsoft’s Virtual PC 2004 and Virtual Server 2005.

Figure 3

Common Characteristics of Virtual Machine Systems

Regardless of the approach used to build them, virtual machine systems share a certain number of necessary characteristics: faithful reproduction of the guest operating system’s normal environment, adequate performance, isolation between the guest machines (and between each guest and the host), centralized control of the host’s resources, and encapsulation of the virtual machines.

The main goal of virtualization is to enable applications and guest operating systems to run on hardware, or host operating systems with which they would normally not be compatible. To attain this goal, the VMM must first reproduce the system it is emulating as faithfully as possible.

In some cases, the VMM must be able to reproduce in software parts of the original hardware architecture that no longer exist. If the virtual machines are used to test prototype and beta software, the VMM might even have to reproduce in software parts of a system architecture that do not exist yet.

The VMM must also be able to provide the guest operating systems and applications with an environment that is essentially identical to the original machine so that any program running on a guest machine will have the same behavior and the same effects as the same program running in its original environment.

The VMM represents an additional layer of software between the hardware, or the host operating system and the guest operating systems and applications. This additional layer is likely to add overhead to the system, and affect the performance of the software running on the guest machines. To be useful, however, the virtual machine system must exhibit a performance level comparable to that of the original real machine.

If the VMM faithfully reproduces the real system it is emulating, and if the environment provided by the VMM is essentially identical to the original machine, the definitions of the two interfaces, real and virtual, should match, and the performance of the virtual machine should hardly suffer from virtualization.

The first modern virtual machine systems implemented on commodity Intel-based computers used to suffer performance losses sometimes as high as 50%. Nowadays, however, there is hardly any difference between the performance of real and virtual machines. At the end of last year, I personally tested the performance of virtual machines installed on a VMWare GSX server, and found it absolutely comparable to the performance of a “real” physical machine. In many cases, the virtual machines actually performed better than the physical machine.

Virtual machine systems must allow applications hosted in the different virtual machines to run concurrently without interfering with each other. To achieve this goal, the VMM must be able to completely isolate the virtual machines from each other and from the real machine.

This isolation must be twofold. On one hand, the applications and data of each machine, virtual or real, must be out of the reach of all the other machines. On the other hand, the VMM must be able to ensure that the use of host system resources by one virtual machine does not have a negative impact on the performance of other virtual machines. This means that the VMM must constantly have complete control over the resources, such as memory, peripherals, I/O, and even, eventually, processor time, used by the virtual machines. It must be in charge of allocating resources, and it must be able to dynamically allocate and remove them as needed.

Finally, virtual machine systems must encapsulate all the software of each virtual machine. This encapsulation enhances the isolation of the virtual machines from the host machine. It also allows users to easily migrate virtual machines from one hardware platform to another, different one. Encapsulation likewise allows users to “save” the state of a virtual machine at a certain moment in time, change the configuration of the machine (for example, to install new applications or security patches), test the changes, and then return the virtual machine to its original state.

Final Thoughts

This concludes the first article on virtualization. I know it is a rather theoretical and dry beginning, but I believe in having at least a general idea of how things work before using them 🙂

In the next article, we’ll start getting our hands dirty, and installing virtualization software – that’s where the real fun begins!

Establishing a Root CA

A Certificate Authority (CA) is an entity that is trusted to validate and certify the identities of others. In practice, a CA is an organization that maintains a software package capable of managing the requests, issuance, and revocation of certificates. A CA is created by installing a certificate management software package such as Microsoft Certificate Services and implementing policies to identify requestors and issue certificates to them. Certificate issuance policies fall into two general categories.

Software Issuance Policies – These policies use some form of existing credential to issue a certificate. In some cases this may be as simple as validating that your email address is in fact your email address, as in the case of Thawte. In other cases you must have a trusted network credential. This is the method used by Active Directory integrated CAs. These CAs are referred to as enterprise CAs, and will be discussed in more detail in a future article.

Manual Issuance Policies – These policies involve non-technical verification of identity and may include methods such as notarized letters, photo IDs or in some cases fingerprinting. These are generally only found in highly secure environments such as those found in large companies or the government.

Shadow Copies of Shared Folders (Part 2)

In the first part of Shadow Copies of Shared Folders we learned what shadow copies are and how they can be enabled on a server volume. To recap, we set up a drive that contains users’ redirected My Documents folders (E: in this example), configured SCSF to make a shadow copy at 7:00 AM and 12:00 PM, and configured SCSF to store shadow copies on an alternate drive (F: in this example). In part two we are going to see how to restore a previous version of a file and how to restore a deleted file from a shadow copy. We will also look at how client software can be installed on previous versions of Windows to enable access to shadow copies from client workstations.

To save some time I have created a file named Important.doc in my My Documents folder and then made three modifications to the file, taking a manual shadow copy after each update. Although the shadow copies in these screenshots are only a few minutes apart, the operation of SCSF would be exactly the same if they were taken only twice a day.

Let’s start out with the simplest example – restoring a previous version of a file on the server itself (i.e. from the server console). To start, you must navigate to the shared folder using a UNC path, mapped drive, or another method that accesses the shared folder. Note that you must access the files through the share – if you access the files directly from the server’s hard drive you will not see the “Previous Versions” tab in the following steps.

Right-click the file in Windows Explorer, select Properties, and then select the “Previous Versions” tab.

The buttons on the tab are self explanatory:

View – Allows you to view a copy of the file at the selected date/time

Copy – Allows you to copy the file at the selected date/time to a new location

Restore – Allows you to restore the file at the selected date/time over the current file

Let’s restore the copy from 10:29 PM, which looks like this:

When you click Restore you will be prompted to confirm the choice:

As you can see the Date Modified on our file now shows the time that the file was last modified when the 10:29 PM shadow copy was made. Cool, eh?

While being able to restore a file that still exists to a previous version is very useful, what if the file has been deleted? There would be no file to right-click, so how can you access the Previous Versions tab in order to restore the file? Simple: right-click the shared folder, or a folder inside the shared folder, and open its properties. You will find the same Previous Versions tab in folder properties that appears in file properties. You can then click the View button to see what the folder looked like (i.e. the files and other folders it contained) at the selected point in time. From there you can copy and paste the file(s) you want to restore from the previous version of the folder to the “real” folder. You can also use the Copy and Restore buttons on the Previous Versions tab to copy or restore the entire folder all at once – there is no need to go file by file.

For example, let’s say it is 11:00 PM and I just deleted Important.doc by accident (and yes, I used Shift + Delete so it’s not in the Recycling Bin). To get my file back I can right click the maubert folder, select the Previous Versions tab, choose a previous version of the folder (10:52 PM in this example), and click View:

(Note the date the Shadow Copy was made in the title and address bars.)

I can now copy and paste this file back to the folder, to a new location, or I can open and view the file directly from the previous version of the folder.

One thing to note is that when restoring an entire folder (i.e. using the “Restore” button on the Previous Versions tab), any files that were added after the Shadow Copy you are restoring was made are not removed. For example, let’s say there is one file in a shared folder called fileA, which I have several previous versions of. I then create a new file in the same shared folder called fileB, added after the Shadow Copies that include fileA were made. If I then restore a previous copy of the folder that contains an older version of fileA but does not contain fileB, fileA will be overwritten with the older copy and fileB will be left alone. Continuing with this example, let’s say several more Shadow Copies are taken that now include fileA and fileB. While the newer Shadow Copies contain both files, there are still older Shadow Copies that only contain fileA. Again, if I restore the folder from one of the older Shadow Copies that contains only fileA, fileA will be replaced with an older copy, but fileB will still be left untouched. On the other hand, if I restore the folder from a Shadow Copy that contains both fileA and fileB, both files will be replaced with previous versions.

If all of that is confusing, think of restoring a folder as a copy and paste operation from the previous version of the folder to the current version, saying “yes” to overwriting existing files – the same rules apply as in a typical copy/paste operation.
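The copy-and-paste analogy above can be sketched as a tiny simulation (hypothetical file names, with dicts standing in for folders):

```python
# Hypothetical sketch of the folder-restore semantics described above:
# restoring a previous version behaves like a copy/paste with "overwrite
# existing files" - files present in the shadow copy replace the current
# versions, while files created after the shadow copy was made survive.

def restore_folder(current, shadow):
    restored = dict(current)   # files added later (fileB) are kept
    restored.update(shadow)    # shadow-copy contents overwrite the rest
    return restored

current = {"fileA": "v3", "fileB": "v1"}     # fileB was created later
old_shadow = {"fileA": "v1"}                 # taken before fileB existed
print(restore_folder(current, old_shadow))   # fileA reverts, fileB kept

newer_shadow = {"fileA": "v2", "fileB": "v1"}
print(restore_folder(current, newer_shadow)) # both files replaced
```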

Domain Renaming and Repositioning

In the Windows 2000 version of Active Directory, it was not possible to rename domains without demoting all domain controllers, which effectively destroyed the domain. In Windows Server 2003, domains can be renamed, as long as the forest in which they exist is configured to the Windows Server 2003 forest functional level. Of course, this means that you cannot rename a domain that includes either Windows 2000 or Windows NT 4.0 domain controllers, since the Windows Server 2003 forest functional level only supports Windows Server 2003 domain controllers. The tool used to rename Windows Server 2003 domains is named RENDOM, and is found in the Valueadd\Msft\Mgmt\Domren folder on the Windows Server 2003 CD.

Along the same lines, Windows Server 2003 also allows you to rename individual domain controllers with a new computer name. In Windows 2000 Active Directory, this was only possible if you first used DCPROMO to demote a domain controller back to a member server, changed the name, and then re-promoted it. Renaming a domain controller is only possible if a domain is configured to the Windows Server 2003 domain functional level.

Renaming a Windows Server 2003 domain controller is handled differently than the traditional method (via the System tool in Control Panel). Instead, the NETDOM command line utility is used to handle the domain controller renaming function. For example, the series of commands to rename a domain controller from oldname.example.com to newname.example.com (example names) would be:

C:\>netdom computername oldname.example.com /add:newname.example.com

C:\>netdom computername oldname.example.com /makeprimary:newname.example.com

Then, after rebooting the server:

C:\>netdom computername newname.example.com /remove:oldname.example.com

Finally, Windows Server 2003 also supports the ability to reposition domains within an Active Directory forest. For example, imagine that you originally implemented each domain as its own tree, and then decided that you instead wanted to change the structure such that all domains fell into the same DNS namespace, as part of a single tree. This is now possible, but only if the forest is configured to the Windows Server 2003 forest functional level. Although that does present a limitation, the ability to reposition domains is a great feature, especially if you managed to inherit responsibility for a forest that was not well designed in the first place.

In the same manner as renaming domains, domain repositioning in Windows Server 2003 Active Directory environments is also accomplished by using the RENDOM utility.

Forest and Domain Functional Levels

Although many people have already decided that Windows Server 2003 is no more than a minor revision of Windows 2000, the truth of the matter is that this new version includes more than just a few new features, tools, and services. Although it is built upon the foundation provided by Windows 2000, many of these new elements are ones that many organizations, and especially larger ones, will want to be aware of. My goal with this article and the next is to provide an overview of some of the new features found in Windows Server 2003, and specifically those associated with its directory service, Active Directory. In this article we’ll take a look at domain and forest functional levels.

Domain and Forest Functional Levels

Those familiar with Active Directory in Windows 2000 will recall that once installed, domains could be configured in one of two modes – mixed mode and native mode. In mixed mode, an Active Directory domain was still capable of supporting Windows NT 4.0 domain controllers, providing companies with the ability to transition their domains from the old model to the new directory-based design. Although mixed mode made the deployment of Active Directory in existing environments more flexible, it did come with limitations, namely the inability to use universal groups. Once a domain was switched to native mode, all domain controllers had to be running Windows 2000, and using universal groups became possible.

In Windows Server 2003 Active Directory, the concept of a domain “mode” has been re-branded as a “functional level”. This is definitely not a bad idea, since the functional level of a Windows Server 2003 Active Directory domain not only impacts the operating system versions that can function as domain controllers, but also the ability to utilize some of the new features in Active Directory. Furthermore, Windows Server 2003 also introduces an entirely new concept, known as a forest functional level. Along the same lines as a domain functional level, the configured forest functional level impacts the ability to implement certain new Active Directory features, as you’ll see later in this article.

The domain functional levels associated with Windows Server 2003 are outlined below, along with the versions of Windows that are supported as domain controllers at each level:

  • Windows 2000 mixed (the default) – Windows NT 4.0, Windows 2000, and Windows Server 2003 domain controllers
  • Windows 2000 native – Windows 2000 and Windows Server 2003 domain controllers
  • Windows Server 2003 interim – Windows NT 4.0 and Windows Server 2003 domain controllers
  • Windows Server 2003 – Windows Server 2003 domain controllers only

It should be noted that once the functional level of a domain is raised, domain controllers running previous versions of Windows cannot be added to the domain. So, if you raise the functional level of a domain to Windows Server 2003, Windows 2000 domain controllers can no longer be added to that domain.

Much like changing the mode of a domain in Windows 2000, the functional level of a domain is changed from within the Active Directory Users and Computers tool. To raise the functional level of a domain, right-click on the domain object in Active Directory Users and Computers and click Raise Domain Functional Level. In the screenshot below, you’ll notice that the domain functional level cannot be changed, because it has already been configured to the Windows Server 2003 level. To raise the functional level of a domain, you must be a member of the Enterprise Admins group, or the Domain Admins group in that particular domain. This ability can also be delegated to other users.

In much the same manner, Windows Server 2003 Active Directory supports three different forest functional levels. Each is listed below, along with the versions of Windows that are supported as domain controllers at each level:

  • Windows 2000 (the default) – Windows NT 4.0, Windows 2000, and Windows Server 2003 domain controllers
  • Windows Server 2003 interim – Windows NT 4.0 and Windows Server 2003 domain controllers
  • Windows Server 2003 – Windows Server 2003 domain controllers only

In the same manner as with domain functional levels, once the functional level of a forest is changed, domain controllers running earlier Windows versions can no longer be added to any domain in the forest.

Changing the functional level of a forest is accomplished differently than that of a domain. Forest functional levels are configured using the Active Directory Domains and Trusts tool, by right-clicking the forest and clicking Raise Forest Functional Level. The screenshot below shows that the current functional level of my forest is set to the default, Windows 2000. In this case, it can still be raised to Windows Server 2003. To raise the functional level of a forest, you must be a member of the Enterprise Admins group or the Domain Admins group in the forest root domain.

Before beginning to look at some of the new features of Windows Server 2003 Active Directory, it is important for you to note that not every new feature requires a certain domain or forest functional level to be configured. Some of the features work at any functional level, while others explicitly require the Windows Server 2003 domain or forest functional level.

Public Key Infrastructure and Certificate Services on Windows Server 2003

This article is the first in a series that will cover the design, implementation and management of a PKI. PKI systems have become more and more common in modern IT environments as more technologies are built to take advantage of the strong authentication provided by certificates.

What is a PKI?

A PKI is defined as “the set of policies, practices and components that make up a certificate hierarchy”. There are several key components that must be understood to implement a PKI.

Certificate: A file that follows the X.509 syntax. A certificate contains information identifying the holder, the issuer the certificate came from, the period during which the certificate is valid, what the certificate can be used for, how the certificate can be verified, and a thumbprint.

CA: A Certificate Authority (CA) is a software package that accepts and processes certificate requests, issues certificates, and manages issued certificates.

Technologies that Drive PKI

Simply put, it is the role of a PKI to issue and manage certificates. A good understanding of how certificates operate is therefore fundamental to understanding the operation of a PKI.

Certificates provide the basis for authenticating an entity. This authentication is based on several key principles, some of which are managed by technology, others of which are managed by law and organizational policy. At its core, a certificate relies on two key technologies: asymmetric encryption (often called public/private key encryption) and hashing.
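To make those two technologies concrete, here is a toy sketch in Python (my own illustration, not part of any PKI product): a message is hashed, the hash is signed with a private key, and anyone holding the matching public key can verify the signature. The tiny textbook RSA key below is for demonstration only – real certificates use keys of 2048 bits or more, generated by a cryptographic library.

```python
import hashlib

# Toy textbook RSA key: n = 61 * 53 = 3233, phi = 3120.
# e = 17 is the public exponent, d = 2753 its private inverse mod phi.
# Demonstration only -- far too small for any real use.
n, e, d = 3233, 17, 2753

def digest(message: bytes) -> int:
    """Hash the message, then reduce it so it fits under the toy modulus."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    """'Encrypt' the hash with the private key -- this is the signature."""
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    """'Decrypt' the signature with the public key and compare hashes."""
    return pow(signature, e, n) == digest(message)

sig = sign(b"certificate contents")
print(verify(b"certificate contents", sig))   # prints: True
```

Hashing condenses an arbitrary message to a fixed-size fingerprint; signing that fingerprint with the private key lets anyone with the public key confirm both the origin and the integrity of the message.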

Shadow Copies of Shared Folders – Part 1

If you ask most network administrators what the top five tasks they perform on a repeated basis are, one task that is sure to be on the list is restoring a single file or a small number of files from backup (that and resetting passwords!). Users accidentally delete files, overwrite files, or may need an older copy of a file. In order to get these files back, the user would most likely have to contact the IT department, give them a description of the file, where the file was stored, and the time or date when the deletion or modification occurred. Now unless you’re the CEO or CIO, or the person responsible for backups has a crush on you, restoring a file from backup is not exciting and in most situations does not get top priority. Additionally, while we know they try their best to give as much information as possible, users can get confused and the information about the files they need may not be quite right – and the file hunt begins!

Sometimes we can get lucky – the file may have been deleted only an hour ago and there is a backup from last night on another hard drive in the server. In this situation restoring the file is a simple copy/paste operation and is relatively painless if you don’t have to do it a lot. But what happens if the file they need is from a week ago? Or what if the user is not sure if they need a copy from last week or two weeks ago? What about if they need a copy from Monday, Wednesday, and Friday of last week? While hunting through tapes is not the end of the world, I can sure think of a few other things I’d rather spend my time on!

During this whole process of restoring a file, the network administrator’s time is occupied, the user may be unable to complete a task until they get the file, or the user may decide dealing with IT would just take too long and end up rebuilding the file by hand. Any way you look at it, restoring files ends up causing lost productivity and, in turn, lost revenue for the company – but what if there was a better way?

Windows Server 2003 introduces a new feature called Shadow Copies of Shared Folders which solves many of the problems associated with restoring a small number of files from backup. To put it simply, Shadow Copies of Shared Folders provide point-in-time copies of files located in shared folders on a Windows Server 2003 server. These copies are accessible by end users and show what a shared folder, or a single file inside of the shared folder, looked like a few hours ago, yesterday, last week, or even a few weeks ago.

Shadow Copies of Shared Folders works by storing a copy of data that has changed since the last shadow copy. Because Shadow Copies of Shared Folders only stores block-level (a.k.a. cluster) changes to files rather than the entire file, the amount of hard drive space needed is greatly reduced. The administrator can specify when shadow copies are made and the amount of disk space that is used to store changes – with newer copies replacing older copies as needed.
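The block-level idea described above can be sketched in a few lines of Python. This is a simplified model of my own, not the actual Volume Shadow Copy implementation: a file is split into fixed-size blocks, and only the blocks that differ from the previous snapshot need to be stored.

```python
BLOCK_SIZE = 4  # tiny block size so the example is easy to follow

def blocks(data: bytes):
    """Split data into fixed-size blocks."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes):
    """Return (index, block) pairs for blocks that differ from the old copy."""
    old_blocks, new_blocks = blocks(old), blocks(new)
    return [(i, b) for i, b in enumerate(new_blocks)
            if i >= len(old_blocks) or old_blocks[i] != b]

original = b"AAAABBBBCCCCDDDD"   # four 4-byte blocks
modified = b"AAAABBBBCCCCDXDD"   # only the last block was changed

diff = changed_blocks(original, modified)
print(len(diff))   # prints: 1 -- one block is stored, not the whole file
```

Even though the file is sixteen bytes, only the single four-byte block that changed is recorded – which is exactly why the shadow copy storage area can be so much smaller than the data it protects.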

While this is a great new feature, there are a few things you need to be aware of when planning to use Shadow Copies of Shared Folders (called SCSF from here on out):

  • SCSF is set on a volume-by-volume setting and is only available on NTFS volumes. That is, SCSF is enabled for every shared folder on a given volume or none at all – you can’t pick and choose which shared folders on a given volume will or will not use SCSF. Additionally the schedule of when shadow copies are made is also set at the volume level.
  • SCSF will not work on mount points (when a second hard drive is mounted as a folder on the first).
  • Dual booting the server into previous versions of Windows could cause loss of shadow copies.
  • SCSF is NOT a replacement for undertaking regular backups!

Let’s look at an example of how to setup and use SCSF. In the example we have three drives – C: contains the Windows Server 2003 installation, E: contains a shared folder with user documents that are redirected from the user’s My Documents folder (i.e. folder redirection setup in group policy), and F: is not being used.

By putting the shares I want to use SCSF with on another hard drive (or even another partition on the same physical drive), we avoid wasting shadow copy disk space and I/O bandwidth on copying shares we don’t need SCSF on.

To enable SCSF on a volume, right-click the drive in Windows Explorer, select Properties, and choose the “Shadow Copies” tab.

While SCSF will let you store the shadow copy data on the same drive as the shares being copied, this is not optimal – the drive head has to go back and forth in order to read the data in the shared folder and then write it to shadow copy storage. Lightly loaded file servers can deal with having everything on one drive, but adding the shadow copy storage area to the same drive on servers with high I/O loads can cause serious slowdowns and is not recommended.

If we wanted to set up SCSF with default settings (which are: store the shadow copy data on the same drive, set the maximum limit to 10% of the total drive space, schedule copies to be made Monday – Friday at 7:00 AM and 12:00 PM, and make the initial copy) we could simply select the drive and choose the “Enable” button. But because we have an additional physical hard drive, separate from the drive that contains the shared folder (convenient, wasn’t it?), we will configure SCSF to use drive F: as the shadow copy storage area. Note that a single volume can act as the storage area for multiple other volumes – the only limitation is the amount of free drive space available.

To continue configuring SCSF select drive E: from the list of volumes and click “Settings…”

In the “Located on this volume” drop-down list select the drive you want shadow copy data to be stored on. In this case we will choose drive F:. Next set the Maximum size of the storage area to an appropriate amount of disk space for your situation.

So just what is an “appropriate amount of disk space” anyway? Well, it all depends on the situation (doesn’t it always?). Although 10% of the total drive space is a good estimate, you need to take the following variables into consideration:

  • The amount of data in the shared folders
  • The frequency that different blocks change in the shared folders
  • The amount of free disk space on the drive that contains the storage area
  • How many past shadow copies you want to keep
  • The cluster size of the volume that holds the shared folders
  • There is a 100MB minimum
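As a rough way of combining these variables, here is a back-of-envelope estimator. The formula and the 20% slack factor are my own heuristic, not an official Microsoft sizing rule:

```python
# Back-of-envelope sizing for the shadow copy storage area.
# The 1.2 slack factor is an assumption of this sketch, not a Microsoft rule.

MIN_STORAGE_MB = 100  # SCSF enforces a 100MB minimum storage area

def storage_estimate_mb(changed_mb_per_copy: float, copies_to_keep: int) -> float:
    """Space for the history you want to keep, plus 20% slack."""
    estimate = changed_mb_per_copy * copies_to_keep * 1.2
    return max(estimate, MIN_STORAGE_MB)

# Example: roughly 50MB of changed blocks per snapshot, two copies a
# day on weekdays, keeping two weeks (20 copies) of history.
print(storage_estimate_mb(50, 20))   # prints: 1200.0
```

If the estimate comes out well under 10% of the drive, the default limit is probably generous; if it comes out higher, plan for a dedicated storage volume.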

Note that when I say “frequency that different blocks change” I mean the number of blocks that change between shadow copies – not the number of times each block changes. For example, say I make a shadow copy at 7:00AM and 12:00PM and I have two files that change (let’s assume each file fits in a single block for simplicity): fileA and fileB. Between 7:00AM and 12:00PM let’s say fileA is updated 4 times and fileB is updated 2 times. Because shadow copies take a “snapshot” of the shared folders at a point in time, there is only one copy per updated block, not one per change to each block. So in our example, when the 12:00PM shadow copy is made, only the most recent versions of our two files are copied – the 4th update to fileA and the 2nd update to fileB, not all 6 different updates that were made.

Another example would be a large file that is made up of multiple blocks – let’s say 100. Next let’s open the file, modify the last two lines, and save the file. By doing so our modifications don’t change the entire file, just the last block (note that this depends on the application and whether it rewrites the entire file to disk or not). Now let’s assume that we repeat our modifications – changing the last few lines of the file a half dozen more times. When the next shadow copy is made, only our final change to the last block of the file is copied – none of the first 99 blocks, or the first six modifications to the 100th block, are copied. In other words, whether you update a file 5 times or 5,000 times, the space needed to store the shadow copy is still the same (assuming that the same blocks are modified between the 5 and the 5,000 updates) for that file. Got that?

Also, why does the cluster size on the volume matter? The simple answer is that when you defragment a volume, the clusters that make up files are reorganized. SCSF may see this as a modification and will make a copy at the next shadow copy. To minimize the number of times this occurs, Microsoft recommends that you use a cluster size of at least 16K, or any multiple of 16K, when you format the volume. The driver used to support shadow copies is aware of defragmentation and can optimize for it, but only if blocks are moved in multiples of 16K. Note, however, that the larger you make the cluster size, the more space you waste (if you have a 24K file the minimum space allocation is still 1 cluster, so if the cluster size was 64K we would end up wasting 40K). Additionally, if your cluster size is over 4K you can’t use file compression – file compression requires cluster sizes of 4K or less. If you can’t use a larger cluster size don’t worry, just keep in mind you may need additional space in the storage area due to defragmentation.
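The cluster slack arithmetic from the paragraph above is easy to check:

```python
import math

def wasted_kb(file_kb: int, cluster_kb: int) -> int:
    """Slack space: allocated clusters minus the actual file size."""
    clusters = math.ceil(file_kb / cluster_kb)
    return clusters * cluster_kb - file_kb

print(wasted_kb(24, 64))  # prints: 40 -- the 24K-file-in-64K-clusters example
print(wasted_kb(24, 4))   # prints: 0 -- a 4K cluster size wastes nothing here
```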

Back to the settings screen – when you are done selecting a drive to use as the storage area and setting the limit click OK

The Shadow Copy tab now shows that we have set drive F: to be used as the storage area for shadow copies made on drive E:

To continue select the E: drive and click “Enable.”
We are informed that this will use the default settings – although this is true for the schedule, it *will* use our selection of drive F: as the storage area.

Click Yes to continue.

The initial shadow copy is then made and the schedule for updates is enabled.

If you would like to modify the schedule of when copies are made you can select the drive, click Settings, and then choose schedule.

When done, click OK on all screens to exit out of the drive properties.

One thing that you may notice is that a new scheduled task appears in the Scheduled Tasks folder in Control Panel for each drive you set up SCSF on. While this is the same schedule that is available from the Settings dialog on the Shadow Copies tab, I would recommend that you don’t modify the scheduled task directly – there are many more settings that could be accidentally “goofed.”

So speaking of schedules, how often should we make shadow copies? Again, this depends on the data and when your users use the data, but there are a few things to keep in mind:

  • Microsoft recommends a minimum of one hour between shadow copies, and even that is probably far too frequent for most situations.
  • One shadow copy per day is probably the longest you want to go between copies on weekdays.
  • The longer the time between shadow copies, the longer and more I/O intensive each shadow copy will be (since there are more changed blocks).
  • Your goal should be to take snapshots of the data that would be most useful to users, made when they will impact the system the least.
  • There is a maximum limit of 64 shadow copies per volume, regardless of whether there is free space in the storage area.

Twice a day during weekdays is probably sufficient for most M-F 9-5 operations – once in the morning before anyone shows up and then once at lunch when a good number of people are out of the office. The times here are important – you want shadow copies taken during the times users are using the system to impact the system as little as possible. By taking a shadow copy right before most people are in the office, the number of blocks that have to be copied during the noon shadow copy is reduced and in turn the I/O impact on the system is less. If a shadow copy was only taken at noon, the I/O impact would be for blocks updated over the last 24 hours (and a lot higher) rather than just the last 5-6 hours.

Some sites may need more than two shadow copies per day, maybe an additional copy made at the end of the day, or maybe even four copies a day in some situations. However keep the 64 shadow copies per volume maximum in mind – if you take two copies per day during weekdays you can store almost six and a half weeks of shadow copies (assuming you have the hard drive space), a little over four weeks if you take three copies per day, and a little over three weeks if you take four copies per day. The 64 limit should not be a problem for most situations, but it’s the reason why you don’t want to just take a copy every couple of hours – there would only be a little over a week of shadow copies before they start to get overwritten with newer shadow copies.
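The retention figures above fall out of simple arithmetic against the 64-copy limit:

```python
MAX_COPIES = 64  # per-volume limit, regardless of free storage space

def weeks_of_history(copies_per_weekday: int) -> float:
    """Weeks of shadow copy history with copies taken Monday-Friday."""
    return MAX_COPIES / (copies_per_weekday * 5)

for n in (2, 3, 4):
    print(n, round(weeks_of_history(n), 1))
# prints:
# 2 6.4
# 3 4.3
# 4 3.2
```

The same arithmetic shows why frequent snapshots eat history quickly: a copy every two hours during a 10-hour workday (five per weekday) leaves just over two and a half weeks before the oldest copies are overwritten.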

Well, that about does it for this article – part two will cover the client side of Shadow Copies of Shared Folders.