Migrate Failover Cluster from Windows Server 2008 R2 to 2016
ADFS and SQL Failover Clusters – Migrate Failover Cluster from Windows Server 2008 R2 to 2016
DOCUMENT PURPOSE
A cluster is a set of independent computers that work together to increase the availability of services and applications. The clustered servers (called nodes) are connected by physical cables and by software. If one of the nodes fails, another node begins to provide service through a process known as failover.
You can use the Microsoft® Management Console (MMC) snap-in, Failover Cluster Manager, to validate failover cluster configurations, create and manage failover clusters, and migrate certain settings to a cluster running the Windows Server® 2008 R2 operating system. You can also configure and manage failover clusters by using Windows PowerShell. These Help topics describe methods for using Failover Cluster Manager. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
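For orientation, the sketch below shows the general shape of those PowerShell commands, using the FailoverClusters module that ships with Windows Server 2008 R2. The cluster and node names are placeholders, not values from this environment.

```powershell
# Load the failover clustering cmdlets (required on Windows Server 2008 R2;
# later versions load the module automatically).
Import-Module FailoverClusters

# Inspect a cluster and its nodes. CLUSTER1 is a hypothetical cluster name.
Get-Cluster -Name CLUSTER1 | Format-List *
Get-ClusterNode -Cluster CLUSTER1

# List the clustered services and applications (resource groups).
Get-ClusterGroup -Cluster CLUSTER1
```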
In Windows Server 2008 and Windows Server 2008 R2, the improvements to failover clusters (formerly known as server clusters in Windows Server 2003) are aimed at simplifying clusters, making them more secure, and enhancing cluster stability. Cluster setup and management are easier. Security and networking in clusters have been improved, as has the way a failover cluster communicates with storage.
Note that the Failover Clustering feature is included in server products such as Windows Server 2008 R2 Enterprise and Windows Server 2008 R2 Datacenter. The Failover Clustering feature is not included in Windows Server 2008 R2 Standard or Windows Web Server 2008 R2.
The goal is to enhance user mobility and flexibility for a single sign-on (SSO) solution based on ADFS servers running against a Microsoft SQL Server failover cluster.
The customer's current solution has reached end of life (EOL) and end of support (EOS), and it must be migrated to a new platform that meets all of the customer's needs.
- Checklist: Create a Failover Cluster
- Checklist: Create a Clustered File Server
- Checklist: Create a Clustered Print Server
- Checklist: Create a Clustered Virtual Machine
- Installing the Failover Clustering Feature
- Validating a Failover Cluster Configuration
- Creating a Failover Cluster or Adding a Cluster Node
- Configuring a Service or Application for High Availability
- Migrating Settings to a Failover Cluster Running Windows Server 2008 R2
- Modifying Settings for a Failover Cluster
- Managing a Failover Cluster
- Resources for Failover Clusters
- User Interface: The Failover Cluster Manager Snap-In
2.1.2. Checklist: Create a Failover Cluster
2.1.3. Logical technical diagram components
If you want to provide high availability for users of one or more file shares, see a more complete checklist at Checklist: Create a Clustered File Server.
Step | Task | Reference
1 | Review hardware and infrastructure requirements for a failover cluster. | Understanding Requirements for Failover Clusters
2 | Install the Failover Clustering feature on every server that will be in the cluster. | Install the Failover Clustering Feature
3 | Connect the networks and storage that the cluster will use. | Prepare Hardware Before Validating a Failover Cluster
4 | Run the Validate a Configuration Wizard on all the servers that you want to cluster, to confirm that the hardware and hardware settings of the servers, network, and storage are compatible with failover clustering. If necessary, adjust hardware or hardware settings and rerun the wizard until all tests pass (required for support). | Validate a New or Existing Failover Cluster
5 | Create the failover cluster. | Create a New Failover Cluster
If you want to provide high availability for users of one or more print servers, see a more complete checklist at Checklist: Create a Clustered Print Server.
After you have created a failover cluster, the next step is usually to configure the cluster to support a particular service or application. For more information, see Configuring a Service or Application for High Availability.
3. Checklist: Create a Clustered File Server
Step | Reference
On every server that will be in the cluster, open Server Manager, click Add roles, and then use the Add Roles Wizard to add the File Services role and any role services that are needed. | Help in Server Manager
Review hardware and infrastructure requirements for a failover cluster. | Understanding Requirements for Failover Clusters
Install the Failover Clustering feature on every server that will be in the cluster. | Install the Failover Clustering Feature
Connect the networks and storage that the cluster will use. | Prepare Hardware Before Validating a Failover Cluster
Run the Validate a Configuration Wizard on all the servers that you want to cluster, to confirm that the hardware and hardware settings of the servers, network, and storage are compatible with failover clustering. If necessary, adjust hardware or hardware settings and rerun the wizard until all tests pass (required for support). | Validate a New or Existing Failover Cluster
Create the failover cluster. | Create a New Failover Cluster
If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network intended only for iSCSI or only for backup), then configure that network so that it does not allow cluster communication. | Modify Network Settings for a Failover Cluster
Run the High-Availability Wizard and specify the File Server role, a name for the clustered file server, and IP address information that is not automatically supplied by your DHCP settings. Also specify the storage volume or volumes. | Configure a Service or Application for High Availability
Add shared folders to the clustered file server as needed. | Create a Shared Folder in a Clustered File Server
Test the failover of the clustered file server. | Test the Failover of a Clustered Service or Application
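As a rough PowerShell equivalent of the High-Availability Wizard step in this checklist, the sketch below creates a clustered file server with the FailoverClusters module. The role name, disk name, and IP address are illustrative placeholders, not values for this environment.

```powershell
Import-Module FailoverClusters

# Create a highly available file server on existing cluster storage.
# FSCLUS01, "Cluster Disk 2", and 10.0.0.60 are hypothetical values.
Add-ClusterFileServerRole -Name FSCLUS01 -Storage "Cluster Disk 2" -StaticAddress 10.0.0.60
```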
3.1.1.1. Additional references
- Overview of Failover Clusters
- Modify the Failover Settings for a Clustered Service or Application
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
4. Checklist: Create a Clustered Print Server
Step | Reference
On every server that will be in the cluster, open Server Manager, click Add roles, and then use the Add Roles Wizard to add the Print and Document Services role and any role services that are needed. | Help in Server Manager
Review hardware and infrastructure requirements for a failover cluster. | Understanding Requirements for Failover Clusters
Install the Failover Clustering feature on every server that will be in the cluster. | Install the Failover Clustering Feature
Connect the networks and storage that the cluster will use. | Prepare Hardware Before Validating a Failover Cluster
Run the Validate a Configuration Wizard on all the servers that you want to cluster, to confirm that the hardware and hardware settings of the servers, network, and storage are compatible with failover clustering. If necessary, adjust hardware or hardware settings and rerun the wizard until all tests pass (required for support). | Validate a New or Existing Failover Cluster
Create the failover cluster. | Create a New Failover Cluster
If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network intended only for iSCSI or only for backup), then configure that network so that it does not allow cluster communication. | Modify Network Settings for a Failover Cluster
Run the High-Availability Wizard and specify the Print Server role, a name for the clustered print server, and IP address information that is not automatically supplied by your DHCP settings. | Configure a Service or Application for High Availability
Configure print settings such as the setting that specifies the printer. | Configure the Print Settings for a Clustered Print Server
Test the failover of the clustered print server. | Test the Failover of a Clustered Service or Application
From a client, print a test page on the clustered print server. | Help for the operating system on the client (for specifying the path to the clustered print server); Help from the application from which you print the test page
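A comparable PowerShell sketch for the High-Availability Wizard step in this checklist is shown below, again with placeholder names; note that clustered print server support is specific to these older operating system versions.

```powershell
Import-Module FailoverClusters

# Create a highly available print server. PRTCLUS01, the disk name,
# and the IP address are hypothetical values.
Add-ClusterPrintServerRole -Name PRTCLUS01 -Storage "Cluster Disk 3" -StaticAddress 10.0.0.61
```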
4.1.1.1. Additional references
- Overview of Failover Clusters
- Modify the Failover Settings for a Clustered Service or Application
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
5. Checklist: Create a Clustered Virtual Machine
Step | Reference
Review hardware and infrastructure requirements for a failover cluster. | Understanding Requirements for Failover Clusters
Review hardware requirements for Hyper-V. | Hardware requirements for Hyper-V (https://go.microsoft.com/fwlink/?LinkId=137803)
Review concepts related to Hyper-V and virtual machines in the context of a cluster. If you want to use Cluster Shared Volumes for your virtual machines, review concepts related to Cluster Shared Volumes. | Understanding Hyper-V and Virtual Machines in the Context of a Cluster
Install Hyper-V (role) and Failover Clustering (feature) on every server that will be in the cluster. | Install Hyper-V (https://go.microsoft.com/fwlink/?LinkId=137804)
Connect the networks and storage that the cluster will use. In Hyper-V Manager, create virtual networks that will be available for virtual machines to use. | Prepare Hardware Before Validating a Failover Cluster; Managing virtual networks (https://go.microsoft.com/fwlink/?LinkId=139566)
Run the Validate a Configuration Wizard on all the servers that you want to cluster, to confirm that the hardware and hardware settings of the servers, network, and storage are compatible with failover clustering. If necessary, adjust hardware or hardware settings and rerun the wizard until all tests pass (required for support). | Validate a New or Existing Failover Cluster
Create the failover cluster. | Create a New Failover Cluster
If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network intended only for iSCSI or only for backup), then configure that network so that it does not allow cluster communication. | Modify Network Settings for a Failover Cluster
If you want to use Cluster Shared Volumes and you have not already enabled this feature, enable Cluster Shared Volumes. | Enable Cluster Shared Volumes in a Failover Cluster
Create a virtual machine and configure it for high availability, being sure to select Store the virtual machine in a different location and specify appropriate clustered storage. If you have Cluster Shared Volumes, for the storage, specify a Cluster Shared Volume (a volume that appears to be on the system drive of the node, under the \ClusterStorage folder). Otherwise, create the virtual machine on the node that currently owns the clustered storage for the virtual machine, and specify the location of that storage. Note that none of the files used by a clustered virtual machine can be on a local disk; they must all be on clustered storage. | Configure a Virtual Machine for High Availability
If you have not already installed the operating system for the virtual machine, install it now. If there are Hyper-V integration services for that operating system, install them in the operating system that runs in the virtual machine. | Install a guest operating system (https://go.microsoft.com/fwlink/?LinkId=137806)
Reconfigure the automatic start action for the virtual machine so that it does nothing when the physical computer starts. | Configure virtual machines (https://go.microsoft.com/fwlink/?LinkId=137807)
Test the failover of the clustered virtual machine. | Test the Failover of a Clustered Virtual Machine
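If the virtual machine already exists and its files are on clustered storage, a minimal PowerShell sketch of the configure-for-high-availability step might look like the following ("VM1" and CLUSTER1 are placeholder names):

```powershell
Import-Module FailoverClusters

# Make an existing Hyper-V virtual machine highly available.
# The VM's files must already reside on clustered storage.
Add-ClusterVirtualMachineRole -VMName "VM1" -Cluster CLUSTER1
```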
5.1.1.1. Additional references
- Refresh the Configuration of a Virtual Machine
- Overview of Failover Clusters
- Modify the Failover Settings for a Clustered Service or Application
- Configure a Virtual Machine for High Availability
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
6. Technical infrastructure
7. Installing the Failover Clustering Feature
Before you can create a failover cluster, you must install the Failover Clustering feature on all servers that you want to include in the cluster. This section provides instructions for installing this feature.
Note that the Failover Clustering feature is included in server products such as Windows Server 2008 R2 Enterprise and Windows Server 2008 R2 Datacenter. The Failover Clustering feature is not included in Windows Server 2008 R2 Standard or Windows Web Server 2008 R2.
Figure 3. Infrastructure topology
8. Install the Failover Clustering Feature
You can install the Failover Clustering feature by using the Add Features command from Initial Configuration Tasks or from Server Manager.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To install the Failover Clustering feature:
1. If you recently installed Windows Server 2008 R2 on the server and the Initial Configuration Tasks interface is displayed, under Customize This Server, click Add features. (Skip to step 3.)
2. If Initial Configuration Tasks is not displayed, add the feature through Server Manager:
   - If Server Manager is already running, click Features. Then under Features Summary, click Add Features.
   - If Server Manager is not running, click Start, click Administrative Tools, click Server Manager, and then, if prompted for permission to continue, click Continue. Then, under Features Summary, click Add Features.
3. In the Add Features Wizard, click Failover Clustering and then click Install.
4. When the wizard finishes, close it.
5. Repeat the process for each server that you want to include in the cluster.
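The same feature can also be installed from PowerShell on Windows Server 2008 R2 by using the ServerManager module, the command-line counterpart of the Add Features Wizard:

```powershell
# Run in an elevated PowerShell session on each prospective cluster node.
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering
```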
8.1.1.1. Additional references
9. Validating a Failover Cluster Configuration
Before you create a failover cluster, we strongly recommend that you validate your configuration, that is, that you run all tests in the Validate a Configuration Wizard. By running the tests, you can confirm that your hardware and settings are compatible with failover clustering.
You can run the tests on a set of servers and storage devices either before or after you have configured them as a failover cluster.
Important: Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.”
With the Validate a Configuration Wizard, you can run the complete set of configuration tests or a subset of the tests. The wizard provides a report that shows the result of each test that was run.
- Understanding Requirements for Failover Clusters
- Understanding Microsoft Support of Cluster Solutions
- Understanding Cluster Validation Tests
- Prepare Hardware Before Validating a Failover Cluster
- Validate a New or Existing Failover Cluster
9.1.1.1. Additional references
- Use Validation Tests for Troubleshooting a Failover Cluster
- For troubleshooting information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137836.
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
10. Understanding Requirements for Failover Clusters
A failover cluster must meet certain requirements for hardware, software, and network infrastructure, and it requires the administrator to use an account with the appropriate domain permissions. The following sections provide information about these requirements.
- Hardware requirements for a failover cluster
- Software requirements for a failover cluster
- Network infrastructure and domain account requirements for a failover cluster
For additional information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
10.1. Hardware requirements for a failover cluster
You need the following hardware in a failover cluster:
- Servers: We recommend that you use a set of matching computers that contain the same or similar components.
Important: Microsoft supports a failover cluster solution only if all the hardware components are marked as “Certified for Windows Server 2008 R2.” In addition, the complete configuration (servers, network, and storage) must pass all tests in the Validate a Configuration Wizard, which is included in the Failover Cluster Manager snap-in.
- For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145. For information about the maximum number of servers that you can have in a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=139146.
- Network adapters and cable (for network communication): The network hardware, like other components in the failover cluster solution, must be marked as “Certified for Windows Server 2008 R2.” If you use iSCSI, your network adapters should be dedicated to either network communication or iSCSI, not both. In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
Note: If you connect cluster nodes with a single network, the network will pass the redundancy requirement in the Validate a Configuration Wizard. However, the report from the wizard will include a warning that the network should not have single points of failure.
- For more details about the network configuration required for a failover cluster, see Network infrastructure and domain account requirements for a failover cluster, later in this topic.
- Device controllers or appropriate adapters for the storage:
- For Serial Attached SCSI or Fibre Channel: If you are using Serial Attached SCSI or Fibre Channel, in all clustered servers, the mass-storage device controllers that are dedicated to the cluster storage should be identical. They should also use the same firmware version.
Note: With Windows Server 2008 R2, you cannot use parallel SCSI to connect the storage to the clustered servers. This was also true for Windows Server 2008.
- For iSCSI: If you are using iSCSI, each clustered server should have one or more network adapters or host bus adapters that are dedicated to the cluster storage. The network you use for iSCSI should not be used for network communication. In all clustered servers, the network adapters you use to connect to the iSCSI storage target should be identical, and we recommend that you use Gigabit Ethernet or higher. For iSCSI, you cannot use teamed network adapters, because they are not supported with iSCSI. For more information about iSCSI, see the iSCSI FAQ on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=61375).
- Storage: You must use shared storage that is compatible with Windows Server 2008 R2. In most cases, the storage should contain multiple, separate disks (LUNs) that are configured at the hardware level. For some clusters, one disk functions as the disk witness (described at the end of this subsection). Other disks contain the files required for the clustered services or applications. Storage requirements include the following:
- To use the native disk support included in failover clustering, use basic disks, not dynamic disks.
- We recommend that you format the partitions with NTFS. If you have a disk witness or use Cluster Shared Volumes, the partition for each of those must be NTFS. For Cluster Shared Volumes, there are no special requirements other than the requirement for NTFS. For more information about Cluster Shared Volumes, see Understanding Cluster Shared Volumes in a Failover Cluster.
- For the partition style of the disk, you can use either master boot record (MBR) or GUID partition table (GPT).
A disk witness is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. A failover cluster has a disk witness only if this is specified as part of the quorum configuration. For more information, see Understanding Quorum Configurations in a Failover Cluster.
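For reference, the quorum configuration, including any disk witness, can be reviewed and changed from PowerShell. In the sketch below, the cluster and disk names are placeholders.

```powershell
Import-Module FailoverClusters

# Show the current quorum configuration, including any disk witness.
Get-ClusterQuorum -Cluster CLUSTER1

# Example: switch to Node and Disk Majority, using a specific disk as witness.
Set-ClusterQuorum -Cluster CLUSTER1 -NodeAndDiskMajority "Cluster Disk 1"
```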
10.1.1. Deploying storage area networks with failover clusters
When deploying a storage area network (SAN) with a failover cluster, follow these guidelines:
- Confirm compatibility of the storage: Confirm with manufacturers and vendors that the storage, including drivers, firmware, and software used for the storage, are compatible with failover clusters in Windows Server 2008 R2.
Important: Storage that was compatible with server clusters in Windows Server 2003 might not be compatible with failover clusters in Windows Server 2008 R2. Contact your vendor to ensure that your storage is compatible with failover clusters in Windows Server 2008 R2.
- Failover clusters include the following new requirements for storage:
- Improvements in failover clusters (as compared to server clusters in Windows Server 2003) require that the storage respond correctly to specific SCSI commands. To confirm that your storage is compatible, run the Validate a Configuration Wizard. In addition, you can contact the storage vendor.
- The miniport driver used for the storage must work with the Microsoft Storport storage driver.
- Isolate storage devices, one cluster per device: Servers from different clusters must not be able to access the same storage devices. In most cases, a LUN used for one set of cluster servers should be isolated from all other servers through LUN masking or zoning.
- Consider using multipath I/O software: In a highly available storage fabric, you can deploy failover clusters with multiple host bus adapters by using multipath I/O software. This provides the highest level of redundancy and availability. For Windows Server 2008 R2, your multipath solution must be based on Microsoft Multipath I/O (MPIO). Your hardware vendor will usually supply an MPIO device-specific module (DSM) for your hardware, although Windows Server 2008 R2 includes one or more DSMs as part of the operating system.
Important: Host bus adapters and multipath I/O software can be very version sensitive. If you are implementing a multipath solution for your cluster, you should work closely with your hardware vendor to choose the correct adapters, firmware, and software for Windows Server 2008 R2.
10.2. Software requirements for a failover cluster
All the servers in a failover cluster must run either the x64-based version or the Itanium architecture-based version of Windows Server 2008 R2; nodes within a single failover cluster cannot run different versions.
All the servers should have the same software updates (patches) and service packs.
The Failover Clustering feature is included in server products such as Windows Server 2008 R2 Enterprise and Windows Server 2008 R2 Datacenter. The Failover Clustering feature is not included in Windows Server 2008 R2 Standard or Windows Web Server 2008 R2.
10.3. Network infrastructure and domain account requirements for a failover cluster
You will need the following network infrastructure for a failover cluster and an administrative account with the following domain permissions:
- Network settings and IP addresses: When you use identical network adapters for a network, also use identical communication settings on those adapters (for example, Speed, Duplex Mode, Flow Control, and Media Type). Also, compare settings between the network adapter and the switch it connects to, and make sure that no settings are in conflict. If you have private networks that are not routed to the rest of your network infrastructure, ensure that each of these private networks uses a unique subnet. This is necessary even if you give each network adapter a unique IP address. For example, if you have two cluster nodes in a central office that uses one physical network, and two more nodes in a branch office that uses a separate physical network, do not specify 10.0.0.0/24 for both networks, even if you give each adapter a unique IP address. For more information about the network adapters, see Hardware requirements for a failover cluster, earlier in this topic.
- DNS: The servers in the cluster must be using Domain Name System (DNS) for name resolution. The DNS dynamic update protocol can be used.
- Domain role: All servers in the cluster must be in the same Active Directory domain. As a best practice, all clustered servers should have the same domain role (either member server or domain controller). The recommended role is member server.
- Domain controllers: We recommend that your clustered servers be member servers. If they are, other servers will be the domain controllers in the domain that contains your failover cluster.
- Clients: There are no specific requirements for clients, other than the obvious requirements for connectivity and compatibility: the clients must be able to connect to the clustered servers, and they must run software that is compatible with the services offered by the clustered servers.
- Account for administering the cluster: When you first create a cluster or add servers to it, you must be logged on to the domain with an account that has administrator rights and permissions on all servers in that cluster. The account does not need to be a Domain Admins account—it can be a Domain Users account that is in the Administrators group on each clustered server. In addition, if the account is not a Domain Admins account, the account (or the group that the account is a member of) must be delegated Create Computer Objects permission in the domain. For more information, see Failover Cluster Step-by-Step Guide: Configuring Accounts in Active Directory (https://go.microsoft.com/fwlink/?LinkId=139147).
Note: There is a change in the way the Cluster service runs in Windows Server 2008 R2, as compared to Windows Server 2003. In Windows Server 2008 R2, there is no Cluster service account. Instead, the Cluster service automatically runs in a special context that provides the specific permissions and privileges necessary for the service (similar to the local system context, but with reduced privileges).
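Before clustering, the adapter settings and subnets called for above can be compared across the prospective nodes with a quick WMI query. This is only a convenience sketch; NODE1 is a placeholder server name, and it assumes WMI/DCOM connectivity between the servers.

```powershell
# Compare IP configuration across prospective nodes (repeat per server).
Get-WmiObject -Class Win32_NetworkAdapterConfiguration -ComputerName NODE1 |
    Where-Object { $_.IPEnabled } |
    Select-Object Description, IPAddress, IPSubnet, DHCPEnabled, DNSServerSearchOrder
```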
10.3.1.1. Additional references
- Validating a Failover Cluster Configuration
- Overview of Failover Clusters
- For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
11. Understanding Microsoft Support of Cluster Solutions
Microsoft supports a failover cluster solution only if it meets the following requirements:
- All hardware components in the failover cluster solution are marked as “Certified for Windows Server 2008 R2.”
- The complete cluster configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard.
- The hardware manufacturers’ recommendations for firmware updates and software updates (patches) have been followed. Usually, this means that the latest firmware and software updates have been applied. Occasionally, a manufacturer might recommend specific updates other than the latest updates.
11.1.1.1. Additional references
- Understanding Requirements for Failover Clusters
- Validating a Failover Cluster Configuration
- For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
12. Understanding Cluster Validation Tests
With the Validate a Configuration Wizard, you can run tests to confirm that your hardware and hardware settings are compatible with failover clustering. You can run the complete set of configuration tests or a subset of the tests.
We recommend that you run the tests on your set of servers and storage devices before you configure them as a failover cluster (create a cluster from them). You can also run the tests after you create a cluster.
Note that the Failover Clustering feature must be installed on all the servers that you want to include in the tests.
Important: Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.”
The Validate a Configuration Wizard includes five types of tests:
- Cluster Configuration tests. For an existing cluster, provide a simple way to review cluster settings and determine whether they are properly configured. These tests run only on existing clusters. For more information, see Understanding Cluster Validation Tests: Cluster Configuration.
- Inventory tests. Provide an inventory of the hardware, software, and settings (such as network settings) on the servers, and information about the storage. For more information, see Understanding Cluster Validation Tests: Inventory.
- Network tests. Validate that your networks are set up correctly for clustering. For more information, see Understanding Cluster Validation Tests: Network.
- Storage tests. Validate that the storage on which the failover cluster depends is behaving correctly and supports the required functions of the cluster. For more information, see Understanding Cluster Validation Tests: Storage.
- System Configuration tests. Validate that system software and configuration settings are compatible across servers. For more information, see Understanding Cluster Validation Tests: System Configuration.
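From PowerShell, the same tests can be run in full or as a subset through Test-Cluster; in this sketch the node names are placeholders, and the -Include values are category names matching the test types listed above.

```powershell
Import-Module FailoverClusters

# Run the complete validation suite against servers that are not yet clustered.
Test-Cluster -Node NODE1, NODE2

# Run only selected categories, for example the Inventory and Network tests.
Test-Cluster -Node NODE1, NODE2 -Include "Inventory", "Network"
```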
13. Understanding Cluster Validation Tests: Cluster Configuration
Cluster configuration tests make it easier to review the cluster settings and determine whether they are properly configured. The cluster configuration tests run only on existing clusters (not servers for which a cluster is planned). The tests include the following:
- List Cluster Core Groups: This test lists information about Available Storage and about the group of resources used by the cluster itself.
- List Cluster Network Information: This test lists cluster-specific network settings that are stored in the cluster configuration. For example, it lists settings that affect how often heartbeat signals are sent between servers on the same subnet, and how often they are sent between servers on different subnets.
- List Cluster Resources: This test lists details for all the resources that are configured in the cluster.
- List Cluster Volumes: This test lists information about volumes in the cluster storage.
- List Clustered Services and Applications: This test lists the services and applications configured to run in the cluster.
- Validate Quorum Configuration: This test validates that the quorum configuration is optimal for the cluster. For more information about quorum configurations, see Understanding Quorum Configurations in a Failover Cluster.
- Validate Resource Status: This test validates that cluster resources are online, and lists the cluster resources that are running in separate resource monitors. If a resource is running in a separate resource monitor, it is usually because the resource failed and the Cluster service began running it in a separate resource monitor (to make it less likely to affect other resources if it fails again).
- Validate Service Principal Name: This test validates that a Service Principal Name exists for all resources that have Kerberos enabled.
- Validate Volume Consistency: This test checks for volumes that are flagged as inconsistent (“dirty”) and, if any are found, provides a reminder that running chkdsk is recommended.
14. Understanding Cluster Validation Tests: Inventory
Inventory tests provide lists of information about the hardware, software, and settings on each of the servers you are testing. You can use inventory tests alone (without other tests in the Validate a Cluster Configuration Wizard) to review or record the configuration of hardware (for example, to review that the software updates on each server are identical after you perform scheduled maintenance).
You can run the following inventory tests by using the Validate a Configuration Wizard:
- List BIOS Information
- List Environment Variables: Examples of environment variables are the number of processors, the operating system path, and the location of temporary folders.
- List Fibre Channel Host Bus Adapters
- List iSCSI Host Bus Adapters
- List Memory Information
- List Operating System Information
- List Plug and Play Devices
- List Running Processes
- List SAS Host Bus Adapters: This test lists the host bus adapters for Serial Attached SCSI (SAS).
- List Services Information: As a general best practice for servers, especially cluster nodes, you should run only necessary services.
- List Software Updates: You can use this test to help correct issues that are uncovered by one of the System Configuration tests, Validate Software Update Levels. For more information, see Understanding Cluster Validation Tests: System Configuration.
- List System Drivers
- List System Information: The system information includes the following:
- Computer name
- Manufacturer, model, and type
- Account name of the person who ran the validation tests
- Domain that the computer is in
- Time zone and daylight time setting (determines whether the clock is adjusted for daylight time changes)
- Number of processors
- List Unsigned Drivers: You can use this test to help correct issues that are uncovered by one of the System Configuration tests, Validate All Drivers Signed. For more information, see Understanding Cluster Validation Tests: System Configuration.
15. Understanding Cluster Validation Tests: Network
Network tests help you confirm that the correct network infrastructure is in place for a failover cluster.
15.1. Correcting issues uncovered by network tests
For information about how to correct the issues that are revealed by network tests, see:
- Prepare Hardware Before Validating a Failover Cluster
- Understanding Requirements for Failover Clusters
For additional information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
15.2. Network tests in the Validate a Configuration Wizard
You can run the following network tests by using the Validate a Configuration Wizard:
- List Network Binding Order: This test lists the order in which networks are bound to the adapters on each node.
- Validate Cluster Network Configuration includes the following tasks:
- Lists the cluster networks, that is, the network topology as seen from the perspective of the cluster.
- Validates that, for a particular cluster network, all network adapters are provided with IP addresses in the same way (that is, all use static IP addresses or all use DHCP).
- Validates that, for a particular cluster network, all network adapters use the same version of IP (that is, all use IPv4, all use IPv6, or all use both IPv4 and IPv6).
- Validate IP Configuration includes the following tasks:
- Lists the IP configuration details.
- Validates that IP addresses are unique in the cluster (no duplication).
- Checks the number of network adapters on each tested server. If only one adapter is found, the report provides a warning about avoiding single points of failure in the network infrastructure that connects clustered servers. There are multiple ways of avoiding single points of failure. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
- Validates that no tested servers have multiple adapters on the same IP subnet.
- Validates that all tested servers use the same version of IP (that is, all use IPv4, all use IPv6, or all use both IPv4 and IPv6).
- Validate Multiple Subnet Properties: This test validates that settings related to DNS are configured appropriately for clusters using multiple subnets to span multiple sites.
- Validate Network Communication: This test validates that tested servers can communicate with acceptable latency on all networks. The test also checks to see if there are redundant communication paths between all servers. Communication between the nodes of a cluster enables the cluster to detect node failures and status changes and to manage the cluster as a single entity.
- Validate Windows Firewall Configuration: This test validates that Windows Firewall is configured correctly for failover clustering on the tested servers.
16. Understanding Cluster Validation Tests: Storage
Storage tests analyze the storage to determine whether it will work correctly for a failover cluster running Windows Server 2008 R2.
16.1. Correcting issues uncovered by storage tests
If a storage test indicates that your storage or your storage configuration will not support a failover cluster, review the following suggestions:
- Contact your storage vendor and use the utilities provided with your cluster storage to gather information about the configuration. (In unusual cases, your storage vendor might indicate that your cluster solution is supported even though this is not reflected in the storage tests. For example, your cluster solution might have been specifically designed to work without shared storage.)
- Review results from multiple tests in the Validate a Configuration Wizard, such as the List Host Bus Adapters test (see the topic Understanding Cluster Validation Tests: Inventory) and two tests that are described in this topic, List All Disks and List Cluster Disks.
- Look for a storage validation test that is related to the one that uncovered the issue. For example, if Validate Multiple Arbitration uncovered an issue, the related test, Validate Disk Arbitration, might provide useful information.
- Review the storage requirements in Understanding Requirements for Failover Clusters. For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
- Review the documentation for your storage, or contact the manufacturer.
16.2. Storage tests in the Validate a Configuration Wizard
You can run the following storage tests by using the Validate a Configuration Wizard:
- List All Disks
- List Potential Cluster Disks
- Validate Disk Access Latency
- Validate Disk Arbitration
- Validate Disk Failover
- Validate File System
- Validate Microsoft MPIO-Based Disks
- Validate Multiple Arbitration
- Validate SCSI Device Vital Product Data (VPD)
- Validate SCSI-3 Persistent Reservation
- Validate Simultaneous Failover
16.2.1.1. List All Disks
This test lists all disks that are visible to one or more tested servers. The test lists:
- Disks that can support clustering and can be accessed by all the servers.
- Disks on an individual server.
The following information is listed for each disk:
- Disk number
- Unique identifier
- Bus type
- Stack type
- Disk address (where applicable), including the port, path, target identifier (TID), and Logical Unit Number (LUN)
- Adapter description
- Disk characteristics such as the partition style and partition type
You can use this test to help diagnose issues uncovered by other storage tests described in this topic.
16.2.1.2. List Potential Cluster Disks
This test lists disks that can support clustering and are visible to all tested servers. To support clustering, the disk must be connected through Serial Attached SCSI (SAS), iSCSI, or Fibre Channel. In addition, the test validates that multipath I/O is working correctly, meaning that each of the disks is seen as one disk, not two.
16.2.1.2.1. Types of disks not listed by the test
This test lists only disks that can be used for clustering. The disks that it lists must:
- Be connected through Serial Attached SCSI (SAS), iSCSI, or Fibre Channel.
- Be visible to all servers in the cluster.
- Be accessed through a host bus adapter that supports clustering.
- Not be a boot volume or system volume.
- Not be used for paging files, hibernation, or dump files. (Dump files record the contents of memory when the system stops unexpectedly.)
16.2.1.3. Validate Disk Access Latency
This test validates that the latency for disk read and write operations is within an acceptable limit for a failover cluster. If disk read and write operations take too long, one possible result is that cluster time-outs might be triggered. Another possible result is that the application attempting to access the disk might appear to have failed, and the cluster might initiate a needless failover.
16.2.1.4. Validate Disk Arbitration
This test validates that:
- Each of the clustered servers can use the arbitration process to become the owner of each of the cluster disks.
- When a particular server owns a disk, if one or more other servers arbitrate for that disk, the original owner retains ownership.
If a clustered server cannot become the owner of a disk, or it cannot retain ownership when other clustered servers arbitrate for the disk, various issues might result:
- The disk could have no owner and therefore be unavailable.
- Two owners could write to the disk in an uncoordinated way, causing the disk to become corrupted. (Failover cluster servers are designed to coordinate all write operations in a way that avoids disk corruption.)
- The disk could change owners every time arbitration occurs, which would interfere with disk availability.
16.2.1.5. Validate Disk Failover
This test validates that disk failover works correctly in the cluster. Specifically, the test validates that when a disk owned by one clustered server is failed over, the server that takes ownership of the disk can read it. The test also validates that information written to the disk before the failover is still the same after the failover.
If disk failover occurs but the server that takes ownership of a disk cannot read it, the cluster cannot maintain availability of the disk. If information written to the disk is changed during the process of failover, it could cause issues for users or software that require this information. In either case, if the affected disk is a disk witness (a disk that stores cluster configuration data and participates in quorum), such issues could cause the cluster to lose quorum and shut down.
If this test reveals that disk failover does not work correctly, the results of the related storage tests described in this topic might help you identify the cause of the issue.
16.2.1.6. Validate File System
This test validates that the file system on disks in shared storage is supported by failover clusters.
16.2.1.7. Validate Microsoft MPIO-Based Disks
This test validates that multi-path disks (Microsoft MPIO-based disks) have been configured correctly for a failover cluster.
16.2.1.8. Validate Multiple Arbitration
This test validates that when multiple clustered servers arbitrate for a cluster disk, only one server obtains ownership. The process of disk arbitration helps ensure that clustered servers perform all write operations in a coordinated way, avoiding disk corruption.
If this test reveals that multiple clustered servers can obtain ownership of a cluster disk through disk arbitration, the results of the related test, Validate Disk Arbitration, might help you identify the cause of the issue.
16.2.1.9. Validate SCSI Device Vital Product Data (VPD)
This test validates that the storage supports necessary SCSI inquiry data (VPD descriptors) and that they are unique.
16.2.1.10. Validate SCSI-3 Persistent Reservation
This test validates that the cluster storage uses the more recent (SCSI-3 standard) Persistent Reserve commands (which are different from the older SCSI-2 standard reserve/release commands). The Persistent Reserve commands avoid SCSI bus resets, which means they are much less disruptive than the older reserve/release commands. Therefore, a failover cluster can be more responsive in a variety of situations, as compared to a cluster running an earlier version of the operating system. In addition, disks are never left in an unprotected state, which lowers the risk of volume corruption.
16.2.1.11. Validate Simultaneous Failover
This test validates that simultaneous disk failovers work correctly in the cluster. Specifically, the test validates that even when multiple disk failovers occur at the same time, any clustered server that takes ownership of a disk can read it. The test also validates that information written to each disk before a failover is still the same after the failover.
If disk failover occurs but the server that takes ownership of a disk cannot read it, the cluster cannot maintain availability of the disk. If information written to the disk is changed during the process of failover, it could cause issues for users or software that require this information. In either case, if the affected disk is a disk witness (a disk that stores cluster configuration data and participates in quorum), such issues could cause the cluster to lose quorum and shut down.
If this test reveals that simultaneous disk failovers do not work correctly, the results of the related storage tests described in this topic might help you identify the cause of the issue.
17. Understanding Cluster Validation Tests: System Configuration
System configuration tests analyze the selected servers to determine whether they are properly configured for working together in a failover cluster. The system configuration tests include the following:
- Validate Active Directory Configuration: This test validates that each tested server is in the same domain and organizational unit. It also validates that all tested servers are domain controllers or that all are member servers. To change the domain role of a server, use the Active Directory® Domain Services Installation Wizard.
- Validate All Drivers Signed: This test validates that all tested servers contain only signed drivers. If an unsigned driver is detected, the test is not considered a failure, but a warning is issued.The purpose of signing drivers is to tell you whether the drivers on your system are original, unaltered files that either came with the operating system or were supplied by a vendor.You can get a list of system drivers and a list of unsigned drivers by running the corresponding inventory tasks. For more information, see Understanding Cluster Validation Tests: Inventory.
- Validate Cluster Service and Driver Settings: This test validates the startup settings used by services and drivers, such as the Cluster service, NetFT.sys, and Clusdisk.sys.
- Validate Memory Dump Settings: This test validates that none of the nodes currently requires a reboot (as part of a software update) and that each node is configured to capture a memory dump if it stops running.
- Validate Operating System Installation Option: This test validates that the operating systems on the servers use the same installation option (full installation or Server Core installation).
- Validate Required Services: This test validates that the services that are required for failover clustering are running on each tested server and are configured to start automatically whenever the server is restarted.
- Validate Same Processor Architecture: This test validates that all tested servers have the same architecture. A failover cluster is supported only if the systems in it are all x64-based or all Itanium architecture-based.
- Validate Service Pack Levels: This test validates that all tested servers have the same service packs. A failover cluster can run even if some servers have different service packs than others. However, servers with different service packs might behave differently from each other, with unexpected results. We recommend that all servers in the failover cluster have the same service packs.
- Validate Software Update Levels: This test validates that all tested servers have the same software updates. A failover cluster can run even if some servers have different updates than others. However, servers with different software updates might behave differently from each other, with unexpected results. We recommend that all servers in the failover cluster have the same software update levels.
- Validate System Drive Variable: This test validates that all nodes use the same letter for the system drive environment variable.
18. Prepare Hardware Before Validating a Failover Cluster
Before running the Validate a Configuration Wizard for a failover cluster, you should make preparations such as connecting the networks and storage needed by the cluster. This topic briefly describes these preparations. For a full list of the requirements for a failover cluster, see Understanding Requirements for Failover Clusters.
In most cases, membership in the local Administrators group on each server, or equivalent, is the minimum required to complete this procedure. This is because the procedure includes steps for configuring networks and storage. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To prepare hardware before validating a failover cluster:
- Confirm that your entire cluster solution, including drivers, is compatible with Windows Server 2008 R2 by checking the hardware compatibility information on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=139145).
Important: Microsoft supports a failover cluster solution only if all the hardware components are marked as “Certified for Windows Server 2008 R2.” In addition, the complete configuration (servers, network, and storage) must pass all tests in the Validate a Configuration Wizard, which is included in the Failover Cluster Manager snap-in.
- We recommend that you use a set of matching computers that contain the same or similar components.
- For the cluster networks:
- Review the details about networks in Understanding Requirements for Failover Clusters.
- Connect and configure the networks that the servers in the cluster will use.
Note: One option for configuring cluster networks is to create a preliminary network configuration, and then run the Validate a Configuration Wizard with only the Network tests selected (avoid selecting Storage tests). When only the Network tests are run, the process does not take a long time. Using the validation report, you can make any corrections still needed in the network configuration. (Later, after the storage is configured, you should run the wizard with all tests.)
- For the storage:
- Review the details about storage in Understanding Requirements for Failover Clusters.
- Follow the manufacturer’s instructions for physically connecting the servers to the storage.
- Ensure that the disks (LUNs) that you want to use in the cluster are exposed to the servers you will cluster (and only those servers). You can use any of the following interfaces to expose disks or LUNs:
- Microsoft Storage Manager for SANs (part of the operating system in Windows Server 2008 R2). To use this interface, you need to contact the manufacturer of your storage for a Virtual Disk Service (VDS) provider package that is designed for your storage.
- If you are using iSCSI, an appropriate iSCSI interface.
- The interface provided by the manufacturer of the storage.
- If you have purchased software that controls the format or function of the disk, obtain instructions from the vendor about how to use that software with Windows Server 2008 R2.
- On one of the servers that you want to cluster, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.) In Disk Management, confirm that the cluster disks are visible.
- If you want to have a storage volume larger than 2 terabytes, and you are using the Windows interface to control the format of the disk, convert that disk now to the partition style called GUID partition table (GPT). To do this, back up any data on the disk, delete all volumes on the disk and then, in Disk Management, right-click the disk (not a partition) and click Convert to GPT Disk.
For volumes smaller than 2 terabytes, instead of using GPT, you can use the partition style called master boot record (MBR).
Important: You can use either MBR or GPT for a disk used by a failover cluster, but you cannot use a disk that you converted to dynamic by using Disk Management.
- Check the format of any exposed volume or LUN. We recommend NTFS for the format (for the disk witness or for Cluster Shared Volumes, you must use NTFS).
- As appropriate, make sure that there is connectivity from the servers to be clustered to any nonclustered domain controllers. (Connectivity to clients is not necessary for validation, and it can be established later.)
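As a final sanity check before running validation, disk visibility from each prospective node can be confirmed remotely with WMI. NODE1 is a placeholder, and this sketch assumes WMI connectivity between the servers.

```powershell
# List the disks visible to a prospective node, including bus type and size.
Get-WmiObject -Class Win32_DiskDrive -ComputerName NODE1 |
    Select-Object Index, Model, InterfaceType, Size
```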
19. Validate a New or Existing Failover Cluster
Before creating a failover cluster, we recommend that you validate your hardware (servers, networks, and storage) by running the Validate a Configuration Wizard. You can validate either an existing cluster or one or more servers that are not yet clustered.
Important: Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.”
Note: If you want to run tests on a cluster with Cluster Shared Volumes that are currently online, use the instructions in Use Validation Tests for Troubleshooting a Failover Cluster.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To validate a new or existing failover cluster:
- Identify the server or servers that you want to test and confirm that the Failover Clustering feature is installed:
- If the cluster does not yet exist, choose the servers that you want to include in the cluster, and make sure you have installed the Failover Clustering feature on those servers. For more information, see Install the Failover Clustering Feature. Note that when you run the Validate a Configuration Wizard on unclustered servers, you must enter the names of all the servers you want to test, not just one.
- If the cluster already exists, make sure you know the name of the cluster or a node in the cluster.
- Review network or storage hardware that you want to validate, to confirm that it is connected to the servers. For details, see Prepare Hardware Before Validating a Failover Cluster and Understanding Requirements for Failover Clusters.
- Decide whether you want to run all or only some of the available validation tests. For detailed information about the tests, see the topics listed in Understanding Cluster Validation Tests.
The following guidelines can help you decide whether to run all tests:
- For a planned cluster with all hardware connected: Run all tests.
- For a planned cluster with parts of the hardware connected: Run System Configuration tests, Inventory tests, and tests that apply to the hardware that is connected (that is, Network tests if the network is connected or Storage tests if the storage is connected).
- For a cluster to which you plan to add a server: Run all tests. When you run them, be sure to include all servers that you plan to have in the cluster.
- For troubleshooting an existing cluster: If you are troubleshooting an existing cluster, you might run all tests, although you could run only the tests that relate to the apparent issue.
Note: If you are troubleshooting an existing cluster that uses Cluster Shared Volumes, see Use Validation Tests for Troubleshooting a Failover Cluster.
- In the Failover Cluster Manager snap-in, in the console tree, make sure Failover Cluster Manager is selected and then, under Management, click Validate a Configuration.
- Follow the instructions in the wizard to specify the servers and the tests, run the tests, and view the results.
Note that when you run the Validate a Configuration Wizard on unclustered servers, you must enter the names of all the servers you want to test, not just one.
19.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- To view the results of the tests after you close the wizard, choose one of the following:
- Open the folder systemroot\Cluster\Reports (on a clustered server).
- If the tested servers are now a cluster, in the console tree, right-click the cluster, and then click View Validation Report. This displays the most recent validation report for that cluster.
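If you prefer to script validation in Windows PowerShell, the following is a minimal sketch; it assumes the FailoverClusters module is available, and Node1 and Node2 are placeholder server names rather than values from this document:

Import-Module FailoverClusters   # load the failover clustering cmdlets
# Run all validation tests against servers that are not yet clustered.
Test-Cluster -Node Node1, Node2
# Run everything except the storage tests (for example, when storage is not yet connected).
Test-Cluster -Node Node1, Node2 -Ignore "Storage"

Test-Cluster returns the location of the validation report that it creates.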
20. Creating a Failover Cluster or Adding a Cluster Node
Before you can configure the cluster for the first time or add a server (node) to an existing failover cluster, you need to take the following preparatory steps:
- Install the Failover Clustering feature: You must install the Failover Clustering feature on any server that will become a server (node) in the cluster. For more information, see Install the Failover Clustering Feature.
- Connect networks and storage: You must connect the networks and storage that the nodes will use. For details, see Prepare Hardware Before Validating a Failover Cluster and Understanding Requirements for Failover Clusters.
- Validate the hardware configuration: We strongly recommend that you validate your hardware configuration (servers, network, and storage) before creating a cluster or adding a node to a cluster. To validate, you run a wizard. For more information, see Validate a New or Existing Failover Cluster.
Important: Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.”
To see the preceding steps in the context of a checklist, see Checklist: Create a Failover Cluster.
The following topics describe how to create a failover cluster or add a server (node) to a failover cluster:
- Understanding Access Points (Names and IP Addresses) in a Failover Cluster
- Create a New Failover Cluster
- Add a Server to a Failover Cluster
20.1.1.1. Additional references
- Configuring a Service or Application for High Availability
- Migrating Settings to a Failover Cluster Running Windows Server 2008 R2
- For information about the maximum number of servers that you can have in a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=139146.
21. Understanding Access Points (Names and IP Addresses) in a Failover Cluster
An access point is a name and associated IP address information. You use an access point to administer a failover cluster or to communicate with a service or application in the cluster. One access point can include one or more IP addresses, which can be IPv6 addresses, IPv4 addresses supplied through DHCP, or static IPv4 addresses.
When selecting a network to be used for an access point, avoid any network that is used for iSCSI. Configure networks used for iSCSI so that they are not used for network communication in the cluster.
21.1.1.1. Additional references
- Understanding Cluster Validation Tests: Network
- Modify Network Settings for a Failover Cluster
- Understanding Requirements for Failover Clusters
22. Create a New Failover Cluster
Before you create a cluster, you must carry out tasks such as connecting the hardware and validating the hardware configuration. For a list of these tasks, see Checklist: Create a Failover Cluster.
For information about the maximum number of servers that you can have in a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=139146.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. In addition, if your account is not a Domain Admins account, either the account or the group that the account is a member of must be delegated the Create Computer Objects permission in the domain. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To create a new failover cluster
- Confirm that you have connected the hardware and validated the hardware configuration, as described in Prepare Hardware Before Validating a Failover Cluster and Validate a New or Existing Failover Cluster.
Important: Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.”
- In the Failover Cluster Manager snap-in, confirm that Failover Cluster Manager is selected and then, under Management, click Create a Cluster.
- Follow the instructions in the wizard to specify:
- The servers to include in the cluster.
- The name of the cluster.
- Any IP address information that is not automatically supplied by your DHCP settings.
- After the wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report.
To view the report after you close the wizard, see the following folder, where SystemRoot is the location of the operating system (for example, C:\Windows):
SystemRoot\Cluster\Reports\
22.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
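As a sketch of the Windows PowerShell equivalent, the following creates a cluster; MyCluster, Node1, Node2, and the IP address are placeholders for your environment:

Import-Module FailoverClusters
# Create a cluster from two validated servers, assigning a name and a static IPv4 address.
# Omit -StaticAddress if DHCP supplies the address information.
New-Cluster -Name MyCluster -Node Node1, Node2 -StaticAddress 10.0.0.50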
23. Add a Server to a Failover Cluster
Before you add a server (node) to a failover cluster, we strongly recommend that you run the Validate a Configuration Wizard for the existing cluster nodes and the new node or nodes. The Validate a Configuration Wizard helps you confirm the configuration in a variety of important ways. For example, it validates that the server to be added is connected correctly to the networks and storage and that it contains the same software updates.
Important: Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.”
For information about the maximum number of servers that you can have in a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=139146.
For more information about validation, see Validate a New or Existing Failover Cluster.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To add a server to a failover cluster
- Confirm that you have connected the networks and storage to the server you want to add. For details about the requirements for networking and storage, see Prepare Hardware Before Validating a Failover Cluster.
- Validate the hardware configuration, including both the existing cluster nodes and the proposed new node. For information about validation, see Validate a New or Existing Failover Cluster.
Important: Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.”
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Select the cluster, and then in the Actions pane, click Add Node.
- Follow the instructions in the wizard to specify the server to add to the cluster.
- After the wizard runs and the Summary page appears, if you want to view a report of the tasks the wizard performed, click View Report.
To view the report after you close the wizard, see the following folder, where SystemRoot is the location of the operating system (for example, C:\Windows):
SystemRoot\Cluster\Reports\
23.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
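The Windows PowerShell equivalent is a single cmdlet; in this sketch, MyCluster and Node3 are placeholder names:

Import-Module FailoverClusters
# Add a validated server to an existing failover cluster.
Add-ClusterNode -Cluster MyCluster -Name Node3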
24. Configuring a Service or Application for High Availability
This topic provides an overview of the task of configuring specific services or applications for failover clustering by using the High Availability Wizard. Instructions for running the wizard are provided in Configure a Service or Application for High Availability.
Important: If you want to cluster a mail server or database server application, see the application’s documentation for information about the correct way to install it in a cluster environment. Mail server and database server applications are complex, and they might require configuration steps that fall outside the scope of this failover clustering Help.
This topic contains the following sections:
- Applications and services listed in the High Availability Wizard
- List of topics about configuring a service or application for high availability
24.1. Applications and services listed in the High Availability Wizard
A variety of services and applications can work as “cluster-aware” applications, functioning in a coordinated way with cluster components.
Note: When configuring a service or application that is not cluster-aware, you can use generic options in the High Availability Wizard: Generic Service, Generic Application, or Generic Script. For information about using these options, see Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster.
In the High Availability Wizard, you can choose from the generic options described in the previous note, or you can choose from the following services and applications:
- DFS Namespace Server: Provides a virtual view of shared folders in an organization. When a user views the namespace, the folders appear to reside on a single hard disk. Users can navigate the namespace without needing to know the server names or shared folders that are hosting the data.
- DHCP Server: Automatically provides client computers and other TCP/IP-based network devices with valid IP addresses.
- Distributed Transaction Coordinator (DTC): Supports distributed applications that perform transactions. A transaction is a set of related tasks, such as updates to databases, that either succeed or fail as a unit.
- File Server: Provides a central location on your network where you can store and share files with users.
- Internet Storage Name Service (iSNS) Server: Provides a directory of iSCSI targets.
- Message Queuing: Enables distributed applications that are running at different times to communicate across heterogeneous networks and with computers that may be offline.
- Other Server: Provides a client access point and storage only. Add an application after completing the wizard.
- Print Server: Manages a queue of print jobs for a shared printer.
- Remote Desktop Connection Broker (formerly TS Session Broker): Supports session load balancing and session reconnection in a load-balanced remote desktop server farm. RD Connection Broker is also used to provide users access to RemoteApp programs and virtual desktops through RemoteApp and Desktop Connection.
- Virtual Machine: Runs on a physical computer as a virtualized computer system. Multiple virtual machines can run on one computer.
- WINS Server: Enables users to access resources by a NetBIOS name instead of requiring them to use IP addresses that are difficult to recognize and remember.
24.2. List of topics about configuring a service or application for high availability
The following topics describe the process of configuring a service or application for high availability in a failover cluster:
- Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster
- Understanding Hyper-V and Virtual Machines in the Context of a Cluster
- Configure a Service or Application for High Availability
- Configure a Virtual Machine for High Availability
- Test the Failover of a Clustered Service or Application
- Test the Failover of a Clustered Virtual Machine
- Modifying the Settings for a Clustered Service or Application
25. Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster
You can configure a variety of different services or applications for high availability in a failover cluster. For a list of the services or applications most commonly configured for high availability, see Configuring a Service or Application for High Availability.
This topic contains the following sections:
- Services or applications that can be run as a Generic Application, Generic Script, or Generic Service
- Basic requirements for a service or application in a failover cluster environment
25.1. Services or applications that can be run as a Generic Application, Generic Script, or Generic Service
In failover clusters, you can use the Generic Application, Generic Script, and Generic Service options to configure high availability for some services and applications that are not “cluster-aware” (not originally designed to run in a cluster).
25.1.1.1. Generic Application
If you run an application as a Generic Application, the cluster software will start the application, then periodically query the operating system to see whether the application appears to be running. If so, it is presumed to be online, and will not be restarted or failed over.
Note that in comparison with a cluster-aware application, a Generic Application has fewer ways of communicating its precise state to the cluster software. If a Generic Application enters a problematic state but nonetheless appears to be running, the cluster software does not have a way of discovering this and taking an action (such as restarting the application or failing it over).
Before running the High Availability Wizard to configure high availability for a Generic Application, make sure that you know the path of the application and the names of any registry keys under HKEY_LOCAL_MACHINE that are required by the application.
25.1.1.2. Generic Script
You can create a script that runs in Windows Script Host and that monitors and controls your application. Then you can configure the script as a Generic Script in the cluster. The script provides the cluster software with information about the current state of the application. As needed, the cluster software will restart or fail over the script (and through it, the application will be restarted or failed over).
When you configure a Generic Script in a failover cluster, the ability of the cluster software to respond with precision to the state of the application is determined by the script. The more precise the script is in providing information about the state of the application, the more precise the cluster software can be in responding to that information.
Before running the High Availability Wizard to configure high availability for a Generic Script, make sure that you know the path of the script.
25.1.1.3. Generic Service
If you run a service as a Generic Service, the cluster software will start the service, then periodically query the Service Controller (a feature of the operating system) to determine whether the service appears to be running. If so, it is presumed to be online, and will not be restarted or failed over.
Note that in comparison with a cluster-aware service, a Generic Service has fewer ways of communicating its precise state to the cluster software. If a Generic Service enters a problematic state but nonetheless appears to be running, the cluster software does not have a way of discovering this and taking an action (such as restarting the service or failing it over).
Before running the High Availability Wizard to configure high availability for a Generic Service, make sure that you know the name of the service as it appears in the registry under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services.
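For the generic options, Windows PowerShell provides dedicated cmdlets. The following sketch uses placeholder names (MySvc, MySvcGroup, MyAppGroup, and the application path are assumptions, not values from this document):

Import-Module FailoverClusters
# Configure an existing Windows service as a clustered Generic Service.
# MySvc is the service name as it appears under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services.
Add-ClusterGenericServiceRole -ServiceName MySvc -Name MySvcGroup
# Configure a noncluster-aware application as a clustered Generic Application.
Add-ClusterGenericApplicationRole -CommandLine C:\MyApp\MyApp.exe -Name MyAppGroup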
25.2. Basic requirements for a service or application in a failover cluster environment
To be appropriate for a failover cluster, a service or application must have certain characteristics. The most important characteristics include:
- The service or application should be stateful. In other words, the service or application should have long-running in-memory state or large, frequently updated data states. One example is a database application. For a stateless application (such as a Web server front end), Network Load Balancing will probably be more appropriate than failover clustering.
- The service or application should use a client component that automatically retries after temporary network interruptions. Otherwise, if the server component of the application fails over from one clustered server to another, the unavoidable (but brief) interruption will cause the clients to stop, rather than simply retrying and reconnecting.
- The service or application should be able to identify the disk or disks it uses. This makes it possible for the service or application to communicate with disks in the cluster storage, and to reliably find the correct disk even after a failover.
- The service or application should use IP-based protocols. Examples include TCP, UDP, DCOM, named pipes, and RPC over TCP/IP.
26. Understanding Hyper-V and Virtual Machines in the Context of a Cluster
This topic provides information about the following:
- Overview of Hyper-V in the context of a failover cluster
- Using Cluster Shared Volumes with Hyper-V
- Live migration, quick migration, and moving of virtual machines
- Coordinating the use of Hyper-V Manager with the use of Failover Cluster Manager
26.1. Overview of Hyper-V in the context of a failover cluster
The Hyper-V role in Server Manager enables you to create a virtualized server computing environment in which you can create and manage virtual machines that run operating systems, applications, and services. Failover clusters are used to increase the availability of such applications and services. Hyper-V and failover clustering can be used together to make a virtual machine that is highly available, thereby minimizing disruptions and interruptions to clients.
You can cluster a virtual machine and you can cluster a service or application that happens to be running in a virtual machine. If you cluster a virtual machine, the cluster monitors the health of the virtual machine itself (and will respond to failures by restarting the virtual machine or failing it over). If you cluster a service or application that happens to be running in a virtual machine, the cluster monitors the health of the service or application (and will respond to failures by restarting the application or failing it over).
A feature of failover clusters called Cluster Shared Volumes is specifically designed to enhance the availability and manageability of virtual machines.
26.2. Using Cluster Shared Volumes with Hyper-V
Cluster Shared Volumes are volumes in a failover cluster that multiple nodes can read from and write to at the same time. This enables multiple nodes to concurrently access a single shared volume. The Cluster Shared Volumes feature is only supported for use with Hyper-V (a server role in Windows Server 2008 R2) and other technologies specified by Microsoft. For information about the roles and features that are supported for use with Cluster Shared Volumes, see https://go.microsoft.com/fwlink/?LinkId=137158.
When you use Cluster Shared Volumes, managing a large number of clustered virtual machines becomes easier. For more information about Cluster Shared Volumes, see Understanding Cluster Shared Volumes in a Failover Cluster.
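If you manage the cluster with Windows PowerShell, a clustered disk can be added to Cluster Shared Volumes as in the following sketch; the resource name is a placeholder, and the disk must already belong to the cluster:

Import-Module FailoverClusters
# Add a clustered disk (currently in Available Storage) to Cluster Shared Volumes.
# On Windows Server 2008 R2, Cluster Shared Volumes must first be enabled for the cluster.
Add-ClusterSharedVolume -Name "Cluster Disk 5"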
26.3. Live migration, quick migration, and moving of virtual machines
With failover clusters, a virtual machine can be moved from one cluster node to another in several different ways: live migration, quick migration, and moving. This section describes these actions. For information about how to perform the actions, see Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node.
Note: If you want to cluster virtual machines and use live migration or quick migration, we recommend making the hardware and system settings on the nodes as similar as possible to minimize potential problems.
The following list describes the choices:
Live migration: When you initiate live migration, the cluster copies the memory being used by the virtual machine from the current node to another node, so that when the transition to the other node actually takes place, the memory and state information is already in place for the virtual machine. The transition is usually fast enough that a client using the virtual machine does not lose the network connection. If you are using Cluster Shared Volumes, live migration is almost instantaneous, because no transfer of disk ownership is needed. A live migration can be used for planned maintenance but not for an unplanned failover.
Note: You cannot use live migration to move multiple virtual machines simultaneously. On a given server running Hyper-V, only one live migration (to or from the server) can be in progress at a given time.
Quick migration: When you initiate quick migration, the cluster copies the memory being used by the virtual machine to a disk in storage, so that when the transition to another node actually takes place, the memory and state information needed by the virtual machine can quickly be read from the disk by the node that is taking over ownership. A quick migration can be used for planned maintenance but not for an unplanned failover.
You can use quick migration to move multiple virtual machines simultaneously.
Moving: When you initiate a move, the cluster prepares to take the virtual machine offline by performing an action that you have specified in the cluster configuration for the virtual machine resource:
- Save (the default): Saves the state of the virtual machine, so that the state can be restored when bringing the virtual machine back online.
- Shut down: Performs an orderly shutdown of the operating system (waiting for all processes to close) on the virtual machine before taking the virtual machine offline.
- Shut down (forced): Shuts down the operating system on the virtual machine without waiting for slower processes to finish, and then takes the virtual machine offline.
- Turn off: Like turning off the power to the virtual machine, which means that data loss may occur.
The setting you specify for the offline action does not affect live migration, quick migration, or unplanned failover. It affects only moving (or taking the resource offline through the action of Windows PowerShell or an application). To specify this setting, see the “Additional considerations” section in Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node.
26.4. Coordinating the use of Hyper-V Manager with the use of Failover Cluster Manager
After you configure clustered virtual machines, you can modify most settings of those clustered virtual machines using either Hyper-V Manager or Failover Cluster Manager. We recommend that you use Failover Cluster Manager for modifying those settings, as described in Modify the Virtual Machine Settings for a Clustered Virtual Machine. If you decide to use Hyper-V Manager to modify virtual machine settings, be sure to open Failover Cluster Manager and refresh the virtual machine configuration, as described in Refresh the Configuration of a Virtual Machine.
27. Configure a Service or Application for High Availability
You can configure a service or application for high availability by running a wizard that creates the appropriate settings in a failover cluster. For information about the services and applications that you can configure in a failover cluster, see:
- Configuring a Service or Application for High Availability
- Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster
If you are configuring a clustered file server or a clustered print server, also review Checklist: Create a Clustered File Server or Checklist: Create a Clustered Print Server, as appropriate.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. In addition, if your account is not a Domain Admins account, either the account or the group that the account is a member of must be delegated the Create Computer Objects permission in the domain. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To configure a service or application for high availability
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Click Services and Applications and then, under Actions (on the right), click Configure a Service or Application.
- Follow the instructions in the wizard to specify the service or application that you want to configure for high availability, along with the following details:
- A name for the clustered service or application. This name will be registered in DNS and associated with the IP address for this clustered service or application.
- Any IP address information that is not automatically supplied by your DHCP settings—for example, a static IPv4 address for this clustered service or application.
- The storage volume or volumes that the clustered service or application should use.
- Specific information for the service or application that you are configuring. For example, for a Generic Application, you must specify the path for the application and any registry keys that the application requires (so that the registry keys can be replicated to all nodes in the cluster).
- After the wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report.
If you are configuring a clustered file server or a clustered print server, see the following “Additional considerations” section.
27.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- If this is the first service or application that you are configuring for high availability, it might be appropriate to review your cluster network settings now. If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network that is intended only for iSCSI or only for backup), then under Networks, right-click that network, click Properties, and then select Do not allow the cluster to use this network.
- If you are configuring a clustered file server, review Checklist: Create a Clustered File Server.
- If you are configuring a clustered print server, review Checklist: Create a Clustered Print Server.
- For information about modifying the settings for this service or application after the wizard finishes running, see Modifying the Settings for a Clustered Service or Application.
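As one hedged Windows PowerShell example, a clustered file server can be configured with a single cmdlet; the name, disk, and address below are placeholders:

Import-Module FailoverClusters
# Configure a clustered file server with a client access point and storage.
Add-ClusterFileServerRole -Name FileServer1 -Storage "Cluster Disk 1" -StaticAddress 10.0.0.60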
28. Configure a Virtual Machine for High Availability
In a failover cluster, you can create and configure a virtual machine for high availability by running a wizard that creates the appropriate settings.
To work with virtual machines, you might want to view Help content for the Hyper-V Manager snap-in. To view this content, install the Hyper-V role (through Server Manager and the Add Roles Wizard), open Hyper-V Manager (through Server Manager or in a separate snap-in console), and press F1. You can also view information about Hyper-V on the Web, for example, information about creating a virtual machine (https://go.microsoft.com/fwlink/?LinkId=137805).
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To configure a virtual machine for high availability
- Be sure that you have installed the Hyper-V role and have reviewed the steps in Checklist: Create a Clustered Virtual Machine. This procedure is a step in that checklist.
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Click Services and Applications.
- If you have already created the virtual machine, skip to step 6. Otherwise, use the New Virtual Machine Wizard to create a virtual machine and configure it for high availability:
- In the Action pane, click Virtual machines, point to Virtual machine, and then click a node. The virtual machine will initially be created on that node, and then be clustered so that it can move to another node or nodes as needed.
- If the Before You Begin page of the New Virtual Machine Wizard appears, click Next.
- Specify a name for the virtual machine, and then select Store the virtual machine in a different location and specify a disk in shared storage or, if Cluster Shared Volumes is enabled, a Cluster Shared Volume (a volume that appears to be on the system drive of the node, under the \ClusterStorage folder).
- Follow the instructions in the wizard. You can specify details (such as the amount of memory, the network, and the virtual hard disk file) now, and you can also add or change configuration details later.
- When you click Finish, the wizard creates the virtual machine and also configures it for high availability. Skip the remaining step in this procedure.
- If you have already created the virtual machine and only want to configure it for high availability, first make sure that the virtual machine is not running. Then, use the High Availability Wizard to configure the virtual machine for high availability:
- In the Action pane, click Configure a Service or Application.
- If the Before You Begin page of the High Availability Wizard appears, click Next.
- On the Select Service or Application page, click Virtual Machine and then click Next.
- Select the virtual machine that you want to configure for high availability, and complete the wizard.
- After the High Availability wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report.
28.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- If you decide to change the settings of a clustered virtual machine, be sure to see Modify the Virtual Machine Settings for a Clustered Virtual Machine.
- For each clustered virtual machine, you can also specify the action that the cluster performs before taking the virtual machine offline. Taking the virtual machine offline is necessary when moving the virtual machine (but not necessary for live migration or quick migration). To specify the setting, select the clustered virtual machine in the console tree (on the left), right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties. Click the Settings tab and select an option. The actions are described in the section “Live migration, quick migration, and moving of virtual machines” in Understanding Hyper-V and Virtual Machines in the Context of a Cluster.
- If this is the first virtual machine that you are configuring for high availability, it might be appropriate to review your cluster network settings now. If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network that is intended only for iSCSI or only for backup), then under Networks, right-click that network, click Properties, and then select Do not allow the cluster to use this network.
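If you script this step, the following minimal Windows PowerShell sketch makes an existing virtual machine highly available; VM1 is a placeholder for the virtual machine name shown in Hyper-V Manager, and the virtual machine must not be running:

Import-Module FailoverClusters
# Configure an existing virtual machine for high availability.
Add-ClusterVirtualMachineRole -VMName VM1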
29. Test the Failover of a Clustered Service or Application
You can perform a basic test to confirm that a clustered service or application can fail over successfully to another node. For information about performing a similar test for a clustered virtual machine, see Test the Failover of a Clustered Virtual Machine.
This procedure is one of the steps in Checklist: Create a Clustered File Server and Checklist: Create a Clustered Print Server.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To test the failover of a clustered service or application
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the service or application for which you want to test failover.
- Under Actions (on the right), click Move this service or application to another node.
As the service or application moves, the status is displayed in the results pane (center pane).
- Optionally, repeat step 4 to move the service or application to an additional node or back to the original node.
29.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
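A minimal Windows PowerShell sketch of the same test follows; FileServer1 and Node2 are placeholder names:

Import-Module FailoverClusters
# Move a clustered service or application (a resource group) to another node.
Move-ClusterGroup -Name FileServer1 -Node Node2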
30. Test the Failover of a Clustered Virtual Machine
You can perform a basic test to confirm that a clustered virtual machine can fail over successfully to another node. For information about performing a similar test for a clustered service or application, see Test the Failover of a Clustered Service or Application.
This procedure is one of the steps in Checklist: Create a Clustered Virtual Machine.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To test the failover of a clustered virtual machine
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the virtual machine for which you want to test failover.
- Under Actions (on the right), click Move virtual machine(s) to another node.
As the virtual machine moves, the status is displayed in the results pane (center pane).
- Optionally, repeat step 4 to move the virtual machine to an additional node or back to the original node.
30.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- If you decide to change the settings of a clustered virtual machine, be sure to see Modify the Virtual Machine Settings for a Clustered Virtual Machine.
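The Windows PowerShell sketch below performs the equivalent move; VM1 and Node2 are placeholders. On Windows Server 2008 R2, Move-ClusterVirtualMachineRole performs a live migration; to perform a quick migration instead, move the group that contains the virtual machine by using Move-ClusterGroup:

Import-Module FailoverClusters
# Live migrate a clustered virtual machine to another node.
Move-ClusterVirtualMachineRole -Name VM1 -Node Node2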
31. Modifying the Settings for a Clustered Service or Application
This topic describes failover and failback settings for a clustered service or application. It also provides links to information about other settings that you can modify for a clustered service or application.
- Modifying failover and failback settings, including preferred owners
- List of topics about modifying settings
31.1. Modifying failover and failback settings, including preferred owners
You can adjust the failover settings, including preferred owners and failback settings, to control how the cluster responds when the application or service fails. You can configure these settings on the property sheet for the clustered service or application (on the General tab or the Failover tab). The following table provides examples that illustrate how these settings work.
Settings | Effect
Example 1: General tab, Preferred owner: Node1; Failover tab, Failback setting: Allow failback (Immediately) | If the service or application fails over from Node1 to Node2, when Node1 is again available, the service or application will fail back to Node1.
Example 2: Failover tab, Maximum failures in the specified period: 2; Failover tab, Period (hours): 6 | In a six-hour period, if the application or service fails no more than twice, it will be restarted or failed over each time. If it fails a third time in the six-hour period, it will be left in the failed state. The default value for the maximum number of failures is n-1, where n is the number of nodes. You can change the value, but we recommend a relatively low value, so that if multiple node failures occur, the application or service is not moved between nodes indefinitely.
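These settings can also be adjusted from Windows PowerShell, as in the following sketch; FileServer1, Node1, and Node2 are placeholders:

Import-Module FailoverClusters
# Set preferred owners for the group (required if you want failback to occur).
Set-ClusterOwnerNode -Group FileServer1 -Owners Node1, Node2
# Adjust failover and failback values on the group object.
$group = Get-ClusterGroup -Name FileServer1
$group.FailoverThreshold = 2    # maximum failures in the specified period
$group.FailoverPeriod = 6       # period, in hours
$group.AutoFailbackType = 1     # 1 = allow failback, 0 = prevent failback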
31.2. List of topics about modifying settings
The following topics provide details about settings that you can modify for a clustered service or application:
- Add Storage for a Clustered Service or Application (For information about adding a disk to the cluster itself, so that the disk is available for adding to a clustered service or application, see Add Storage to a Failover Cluster).
- Add a Resource to a Clustered Service or Application
- Modify the Failover Settings for a Clustered Service or Application
- Modify the Virtual Machine Settings for a Clustered Virtual Machine
- Create a Shared Folder in a Clustered File Server
- Configure the Print Settings for a Clustered Print Server
32. Add Storage for a Clustered Service or Application
You can add storage to an existing clustered service or application. However, if the storage has not already been added to the failover cluster itself, you must first add the storage to the cluster before you can add the storage to a clustered service or application within the cluster. For information about adding storage to a cluster, see Add Storage to a Failover Cluster.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To add storage for a clustered service or application
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the service or application that you want to add storage to.
- Under Actions (on the right), click Add storage.
- Select the disk or disks that you want to add.
If a disk does not appear in the list, it might not have been added to the cluster yet. For more information, see Add Storage to a Failover Cluster.
32.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
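A hedged Windows PowerShell sketch of both steps (adding the disk to the cluster, then to the clustered service or application) follows; the resource and group names are placeholders:

Import-Module FailoverClusters
# Add every disk that is visible to the cluster but not yet clustered.
Get-ClusterAvailableDisk | Add-ClusterDisk
# Move the new disk resource into the clustered service or application.
Move-ClusterResource -Name "Cluster Disk 3" -Group FileServer1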
33. Add a Resource to a Clustered Service or Application
To customize the way your clustered service or application works, you can add a resource.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To add a resource to a clustered service or application
- Ensure that the software or feature that is needed for the resource is installed on all nodes in the cluster.
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the service or application that you want to add a resource to.
- Under Actions (on the right), click Add a resource.
- Click the resource that you want to add, or click More resources and then click the resource that you want to add. If a wizard appears for the resource you chose, provide the information requested by the wizard.
- In the center pane, right-click the resource that you added and click Properties.
- If the property sheet includes a Parameters tab, click the tab and then configure the parameters that are needed by the resource.
- Click the Dependencies tab, and then configure the dependencies for the resource. Click OK.
- In the console tree, select the service or application (not the individual resource), and then under Actions (on the right), click Show Dependency Report.
- Review the dependencies between the resources. For many resources, you must configure the correct dependencies before the resource can be brought online. If you need to change the dependencies, close the dependency report and then repeat step 9.
- Right-click the resource that you just added, and then click Bring this resource online.
33.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
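For scripting, the following minimal sketch adds a resource, configures a dependency, and brings the resource online; AppAddress, AppNetworkName, and FileServer1 are placeholder names, and the resource type must be installed on all nodes:

Import-Module FailoverClusters
# Add an IP Address resource to an existing clustered service or application.
Add-ClusterResource -Name AppAddress -Group FileServer1 -ResourceType "IP Address"
# Make an existing resource in the group depend on the new resource.
Add-ClusterResourceDependency -Resource AppNetworkName -Provider AppAddress
# Bring the new resource online.
Start-ClusterResource -Name AppAddress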
34. Modify the Failover Settings for a Clustered Service or Application
You can adjust the failover, failback, or preferred owner settings to control the way the cluster responds when a particular application or service fails. For examples that illustrate the way the settings work, see Modifying the Settings for a Clustered Service or Application.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To modify the failover settings for a clustered service or application
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications.
- Right-click the service or application that you want to modify the failover settings for, and then click Properties.
- To configure failback, on the General tab, change settings for preferred owners, then click the Failover tab and choose options under Failback.
You must configure a preferred owner if you want failback to occur (that is, if you want a particular service or application to fail back to a particular node when possible).
- To configure the number of times that the cluster service should attempt to restart or fail over a service or application in a given time period, click the Failover tab and specify values under Failover.
For more information about failover settings for a clustered service or application, see Modifying the Settings for a Clustered Service or Application.
34.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
35. Modify the Virtual Machine Settings for a Clustered Virtual Machine
When you change the settings of a clustered virtual machine, we recommend that you use Failover Cluster Manager instead of Hyper-V Manager, as described in this procedure.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To modify the virtual machine settings for a clustered virtual machine
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications.
- In the console tree, click the virtual machine that you want to modify the settings for.
- In the center pane, right-click the virtual machine resource and then click Settings. (If you do not see Settings in the menu, collapse the virtual machine resource and then right-click it.)
The Settings interface appears. This is the same interface that you see in Hyper-V Manager.
- Configure the settings for the virtual machine.
35.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- If you use Hyper-V Manager instead of Failover Cluster Manager to configure settings for a virtual machine, be sure to follow the steps in Refresh the Configuration of a Virtual Machine.
36. Create a Shared Folder in a Clustered File Server
Before you perform this procedure, review the steps in Checklist: Create a Clustered File Server. This procedure describes a step in that checklist.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To create a shared folder in a clustered file server
- In Control Panel, open Windows Firewall, click Allow a program or feature through Windows Firewall, click Change settings, select an exception for Remote Volume Management, and then click OK.
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then select the clustered file server.
- Under Actions, click Add a shared folder.
The Create a Shared Folder Wizard appears. This is the same wizard that you would use to share a folder on a nonclustered server.
- Follow the instructions in the wizard to specify the settings for the shared folder, including path, name, offline settings, and permissions.
36.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- In a clustered file server, when you bring the associated File Server resource online or take it offline, all shared folders in that resource go offline or online at the same time. You cannot change the online or offline status of one of the shared folders without affecting all of the shared folders.
37. Configure the Print Settings for a Clustered Print Server
Before you perform this procedure, review the steps in Checklist: Create a Clustered Print Server. This procedure describes the last step in that checklist.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To configure the print settings for a clustered print server
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications.
- Right-click the clustered print server that you want to configure the print settings for, and then click Manage Printers.
An instance of the Failover Cluster Manager interface appears with Print Management in the console tree.
- Under Print Management, click Print Servers and locate the clustered print server you want to configure.
Always perform management tasks on the clustered print server. Do not manage the individual cluster nodes as print servers.
- Right-click the clustered print server, and then click Add Printer. Follow the instructions in the wizard to add a printer.
This is the same wizard you would use to add a printer on a nonclustered server.
- When you have finished configuring settings for the clustered print server, to close the instance of the Failover Cluster Manager interface with Print Management in the console tree, click File and then click Exit.
37.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- The failover cluster software automatically replicates important files (such as print drivers) to each of the nodes in the cluster. It supports this replication process by using a disk resource (in the cluster storage) as a place to store these files. This disk resource is configured automatically by the High Availability Wizard as part of a new clustered print server. Do not delete the disk resource that is part of the clustered print server.
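As a hedged Windows PowerShell example, the clustered print server itself can be created with one cmdlet before you configure printers; the name, disk, and address are placeholders:

Import-Module FailoverClusters
# Configure a clustered print server with a client access point and storage.
Add-ClusterPrintServerRole -Name PrintServer1 -Storage "Cluster Disk 4" -StaticAddress 10.0.0.70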
38. Migrating Settings to a Failover Cluster Running Windows Server 2008 R2
The following topics provide information about migrating settings to a failover cluster running Windows Server 2008 R2:
- Understanding the Process of Migrating to a Cluster Running Windows Server 2008 R2
- Migrate Resource Groups to a Failover Cluster Running Windows Server 2008 R2
39. Understanding the Process of Migrating to a Cluster Running Windows Server 2008 R2
You can use a wizard to migrate the settings of many types of resources to a cluster running Windows Server 2008 R2. From the third page of the migration wizard, you can view a pre-migration report that explains whether each resource is eligible for migration and describes additional steps to perform after running the wizard. After the wizard finishes, it provides a report that describes additional steps that may be required to complete the migration. The wizard supports the migration of settings to a cluster running Windows Server 2008 R2 from a cluster running any of the following operating systems:
- Windows Server 2003
- Windows Server 2008
- Windows Server 2008 R2
For information about the specific steps for running the Migrate a Cluster Wizard, see Migrate Resource Groups to a Failover Cluster Running Windows Server 2008 R2.
Caution: If new storage is used, you must handle the copying or moving of data or folders on your shared volumes during a migration. The wizard for migrating clustered resource groups does not copy data from one location to another.
This topic contains the following subsections:
- Identifying which clustered services or applications can be migrated to a cluster running Windows Server 2008 R2
- Migration scenario A: Migrating a multiple-node cluster to a cluster with new hardware. Follow this scenario if you can create a multiple-node cluster running Windows Server 2008 R2 and then migrate settings to it from a multiple-node cluster running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2. With this scenario, you use different physical servers and hardware for the new cluster than were used for the old cluster.
- Migration scenario B: Migrating a two-node cluster to a cluster with the same hardware. Follow this scenario for an in-place migration of a two-node cluster running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2 to a two-node cluster running Windows Server 2008 R2. With this scenario, you use the same physical servers and hardware for the new two-node cluster as you used for the old one. This scenario is not a rolling upgrade (an upgrade where two nodes in the same cluster temporarily run different operating systems from each other); rolling upgrades to a failover cluster running Windows Server 2008 R2 are not possible. Note that the hardware for the new cluster must meet the requirements for a cluster running Windows Server 2008 R2. For more information, see Understanding Requirements for Failover Clusters.
39.1. Identifying which clustered services or applications can be migrated to a cluster running Windows Server 2008 R2
This section lists the clustered services or applications (clustered resources) that can be migrated to a cluster running Windows Server 2008 R2.
Important: You cannot use the Migrate a Cluster Wizard to migrate settings for virtualized servers, mail servers, database servers, and print servers, or any other resources that are not listed in the following subsections. Other migration tools exist for some of these applications. For information about migrating mail server applications, see https://go.microsoft.com/fwlink/?LinkId=91732 and https://go.microsoft.com/fwlink/?LinkId=91733.
39.1.1.1. Resources for which the Migrate a Cluster Wizard performs most or all of the migration steps
After you use the Migrate a Cluster Wizard to migrate the settings of the following resources to a failover cluster running Windows Server 2008 R2, few or no additional steps are needed before the resources can be brought online.
Caution: If new storage is used, you must handle the copying or moving of data or folders on your shared volumes during a migration. The wizard for migrating the settings of clustered resource groups does not copy data from one location to another.
- File Server or File Share resources: You can migrate settings for a clustered file server (or, from a Windows Server 2003 cluster, a File Share resource group) and for the associated Physical Disk, IP Address, and Network Name resources. When you migrate from a cluster running Windows Server 2003, the Migrate a Cluster Wizard automatically translates all File Share resource groups to a single clustered file server (with multiple File Share resources within it) in Windows Server 2008 R2. Therefore, some resources might look different after the migration. The following table provides details:
Resource as Seen in a Server Cluster Running Windows Server 2003 | Migrated Resource as Seen in a Failover Cluster Running Windows Server 2008 R2 |
One File Share resource | One File Server resource |
Multiple File Share resources | Multiple File Share resources within a single clustered file server (resource group) |
File Share resource with DFS root | Distributed File System resource and File Share resource (both within a clustered DFS Server) |
- Physical Disk: You can migrate settings for Physical Disk resources other than the quorum resource. You do not need to migrate the quorum resource. When you run the Create a Cluster Wizard, the cluster software automatically chooses the quorum configuration that will provide the highest availability for your new failover cluster. You can change the quorum configuration settings if necessary for your specific environment. For information about changing settings (including quorum configuration settings) for a failover cluster, see Modifying Settings for a Failover Cluster.
- IP Address: You can migrate IP Address settings other than the cluster IP address. IP addresses are eligible for migration only within the same subnet.
- Network Name: You can migrate Network Name settings other than the cluster name. If Kerberos authentication is enabled for the Network Name resource, the wizard will prompt you for the password for the Cluster service account that is used by the old cluster.
39.1.1.2. Resources for which the Migrate a Cluster Wizard might not perform all of the migration steps
After you use the Migrate a Cluster Wizard to migrate the settings of the following resource groups to a failover cluster running Windows Server 2008 R2, some additional steps might be needed before the resources can be brought online, depending on your original configuration. The migration report indicates what steps, if any, are needed for these resource groups:
- DHCP Service
- Distributed File System Namespace (DFS-N)
- Distributed Transaction Coordinator (DTC)
- Internet Storage Name Service (iSNS)
- Message Queuing
- NFS Service
- WINS Service
- Generic Application
- Generic Script
- Generic Service
The wizard provides a report that describes the additional steps that are needed. Generally, the steps you must take include:
- Installing server roles or features that are needed in the new cluster (all nodes).
- Copying or installing any associated applications, services, or scripts on the new cluster (all nodes).
- Ensuring that any data is copied.
- Providing static IP addresses if the new cluster is on a different subnet.
- Updating drive path locations for applications if the new cluster uses a different volume letter.
The resource settings are migrated, as are the settings for the IP Address and Network Name resources that are in the resource group. If there is a Physical Disk resource in the resource group, the settings for the Physical Disk resource are also migrated.
39.2. Migration scenario A: Migrating a multiple-node cluster to a cluster with new hardware
For this migration scenario, there are three phases:
- Install two or more new servers, run validation, and create a new cluster. For this phase, while the old cluster continues to run, install Windows Server 2008 R2 and the Failover Clustering feature on at least two servers. Create the networks the servers will use, and connect the storage. Next, run the complete set of cluster validation tests to confirm that the hardware and hardware settings can support a failover cluster. Finally, create the new cluster. At this point, you have two clusters. (A Windows PowerShell sketch of this phase follows the list.) Additional information about connecting the storage: If the new cluster is connected to old storage, make at least two logical unit numbers (LUNs) or disks accessible to the servers, and do not make those LUNs or disks accessible to any other servers. (These LUNs or disks are necessary for validation and for the disk witness, which is similar to, although not the same as, the quorum resource in Windows Server 2003.) If the new cluster is connected to new storage, make as many disks or LUNs accessible to it as you think it will need. The steps for creating a cluster are listed in Checklist: Create a Failover Cluster.
- Migrate settings to the new cluster and determine how you will make any existing data available to the new cluster. When the Migrate a Cluster Wizard completes, all migrated resources will be offline. Leave them offline at this stage. The old cluster remains online and continues serving clients. If the new cluster will reuse old storage, plan how you will make the storage available to it, but leave the old cluster connected to the storage until you are ready to make the transition. If the new cluster will use new storage, copy the appropriate folders and data to the storage.
- Make the transition from the old cluster to the new. The first step in the transition is to take clustered services and applications offline on the old cluster. If the new cluster uses old storage, follow your plan for making LUNs or disks inaccessible to the old cluster and accessible to the new cluster. Then, regardless of which storage the new cluster uses, bring clustered services and applications online on the new cluster.
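The first phase of this scenario can also be performed with the Failover Clustering cmdlets in Windows PowerShell. The following is a minimal sketch, not a definitive procedure; the node names (NODE1, NODE2), the cluster name (NEWCLUS1), and the IP address are hypothetical placeholders. The migration phase itself is still performed with the Migrate a Cluster Wizard in the snap-in.

    # Load the failover clustering cmdlets (Windows Server 2008 R2).
    Import-Module FailoverClusters

    # Run the complete set of validation tests against the new servers.
    Test-Cluster -Node NODE1, NODE2

    # After all validation tests pass, create the new cluster.
    New-Cluster -Name NEWCLUS1 -Node NODE1, NODE2 -StaticAddress 192.168.1.50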
39.3. Migration scenario B: Migrating a two-node cluster to a cluster with the same hardware
For this migration scenario, there are four phases:
- Install a new server and run selected validation tests. For this phase, allow one existing server to keep running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2 and the Cluster service while you begin the migration process. Evict the other server from the old cluster, and then install Windows Server 2008 R2 and the Failover Clustering feature on it. On that server, run all tests that the Validate a Configuration Wizard will run. The wizard will recognize that this is a single node without storage and limit the tests that it runs. Tests that require two nodes (for example, tests that compare the nodes or that simulate failover) will not run. Note that the tests that you run at this stage do not provide complete information about whether the storage will work in a cluster running Windows Server 2008 R2. As described later in this section, you will run the Validate a Configuration Wizard again later with all tests included. (A Windows PowerShell sketch of this scenario follows the list.)
- Make the new server into a single-node cluster and migrate settings to it. Create a new single-node cluster and use the Migrate a Cluster Wizard to migrate settings to it, but keep the clustered resources offline on the new cluster.
- Bring the new cluster online, and make existing data available to it. Take the services and applications in the old cluster offline. If the new cluster will use the old storage, leave the data on the old storage, and make the disks or LUNs accessible to the new cluster. If the new cluster will use new storage, copy the folders and data to appropriate LUNs or disks in the new storage, and make sure that those LUNs or disks are visible to the new cluster (and not visible to any other servers). Confirm that the settings for the migrated services and applications are correct. Bring the services and applications in the new cluster online and make sure that the resources are functioning and can access the storage.
- Bring the second node into the new cluster. Destroy the old cluster and on that server, install Windows Server 2008 R2 and the Failover Clustering feature. Connect that server to the networks and storage used by the new cluster. If the appropriate disks or LUNs are not already accessible to both servers, make them accessible. Run the Validate a Configuration Wizard, specifying both servers, and confirm that all tests pass. Finally, add the second server to the new cluster.
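Several phases of this scenario can also be performed with the Failover Clustering cmdlets in Windows PowerShell. The following is a minimal sketch under the same assumptions as the scenario above; the node names (NODE1, NODE2), the cluster names (OLDCLUS1, NEWCLUS1), and the IP address are hypothetical, and the migration itself is still performed with the Migrate a Cluster Wizard.

    Import-Module FailoverClusters

    # Phase 1: evict the second node from the old cluster, then (after
    # reinstalling it) run the validation tests that apply to a single node.
    Remove-ClusterNode -Name NODE2 -Cluster OLDCLUS1 -Force
    Test-Cluster -Node NODE2

    # Phase 2: create a single-node cluster on the reinstalled server.
    New-Cluster -Name NEWCLUS1 -Node NODE2 -StaticAddress 192.168.1.50

    # Phase 4: after destroying the old cluster and reinstalling the first
    # server, validate both servers together, then add the first server.
    Test-Cluster -Node NODE1, NODE2
    Add-ClusterNode -Name NODE1 -Cluster NEWCLUS1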
40. Migrate Resource Groups to a Failover Cluster Running Windows Server 2008 R2
You can use a wizard to migrate the settings of many types of resource groups to a cluster running Windows Server 2008 R2. You can migrate these settings from a cluster running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2. After the settings have been migrated, the wizard also provides a report that describes any additional steps that may be required to complete the migration.
Caution: If new storage is used, you must handle all copying or moving of data or folders on your shared volumes during a migration. The wizard for migrating clustered resource groups does not copy data from one location to another.
Before you start the wizard, be sure that you know the name or IP address of the cluster or cluster node from which you want to migrate resource groups.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To migrate resource groups to a failover cluster running Windows Server 2008 R2
- On the cluster running Windows Server 2008 R2, open the failover cluster snap-in.
- If the cluster to which you want to migrate settings is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Under Configure, click Migrate Services and Applications.
- Read the first page of the Migrate a Cluster Wizard, and then click Next.
- Specify the name or IP address of the cluster or cluster node from which you want to migrate resource groups, and then click Next.
- Click View Report. Read the report, which explains whether each resource is eligible for migration.
The wizard also provides a report after it finishes, describing any additional steps that might be needed before you bring the migrated resource groups online.
- Follow the instructions in the wizard to complete the following:
- Choose the resource group or groups whose settings you want to migrate. Some types of resource groups are eligible for migration and some are not. For more information, see Understanding the Process of Migrating to a Cluster Running Windows Server 2008 R2.
- Specify whether the resource groups to be migrated will use new storage, or the same storage used in the old cluster. If the resource groups will use new storage, you can specify the disk that each resource group should use after migration. Note that if new storage is used, you must handle all copying or moving of data or folders—the wizard does not copy data from one location to another.
- After the wizard runs and the Summary page appears, click View Report. This report contains important information about any additional steps that might be needed before you bring the migrated resource groups online.
40.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- For information about modifying the settings of a service or application after migration, see Modifying the Settings for a Clustered Service or Application.
41. Modifying Settings for a Failover Cluster
The default settings that are created by wizards for failover clustering work well for many clusters. However, you can change a cluster’s settings—for example, add storage, modify network settings, or enable Cluster Shared Volumes. The following topics provide information about changing the overall settings for your cluster:
- Understanding Cluster Shared Volumes in a Failover Cluster
- Understanding Quorum Configurations in a Failover Cluster
- Add Storage to a Failover Cluster
- Modify Network Settings for a Failover Cluster
- Enable Cluster Shared Volumes in a Failover Cluster
- Select Quorum Options for a Failover Cluster
42. Understanding Cluster Shared Volumes in a Failover Cluster
On a failover cluster that uses Cluster Shared Volumes, multiple clustered virtual machines that are distributed across multiple cluster nodes can all access their Virtual Hard Disk (VHD) files at the same time, even if the VHD files are on a single disk (LUN) in the storage. This means that the clustered virtual machines can fail over independently of one another, even if they use only a single LUN.
In contrast, in a failover cluster on which Cluster Shared Volumes is not enabled, a single disk (LUN) can only be accessed by a single node at a time. This means that clustered virtual machines can only fail over independently if each virtual machine has its own LUN, which makes the management of LUNs and clustered virtual machines more difficult.
This topic contains the following sections:
- Benefits of using Cluster Shared Volumes in a failover cluster
- Restrictions on using Cluster Shared Volumes in a failover cluster
42.1. Benefits of using Cluster Shared Volumes in a failover cluster
Cluster Shared Volumes provides the following benefits in a failover cluster:
- The configuration of clustered virtual machines is much simpler than before.
- You can reduce the number of LUNs (disks) required for your virtual machines, instead of having to manage one LUN per virtual machine, which was previously the recommended configuration (because the LUN was the unit of failover). Many virtual machines can use a single LUN and can fail over without causing the other virtual machines on the same LUN to also fail over.
- You can make better use of disk space, because you do not need to place each Virtual Hard Disk (VHD) file on a separate disk with extra free space set aside just for that VHD file. Instead, the free space on a Cluster Shared Volume can be used by any VHD file on that volume.
- You can more easily track the paths to VHD files and other files used by virtual machines. You can specify the path names, instead of identifying disks by drive letters (limited to the number of letters in the alphabet) or identifiers called GUIDs (which are hard to use and remember). With Cluster Shared Volumes, the path appears to be on the system drive of the node, under the \ClusterStorage folder. However, this path is the same when viewed from any node in the cluster.
- If you use a few Cluster Shared Volumes to create a configuration that supports many clustered virtual machines, you can perform validation more quickly than you could with a configuration that uses many LUNs to support many clustered virtual machines. With fewer LUNs, validation runs more quickly. (You perform validation by running the Validate a Configuration Wizard in the snap-in for failover clusters.)
- There are no special hardware requirements beyond what is already required for storage in a failover cluster (although Cluster Shared Volumes require NTFS).
- Resiliency is increased, because the cluster can respond correctly even if connectivity between one node and the SAN is interrupted, or part of a network is down. The cluster will re-route the Cluster Shared Volumes communication through an intact part of the SAN or network.
42.2. Restrictions on using Cluster Shared Volumes in a failover cluster
The following restrictions apply when using Cluster Shared Volumes in a failover cluster:
- The Cluster Shared Volumes feature is only supported for use with Hyper-V (a server role in Windows Server 2008 R2) and other technologies specified by Microsoft. For information about the roles and features that are supported for use with Cluster Shared Volumes, see https://go.microsoft.com/fwlink/?LinkId=137158.
- No files should be created or copied to a Cluster Shared Volume by an administrator, user, or application unless the files will be used by the Hyper-V role or other technologies specified by Microsoft. Failure to adhere to this instruction could result in data corruption or data loss on shared volumes. This instruction also applies to files that are created or copied to the \ClusterStorage folder, or subfolders of it, on the nodes.
- For Hyper-V to function properly, the operating system drive (%SystemDrive%) of each server in your cluster must use the same drive letter as on all other servers in the cluster. In other words, if one server boots from drive letter C, all servers in the cluster should boot from drive letter C.
- The NTFS file system is required for all volumes enabled as Cluster Shared Volumes.
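Given these restrictions, it can be useful to confirm the state and ownership of the volumes before placing virtual machines on them. The following is a minimal sketch using the Failover Clustering cmdlets; the example path is hypothetical.

    Import-Module FailoverClusters

    # List each Cluster Shared Volume, its state, and the node that owns it.
    Get-ClusterSharedVolume | Format-Table Name, State, OwnerNode -AutoSize

    # VHD files on a Cluster Shared Volume are addressed through the
    # \ClusterStorage folder on the system drive, for example:
    #   C:\ClusterStorage\Volume1\VM1\VM1.vhd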
43. Understanding Quorum Configurations in a Failover Cluster
This topic contains the following sections:
- How the quorum configuration affects the cluster
- Quorum configuration choices
- Descriptions of quorum configurations
- Why quorum is necessary
For information about how to configure quorum options, see Select Quorum Options for a Failover Cluster.
43.1. How the quorum configuration affects the cluster
The quorum configuration in a failover cluster determines the number of failures that the cluster can sustain. If an additional failure occurs, the cluster must stop running. The relevant failures in this context are failures of nodes or, in some cases, of a disk witness (which contains a copy of the cluster configuration) or file share witness. It is essential that the cluster stop running if too many failures occur or if there is a problem with communication between the cluster nodes. For a more detailed explanation, see Why quorum is necessary later in this topic.
Important: In most situations, use the quorum configuration that the cluster software identifies as appropriate for your cluster. Change the quorum configuration only if you have determined that the change is appropriate for your cluster.
Note that full function of a cluster depends not just on quorum, but on the capacity of each node to support the services and applications that fail over to that node. For example, a cluster that has five nodes could still have quorum after two nodes fail, but the level of service provided by each remaining cluster node would depend on the capacity of that node to support the services and applications that failed over to it.
43.2. Quorum configuration choices
You can choose from among four possible quorum configurations (a Windows PowerShell sketch for configuring each option follows the list):
- Node Majority (recommended for clusters with an odd number of nodes). Can sustain failures of half the nodes (rounding up) minus one. For example, a seven-node cluster can sustain three node failures.
- Node and Disk Majority (recommended for clusters with an even number of nodes). Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six-node cluster in which the disk witness is online could sustain three node failures. Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six-node cluster with a failed disk witness could sustain two (3-1=2) node failures.
- Node and File Share Majority (for clusters with special configurations). Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness. Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. For more information, see “Additional considerations” in Start or Stop the Cluster Service on a Cluster Node.
- No Majority: Disk Only (not recommended). Can sustain failures of all nodes except one (if the disk is online). However, this configuration is not recommended because the disk might be a single point of failure.
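As a rough illustration, the four choices correspond to parameters of the Set-ClusterQuorum cmdlet. This is a sketch only; the disk and file share names are hypothetical, and in most situations you should keep the configuration that the cluster software chose.

    Import-Module FailoverClusters

    # Choose ONE of the following, matching the desired configuration:
    Set-ClusterQuorum -NodeMajority
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
    Set-ClusterQuorum -NodeAndFileShareMajority "\\FILESERVER1\Witness"
    Set-ClusterQuorum -DiskOnly "Cluster Disk 1"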
43.3. Descriptions of quorum configurations
The following descriptions show how each quorum configuration determines whether the cluster can continue to function.
Note: For all configurations other than Disk Only, notice whether a majority of the relevant elements are in communication (regardless of the number of elements). When they are, the cluster continues to function. When they are not, the cluster stops functioning.
Node Majority Quorum Configuration
In a cluster with the Node Majority configuration, only nodes are counted when calculating a majority.
Node and Disk Majority Quorum Configuration
In a cluster with the Node and Disk Majority configuration, the nodes and the disk witness are counted when calculating a majority.
Node and File Share Majority Quorum Configuration
In a cluster with the Node and File Share Majority configuration, the nodes and the file share witness are counted when calculating a majority. This is similar to the Node and Disk Majority quorum configuration, except that the witness is a file share that all nodes in the cluster can access, rather than a disk in cluster storage.
In a cluster with the Disk Only configuration, the number of nodes does not affect how quorum is achieved. The disk is the quorum. However, if communication with the disk is lost, the cluster becomes unavailable.
43.4. Why quorum is necessary
When network problems occur, they can interfere with communication between cluster nodes. A small set of nodes might be able to communicate together across a functioning part of a network but not be able to communicate with a different set of nodes in another part of the network. This can cause serious issues. In this “split” situation, at least one of the sets of nodes must stop running as a cluster.
To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many “votes” constitute a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until quorum exists again.
For example, in a five-node cluster that is using a node majority, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5, being a minority, stop running as a cluster. If node 3 then loses communication with the other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run.
44. Add Storage to a Failover Cluster
You can add storage to a failover cluster after exposing that storage to all cluster nodes (by changing LUN masking or zoning). You do not need to add the storage to the cluster if the storage is already listed for that cluster under Storage in the Failover Cluster Manager snap-in.
If you are only adding storage to a particular clustered service or application (not adding entirely new storage to the failover cluster as a whole), see Add Storage for a Clustered Service or Application.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To add storage to a failover cluster
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Right-click Storage, and then click Add a disk.
- Select the disk or disks you want to add.
44.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- After you click Add a disk, if you do not see a disk that you expect to see, review the following:
- The list of disks that are shown when you click Storage in the Failover Cluster Manager snap-in. If the disk is already in that list, you do not need to add it to the cluster.
- The configuration of the storage interfaces, including the storage interfaces that run on the cluster nodes. The disk must be available to all nodes in the cluster before you can add it to the set of storage for the cluster.
- The disks shown in Disk Management (check each node in the cluster). If the disk that you want to add does not appear at all in Disk Management (on any node), there might be an issue with the storage configuration that is preventing the operating system from recognizing or mounting the disk. Note that disks currently in use by the cluster will appear in Disk Management on one node only (the node that is the current owner of that disk). If the disk that you want to add appears in Disk Management but does not appear after you click Add a disk, confirm that the disk is configured as a basic disk, not a dynamic disk. Only basic disks can be used in a failover cluster.
To open Disk Management, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.)
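As the considerations above note, this task can also be performed with the Failover Clustering cmdlets in Windows PowerShell. A minimal sketch, assuming the disk is already visible to all nodes:

    Import-Module FailoverClusters

    # List disks that are visible to all nodes but not yet part of the cluster.
    Get-ClusterAvailableDisk

    # Add every available disk to the cluster's storage.
    Get-ClusterAvailableDisk | Add-ClusterDisk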
45. Modify Network Settings for a Failover Cluster
For each of the networks physically connected to the servers (nodes) in a failover cluster, you can specify whether the network is used by the cluster, and if so, whether the network is used by the nodes only or also by clients. Note that in this context, the term “clients” includes not only client computers accessing clustered services and applications, but also remote computers that you use to administer the cluster.
If you use a network for iSCSI (storage), do not use it for network communication in the cluster.
For background information about network requirements for a failover cluster, see Understanding Requirements for Failover Clusters.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To modify network settings for a failover cluster
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Networks.
- Right-click the network that you want to modify settings for, and then click Properties.
- If needed, change the name of the network.
- Select one of the following options:
- Allow cluster network communication on this network. If you select this option and you want the network to be used by the nodes only (not clients), clear Allow clients to connect through this network. Otherwise, make sure it is selected.
- Do not allow cluster network communication on this network. Select this option if you are using a network only for iSCSI (communication with storage) or only for backup. (These are among the most common reasons for selecting this option.)
45.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- IP addresses and the associated network names used for the cluster itself or for clustered services or applications are called access points. For a brief description of access points, see Understanding Access Points (Names and IP Addresses) in a Failover Cluster.
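These settings can also be inspected and changed through the Role property of each cluster network in Windows PowerShell. A minimal sketch; the network name is hypothetical:

    Import-Module FailoverClusters

    # List cluster networks and their current roles.
    Get-ClusterNetwork | Format-Table Name, Role -AutoSize

    # Role values: 0 = do not allow cluster network communication,
    #              1 = allow cluster network communication (nodes only),
    #              3 = allow cluster network communication and client connections.
    (Get-ClusterNetwork "Cluster Network 1").Role = 3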
46. Enable Cluster Shared Volumes in a Failover Cluster
You can enable Cluster Shared Volumes for a failover cluster by using the Failover Cluster Manager snap-in. When you enable Cluster Shared Volumes, all nodes in the cluster can use shared volumes.
Important: The Cluster Shared Volumes feature has specific restrictions that must be followed. For a list of both the benefits and the restrictions, see Understanding Cluster Shared Volumes in a Failover Cluster.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To enable Cluster Shared Volumes in a failover cluster
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Under Configure (center pane), click Enable Cluster Shared Volumes.
- In the dialog box, read the notice, select the confirmation check box, and then click OK.
46.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- Cluster Shared Volumes can only be enabled once per cluster.
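The following Windows PowerShell sketch shows the equivalent steps. It assumes that the cluster exposes the EnableSharedVolumes property in this release; the disk name is hypothetical.

    Import-Module FailoverClusters

    # Enable Cluster Shared Volumes for the cluster (a one-time action;
    # assumes the EnableSharedVolumes property is available).
    (Get-Cluster).EnableSharedVolumes = "Enabled"

    # Move an existing clustered disk into Cluster Shared Volumes.
    Add-ClusterSharedVolume -Name "Cluster Disk 2"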
47. Select Quorum Options for a Failover Cluster
If you have special requirements or make changes to your cluster, you might want to change the quorum options for your cluster.
Important: In most situations, use the quorum configuration that the cluster software identifies as appropriate for your cluster. Change the quorum configuration only if you have determined that the change is appropriate for your cluster.
For important conceptual information about quorum configuration options, see Understanding Quorum Configurations in a Failover Cluster.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To select quorum options for a cluster
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- With the cluster selected, in the Actions pane, click More Actions, and then click Configure Cluster Quorum Settings.
- Follow the instructions in the wizard to select the quorum configuration for your cluster. If you choose a configuration that includes a disk witness or file share witness, follow the instructions for specifying the witness.
- After the wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report.
47.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
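After the wizard completes, you can verify the resulting configuration in Windows PowerShell; a minimal sketch:

    Import-Module FailoverClusters

    # Display the current quorum type and witness resource.
    Get-ClusterQuorum | Format-List Cluster, QuorumResource, QuorumType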
48. Managing a Failover Cluster
The following topics contain information about managing and troubleshooting a failover cluster:
- Understanding Backup and Recovery Basics for a Failover Cluster
- Bring a Clustered Service or Application Online or Take It Offline
- Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node
- Refresh the Configuration of a Virtual Machine
- Pause or Resume a Node in a Failover Cluster
- Run a Disk Maintenance Tool Such as Chkdsk on a Clustered Disk
- Start or Stop the Cluster Service on a Cluster Node
- Use Validation Tests for Troubleshooting a Failover Cluster
- View Events and Logs for a Failover Cluster
- View Reports of the Actions of Failover Cluster Wizards
49. Understanding Backup and Recovery Basics for a Failover Cluster
This topic outlines some basic guidelines for backing up and restoring a failover cluster. For more information about backing up and restoring a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=92360.
49.1. Backing up a failover cluster
When you back up a failover cluster, you can back up the cluster configuration, the data on clustered disks, or both.
49.1.1. Backing up the cluster configuration
When you back up the cluster configuration, take note of the following:
- For a backup to succeed in a failover cluster, the cluster must be running and must have quorum. In other words, enough nodes must be running and communicating (perhaps with a disk witness or file share witness, depending on the quorum configuration) that the cluster has achieved quorum. For more information about quorum in a failover cluster, see Understanding Quorum Configurations in a Failover Cluster.
- Before putting a cluster into production, test your backup and recovery process.
- If you choose to use Windows Server Backup (the backup feature included in Windows Server 2008 R2), you must first add the feature. You can do this in Initial Configuration Tasks or in Server Manager by using the Add Features Wizard.
- When you perform a backup (using Windows Server Backup or other backup software), choose options that will allow you to perform a system recovery from your backup. For more information, see the Help or other documentation for your backup software.
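For example, if you use Windows Server Backup, a backup that supports system recovery can be started from the command line. This is a minimal sketch, not a definitive command; the backup target share is hypothetical, and your backup software may require different options.

    # Run on a cluster node that currently has quorum; wbadmin.exe is part of
    # the Windows Server Backup feature. Back up all critical volumes so that
    # a system recovery (including the cluster configuration) is possible.
    wbadmin start backup -backupTarget:\\BACKUPSRV\ClusterBackups -allCritical -quiet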
49.1.2. Backing up data on clustered disks
When you back up data through a cluster node, notice which disks are Online on that node at that time. Only disks that are Online and owned by that cluster node at the time of the backup are backed up.
49.2. Restoring to a failover cluster from backup
When you restore to a failover cluster from backup, you can restore the cluster configuration, the data on clustered disks, or both.
49.2.1. Restoring the cluster configuration from backup
The Cluster service keeps track of which cluster configuration is the most recent, and it replicates that configuration to all cluster nodes. (If the cluster has a disk witness, the Cluster service also replicates the configuration to the disk witness). Therefore, when you restore a single cluster node from backup, there are two possibilities:
- Restoring the node to normal function, but not rolling back the cluster configuration: This is called a “non-authoritative restore.” In this scenario, the reason you use the backup is only to restore a damaged node to normal function. When the restored node begins functioning and joins the cluster, the latest cluster configuration automatically replicates to that node.
- Rolling back the cluster configuration to the configuration stored in the backup: This is called an “authoritative restore.” In this scenario, you have determined that you want to use the cluster configuration that is stored in the backup, not the configuration currently on the cluster nodes. By specifying appropriate options when you restore the backup, you can cause the cluster to treat the restored configuration as the “most recent.” In this case, the Cluster service will not overwrite the restored configuration, but instead will replicate it across all nodes. For details about how to perform an authoritative restore, see the Help or other documentation for your backup software.
49.2.2. Restoring data to clustered disks from backup
When you restore backed-up data through a failover cluster node, notice which disks are Online on that node at that time. Data can be written only to disks that are Online and owned by that cluster node when the backup is being restored.
50. Bring a Clustered Service or Application Online or Take It Offline
Sometimes during maintenance or diagnosis that involves a service or application in a failover cluster, you might need to bring that service or application online or take it offline. Bringing an application online or taking it offline does not trigger failover; the Cluster service handles the process in an orderly fashion. For example, if a particular disk is required by a particular clustered application, the Cluster service ensures that the disk is available before the application starts.
For information about related actions, such as pausing a cluster node, see Managing a Failover Cluster.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To bring a clustered service or application online or take it offline
- In the Failover Cluster Manager snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to manage.
- Under Services and Applications, expand the console tree.
- Check the status of the service or application that you want to bring online or take offline by clicking the service or application and viewing the Status column (in the center pane).
- Right-click the service or application that you want to bring online or take offline.
- Click the appropriate command: Bring this service or application online or Take this service or application offline.
50.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- Note that in a clustered file server, shared folders are associated with a File Server resource. When you bring that resource online or take it offline, all shared folders in the resource go online or offline at the same time. You cannot change the online or offline status of one of the shared folders without affecting all of the shared folders in the File Server resource.
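As the considerations above note, these steps can also be performed in Windows PowerShell. A minimal sketch; the group name is hypothetical:

    Import-Module FailoverClusters

    # Check the state of each clustered service or application (resource group).
    Get-ClusterGroup

    # Take a clustered file server offline, and then bring it back online.
    Stop-ClusterGroup "FileServer1"
    Start-ClusterGroup "FileServer1"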
51. Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node
Failover clusters in Windows Server 2008 R2 provide several ways to move virtual machines from one cluster node to another. You can live migrate, quick migrate, or move a virtual machine to another node.
To live migrate, quick migrate, or move a virtual machine to another node
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the clustered instance containing the virtual machine you want to migrate or move.
- Under Actions (on the right), click one of the following:
- Live migrate virtual machine to another node
- Quick migrate virtual machine(s) to another node
- Move virtual machine(s) to another node
For more information about these choices, see Understanding Hyper-V and Virtual Machines in the Context of a Cluster.
51.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- You cannot use live migration to move multiple virtual machines simultaneously. On a given server running Hyper-V, only one live migration (to or from the server) can be in progress at a given time.
- If you decide to change the settings of a clustered virtual machine, be sure to see Modify the Virtual Machine Settings for a Clustered Virtual Machine.
- For live migration and quick migration, we recommend that you make the hardware and system settings of the nodes as similar as possible to minimize potential problems.
- For each clustered virtual machine, you can also specify the action that the cluster performs before taking the virtual machine offline. This setting does not affect live migration, quick migration, or unplanned failover. It affects only moving (or taking the resource offline through the action of Windows PowerShell or an application). To specify the setting, make sure that after selecting the clustered virtual machine in the console tree (on the left), you right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties. Click the Settings tab and select an option. The actions are described in Understanding Hyper-V and Virtual Machines in the Context of a Cluster in the section called “Live migration, quick migration, and moving of virtual machines,” in the description for moving of virtual machines.
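These actions are also available through the Failover Clustering cmdlets. A minimal sketch; the virtual machine group name and node name are hypothetical:

    Import-Module FailoverClusters

    # Live migrate a clustered virtual machine to another node.
    Move-ClusterVirtualMachineRole -Name "Virtual Machine VM1" -Node NODE2

    # Quick migrate or move: Move-ClusterGroup moves the group by using the
    # cluster's standard handling for taking the virtual machine offline.
    Move-ClusterGroup "Virtual Machine VM1" -Node NODE2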
52. Refresh the Configuration of a Virtual Machine
Important: If you change the settings of a clustered virtual machine outside the Failover Cluster Manager snap-in, refresh the virtual machine configuration so that the cluster uses the current settings. For more information, see Modify the Virtual Machine Settings for a Clustered Virtual Machine.
To refresh the configuration of a virtual machine
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the virtual machine for which you want to refresh the configuration.
- In the Actions pane, scroll down, click More Actions, and then click Refresh virtual machine configuration.
- Click Yes to view the details of this action.
52.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
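As the considerations above note, this task can also be performed in Windows PowerShell. A minimal sketch; the configuration resource name is hypothetical:

    Import-Module FailoverClusters

    # Refresh the configuration of a clustered virtual machine so that the
    # cluster picks up changes made to the virtual machine's settings.
    Get-ClusterResource "Virtual Machine Configuration VM1" |
        Update-ClusterVirtualMachineConfiguration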
53. Pause or Resume a Node in a Failover Cluster
When you pause a node, existing groups and resources stay online, but additional groups and resources cannot be brought online on the node. Pausing a node is usually done when applying software updates to the node; the recommended sequence is to move all services and applications off the node, pause the node, and then apply the software updates.
If you need to perform extensive diagnosis or maintenance on a cluster node, it might not be workable to simply pause the node. In that case, you can stop the Cluster service on that node. For more information, see Start or Stop the Cluster Service on a Cluster Node.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To pause or resume a node in a failover cluster by using the Windows interface
- In the Failover Cluster Manager snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and select or specify the cluster you want.
- If the console tree is collapsed, expand the tree under the cluster you want to manage.
- Expand the console tree under Nodes.
- Right-click the node you want to pause or resume, and then click either Pause or Resume.
53.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
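As the considerations above note, this task can also be performed in Windows PowerShell. A minimal sketch; the node name is hypothetical:

    Import-Module FailoverClusters

    # Pause a node: existing groups stay online, but no additional groups
    # or resources can be brought online on the node.
    Suspend-ClusterNode -Name NODE1

    # Resume the node when maintenance is complete.
    Resume-ClusterNode -Name NODE1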
54. Run a Disk Maintenance Tool Such as Chkdsk on a Clustered Disk
To run a disk maintenance tool such as Chkdsk on a disk or volume that is configured as part of a clustered service, application, or virtual machine, you must use maintenance mode. When maintenance mode is on, the disk maintenance tool can finish running without triggering a failover. If you have a disk witness, you cannot use maintenance mode for that disk.
Maintenance mode works somewhat differently on a volume in Cluster Shared Volumes than it does on other disks in cluster storage, as described in Additional considerations, later in this topic.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To run a disk maintenance tool such as Chkdsk on a clustered disk
- In the Failover Cluster Manager snap-in, if the cluster is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and select or specify the cluster you want.
- If the console tree is collapsed, expand the tree under the cluster that uses the disk on which you want to run a disk maintenance tool.
- In the console tree, click Storage.
- In the center pane, click the disk on which you want to run the disk maintenance tool.
- Under Actions, click More Actions, and then click the appropriate command:
- If the disk you clicked is under Cluster Shared Volumes and contains multiple volumes, click Maintenance, and then click the command for the appropriate volume. If prompted, confirm your action.
- If the disk you clicked is under Cluster Shared Volumes and contains one volume, click Maintenance, and then click Turn on maintenance mode for this volume. If prompted, confirm your action.
- If the disk you clicked is not under Cluster Shared Volumes, click Turn on maintenance mode for this disk.
- Run the disk maintenance tool on the disk or volume.
When maintenance mode is on, the disk maintenance tool can finish running without triggering a failover.
- When the disk maintenance tool finishes running, with the disk still selected, under Actions, click More Actions, and then click the appropriate command:
- If the disk you clicked is under Cluster Shared Volumes and contains multiple volumes, click Maintenance, and then click the command for the appropriate volume.
- If the disk you clicked is under Cluster Shared Volumes and contains one volume, click Maintenance, and then click Turn off maintenance mode for this volume.
- If the disk you clicked is not under Cluster Shared Volumes, click Turn off maintenance mode for this disk.
54.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- Maintenance mode works somewhat differently on a volume in Cluster Shared Volumes than it does on other disks in cluster storage:
For Cluster Shared Volumes | For disks not in Cluster Shared Volumes
Changes the state of a volume. | Changes the state of a disk (LUN). |
Takes dependent resources offline (which interrupts client access). | Leaves dependent resources online. |
Removes access through the \ClusterStorage\volume path, still allowing the owner node to access the volume through its identifier (GUID). Also, suspends direct access from other nodes, allowing access only through the owner node. | Leaves access to the disk unchanged.
- Maintenance mode will remain on until one of the following occurs:
- You turn it off.
- The node on which the resource is running restarts or loses communication with other nodes (which causes failover of all resources on that node).
- For a disk that is not in Cluster Shared Volumes, the disk resource goes offline or fails.
- You can see whether a disk is in maintenance mode by looking at the status in the center pane when Storage is selected in the console tree.
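As the considerations above note, maintenance mode can also be controlled in Windows PowerShell. A minimal sketch for a disk that is not a Cluster Shared Volume; the disk resource name and volume letter are hypothetical:

    Import-Module FailoverClusters

    # Turn on maintenance mode for a clustered disk (not a disk witness).
    Suspend-ClusterResource "Cluster Disk 3"

    # Run the disk maintenance tool while maintenance mode is on.
    chkdsk S: /f

    # Turn maintenance mode off again when the tool finishes.
    Resume-ClusterResource "Cluster Disk 3"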
55. Start or Stop the Cluster Service on a Cluster Node
You might need to stop and restart the Cluster service on a cluster node during some troubleshooting or maintenance operations. When you stop the Cluster service on a node, services or applications on that node will fail over, and the node will stop functioning in the cluster until the Cluster service is restarted.
If you want to leave a particular node functioning so that it supports the services or applications it currently owns, and at the same time prevent additional services and applications from failing over to that node, pause the node (do not stop the Cluster service). For more information, see Pause or Resume a Node in a Failover Cluster.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To start or stop the Cluster service on a cluster node by using the Windows interface
- In the Failover Cluster Manager snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster you want to manage.
- To minimize disruption to clients, before stopping the Cluster service on a node, move the applications that are currently owned by that node to another node. To do this, expand the console tree under the cluster that you want to manage, and then expand Services and Applications. Click each service or application and (in the center pane) view the Current Owner. If the owner is the node on which you want to stop the Cluster service, right-click the service or application, click Move this service or application to another node, and then choose the node. (For an explanation of the Best possible command option, see “Additional considerations” in this topic.)
- Expand the console tree under Nodes.
- Right-click the node that you want to start or stop, and then click More Actions.
- Click the appropriate command:
- To start the service, click Start Cluster Service.
- To stop the service, click Stop Cluster Service.
55.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- On a cluster with more than two nodes, from the options next to Move this service or application to another node, you can choose Best possible. This option has no effect if you have not configured a Preferred owners list for the service or application you are moving (in this case, the node will be chosen randomly). If you have configured a Preferred owners list, Best possible will move the service or application to the first available node on the list.
- In the center pane of the Failover Cluster Manager snap-in, you can view information about the state of a node. To specifically check whether the Cluster service is running on a node, right-click the node and click More Actions. On a node that is started, Start Cluster Service is dimmed, and on a node that is stopped, Stop Cluster Service is dimmed.
- If you are using the Node and File Share Majority quorum option, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. The cluster will then use the copy of the cluster configuration that is on that node and replicate it to all other nodes. To force the cluster to start, on a node that contains a copy of the cluster configuration that you want to use, open the Failover Cluster Manager snap-in, click the cluster, and then under Actions (on the right), click Force Cluster Start. (Under most circumstances, this command is not available in the Windows interface.)
Note: When you use this command on a given node, the copy of the cluster configuration that is on that node will be treated as the authoritative copy of the configuration and will be replicated to all other nodes.
- The Cluster service performs essential functions on each cluster node, including managing the cluster configuration, coordinating with the instances of the Cluster service running on other nodes, and performing failover operations.
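As the considerations above note, these steps can also be performed in Windows PowerShell. A minimal sketch; the group and node names are hypothetical, and the -FixQuorum switch is assumed to correspond to Force Cluster Start in this release:

    Import-Module FailoverClusters

    # Move the services and applications owned by the node, then stop the
    # Cluster service on it.
    Move-ClusterGroup "FileServer1" -Node NODE2
    Stop-ClusterNode -Name NODE1

    # Restart the Cluster service on the node.
    Start-ClusterNode -Name NODE1

    # Force the cluster to start from this node's copy of the configuration
    # (use only in the circumstances described above).
    Start-ClusterNode -Name NODE1 -FixQuorum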
56. Use Validation Tests for Troubleshooting a Failover Cluster
The Validate a Configuration Wizard can be useful when troubleshooting a failover cluster. By running tests related to the symptoms you see, you can learn more about what to do to correct the issue.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To use validation tests for troubleshooting a failover cluster
- Decide whether you want to run all or only some of the available validation tests. You can select or clear the following tests individually or by category:
- Cluster Configuration tests: Validate important cluster configuration settings. For more information, see Understanding Cluster Validation Tests: Cluster Configuration.
- Inventory tests: Provide an inventory of the hardware, software, and settings (such as network settings) on the servers, and information about the storage. For more information, see Understanding Cluster Validation Tests: Inventory.
- Network tests: Validate that networks are set up correctly for clustering. For more information, see Understanding Cluster Validation Tests: Network.
- Storage tests: Validate that the storage on which the failover cluster depends is behaving correctly and supports the required functions of the cluster. For more information, see Understanding Cluster Validation Tests: Storage.
- System Configuration tests: Validate that the system software and configuration settings are compatible across servers. For more information, see Understanding Cluster Validation Tests: System Configuration.
- In the Failover Cluster Manager snap-in, if the cluster that you want to troubleshoot is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If you want to test disks that you have configured as Cluster Shared Volumes, perform the following steps:
- Expand the console tree and click Cluster Shared Volumes.
- In the center pane, right-click a disk that you want to test and then click Take this resource offline.
- Repeat the previous step for any other disks that you want to test.
- Right-click the cluster that you want to troubleshoot, and then click Validate This Cluster.
- Follow the instructions in the wizard to specify the tests, run the tests, and view the results.
- If you took Cluster Shared Volumes offline in a previous step, perform the following steps:
- Click Cluster Shared Volumes.
- In the center pane, right-click a disk that is offline and then click Bring this resource online.
- Repeat the previous step for any other disks that you previously took offline.
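The same targeted validation can be run from PowerShell with Test-Cluster. A minimal sketch, assuming a cluster named MyCluster and using the test category names shown in the wizard:

    Import-Module FailoverClusters

    # Run only the test categories related to the symptoms, for example network and storage
    Test-Cluster -Cluster MyCluster -Include "Network", "Storage"

    # Or run the complete test set against servers that are not yet clustered
    Test-Cluster -Node Server1, Server2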
56.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- To view the results of the tests after you close the wizard, choose one of the following:
- Open the folder systemroot\Cluster\Reports (on a clustered server).
- In the console tree, right-click the cluster, and then click View Validation Report. This displays the most recent validation report for that cluster.
57. View Events and Logs for a Failover Cluster
When you view events through the Failover Cluster Manager snap-in, you can see events for all nodes in the cluster instead of seeing events for only one node at a time.
For information about viewing the reports from the failover cluster wizards, see View Reports of the Actions of Failover Cluster Wizards.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To view events and logs for a failover cluster by using the Windows interface |
- In the Failover Cluster Manager snap-in, if the cluster is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster for which you want to view events.
- In the console tree, right-click Cluster Events and then click Query.
- In the Cluster Events Filter dialog box, select the criteria for the events that you want to display.
To return to the default criteria, click the Reset button.
- Click OK.
- To sort the events, click a heading, for example, Level or Date and Time.
- To view a specific event, click the event and view the details in the Event Details pane.
57.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- You can also use Event Viewer to open a log related to failover clustering. To locate the log, in Event Viewer, expand Applications and Services Logs, expand Microsoft, expand Windows, and then expand FailoverClustering. The log file is stored in systemroot\system32\winevt\Logs.
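- If you prefer the command line, the same information can be pulled with PowerShell. A small sketch; the cluster name and the destination folder are placeholders:

    # Query the FailoverClustering operational log on the local node
    Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 50

    # Generate the detailed cluster.log files from every node into C:\Temp
    Import-Module FailoverClusters
    Get-ClusterLog -Cluster MyCluster -Destination C:\Temp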
58. View Reports of the Actions of Failover Cluster Wizards
Each of the failover cluster wizards in Windows Server 2008 R2 produces a report of its actions. This topic describes how to view the report of a particular wizard by clicking a button on the last page of that wizard or by opening a report from a specific folder in Windows Server 2008 R2.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To view a report of the actions of a failover cluster wizard |
- Choose one of the following options:
- If the wizard for which you want to view a report is displayed, proceed through the wizard until you are on the Summary page, and then click View Report.
- If you ran the wizard but it is not displayed now, use Windows Explorer or another method to navigate to the following location and open the report that you want: systemroot\Cluster\Reports
58.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
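- Because the reports are plain files in systemroot\Cluster\Reports, a quick way to find the most recent one is a short PowerShell listing (a sketch, run on a clustered server):

    # List wizard and validation reports, newest first
    Get-ChildItem -Path "$env:SystemRoot\Cluster\Reports" |
        Sort-Object LastWriteTime -Descending |
        Select-Object LastWriteTime, Name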
59. Add Resource Type Dialog Box
Item | Details |
Resource DLL path and file name | Specify the path and file name of the resource DLL that the Cluster service should use when it communicates with your service or application. A resource DLL monitors and controls the associated service or application in response to commands and requests from the Cluster service and associated software components. For example, the resource DLL saves and retrieves service or application properties in the cluster database, brings the resource online and takes it offline, and checks the health of the resource. |
Resource type name | Specify the name that the Cluster service uses for the resource type. This name stays the same regardless of the regional and language options that are currently selected. |
Resource type display name | Specify the name that is displayed for the resource type in the snap-in interface. Unlike the resource type name, the display name can reflect the regional and language options that are currently selected. |
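The same registration can be performed with the Add-ClusterResourceType cmdlet. A hedged sketch; the type name, DLL path, and display name below are invented placeholders:

    Import-Module FailoverClusters

    # Register a third-party resource type with the cluster
    Add-ClusterResourceType -Name "MyAppType" `
        -Dll "C:\Program Files\MyApp\MyAppRes.dll" `
        -DisplayName "My Application Resource"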
60. Resources for Failover Clusters
For more information about failover clusters, see the following resources on the Microsoft Web site:
- For a list of links to a variety of topics about failover clusters, see https://go.microsoft.com/fwlink/?LinkId=68633.
- For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
- For operations information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137835.
- For troubleshooting information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137836.
60.2. Server overview
Attached is an initial list of servers that will be deployed.
61. Network Infrastructure
61.1. WAN logical Network diagram
61.1.1. Topology
Diagram here
61.1.1.1. On-prem Network
On-prem will have only production and supporting subnets (e.g. SQL Cluster infra). At the time of writing this document, only the infra subnet is provisioned. Other subnets will be provisioned later, as the project sees a need for them.
61.1.1.2. IPs
All SQL CLUSTER traffic goes out via these SQL CLUSTER public IP addresses:
IP1
IP2
IP3
61.1.2. Firewalls
61.1.2.1. Application firewall rules
Please see section 64.1 Application orders.
61.1.2.2. Predefined security groups
Name here
61.1.2.3. SQL Cluster solution firewall rules
The diagram below shows the ports required for SQL Cluster components and session hosts to communicate with each other over the network, as well as the custom ports for applications residing on the session hosts themselves.
Figure 7. Ports required for solution
NOTE: To simplify future deployment and to allow more flexibility, we will allow each subnet to access Microsoft Office online services for Office activation, Outlook connectivity, and OneDrive synchronization, limiting access to CUSTOMER’s tenant only.
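For illustration, a rule like the ones in the ports diagram could be expressed with the NetSecurity PowerShell module (available on Windows Server 2012 and later). The rule name, port, and source subnet below are assumptions, not the agreed rule set:

    # Allow SQL Server default-instance traffic (TCP 1433) from the session-host subnet
    New-NetFirewallRule -DisplayName "Allow SQL 1433 from session hosts" `
        -Direction Inbound -Protocol TCP -LocalPort 1433 `
        -RemoteAddress 10.0.0.0/24 -Action Allow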
61.1.3. Subnet segmentation
The general purpose of dividing sessions into multiple segments/subnets is to minimize the risk of lateral movement between servers when one of them is compromised by malicious activity. For that purpose, we segregate the session hosts and SQL Cluster infrastructure components into the subnets below. Common applications are grouped together to form a unified communications channel.
A white fill means that the subnet will be implemented when needed.
Subnet name | CIDR range | Information | On-prem (p) / SQL CLUSTER (a) | SQL CLUSTER Node name |
std-users | 10.0.0.0/24 | Services. | | |
app-admins-p | 10.0.0.0/24 | Prod application e.g. RDP, SSH | | |
61.1.4. Domains
All servers and users will reside in the TGV.net domain. If other projects implement additional Active Directory forests or domains and require SQL Cluster access to them, careful consideration and planning should be carried out. At a high level this is supported by SQL Cluster, but with some limitations if the other systems are not implemented within a supported configuration.
62. SQL Cluster configuration
62.1. Applications and Groups
pa-SQL-globalaccess-regular – access to the SQL Cluster environment for CUSTOMER employees and/or contractors.
pa-SQL-globalaccess-restricted – access with restrictions: no copying of files or text in or out.
pa-SQL-globalaccess-full – full copying in and out of environments.
NOTE: Further restrictions can be achieved using AD groups, depending on security requirements:
- Full, bi-directional clipboard redirection
- Full, client-to-session only clipboard redirection
- Full, session-to-client only clipboard redirection
- Selective formats, bi-directional clipboard redirection
- Selective formats, client-to-session only clipboard redirection
- Selective formats, session-to-client only clipboard redirection
62.2. OU segmentation, GPO’s and policies
All machines in the same subnet/segment/group should be placed in the same OU and have the same GPOs. For that purpose, the proposed OU structure would be:
All SQL Cluster OUs should have general policies and GPOs applied according to company policies, plus SQL Cluster-specific policies (e.g. logging off users after idle time). Exact settings are subject to change and are therefore not documented in the design document.
62.3. Disaster Recovery
In a disaster recovery case where the ADFS and SQL Cluster Network are not available, we will be using the Test environment on VMware with the on-prem configuration.
63. Migration
63.1. Migration from SQL
High-level steps for the migration from Windows 2008 to Windows 2019 and SQL Cluster:
- SQL Cluster DB to new SQL Cluster steps:
- Identify all applications, and the users who use them, in the SQL environment
- Migrate the existing databases one by one from the legacy SQL Cluster to the new SQL platform (a sketch of one scripted approach follows below)
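One way the per-database move could be scripted is with the SqlServer PowerShell module. This is a sketch under assumed instance and share names (LegacySql, NewSqlCluster, \\backup\share), not the final runbook:

    Import-Module SqlServer

    # Back up the database on the legacy instance...
    Backup-SqlDatabase -ServerInstance "LegacySql" -Database "AppDb" `
        -BackupFile "\\backup\share\AppDb.bak"

    # ...then restore it onto the new clustered instance
    Restore-SqlDatabase -ServerInstance "NewSqlCluster" -Database "AppDb" `
        -BackupFile "\\backup\share\AppDb.bak"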
64. System Management
64.1. Application orders
When a new application order comes in for publishing, the form below must be filled in by the requestor or application owner.
64.2. Management and control
SQL Cluster Network administration is done via the https:/url URL, where authentication is handled through Azure AD.
ADFS and cluster management will be done through the URL or by connecting directly to the console from the RDP platform.
64.3. Backup schedule
All backups should be done per CUSTOMER’s standards, at standard times outside working hours. Below is a table showing what should be backed up for each component, and when.
SQL Cluster component | Backup required | Schedule |
SQL Cluster | Yes | Incremental once a week. Full once a month. Retention 6 months. |
SQL Cluster Management | Yes | Incremental once a day. Full once a week. Retention 3 months. |
SQL Cluster Network Connector | Yes | – |
SQL Cluster Configuration | Yes | – |
File servers | Yes | Incremental once a day. Full once a week. Retention 3 months. |
64.4. System recovery and resilience
Please see the list below for how resilient each component is and what is done to make the system recoverable.
SQL Cluster Network – SQL Cluster, as a PaaS vendor, will provide 99.5% uptime for its services. In case of SQL Cluster Network failure, DR plans will commence and users will be redirected to an alternate path for connecting to applications.
SQL Cluster servers – are deployed for a DR-only scenario. Nevertheless, they are deployed in a highly available configuration: two instances on two different physical hosts, with redundant network controllers in active-passive mode.
SQL Cluster Management – servers are deployed for the DR scenario as well. There are two servers, which are load balanced. In case one of the servers goes offline, the other will serve all users until the first one is recovered.
SQL Cluster Network Connectors – are the main components of SQL Cluster Network. They do not require any load balancing and are stateless, which means they can easily be scaled out if the need arises. Automatic scaling will be done in case of failure on the SQL CLUSTER side. On the on-prem network, if one of the connectors fails, the other one will take over the full load.
SQL Cluster session hosts – multiple hosts will be deployed per application.
File Servers – file servers are deployed in a highly available configuration within clusters.
SQL DB – resiliency and recovery are handled within the leveraged platform.
64.5. Scalability
Below is an explanation of how each component can scale and what actions should be taken to do so:
SQL Cluster – as it is a DR-only component, it is deployed to serve users after a disaster and does not need scaling.
SQL Cluster Management – deployed only for DR and does not need scaling, but it is deployed to support all incoming users.
SQL Cluster Network Connectors – are the main components of SQL Cluster Network. They do not require any load balancing and are stateless, which means they can easily be scaled out if the need arises. On the SQL CLUSTER side, autoscaling is enabled to do this automatically in case of failure or performance degradation. Connectors can be scaled out/in or up/down.
SQL Cluster Scaling – scaling to more nodes is done manually, depending on application usage. Failover on SQL CLUSTER is done automatically, depending on the load, through the SQL Cluster feature called Automatic Failover.
File Servers – file servers are deployed in SQL CLUSTER as a service (FSx) in a highly available configuration within clusters. File servers are scaled by adding more storage.
64.6. Licensing
When SQL Cluster Network licenses are bought, they will reside on Volume Licensing (LTSA) and will be managed and provided for CUSTOMER to consume.
Additional ADFS licenses should be installed on the current LTSA or VLS server to satisfy the increasing demand from employee onboarding to the SQL Cluster solution.
65. Appendix I
66. <ClusteredInstance> Properties: General Tab
Item | Details |
Preferred owners | Select nodes from the list, and then use the buttons to list them in order.
If you want this service or application to be moved to a particular node whenever that node is available: · Select the check box for a node and use the buttons to place it at the top of the list. · On the Failover tab, ensure that failback is allowed for this service or application. Note that in the Preferred owners list, even if you clear the check box for a node, the service or application could fail over to that node for either of two reasons: · You have not specified any preferred owners. · No node that is a preferred owner is currently online. To ensure that a clustered service or application never fails over to a particular node, clear the check box for that node in the Possible owners list for each resource in that service or application (see <Resource> Properties: Advanced Policies tab). |
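The preferred owner list can also be set from PowerShell. A minimal sketch; the group and node names are placeholders:

    Import-Module FailoverClusters

    # Set the preferred owners, in order, for a clustered service or application
    Set-ClusterOwnerNode -Group "FileServer1" -Owners Node1, Node2

    # Review the current preferred owner list
    Get-ClusterOwnerNode -Group "FileServer1"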
67. <Resource> Properties: Advanced Policies tab
Item | Details |
Possible owners | Clear the check box for a node only if you want to prevent this resource (and the clustered service or application that contains this resource) from failing over to that node. Otherwise, leave the boxes checked for all nodes.
Note that if you leave the box for only one node checked, this resource (and the clustered service or application that contains this resource) cannot fail over. |
Basic resource health check interval | Specify how often you want the cluster to perform a basic check to see whether the resource appears to be online. We recommend that you use the standard time period for the resource type unless you have a reason to change it.
This health check is also known as the Looks Alive poll. |
Thorough resource health check interval | Specify how often you want the cluster to perform a more thorough check, looking for indications that the resource is online and functioning properly. We recommend that you use the standard time period for the resource type unless you have a reason to change it.
This health check is also known as the Is Alive poll. |
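These settings are scriptable as well: possible owners through Set-ClusterOwnerNode with -Resource, and the two poll intervals through the resource's common properties (values in milliseconds). The resource and node names below are placeholders:

    Import-Module FailoverClusters

    # Prevent the resource from failing over to any node other than Node1 and Node2
    Set-ClusterOwnerNode -Resource "Cluster Disk 2" -Owners Node1, Node2

    # Inspect the basic (Looks Alive) and thorough (Is Alive) poll intervals
    Get-ClusterResource "Cluster Disk 2" |
        Format-List Name, LooksAlivePollInterval, IsAlivePollInterval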
68. <Resource> Properties: Dependencies Tab
When you specify resource dependencies, you control the order in which the Cluster service brings resources online and takes them offline. A dependent resource is:
- Brought online after the resource or resources it depends on.
- Taken offline before the resource or resources it depends on.
Note | |
After specifying dependencies, to see a report, right-click the clustered service or application (not the resource), and then click Show Dependency Report. |
This tab contains the following controls:
Item | Details |
Resource | Click the box, and then select a resource from the list. The list contains the other resources in this clustered service or application that have not already been selected. |
AND/OR | For the second or later lines:
· Specify AND if both the resource in that line and one or more previously listed resources must be online before the dependent resource is brought online. · Specify OR if either the resource in that line or another previously listed resource must be online before the dependent resource is brought online. (The dependent resource can also be brought online if both are online.) Notice that gray brackets appear to show how the listings are grouped, based on the AND/OR settings. |
Insert | When you select a line in the list and click Insert, a line is added to the list before the selected line. |
Delete | When you select a line in the list and click Delete, the line is deleted. |
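Dependencies, including AND/OR expressions, can also be set with Set-ClusterResourceDependency. A sketch with placeholder resource and group names; the bracketed names in the expression must match the actual resource names exactly:

    Import-Module FailoverClusters

    # Make the application resource depend on either of two IP address resources
    Set-ClusterResourceDependency -Resource "MyApp" `
        -Dependency "[IP Address 10.0.0.21] or [IP Address 10.0.0.22]"

    # View the resulting dependency report for the whole group
    Get-ClusterGroup "MyAppGroup" | Get-ClusterResourceDependencyReport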
69. <Resource> Properties: General Tab
If a disk that you are using in a failover cluster has failed, you can use Repair (on the General tab of the Properties sheet) to assign a different disk.
Item | Details | ||||
Repair button (for a disk) | When you click this button, you can stop using a failed disk and assign a different disk to this service or application. The disk that you assign must be one that can be used for clustering but is not yet clustered.
For more information about disks that can be used for clustering, see “List Potential Cluster Disks” in Understanding Cluster Validation Tests: Storage.
Before using the Repair button, expose a LUN to each node of the cluster, but do not add that LUN to the cluster (through the Failover Cluster Manager snap-in). You can restore the data to the disk before or after using the Repair button. |
70. <Resource> Properties: Policies Tab
Item | Details |
Maximum restarts in the specified period | Specify the number of times that you want the Cluster service to try to restart the resource during the period you specify. If the resource cannot be started after this number of attempts in the specified period, the Cluster service will take actions as specified by other fields of this tab.
For example, if you specify 3 for Maximum restarts in the specified period and 15:00 for the period, the Cluster service attempts to restart the resource three times in a given 15-minute period. If the resource still does not run, instead of trying to restart it a fourth time, the Cluster service will take the actions that you specified in the other fields of this tab. |
Period (mm:ss) | Specify the length of the period (minutes and seconds) during which the Cluster service counts the number of times that a resource has been restarted. For an example of how the period works with the maximum number of restarts, see the previous cell in this table. |
If restart is unsuccessful, fail over all resources in this service or application | Use this box to control the way the Cluster service responds if the maximum restarts fail:
· Select this box if you want the Cluster service to respond by failing the clustered service or application over to another node. · Clear this box if you want the Cluster service to respond by leaving this clustered service or application running on this node (even if this resource is in a failed state). |
If all the restart attempts fail, begin restarting again after the specified period (hh:mm) | Select this box if you want the Cluster service to go into an extended waiting period after attempting the maximum number of restarts on the resource. Note that this extended waiting period is measured in hours and minutes. After the waiting period, the Cluster service will begin another series of restarts. This is true regardless of which node owns the clustered service or application at that time. |
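These restart policies map to common properties on the resource object, so they can be adjusted in PowerShell as well. A sketch with a placeholder resource name; note that RestartPeriod is expressed in milliseconds:

    Import-Module FailoverClusters
    $res = Get-ClusterResource "MyApp"

    # Try up to three restarts within a 15-minute (900,000 ms) window
    $res.RestartThreshold = 3
    $res.RestartPeriod = 900000

    # 2 = restart the resource and, if restarts keep failing, fail over
    # the clustered service or application (the default behavior)
    $res.RestartAction = 2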
71. Virtual Machine <Resource Name> Properties: Settings Tab
On this tab, one of the settings you can specify is the action that the cluster will perform when taking the virtual machine offline. This setting does not affect live migration, quick migration, or unplanned failover. It affects only moving (or taking the resource offline through the action of Windows PowerShell or an application).
72. Appendix II
72.1. Appendix A glossary of terms
Reference | Description |
DR | Disaster Recovery |
IP | Internet Protocol |
FAS | Federated Authentication Service |
LAN | Local Area Network |
POP | Point of Presence |
Personal data | Documents and files user creates |
User data | Settings user creates while working |
SLA | Service Level Agreement |
VPN | Virtual Private Network |
WAN | Wide Area Network |
NPS | Network Policy Server |
GW | Gateway |
SQLH | Remote Desktop Session Host |
WEM | Workspace Environment Management |
BAPL | Business Application Presentation Layer |
NTP | Network Time Protocol |
NS | |
CAL | Client access license |
72.2. Appendix B bibliography
Reference | Description |
v0.1 | Initial Draft |
72.3. Appendix C assumptions risks and constraints
72.3.1. Assumptions
ID | Assumption | Criticality (H/M/L) |
Business requirements | How users connect and what features they need | |
Security requirements | Network segmentation | |
72.3.2. Constraints
ID | Constraint | Impact Action |
Business requirements | There are no business representatives on the project to maintain and introduce business requirements. | |
Security | There is a lack of security guidelines and of an overall company policy. | |
72.3.3. Givens
ID | Givens |
72.3.4. Risks
ID | Risk | Impact Mitigation |
Requirements | Many functional requirements were assumed by the architect, as there was no business owner involvement. | |
72.4. Appendix D document distribution
Distribution | Version 0.1 |
Recipient or location | x |
Table 13. Document Distribution
DOCUMENT CONTROL
Please note that printed copies of this document, permitted or otherwise, are uncontrolled and shall be treated as such.
DOCUMENT AUTHORISATION
Name | Role | Date |
Table 1. Document Authorization
CHANGE HISTORY
Version | Date | Author | Summary of Changes |
0.1 | 14.11.2021 | Pablo Villaronga | Initial Draft |
1.0 | TBD | | Final version |
DOCUMENT PREFERENCES
Document | Version | File Name/URL |
The Very Group Non-Functional requirements | | Link to Non-Functional requirements |
The Very Group Functional requirements | | Link to Functional requirements |
Table of Contents
2.1. Logical infrastructure overview diagram
2.1.2. Checklist: Create a Failover Cluster
2.1.3. Logical technical diagram components
- Checklist: Create a Clustered File Server
- Checklist: Create a Clustered Print Server
- Checklist: Create a Clustered Virtual Machine
- Technical infrastructure
- Installing the Failover Clustering Feature
- Install the Failover Clustering Feature
- Validating a Failover Cluster Configuration
- Understanding Requirements for Failover Clusters
10.1. Hardware requirements for a failover cluster
10.1.1. Deploying storage area networks with failover clusters
10.2. Software requirements for a failover cluster
10.3. Network infrastructure and domain account requirements for a failover cluster
- Understanding Microsoft Support of Cluster Solutions
- Understanding Cluster Validation Tests
- Understanding Cluster Validation Tests: Cluster Configuration
- Understanding Cluster Validation Tests: Inventory
- Understanding Cluster Validation Tests: Network
15.1. Correcting issues uncovered by network tests
15.2. Network tests in the Validate a Configuration Wizard
16.1. Correcting issues uncovered by storage tests
16.2. Storage tests in the Validate a Configuration Wizard
- Understanding Cluster Validation Tests: System Configuration
- Prepare Hardware Before Validating a Failover Cluster
- Validate a New or Existing Failover Cluster
- Creating a Failover Cluster or Adding a Cluster Node
- Understanding Access Points (Names and IP Addresses) in a Failover Cluster
- Create a New Failover Cluster
- Add a Server to a Failover Cluster
- Configuring a Service or Application for High Availability
24.1. Applications and services listed in the High Availability Wizard
24.2. List of topics about configuring a service or application for high availability
25.2. Basic requirements for a service or application in a failover cluster environment
26.1. Overview of Hyper-V in the context of a failover cluster
26.2. Using Cluster Shared Volumes with Hyper-V
26.3. Live migration, quick migration, and moving of virtual machines
26.4. Coordinating the use of Hyper-V Manager with the use of Failover Cluster Manager
- Configure a Service or Application for High Availability
- Configure a Virtual Machine for High Availability
- Test the Failover of a Clustered Service or Application
- Test the Failover of a Clustered Virtual Machine
- Modifying the Settings for a Clustered Service or Application
31.1. Modifying failover and failback settings, including preferred owners
31.2. List of topics about modifying settings
- Add Storage for a Clustered Service or Application
- Add a Resource to a Clustered Service or Application
- Modify the Failover Settings for a Clustered Service or Application
- Modify the Virtual Machine Settings for a Clustered Virtual Machine
- Create a Shared Folder in a Clustered File Server
- Configure the Print Settings for a Clustered Print Server
- Migrating Settings to a Failover Cluster Running Windows Server 2008 R2
- Understanding the Process of Migrating to a Cluster Running Windows Server 2008 R2
39.2. Migration scenario A: Migrating a multiple-node cluster to a cluster with new hardware
39.3. Migration scenario B: Migrating a two-node cluster to a cluster with the same hardware
- Migrate Resource Groups to a Failover Cluster Running Windows Server 2008 R2
- Modifying Settings for a Failover Cluster
- Understanding Cluster Shared Volumes in a Failover Cluster
42.1. Benefits of using Cluster Shared Volumes in a failover cluster
42.2. Restrictions on using Cluster Shared Volumes in a failover cluster
43.1. How the quorum configuration affects the cluster
43.2. Quorum configuration choices
43.3. Illustrations of quorum configurations
43.4. Why quorum is necessary
- Add Storage to a Failover Cluster
- Modify Network Settings for a Failover Cluster
- Enable Cluster Shared Volumes in a Failover Cluster
- Select Quorum Options for a Failover Cluster
- Managing a Failover Cluster
- Understanding Backup and Recovery Basics for a Failover Cluster
49.1. Backing up a failover cluster
49.1.1. Backing up the cluster configuration
49.1.2. Backing up data on clustered disks
49.2. Restoring to a failover cluster from backup
49.2.1. Restoring the cluster configuration from backup
49.2.2. Restoring data to clustered disks from backup
- Bring a Clustered Service or Application Online or Take It Offline
- Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node
- Refresh the Configuration of a Virtual Machine
- Pause or Resume a Node in a Failover Cluster
53.1.1. Additional considerations
- Run a Disk Maintenance Tool Such as Chkdsk on a Clustered Disk
- Start or Stop the Cluster Service on a Cluster Node
- Use Validation Tests for Troubleshooting a Failover Cluster
- View Events and Logs for a Failover Cluster
- View Reports of the Actions of Failover Cluster Wizards
- Add Resource Type Dialog Box
- Resources for Failover Clusters
60.2. Server overview
61.1. WAN logical Network diagram
61.1.1. Topology
61.1.2. Firewalls
61.1.3. Subnet segmentation
61.1.4. Domains
62.1. Applications and Groups
62.2. OU segmentation, GPO’s and policies
62.3. Disaster Recovery
63.1. Migration from SQL
64.1. Application orders
64.2. Management and control
64.3. Backup schedule
64.4. System recovery and resilience
64.5. Scalability
64.6. Licensing
- Appendix I
- <ClusteredInstance> Properties: General Tab
- <Resource> Properties: Advanced Policies tab
- <Resource> Properties: Dependencies Tab
- <Resource> Properties: General Tab
- <Resource> Properties: Policies Tab
- Virtual Machine <Resource Name> Properties: Settings Tab
- Appendix II
72.1. Appendix A glossary of terms
72.2. Appendix B bibliography
72.3. Appendix C assumptions risks and constraints
72.3.1. Assumptions
72.3.2. Constraints
72.3.3. Givens
72.3.4. Risks
72.4. Appendix D document distribution
Table of Figures
Figure 1. Logical infrastructure overview diagram
Figure 3. Infrastructure topology
Figure 7. Ports required for solution
Table of Tables
Table 1. Document Authorization
Table 3. Document Preferences
Table 7. Glossary of Terms
Table 13. Document Distribution
1. Executive Summary
The purpose of this project is to enhance user mobility and flexibility of the Single Sign-On solution based on ADFS servers running against Microsoft Windows SQL Cluster Services.
The current solutions that The Very Group has are reaching end of life (EOL) or end of support (EOS) and need to be migrated to a new platform for all TVG needs.
2. Logical Architecture Environment
2.1. Logical infrastructure overview diagram
Figure 1. Logical infrastructure overview diagram
2.1.1. Description
- Checklist: Create a Failover Cluster
- Checklist: Create a Clustered File Server
- Checklist: Create a Clustered Print Server
- Checklist: Create a Clustered Virtual Machine
- Installing the Failover Clustering Feature
- Validating a Failover Cluster Configuration
- Creating a Failover Cluster or Adding a Cluster Node
- Configuring a Service or Application for High Availability
- Migrating Settings to a Failover Cluster Running Windows Server 2008 R2
- Modifying Settings for a Failover Cluster
- Managing a Failover Cluster
- Resources for Failover Clusters
- User Interface: The Failover Cluster Manager Snap-In
2.1.2. Checklist: Create a Failover Cluster
2.1.3. Logical technical diagram components
If you want to provide high availability for users of one or more file shares, see a more complete checklist at Checklist: Create a Clustered File Server.
1 | Review hardware and infrastructure requirements for a failover cluster. | Understanding Requirements for Failover Clusters |
2 | Install the Failover Clustering feature on every server that will be in the cluster. | Install the Failover Clustering Feature |
3 | Connect the networks and storage that the cluster will use. | Prepare Hardware Before Validating a Failover Cluster |
4 | Run the Validate a Configuration Wizard on all the servers that you want to cluster, to confirm that the hardware and hardware settings of the servers, network, and storage are compatible with failover clustering. If necessary, adjust hardware or hardware settings and rerun the wizard until all tests pass (required for support). | Validate a New or Existing Failover Cluster |
5 | Create the failover cluster. | Create a New Failover Cluster |
If you want to provide high availability for users of one or more print servers, see a more complete checklist at Checklist: Create a Clustered Print Server.
After you have created a failover cluster, the next step is usually to configure the cluster to support a particular service or application. For more information, see Configuring a Service or Application for High Availability.
3. Checklist: Create a Clustered File Server
Step | Reference | |
On every server that will be in the cluster, open Server Manager, click Add roles and then use the Add Roles Wizard to add the File Services role and any role services that are needed. | Help in Server Manager | |
Review hardware and infrastructure requirements for a failover cluster. | Understanding Requirements for Failover Clusters | |
Install the Failover Clustering feature on every server that will be in the cluster. | Install the Failover Clustering Feature | |
Connect the networks and storage that the cluster will use. | Prepare Hardware Before Validating a Failover Cluster | |
Run the Validate a Configuration Wizard on all the servers that you want to cluster, to confirm that the hardware and hardware settings of the servers, network, and storage are compatible with failover clustering. If necessary, adjust hardware or hardware settings and rerun the wizard until all tests pass (required for support). | Validate a New or Existing Failover Cluster | |
Create the failover cluster. | Create a New Failover Cluster | |
If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network intended only for iSCSI or only for backup), then configure that network so that it does not allow cluster communication. | Modify Network Settings for a Failover Cluster | |
Run the High-Availability Wizard and specify the File Server role, a name for the clustered file server, and IP address information that is not automatically supplied by your DHCP settings. Also specify the storage volume or volumes. | Configure a Service or Application for High Availability | |
Add shared folders to the clustered file server as needed. | Create a Shared Folder in a Clustered File Server | |
Test the failover of the clustered file server. | Test the Failover of a Clustered Service or Application |
3.1.1.1. Additional references
- Overview of Failover Clusters
- Modify the Failover Settings for a Clustered Service or Application
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
4. Checklist: Create a Clustered Print Server
Step | Reference | |
On every server that will be in the cluster, open Server Manager, click Add roles and then use the Add Roles Wizard to add the Print and Document Services role and any role services that are needed. | Help in Server Manager | |
Review hardware and infrastructure requirements for a failover cluster. | Understanding Requirements for Failover Clusters | |
Install the Failover Clustering feature on every server that will be in the cluster. | Install the Failover Clustering Feature | |
Connect the networks and storage that the cluster will use. | Prepare Hardware Before Validating a Failover Cluster | |
Run the Validate a Configuration Wizard on all the servers that you want to cluster, to confirm that the hardware and hardware settings of the servers, network, and storage are compatible with failover clustering. If necessary, adjust hardware or hardware settings and rerun the wizard until all tests pass (required for support). | Validate a New or Existing Failover Cluster | |
Create the failover cluster. | Create a New Failover Cluster | |
If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network intended only for iSCSI or only for backup), then configure that network so that it does not allow cluster communication. | Modify Network Settings for a Failover Cluster | |
Run the High-Availability Wizard and specify the Print Server role, a name for the clustered print server, and IP address information that is not automatically supplied by your DHCP settings. | Configure a Service or Application for High Availability | |
Configure print settings such as the setting that specifies the printer. | Configure the Print Settings for a Clustered Print Server | |
Test the failover of the clustered print server. | Test the Failover of a Clustered Service or Application | |
From a client, print a test page on the clustered print server. | Help for the operating system on the client (for specifying the path to the clustered print server)
Help from the application from which you print the test page |
4.1.1.1. Additional references
- Overview of Failover Clusters
- Modify the Failover Settings for a Clustered Service or Application
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
5. Checklist: Create a Clustered Virtual Machine
Step | Reference | |
Review hardware and infrastructure requirements for a failover cluster. | Understanding Requirements for Failover Clusters | |
Review hardware requirements for Hyper-V. | Hardware requirements for Hyper-V (https://go.microsoft.com/fwlink/?LinkId=137803) | |
Review concepts related to Hyper-V and virtual machines in the context of a cluster.
If you want to use Cluster Shared Volumes for your virtual machines, review concepts related to Cluster Shared Volumes. |
Understanding Hyper-V and Virtual Machines in the Context of a Cluster | |
Install Hyper-V (role) and Failover Clustering (feature) on every server that will be in the cluster. | Install Hyper-V (https://go.microsoft.com/fwlink/?LinkId=137804) | |
Connect the networks and storage that the cluster will use. In Hyper-V Manager, create virtual networks that will be available for virtual machines to use. | Prepare Hardware Before Validating a Failover Cluster
Managing virtual networks (https://go.microsoft.com/fwlink/?LinkId=139566) |
Run the Validate a Configuration Wizard on all the servers that you want to cluster, to confirm that the hardware and hardware settings of the servers, network, and storage are compatible with failover clustering. If necessary, adjust hardware or hardware settings and rerun the wizard until all tests pass (required for support). | Validate a New or Existing Failover Cluster | |
Create the failover cluster. | Create a New Failover Cluster | |
If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network intended only for iSCSI or only for backup), then configure that network so that it does not allow cluster communication. | Modify Network Settings for a Failover Cluster | |
If you want to use Cluster Shared Volumes and you have not already enabled this feature, enable Cluster Shared Volumes. | Enable Cluster Shared Volumes in a Failover Cluster | |
Create a virtual machine and configure it for high availability, being sure to select Store the virtual machine in a different location and specify appropriate clustered storage. If you have Cluster Shared Volumes, for the storage, specify a Cluster Shared Volume (a volume that appears to be on the system drive of the node, under the \ClusterStorage folder). Otherwise, create the virtual machine on the node that currently owns the clustered storage for the virtual machine, and specify the location of that storage.
None of the files used by a clustered virtual machine can be on a local disk. They must all be on clustered storage. |
Configure a Virtual Machine for High Availability | |
If you have not already installed the operating system for the virtual machine, install the operating system. If there are Hyper-V integration services for that operating system, install them in the operating system that runs in the virtual machine. | Install a guest operating system (https://go.microsoft.com/fwlink/?LinkId=137806) | |
Reconfigure the automatic start action for the virtual machine so that it does nothing when the physical computer starts. | Configure virtual machines (https://go.microsoft.com/fwlink/?LinkId=137807) | |
Test the failover of the clustered virtual machine. | Test the Failover of a Clustered Virtual Machine |
5.1.1.1. Additional references
- Refresh the Configuration of a Virtual Machine
- Overview of Failover Clusters
- Modify the Failover Settings for a Clustered Service or Application
- Configure a Virtual Machine for High Availability
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
6. Technical infrastructure
7. Installing the Failover Clustering Feature
Before you can create a failover cluster, you must install the Failover Clustering feature on all servers that you want to include in the cluster. This section provides instructions for installing this feature.
Note that the Failover Clustering feature is included in server products such as Windows Server 2008 R2 Enterprise and Windows Server 2008 R2 Datacenter. The Failover Clustering feature is not included in Windows Server 2008 R2 Standard or Windows Web Server 2008 R2.
Figure 3. Infrastructure topology
8. Install the Failover Clustering Feature
You can install the Failover Clustering feature by using the Add Features command from Initial Configuration Tasks or from Server Manager.
Note | |
The Failover Clustering feature is included in server products such as Windows Server 2008 R2 Enterprise and Windows Server 2008 R2 Datacenter. The Failover Clustering feature is not included in Windows Server 2008 R2 Standard or Windows Web Server 2008 R2. |
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To install the Failover Clustering feature |
- If you recently installed Windows Server 2008 R2 on the server and the Initial Configuration Tasks interface is displayed, under Customize This Server, click Add features. (Skip to Step 3.)
- If Initial Configuration Tasks is not displayed, add the feature through Server Manager:
- If Server Manager is already running, click Features. Then under Features Summary, click Add Features.
- If Server Manager is not running, click Start, click Administrative Tools, click Server Manager, and then, if prompted for permission to continue, click Continue. Then, under Features Summary, click Add Features.
- In the Add Features Wizard, click Failover Clustering and then click Install.
- When the wizard finishes, close it.
- Repeat the process for each server that you want to include in the cluster.
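On Windows Server 2008 R2 the same installation can be scripted with the ServerManager module (on Windows Server 2012 and later the equivalent cmdlet is Install-WindowsFeature). The server names below are placeholders:

    # Install the feature locally
    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering

    # Or install it on every prospective node remotely
    Invoke-Command -ComputerName Server1, Server2 -ScriptBlock {
        Import-Module ServerManager
        Add-WindowsFeature Failover-Clustering
    }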
8.1.1.1. Additional references
9. Validating a Failover Cluster Configuration
Before you create a failover cluster, we strongly recommend that you validate your configuration, that is, that you run all tests in the Validate a Configuration Wizard. By running the tests, you can confirm that your hardware and settings are compatible with failover clustering.
You can run the tests on a set of servers and storage devices either before or after you have configured them as a failover cluster.
Important | |
Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.” |
With the Validate a Configuration Wizard, you can run the complete set of configuration tests or a subset of the tests. The wizard provides a report that shows the result of each test that was run.
- Understanding Requirements for Failover Clusters
- Understanding Microsoft Support of Cluster Solutions
- Understanding Cluster Validation Tests
- Prepare Hardware Before Validating a Failover Cluster
- Validate a New or Existing Failover Cluster
9.1.1.1. Additional references
- Use Validation Tests for Troubleshooting a Failover Cluster
- For troubleshooting information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137836.
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
10. Understanding Requirements for Failover Clusters
A failover cluster must meet certain requirements for hardware, software, and network infrastructure, and it requires the administrator to use an account with the appropriate domain permissions. The following sections provide information about these requirements.
- Hardware requirements for a failover cluster
- Software requirements for a failover cluster
- Network infrastructure and domain account requirements for a failover cluster
For additional information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
10.1. Hardware requirements for a failover cluster
You need the following hardware in a failover cluster:
- Servers: We recommend that you use a set of matching computers that contain the same or similar components.
Important | |
Microsoft supports a failover cluster solution only if all the hardware components are marked as “Certified for Windows Server 2008 R2.” In addition, the complete configuration (servers, network, and storage) must pass all tests in the Validate a Configuration Wizard, which is included in the Failover Cluster Manager snap-in. |
- For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145. For information about the maximum number of servers that you can have in a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=139146.
- Network adapters and cable (for network communication): The network hardware, like other components in the failover cluster solution, must be marked as “Certified for Windows Server 2008 R2.” If you use iSCSI, your network adapters should be dedicated to either network communication or iSCSI, not both. In the network infrastructure that connects your cluster nodes, avoid having single points of failure. There are multiple ways of accomplishing this. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
Note | |
If you connect cluster nodes with a single network, the network will pass the redundancy requirement in the Validate a Configuration Wizard. However, the report from the wizard will include a warning that the network should not have single points of failure. |
- For more details about the network configuration required for a failover cluster, see Network infrastructure and domain account requirements for a failover cluster, later in this topic.
- Device controllers or appropriate adapters for the storage:
- For Serial Attached SCSI or Fibre Channel: If you are using Serial Attached SCSI or Fibre Channel, in all clustered servers, the mass-storage device controllers that are dedicated to the cluster storage should be identical. They should also use the same firmware version.
Note | |
With Windows Server 2008 R2, you cannot use parallel SCSI to connect the storage to the clustered servers. This was also true for Windows Server 2008. |
- For iSCSI: If you are using iSCSI, each clustered server should have one or more network adapters or host bus adapters that are dedicated to the cluster storage. The network you use for iSCSI should not be used for network communication. In all clustered servers, the network adapters you use to connect to the iSCSI storage target should be identical, and we recommend that you use Gigabit Ethernet or higher. For iSCSI, you cannot use teamed network adapters, because they are not supported with iSCSI. For more information about iSCSI, see the iSCSI FAQ on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=61375).
- Storage: You must use shared storage that is compatible with Windows Server 2008 R2. In most cases, the storage should contain multiple, separate disks (LUNs) that are configured at the hardware level. For some clusters, one disk functions as the disk witness (described at the end of this subsection). Other disks contain the files required for the clustered services or applications. Storage requirements include the following:
- To use the native disk support included in failover clustering, use basic disks, not dynamic disks.
- We recommend that you format the partitions with NTFS. If you have a disk witness or use Cluster Shared Volumes, the partition for each of those must be NTFS. For Cluster Shared Volumes, there are no special requirements other than the requirement for NTFS. For more information about Cluster Shared Volumes, see Understanding Cluster Shared Volumes in a Failover Cluster.
- For the partition style of the disk, you can use either master boot record (MBR) or GUID partition table (GPT).
A disk witness is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. A failover cluster has a disk witness only if this is specified as part of the quorum configuration. For more information, see Understanding Quorum Configurations in a Failover Cluster.
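Once LUNs are presented to every node, eligible disks can be reviewed and added from PowerShell. A minimal sketch, assuming a cluster named MyCluster:

    Import-Module FailoverClusters

    # List disks that are visible to all nodes and eligible for clustering...
    Get-ClusterAvailableDisk -Cluster MyCluster

    # ...and add them to the cluster's available storage
    Get-ClusterAvailableDisk -Cluster MyCluster | Add-ClusterDisk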
10.1.1. Deploying storage area networks with failover clusters
When deploying a storage area network (SAN) with a failover cluster, follow these guidelines:
- Confirm compatibility of the storage: Confirm with manufacturers and vendors that the storage, including drivers, firmware, and software used for the storage, are compatible with failover clusters in Windows Server 2008 R2.
Important | |
Storage that was compatible with server clusters in Windows Server 2003 might not be compatible with failover clusters in Windows Server 2008 R2. Contact your vendor to ensure that your storage is compatible with failover clusters in Windows Server 2008 R2. |
- Failover clusters include the following new requirements for storage:
- Improvements in failover clusters (as compared to server clusters in Windows Server 2003) require that the storage respond correctly to specific SCSI commands. To confirm that your storage is compatible, run the Validate a Configuration Wizard. In addition, you can contact the storage vendor.
- The miniport driver used for the storage must work with the Microsoft Storport storage driver.
- Isolate storage devices, one cluster per device: Servers from different clusters must not be able to access the same storage devices. In most cases, a LUN used for one set of cluster servers should be isolated from all other servers through LUN masking or zoning.
- Consider using multipath I/O software: In a highly available storage fabric, you can deploy failover clusters with multiple host bus adapters by using multipath I/O software. This provides the highest level of redundancy and availability. For Windows Server 2008 R2, your multipath solution must be based on Microsoft Multipath I/O (MPIO). Your hardware vendor will usually supply an MPIO device-specific module (DSM) for your hardware, although Windows Server 2008 R2 includes one or more DSMs as part of the operating system.
Important | |
Host bus adapters and multipath I/O software can be very version sensitive. If you are implementing a multipath solution for your cluster, you should work closely with your hardware vendor to choose the correct adapters, firmware, and software for Windows Server 2008 R2. |
10.2. Software requirements for a failover cluster
All the servers in a failover cluster must either run the x64-based version or the Itanium architecture-based version of Windows Server 2008 R2 (nodes within a single failover cluster cannot run different versions).
All the servers should have the same software updates (patches) and service packs.
The Failover Clustering feature is included in server products such as Windows Server 2008 R2 Enterprise and Windows Server 2008 R2 Datacenter. The Failover Clustering feature is not included in Windows Server 2008 R2 Standard or Windows Web Server 2008 R2.
10.3. Network infrastructure and domain account requirements for a failover cluster
You will need the following network infrastructure for a failover cluster and an administrative account with the following domain permissions:
- Network settings and IP addresses: When you use identical network adapters for a network, also use identical communication settings on those adapters (for example, Speed, Duplex Mode, Flow Control, and Media Type). Also, compare settings between the network adapter and the switch it connects to and make sure that no settings are in conflict. If you have private networks that are not routed to the rest of your network infrastructure, ensure that each of these private networks uses a unique subnet. This is necessary even if you give each network adapter a unique IP address. For example, if you have two cluster nodes in a central office that uses one physical network, and two more nodes in a branch office that uses a separate physical network, do not specify 10.0.0.0/24 for both networks, even if you give each adapter a unique IP address. For more information about the network adapters, see Hardware requirements for a failover cluster, earlier in this topic.
- DNS: The servers in the cluster must be using Domain Name System (DNS) for name resolution. The DNS dynamic update protocol can be used.
- Domain role: All servers in the cluster must be in the same Active Directory domain. As a best practice, all clustered servers should have the same domain role (either member server or domain controller). The recommended role is member server.
- Domain controllers: We recommend that your clustered servers be member servers. If they are, other servers will be the domain controllers in the domain that contains your failover cluster.
- Clients: There are no specific requirements for clients, other than the obvious requirements for connectivity and compatibility: the clients must be able to connect to the clustered servers, and they must run software that is compatible with the services offered by the clustered servers.
- Account for administering the cluster: When you first create a cluster or add servers to it, you must be logged on to the domain with an account that has administrator rights and permissions on all servers in that cluster. The account does not need to be a Domain Admins account—it can be a Domain Users account that is in the Administrators group on each clustered server. In addition, if the account is not a Domain Admins account, the account (or the group that the account is a member of) must be delegated Create Computer Objects permission in the domain. For more information, see Failover Cluster Step-by-Step Guide: Configuring Accounts in Active Directory (https://go.microsoft.com/fwlink/?LinkId=139147).
Note | |
There is a change in the way the Cluster service runs in Windows Server 2008 R2, as compared to Windows Server 2003. In Windows Server 2008 R2, there is no Cluster service account. Instead, the Cluster service automatically runs in a special context that provides the specific permissions and privileges necessary for the service (similar to the local system context, but with reduced privileges). |
10.3.1.1. Additional references
- Validating a Failover Cluster Configuration
- Overview of Failover Clusters
- For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
11. Understanding Microsoft Support of Cluster Solutions
Microsoft supports a failover cluster solution only if it meets the following requirements:
- All hardware components in the failover cluster solution are marked as “Certified for Windows Server 2008 R2.”
- The complete cluster configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard.
- The hardware manufacturers’ recommendations for firmware updates and software updates (patches) have been followed. Usually, this means that the latest firmware and software updates have been applied. Occasionally, a manufacturer might recommend specific updates other than the latest updates.
11.1.1.1. Additional references
- Understanding Requirements for Failover Clusters
- Validating a Failover Cluster Configuration
- For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
12. Understanding Cluster Validation Tests
With the Validate a Configuration Wizard, you can run tests to confirm that your hardware and hardware settings are compatible with failover clustering. You can run the complete set of configuration tests or a subset of the tests.
We recommend that you run the tests on your set of servers and storage devices before you configure them as a failover cluster (create a cluster from them). You can also run the tests after you create a cluster.
Note that the Failover Clustering feature must be installed on all the servers that you want to include in the tests.
Important | |
Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.” |
The Validate a Configuration Wizard includes five types of tests:
- Cluster Configuration tests. For an existing cluster, provide a simple way to review cluster settings and determine whether they are properly configured. These tests run only on existing clusters. For more information, see Understanding Cluster Validation Tests: Cluster Configuration.
- Inventory tests. Provide an inventory of the hardware, software, and settings (such as network settings) on the servers, and information about the storage. For more information, see Understanding Cluster Validation Tests: Inventory.
- Network tests. Validate that your networks are set up correctly for clustering. For more information, see Understanding Cluster Validation Tests: Network.
- Storage tests. Validate that the storage on which the failover cluster depends is behaving correctly and supports the required functions of the cluster. For more information, see Understanding Cluster Validation Tests: Storage.
- System Configuration tests. Validate that system software and configuration settings are compatible across servers. For more information, see Understanding Cluster Validation Tests: System Configuration.
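The same validation tests can be started from Windows PowerShell by using the FailoverClusters module. The sketch below assumes two prospective nodes named server1 and server2 (placeholder names); the test names accepted by -Include can be confirmed in the wizard or in the Test-Cluster help:

    Import-Module FailoverClusters
    # Run the complete set of validation tests against the servers to be clustered
    Test-Cluster -Node server1, server2
    # Run only a subset of the tests
    Test-Cluster -Node server1, server2 -Include "Inventory", "Network"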
13. Understanding Cluster Validation Tests: Cluster Configuration
Cluster configuration tests make it easier to review the cluster settings and determine whether they are properly configured. The cluster configuration tests run only on existing clusters (not servers for which a cluster is planned). The tests include the following:
- List Cluster Core Groups: This test lists information about Available Storage and about the group of resources used by the cluster itself.
- List Cluster Network Information: This test lists cluster-specific network settings that are stored in the cluster configuration. For example, it lists settings that affect how often heartbeat signals are sent between servers on the same subnet, and how often they are sent between servers on different subnets.
- List Cluster Resources: This test lists details for all the resources that are configured in the cluster.
- List Cluster Volumes: This test lists information about volumes in the cluster storage.
- List Clustered Services and Applications: This test lists the services and applications configured to run in the cluster.
- Validate Quorum Configuration: This test validates that the quorum configuration is optimal for the cluster. For more information about quorum configurations, see Understanding Quorum Configurations in a Failover Cluster.
- Validate Resource Status: This test validates that cluster resources are online, and lists the cluster resources that are running in separate resource monitors. If a resource is running in a separate resource monitor, it is usually because the resource failed and the Cluster service began running it in a separate resource monitor (to make it less likely to affect other resources if it fails again).
- Validate Service Principal Name: This test validates that a Service Principal Name exists for all resources that have Kerberos enabled.
- Validate Volume Consistency: This test checks for volumes that are flagged as inconsistent (“dirty”) and, if any are found, provides a reminder that running chkdsk is recommended.
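Much of the information that the cluster configuration tests report can also be reviewed ad hoc from Windows PowerShell. A minimal sketch, assuming an existing cluster named Cluster1 (a placeholder name):

    Import-Module FailoverClusters
    Get-ClusterGroup -Cluster Cluster1       # core groups and clustered services and applications
    Get-ClusterResource -Cluster Cluster1    # all configured resources and their current states
    Get-ClusterQuorum -Cluster Cluster1      # the current quorum configuration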
14. Understanding Cluster Validation Tests: Inventory
Inventory tests provide lists of information about the hardware, software, and settings on each of the servers you are testing. You can use inventory tests alone (without other tests in the Validate a Configuration Wizard) to review or record the configuration of the hardware and software (for example, to confirm that the software updates on each server are identical after you perform scheduled maintenance).
You can run the following inventory tests by using the Validate a Configuration Wizard:
- List BIOS Information
- List Environment Variables: Examples of environment variables are the number of processors, the operating system path, and the location of temporary folders.
- List Fibre Channel Host Bus Adapters
- List iSCSI Host Bus Adapters
- List Memory Information
- List Operating System Information
- List Plug and Play Devices
- List Running Processes
- List SAS Host Bus Adapters: This test lists the host bus adapters for Serial Attached SCSI (SAS).
- List Services Information: As a general best practice for servers, especially cluster nodes, you should run only necessary services.
- List Software Updates: You can use this test to help correct issues that are uncovered by one of the System Configuration tests, Validate Software Update Levels. For more information, see Understanding Cluster Validation Tests: System Configuration.
- List System Drivers
- List System Information: The system information includes the following:
- Computer name
- Manufacturer, model, and type
- Account name of the person who ran the validation tests
- Domain that the computer is in
- Time zone and daylight time setting (determines whether the clock is adjusted for daylight time changes)
- Number of processors
- List Unsigned Drivers: You can use this test to help correct issues that are uncovered by one of the System Configuration tests, Validate All Drivers Signed. For more information, see Understanding Cluster Validation Tests: System Configuration.
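If you only need to confirm that the software updates on each server are identical (for example, after scheduled maintenance), a quick Windows PowerShell comparison can complement the List Software Updates test. A minimal sketch, assuming two servers named node1 and node2 (placeholder names):

    # List updates that are present on one server but not the other
    Compare-Object (Get-HotFix -ComputerName node1) (Get-HotFix -ComputerName node2) -Property HotFixID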
15. Understanding Cluster Validation Tests: Network
Network tests help you confirm that the correct network infrastructure is in place for a failover cluster.
15.1. Correcting issues uncovered by network tests
For information about how to correct the issues that are revealed by network tests, see:
- Prepare Hardware Before Validating a Failover Cluster
- Understanding Requirements for Failover Clusters
For additional information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
15.2. Network tests in the Validate a Configuration Wizard
You can run the following network tests by using the Validate a Configuration Wizard:
- List Network Binding Order: This test lists the order in which networks are bound to the adapters on each node.
- Validate Cluster Network Configuration includes the following tasks:
- Lists the cluster networks, that is, the network topology as seen from the perspective of the cluster.
- Validates that, for a particular cluster network, all network adapters are provided with IP addresses in the same way (that is, all use static IP addresses or all use DHCP).
- Validates that, for a particular cluster network, all network adapters use the same version of IP (that is, all use IPv4, all use IPv6, or all use both IPv4 and IPv6).
- Validate IP Configuration includes the following tasks:
- Lists the IP configuration details.
- Validates that IP addresses are unique in the cluster (no duplication).
- Checks the number of network adapters on each tested server. If only one adapter is found, the report provides a warning about avoiding single points of failure in the network infrastructure that connects clustered servers. There are multiple ways of avoiding single points of failure. You can connect your cluster nodes by multiple, distinct networks. Alternatively, you can connect your cluster nodes with one network that is constructed with teamed network adapters, redundant switches, redundant routers, or similar hardware that removes single points of failure.
- Validates that no tested servers have multiple adapters on the same IP subnet.
- Validates that all tested servers use the same version of IP (that is, all use IPv4, all use IPv6, or all use both IPv4 and IPv6).
- Validate Multiple Subnet Properties: This test validates that settings related to DNS are configured appropriately for clusters using multiple subnets to span multiple sites.
- Validate Network Communication: This test validates that tested servers can communicate with acceptable latency on all networks. The test also checks to see if there are redundant communication paths between all servers. Communication between the nodes of a cluster enables the cluster to detect node failures and status changes and to manage the cluster as a single entity.
- Validate Windows Firewall Configuration: This test validates that Windows Firewall is configured correctly for failover clustering on the tested servers.
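On an existing cluster, the network topology that these tests examine can also be listed from Windows PowerShell. A minimal sketch, assuming a cluster named Cluster1 (a placeholder name):

    Import-Module FailoverClusters
    # List the cluster networks with their roles and subnet information
    Get-ClusterNetwork -Cluster Cluster1 | Format-Table Name, Role, Address, AddressMask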
16. Understanding Cluster Validation Tests: Storage
Storage tests analyze the storage to determine whether it will work correctly for a failover cluster running Windows Server 2008 R2.
16.1. Correcting issues uncovered by storage tests
If a storage test indicates that your storage or your storage configuration will not support a failover cluster, review the following suggestions:
- Contact your storage vendor and use the utilities provided with your cluster storage to gather information about the configuration. (In unusual cases, your storage vendor might indicate that your cluster solution is supported even though this is not reflected in the storage tests. For example, your cluster solution might have been specifically designed to work without shared storage.)
- Review results from multiple tests in the Validate a Configuration Wizard, such as the inventory tests that list host bus adapters (see the topic Understanding Cluster Validation Tests: Inventory) and two tests that are described in this topic, List All Disks and List Potential Cluster Disks.
- Look for a storage validation test that is related to the one that uncovered the issue. For example, if Validate Multiple Arbitration uncovered an issue, the related test, Validate Disk Arbitration, might provide useful information.
- Review the storage requirements in Understanding Requirements for Failover Clusters. For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
- Review the documentation for your storage, or contact the manufacturer.
16.2. Storage tests in the Validate a Configuration Wizard
You can run the following storage tests by using the Validate a Configuration Wizard:
- List All Disks
- List Potential Cluster Disks
- Validate Disk Access Latency
- Validate Disk Arbitration
- Validate Disk Failover
- Validate File System
- Validate Microsoft MPIO-Based Disks
- Validate Multiple Arbitration
- Validate SCSI Device Vital Product Data (VPD)
- Validate SCSI-3 Persistent Reservation
- Validate Simultaneous Failover
16.2.1.1. List All Disks
This test lists all disks that are visible to one or more tested servers. The test lists:
- Disks that can support clustering and can be accessed by all the servers.
- Disks on an individual server.
The following information is listed for each disk:
- Disk number
- Unique identifier
- Bus type
- Stack type
- Disk address (where applicable), including the port, path, target identifier (TID), and Logical Unit Number (LUN)
- Adapter description
- Disk characteristics such as the partition style and partition type
You can use this test to help diagnose issues uncovered by other storage tests described in this topic.
16.2.1.2. List Potential Cluster Disks
This test lists disks that can support clustering and are visible to all tested servers. To support clustering, the disk must be connected through Serial Attached SCSI (SAS), iSCSI, or Fibre Channel. In addition, the test validates that multipath I/O is working correctly, meaning that each of the disks is seen as one disk, not two.
16.2.1.2.1. Types of disks not listed by the test
This test lists only disks that can be used for clustering. The disks that it lists must:
- Be connected through Serial Attached SCSI (SAS), iSCSI, or Fibre Channel.
- Be visible to all servers in the cluster.
- Be accessed through a host bus adapter that supports clustering.
- Not be a boot volume or system volume.
- Not be used for paging files, hibernation, or dump files. (Dump files record the contents of memory when the system stops unexpectedly.)
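On an existing cluster, a similar list of disks that are visible to all nodes and can support clustering is available from Windows PowerShell. A hedged sketch, assuming a cluster named Cluster1 (a placeholder name):

    Import-Module FailoverClusters
    # List disks that are visible to all nodes and can be clustered
    Get-ClusterAvailableDisk -Cluster Cluster1
    # Optionally, add those disks to the cluster's Available Storage
    Get-ClusterAvailableDisk -Cluster Cluster1 | Add-ClusterDisk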
16.2.1.3. Validate Disk Access Latency
This test validates that the latency for disk read and write operations is within an acceptable limit for a failover cluster. If disk read and write operations take too long, one possible result is that cluster time-outs might be triggered. Another possible result is that the application attempting to access the disk might appear to have failed, and the cluster might initiate a needless failover.
16.2.1.4. Validate Disk Arbitration
This test validates that:
- Each of the clustered servers can use the arbitration process to become the owner of each of the cluster disks.
- When a particular server owns a disk, if one or more other servers arbitrate for that disk, the original owner retains ownership.
If a clustered server cannot become the owner of a disk, or it cannot retain ownership when other clustered servers arbitrate for the disk, various issues might result:
- The disk could have no owner and therefore be unavailable.
- Two owners could write to the disk in an uncoordinated way, causing the disk to become corrupted. Failover cluster servers are designed to coordinate all write operations in a way that avoids disk corruption.
- The disk could change owners every time arbitration occurs, which would interfere with disk availability.
16.2.1.5. Validate Disk Failover
This test validates that disk failover works correctly in the cluster. Specifically, the test validates that when a disk owned by one clustered server is failed over, the server that takes ownership of the disk can read it. The test also validates that information written to the disk before the failover is still the same after the failover.
If disk failover occurs but the server that takes ownership of a disk cannot read it, the cluster cannot maintain availability of the disk. If information written to the disk is changed during the process of failover, it could cause issues for users or software that require this information. In either case, if the affected disk is a disk witness (a disk that stores cluster configuration data and participates in quorum), such issues could cause the cluster to lose quorum and shut down.
If this test reveals that disk failover does not work correctly, the results of the other storage tests described in this topic might help you identify the cause of the issue.
16.2.1.6. Validate File System
This test validates that the file system on disks in shared storage is supported by failover clusters.
16.2.1.7. Validate Microsoft MPIO-Based Disks
This test validates that multipath disks (Microsoft MPIO-based disks) have been configured correctly for a failover cluster.
16.2.1.8. Validate Multiple Arbitration
This test validates that when multiple clustered servers arbitrate for a cluster disk, only one server obtains ownership. The process of disk arbitration helps ensure that clustered servers perform all write operations in a coordinated way, avoiding disk corruption.
If this test reveals that multiple clustered servers can obtain ownership of a cluster disk through disk arbitration, the results of the related test, Validate Disk Arbitration, might help you identify the cause of the issue.
16.2.1.9. Validate SCSI Device Vital Product Data (VPD)
This test validates that the storage supports necessary SCSI inquiry data (VPD descriptors) and that they are unique.
16.2.1.10. Validate SCSI-3 Persistent Reservation
This test validates that the cluster storage uses the more recent (SCSI-3 standard) Persistent Reserve commands (which are different from the older SCSI-2 standard reserve/release commands). The Persistent Reserve commands avoid SCSI bus resets, which means they are much less disruptive than the older reserve/release commands. Therefore, a failover cluster can be more responsive in a variety of situations, as compared to a cluster running an earlier version of the operating system. In addition, disks are never left in an unprotected state, which lowers the risk of volume corruption.
16.2.1.11. Validate Simultaneous Failover
This test validates that simultaneous disk failovers work correctly in the cluster. Specifically, the test validates that even when multiple disk failovers occur at the same time, any clustered server that takes ownership of a disk can read it. The test also validates that information written to each disk before a failover is still the same after the failover.
If disk failover occurs but the server that takes ownership of a disk cannot read it, the cluster cannot maintain availability of the disk. If information written to the disk is changed during the process of failover, it could cause issues for users or software that require this information. In either case, if the affected disk is a disk witness (a disk that stores cluster configuration data and participates in quorum), such issues could cause the cluster to lose quorum and shut down.
If this test reveals that disk failover does not work correctly, the results of the other storage tests described in this topic might help you identify the cause of the issue.
17. Understanding Cluster Validation Tests: System Configuration
System configuration tests analyze the selected servers to determine whether they are properly configured for working together in a failover cluster. The system configuration tests include the following:
- Validate Active Directory Configuration: This test validates that each tested server is in the same domain and organizational unit. It also validates that all tested servers are domain controllers or that all are member servers. To change the domain role of a server, use the Active Directory® Domain Services Installation Wizard.
- Validate All Drivers Signed: This test validates that all tested servers contain only signed drivers. If an unsigned driver is detected, the test is not considered a failure, but a warning is issued. The purpose of signing drivers is to tell you whether the drivers on your system are original, unaltered files that either came with the operating system or were supplied by a vendor. You can get a list of system drivers and a list of unsigned drivers by running the corresponding inventory tasks. For more information, see Understanding Cluster Validation Tests: Inventory.
- Validate Cluster Service and Driver Settings: This test validates the startup settings used by services and drivers, such as the Cluster service, NetFT.sys, and Clusdisk.sys.
- Validate Memory Dump Settings: This test validates that none of the nodes currently requires a reboot (as part of a software update) and that each node is configured to capture a memory dump if it stops running.
- Validate Operating System Installation Option: This test validates that the operating systems on the servers use the same installation option (full installation or Server Core installation).
- Validate Required Services: This test validates that the services that are required for failover clustering are running on each tested server and are configured to start automatically whenever the server is restarted.
- Validate Same Processor Architecture: This test validates that all tested servers have the same architecture. A failover cluster is supported only if the systems in it are all x64-based or all Itanium architecture-based.
- Validate Service Pack Levels: This test validates that all tested servers have the same service packs. A failover cluster can run even if some servers have different service packs than others. However, servers with different service packs might behave differently from each other, with unexpected results. We recommend that all servers in the failover cluster have the same service packs.
- Validate Software Update Levels: This test validates that all tested servers have the same software updates. A failover cluster can run even if some servers have different updates than others. However, servers with different software updates might behave differently from each other, with unexpected results. We recommend that all servers in the failover cluster have the same software update levels.
- Validate System Drive Variable: This test validates that all nodes use the same letter for the system drive environment variable.
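Some of these comparisons can be previewed from Windows PowerShell before you run the wizard. A minimal sketch that compares the operating system version and service pack level across prospective nodes (node1 and node2 are placeholder names):

    # Compare operating system version and service pack level across servers
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName node1, node2 |
        Select-Object CSName, Caption, Version, ServicePackMajorVersion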
18. Prepare Hardware Before Validating a Failover Cluster
Before running the Validate a Configuration Wizard for a failover cluster, you should make preparations such as connecting the networks and storage needed by the cluster. This topic briefly describes these preparations. For a full list of the requirements for a failover cluster, see Understanding Requirements for Failover Clusters.
In most cases, membership in the local Administrators group on each server, or equivalent, is the minimum required to complete this procedure. This is because the procedure includes steps for configuring networks and storage. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To prepare hardware before validating a failover cluster |
- Confirm that your entire cluster solution, including drivers, is compatible with Windows Server 2008 R2 by checking the hardware compatibility information on the Microsoft Web site (https://go.microsoft.com/fwlink/?LinkId=139145).
Important | |
Microsoft supports a failover cluster solution only if all the hardware components are marked as “Certified for Windows Server 2008 R2.” In addition, the complete configuration (servers, network, and storage) must pass all tests in the Validate a Configuration Wizard, which is included in the Failover Cluster Manager snap-in. |
- We recommend that you use a set of matching computers that contain the same or similar components.
- For the cluster networks:
- Review the details about networks in Understanding Requirements for Failover Clusters.
- Connect and configure the networks that the servers in the cluster will use.
Note | |
One option for configuring cluster networks is to create a preliminary network configuration, then run the Validate a Configuration Wizard with only the Network tests selected (avoid selecting Storage tests). When only the Network tests are run, the process does not take a long time. Using the validation report, you can make any corrections still needed in the network configuration. (Later, after the storage is configured, you should run the wizard with all tests.) |
- For the storage:
- Review the details about storage in Understanding Requirements for Failover Clusters.
- Follow the manufacturer’s instructions for physically connecting the servers to the storage.
- Ensure that the disks (LUNs) that you want to use in the cluster are exposed to the servers you will cluster (and only those servers). You can use any of the following interfaces to expose disks or LUNs:
- Microsoft Storage Manager for SANs (part of the operating system in Windows Server 2008 R2). To use this interface, you need to contact the manufacturer of your storage for a Virtual Disk Service (VDS) provider package that is designed for your storage.
- If you are using iSCSI, an appropriate iSCSI interface.
- The interface provided by the manufacturer of the storage.
- If you have purchased software that controls the format or function of the disk, obtain instructions from the vendor about how to use that software with Windows Server 2008 R2.
- On one of the servers that you want to cluster, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.) In Disk Management, confirm that the cluster disks are visible.
- If you want to have a storage volume larger than 2 terabytes, and you are using the Windows interface to control the format of the disk, convert that disk now to the partition style called GUID partition table (GPT). To do this, back up any data on the disk, delete all volumes on the disk and then, in Disk Management, right-click the disk (not a partition) and click Convert to GPT Disk.
For volumes smaller than 2 terabytes, instead of using GPT, you can use the partition style called master boot record (MBR).
Important | |
You can use either MBR or GPT for a disk used by a failover cluster, but you cannot use a disk that you converted to dynamic by using Disk Management. |
- Check the format of any exposed volume or LUN. We recommend NTFS for the format (for the disk witness or for Cluster Shared Volumes, you must use NTFS).
- As appropriate, make sure that there is connectivity from the servers to be clustered to any nonclustered domain controllers. (Connectivity to clients is not necessary for validation, and it can be established later.)
19. Validate a New or Existing Failover Cluster
Before creating a failover cluster, we recommend that you validate your hardware (servers, networks, and storage) by running the Validate a Configuration Wizard. You can validate either an existing cluster or one or more servers that are not yet clustered.
Important | |
Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.” |
Note | |
If you want to run tests on a cluster with Cluster Shared Volumes that are currently online, use the instructions in Use Validation Tests for Troubleshooting a Failover Cluster. |
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To validate a new or existing failover cluster |
- Identify the server or servers that you want to test and confirm that the Failover Clustering feature is installed:
- If the cluster does not yet exist, choose the servers that you want to include in the cluster, and make sure you have installed the Failover Clustering feature on those servers. For more information, see Install the Failover Clustering Feature. Note that when you run the Validate a Configuration Wizard on unclustered servers, you must enter the names of all the servers you want to test, not just one.
- If the cluster already exists, make sure you know the name of the cluster or a node in the cluster.
- Review network or storage hardware that you want to validate, to confirm that it is connected to the servers. For details, see Prepare Hardware Before Validating a Failover Cluster and Understanding Requirements for Failover Clusters.
- Decide whether you want to run all or only some of the available validation tests. For detailed information about the tests, see the topics listed in Understanding Cluster Validation Tests.
The following guidelines can help you decide whether to run all tests:
- For a planned cluster with all hardware connected: Run all tests.
- For a planned cluster with parts of the hardware connected: Run System Configuration tests, Inventory tests, and tests that apply to the hardware that is connected (that is, Network tests if the network is connected or Storage tests if the storage is connected).
- For a cluster to which you plan to add a server: Run all tests. When you run them, be sure to include all servers that you plan to have in the cluster.
- For troubleshooting an existing cluster: If you are troubleshooting an existing cluster, you might run all tests, although you could run only the tests that relate to the apparent issue.
Note | |
If you are troubleshooting an existing cluster that uses Cluster Shared Volumes, see Use Validation Tests for Troubleshooting a Failover Cluster. |
- In the Failover Cluster Manager snap-in, in the console tree, make sure Failover Cluster Manager is selected and then, under Management, click Validate a Configuration.
- Follow the instructions in the wizard to specify the servers and the tests, run the tests, and view the results.
Note that when you run the Validate a Configuration Wizard on unclustered servers, you must enter the names of all the servers you want to test, not just one.
19.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- To view the results of the tests after you close the wizard, choose one of the following:
- Open the folder SystemRoot\Cluster\Reports (on a clustered server).
- If the tested servers are now a cluster, in the console tree, right-click the cluster, and then click View Validation Report. This displays the most recent validation report for that cluster.
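As a convenience, the reports folder can also be listed from Windows PowerShell on a clustered server. A minimal sketch:

    # List validation and wizard reports, oldest first
    Get-ChildItem -Path "$env:SystemRoot\Cluster\Reports" | Sort-Object LastWriteTime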
20. Creating a Failover Cluster or Adding a Cluster Node
Before you can configure the cluster for the first time or add a server (node) to an existing failover cluster, you need to take the following preparatory steps:
- Install the Failover Clustering feature: You must install the Failover Clustering feature on any server that will become a server (node) in the cluster. For more information, see Install the Failover Clustering Feature.
- Connect networks and storage: You must connect the networks and storage that the nodes will use. For details, see Prepare Hardware Before Validating a Failover Cluster.
- Validate the hardware configuration: We strongly recommend that you validate your hardware configuration (servers, network, and storage) before creating a cluster or adding a node to a cluster. To validate, you run a wizard. For more information, see Validate a New or Existing Failover Cluster.
Important | |
Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.” |
To see the preceding steps in the context of a checklist, see Checklist: Create a Failover Cluster.
The following topics describe how to create a failover cluster or add a server (node) to a failover cluster:
- Understanding Access Points (Names and IP Addresses) in a Failover Cluster
- Create a New Failover Cluster
- Add a Server to a Failover Cluster
20.1.1.1. Additional references
- Configuring a Service or Application for High Availability
- Migrating Settings to a Failover Cluster Running Windows Server 2008 R2
- For information about the maximum number of servers that you can have in a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=139146.
21. Understanding Access Points (Names and IP Addresses) in a Failover Cluster
An access point is a name and associated IP address information. You use an access point to administer a failover cluster or to communicate with a service or application in the cluster. One access point can include one or more IP addresses, which can be IPv6 addresses, IPv4 addresses supplied through DHCP, or static IPv4 addresses.
When selecting a network to be used for an access point, avoid any network that is used for iSCSI. Configure networks used for iSCSI so that they are not used for network communication in the cluster.
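On an existing cluster, this configuration can be applied from Windows PowerShell as well as from the snap-in. A hedged sketch, assuming a cluster named Cluster1 and a cluster network named "iSCSI Network" (both placeholder names):

    Import-Module FailoverClusters
    # Role 0 = not used by the cluster; 1 = internal cluster communication only; 3 = cluster and client
    (Get-ClusterNetwork -Cluster Cluster1 -Name "iSCSI Network").Role = 0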
21.1.1.1. Additional references
- Understanding Cluster Validation Tests: Network
- Modify Network Settings for a Failover Cluster
- Understanding Requirements for Failover Clusters
22. Create a New Failover Cluster
Before you create a cluster, you must carry out tasks such as connecting the hardware and validating the hardware configuration. For a list of these tasks, see Checklist: Create a Failover Cluster.
For information about the maximum number of servers that you can have in a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=139146.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. In addition, if your account is not a Domain Admins account, either the account or the group that the account is a member of must be delegated the Create Computer Objects permission in the domain. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To create a new failover cluster |
- Confirm that you have connected the hardware and validated the hardware configuration, as described in Prepare Hardware Before Validating a Failover Cluster and Validate a New or Existing Failover Cluster.
Important | |
Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.” |
- In the Failover Cluster Manager snap-in, confirm that Failover Cluster Manager is selected and then, under Management, click Create a Cluster.
- Follow the instructions in the wizard to specify:
- The servers to include in the cluster.
- The name of the cluster.
- Any IP address information that is not automatically supplied by your DHCP settings.
- After the wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report.
To view the report after you close the wizard, see the following folder, where SystemRoot is the location of the operating system (for example, C:\Windows):
SystemRoot\Cluster\Reports\
22.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
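Following the Windows PowerShell note above, a minimal sketch of creating a cluster; the cluster name, node names, and address are placeholders:

    Import-Module FailoverClusters
    # Create a two-node cluster with a static administrative access point
    New-Cluster -Name Cluster1 -Node server1, server2 -StaticAddress 10.0.0.10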
23. Add a Server to a Failover Cluster
Before you add a server (node) to a failover cluster, we strongly recommend that you run the Validate a Configuration Wizard for the existing cluster nodes and the new node or nodes. The Validate a Configuration Wizard helps you confirm the configuration in a variety of important ways. For example, it validates that the server to be added is connected correctly to the networks and storage and that it contains the same software updates.
Important | |
Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.” |
For information about the maximum number of servers that you can have in a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=139146.
For more information about validation, see Validate a New or Existing Failover Cluster.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To add a server to a failover cluster |
- Confirm that you have connected the networks and storage to the server you want to add. For details about the requirements for networking and storage, see Prepare Hardware Before Validating a Failover Cluster.
- Validate the hardware configuration, including both the existing cluster nodes and the proposed new node. For information about validation, see Validate a New or Existing Failover Cluster.
Important | |
Microsoft supports a failover cluster solution only if the complete configuration (servers, network, and storage) can pass all tests in the Validate a Configuration Wizard. In addition, all hardware components in the solution must be marked as “Certified for Windows Server 2008 R2.” |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Select the cluster, and then in the Actions pane, click Add Node.
- Follow the instructions in the wizard to specify the server to add to the cluster.
- After the wizard runs and the Summary page appears, if you want to view a report of the tasks the wizard performed, click View Report.
To view the report after you close the wizard, see the following folder, where SystemRoot is the location of the operating system (for example, C:\Windows):
SystemRoot\Cluster\Reports\
23.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
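Following the Windows PowerShell note above, a minimal sketch of adding a node; the cluster and server names are placeholders:

    Import-Module FailoverClusters
    # Add server3 to the existing cluster Cluster1
    Add-ClusterNode -Cluster Cluster1 -Name server3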
24. Configuring a Service or Application for High Availability
This topic provides an overview of the task of configuring specific services or applications for failover clustering by using the High Availability Wizard. Instructions for running the wizard are provided in Configure a Service or Application for High Availability.
Important | |
If you want to cluster a mail server or database server application, see the application’s documentation for information about the correct way to install it in a cluster environment. Mail server and database server applications are complex, and they might require configuration steps that fall outside the scope of this failover clustering Help. |
This topic contains the following sections:
- Applications and services listed in the High Availability Wizard
- List of topics about configuring a service or application for high availability
24.1. Applications and services listed in the High Availability Wizard
A variety of services and applications can work as “cluster-aware” applications, functioning in a coordinated way with cluster components.
Note | |
When configuring a service or application that is not cluster-aware, you can use generic options in the High Availability Wizard: Generic Service, Generic Application, or Generic Script. For information about using these options, see Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster. |
In the High Availability Wizard, you can choose from the generic options described in the previous note, or you can choose from the following services and applications:
- DFS Namespace Server: Provides a virtual view of shared folders in an organization. When a user views the namespace, the folders appear to reside on a single hard disk. Users can navigate the namespace without needing to know the server names or shared folders that are hosting the data.
- DHCP Server: Automatically provides client computers and other TCP/IP-based network devices with valid IP addresses.
- Distributed Transaction Coordinator (DTC): Supports distributed applications that perform transactions. A transaction is a set of related tasks, such as updates to databases, that either succeed or fail as a unit.
- File Server: Provides a central location on your network where you can store and share files with users.
- Internet Storage Name Service (iSNS) Server: Provides a directory of iSCSI targets.
- Message Queuing: Enables distributed applications that are running at different times to communicate across heterogeneous networks and with computers that may be offline.
- Other Server: Provides a client access point and storage only. Add an application after completing the wizard.
- Print Server: Manages a queue of print jobs for a shared printer.
- Remote Desktop Connection Broker (formerly TS Session Broker): Supports session load balancing and session reconnection in a load-balanced remote desktop server farm. RD Connection Broker is also used to provide users access to RemoteApp programs and virtual desktops through RemoteApp and Desktop Connection.
- Virtual Machine: Runs on a physical computer as a virtualized computer system. Multiple virtual machines can run on one computer.
- WINS Server: Enables users to access resources by a NetBIOS name instead of requiring them to use IP addresses that are difficult to recognize and remember.
24.2. List of topics about configuring a service or application for high availability
The following topics describe the process of configuring a service or application for high availability in a failover cluster:
- Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster
- Understanding Hyper-V and Virtual Machines in the Context of a Cluster
- Configure a Service or Application for High Availability
- Configure a Virtual Machine for High Availability
- Test the Failover of a Clustered Service or Application
- Test the Failover of a Clustered Virtual Machine
- Modifying the Settings for a Clustered Service or Application
25. Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster
You can configure a variety of different services or applications for high availability in a failover cluster. For a list of the services or applications most commonly configured for high availability, see Configuring a Service or Application for High Availability.
This topic contains the following sections:
- Services or applications that can be run as a Generic Application, Generic Script, or Generic Service
- Basic requirements for a service or application in a failover cluster environment
25.1. Services or applications that can be run as a Generic Application, Generic Script, or Generic Service
In failover clusters, you can use the Generic Application, Generic Script, and Generic Service options to configure high availability for some services and applications that are not “cluster-aware” (not originally designed to run in a cluster).
25.1.1.1. Generic Application
If you run an application as a Generic Application, the cluster software will start the application, then periodically query the operating system to see whether the application appears to be running. If so, it is presumed to be online, and will not be restarted or failed over.
Note that in comparison with a cluster-aware application, a Generic Application has fewer ways of communicating its precise state to the cluster software. If a Generic Application enters a problematic state but nonetheless appears to be running, the cluster software does not have a way of discovering this and taking an action (such as restarting the application or failing it over).
Before running the High Availability Wizard to configure high availability for a Generic Application, make sure that you know the path of the application and the names of any registry keys under HKEY_LOCAL_MACHINE that are required by the application.
25.1.1.2. Generic Script
You can create a script that runs in Windows Script Host and that monitors and controls your application. Then you can configure the script as a Generic Script in the cluster. The script provides the cluster software with information about the current state of the application. As needed, the cluster software will restart or fail over the script (and through it, the application will be restarted or failed over).
When you configure a Generic Script in a failover cluster, the ability of the cluster software to respond with precision to the state of the application is determined by the script. The more precise the script is in providing information about the state of the application, the more precise the cluster software can be in responding to that information.
Before running the High Availability Wizard to configure high availability for a Generic Script, make sure that you know the path of the script.
25.1.1.3. Generic Service
The cluster software will start the service, then periodically query the Service Controller (a feature of the operating system) to determine whether the service appears to be running. If so, it is presumed to be online, and will not be restarted or failed over.
Note that in comparison with a cluster-aware service, a Generic Service has fewer ways of communicating its precise state to the cluster software. If a Generic Service enters a problematic state but nonetheless appears to be running, the cluster software does not have a way of discovering this and taking an action (such as restarting the service or failing it over).
Before running the High Availability Wizard to configure high availability for a Generic Service, make sure that you know the name of the service as it appears in the registry under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services.
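As an illustration, the service name can be confirmed and a Generic Service configured from Windows PowerShell. A hedged sketch using the Print Spooler service as an example; the role name and storage are placeholders, and parameter details should be confirmed with Get-Help Add-ClusterGenericServiceRole:

    Import-Module FailoverClusters
    # Confirm the registry name of the service
    Get-Service -Name Spooler | Select-Object Name, DisplayName
    # Configure the service for high availability in the cluster
    Add-ClusterGenericServiceRole -ServiceName Spooler -Name GenericSvc -Storage "Cluster Disk 2"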
25.2. Basic requirements for a service or application in a failover cluster environment
To be appropriate for a failover cluster, a service or application must have certain characteristics. The most important characteristics include:
- The service or application should be stateful. In other words, the service or application should have long-running in-memory state or large, frequently updated data states. One example is a database application. For a stateless application (such as a Web server front end), Network Load Balancing will probably be more appropriate than failover clustering.
- The service or application should use a client component that automatically retries after temporary network interruptions. Otherwise, if the server component of the application fails over from one clustered server to another, the unavoidable (but brief) interruption will cause the clients to stop, rather than simply retrying and reconnecting.
- The service or application should be able to identify the disk or disks it uses. This makes it possible for the service or application to communicate with disks in the cluster storage, and to reliably find the correct disk even after a failover.
- The service or application should use IP-based protocols. Examples include TCP, UDP, DCOM, named pipes, and RPC over TCP/IP.
26. Understanding Hyper-V and Virtual Machines in the Context of a Cluster
This topic provides information about the following:
- Overview of Hyper-V in the context of a failover cluster
- Using Cluster Shared Volumes with Hyper-V
- Live migration, quick migration, and moving of virtual machines
- Coordinating the use of Hyper-V Manager with the use of Failover Cluster Manager
26.1. Overview of Hyper-V in the context of a failover cluster
The Hyper-V role in Server Manager enables you to create a virtualized server computing environment in which you can create and manage virtual machines that run operating systems, applications, and services. Failover clusters are used to increase the availability of such applications and services. Hyper-V and failover clustering can be used together to make a virtual machine that is highly available, thereby minimizing disruptions and interruptions to clients.
You can cluster a virtual machine and you can cluster a service or application that happens to be running in a virtual machine. If you cluster a virtual machine, the cluster monitors the health of the virtual machine itself (and will respond to failures by restarting the virtual machine or failing it over). If you cluster a service or application that happens to be running in a virtual machine, the cluster monitors the health of the service or application (and will respond to failures by restarting the application or failing it over).
A feature of failover clusters called Cluster Shared Volumes is specifically designed to enhance the availability and manageability of virtual machines.
26.2. Using Cluster Shared Volumes with Hyper-V
Cluster Shared Volumes are volumes in a failover cluster that multiple nodes can read from and write to at the same time. This enables multiple nodes to concurrently access a single shared volume. The Cluster Shared Volumes feature is only supported for use with Hyper-V (a server role in Windows Server 2008 R2) and other technologies specified by Microsoft. For information about the roles and features that are supported for use with Cluster Shared Volumes, see https://go.microsoft.com/fwlink/?LinkId=137158.
When you use Cluster Shared Volumes, managing a large number of clustered virtual machines becomes easier. For more information about Cluster Shared Volumes, see Understanding Cluster Shared Volumes in a Failover Cluster.
26.3. Live migration, quick migration, and moving of virtual machines
With failover clusters, a virtual machine can be moved from one cluster node to another in several different ways: live migration, quick migration, and moving. This section describes these actions. For information about how to perform the actions, see Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node.
Note | |
If you want to cluster virtual machines and use live migration or quick migration, we recommend making the hardware and system settings on the nodes as similar as possible to minimize potential problems. |
The following list describes the choices:
Live migration: When you initiate live migration, the cluster copies the memory being used by the virtual machine from the current node to another node, so that when the transition to the other node actually takes place, the memory and state information is already in place for the virtual machine. The transition is usually fast enough that a client using the virtual machine does not lose the network connection. If you are using Cluster Shared Volumes, live migration is almost instantaneous, because no transfer of disk ownership is needed. A live migration can be used for planned maintenance but not for an unplanned failover.
Note | |
You cannot use live migration to move multiple virtual machines simultaneously. On a given server running Hyper-V, only one live migration (to or from the server) can be in progress at a given time. |
Quick migration: When you initiate quick migration, the cluster copies the memory being used by the virtual machine to a disk in storage, so that when the transition to another node actually takes place, the memory and state information needed by the virtual machine can quickly be read from the disk by the node that is taking over ownership. A quick migration can be used for planned maintenance but not for an unplanned failover.
You can use quick migration to move multiple virtual machines simultaneously.
Moving: When you initiate a move, the cluster prepares to take the virtual machine offline by performing an action that you have specified in the cluster configuration for the virtual machine resource: Save, Shut down, Shut down (forced), or Turn off. Save (the default) saves the state of the virtual machine, so that the state can be restored when bringing the virtual machine back online. Shut down performs an orderly shutdown of the operating system (waiting for all processes to close) on the virtual machine before taking the virtual machine offline. Shut down (forced) shuts down the operating system on the virtual machine without waiting for slower processes to finish, and then takes the virtual machine offline. Turn off is like turning off the power to the virtual machine, which means that data loss may occur.
The setting you specify for the offline action does not affect live migration, quick migration, or unplanned failover. It affects only moving (or taking the resource offline through the action of Windows PowerShell or an application). To specify this setting, see the “Additional considerations” section in Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node.
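These actions can also be initiated from Windows PowerShell. A hedged sketch, assuming a clustered virtual machine named VM1 and a node named server2 (placeholder names); confirm the supported -MigrationType values with Get-Help Move-ClusterVirtualMachineRole:

    Import-Module FailoverClusters
    # Live migration of the clustered virtual machine to another node
    Move-ClusterVirtualMachineRole -Name "VM1" -Node server2 -MigrationType Live
    # Quick migration instead
    Move-ClusterVirtualMachineRole -Name "VM1" -Node server2 -MigrationType Quick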
26.4. Coordinating the use of Hyper-V Manager with the use of Failover Cluster Manager
After you configure clustered virtual machines, you can modify most settings of those clustered virtual machines using either Hyper-V Manager or Failover Cluster Manager. We recommend that you use Failover Cluster Manager for modifying those settings, as described in Modify the Virtual Machine Settings for a Clustered Virtual Machine. If you decide to use Hyper-V Manager to modify virtual machine settings, be sure to open Failover Cluster Manager and refresh the virtual machine configuration, as described in Refresh the Configuration of a Virtual Machine.
27. Configure a Service or Application for High Availability
You can configure a service or application for high availability by running a wizard that creates the appropriate settings in a failover cluster. For information about the services and applications that you can configure in a failover cluster, see:
- Configuring a Service or Application for High Availability
- Understanding Generic Services and Applications that Can Be Configured in a Failover Cluster.
If you are configuring a clustered file server or a clustered print server, review Checklist: Create a Clustered File Server or Checklist: Create a Clustered Print Server.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. In addition, if your account is not a Domain Admins account, either the account or the group that the account is a member of must be delegated the Create Computer Objects permission in the domain. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To configure a service or application for high availability |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Click Services and Applications and then, under Actions (on the right), click Configure a Service or Application.
- Follow the instructions in the wizard to specify the service or application that you want to configure for high availability, along with the following details:
- A name for the clustered service or application. This name will be registered in DNS and associated with the IP address for this clustered service or application.
- Any IP address information that is not automatically supplied by your DHCP settings—for example, a static IPv4 address for this clustered service or application.
- The storage volume or volumes that the clustered service or application should use.
- Specific information for the service or application that you are configuring. For example, for a Generic Application, you must specify the path for the application and any registry keys that the application requires (so that the registry keys can be replicated to all nodes in the cluster).
- After the wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report.
If you are configuring a clustered file server or a clustered print server, see the following “Additional considerations” section.
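As noted in the "Additional considerations" section that follows, the same configuration can be performed with Windows PowerShell. A minimal sketch, assuming hypothetical names (FS1, App1, the cluster disk names, and the IP address) and storage that is already added to the cluster:

Import-Module FailoverClusters

# Configure a clustered file server on an existing cluster disk.
Add-ClusterFileServerRole -Name FS1 -Storage "Cluster Disk 2" -StaticAddress 192.168.1.50

# Configure a Generic Application, then replicate a registry key that
# the application requires (hypothetical key; the resource name is
# assumed here to match the role name).
Add-ClusterGenericApplicationRole -Name App1 -CommandLine "C:\App\app.exe" -Storage "Cluster Disk 3"
Add-ClusterCheckpoint -ResourceName App1 -RegistryCheckpoint "Software\AppVendor"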
27.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- If this is the first service or application that you are configuring for high availability, it might be appropriate to review your cluster network settings now. If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network that is intended only for iSCSI or only for backup), then under Networks, right-click that network, click Properties, and then select Do not allow the cluster to use this network.
- If you are configuring a clustered file server, review Checklist: Create a Clustered File Server.
- If you are configuring a clustered print server, review Checklist: Create a Clustered Print Server.
- For information about modifying the settings for this service or application after the wizard finishes running, see Modifying the Settings for a Clustered Service or Application.
28. Configure a Virtual Machine for High Availability
In a failover cluster, you can configure a virtual machine for high availability by running a wizard that creates the appropriate settings.
To work with virtual machines, you might want to view Help content for the Hyper-V Manager snap-in. To view this content, install the Hyper-V role (through Server Manager and the Add Roles Wizard), open Hyper-V Manager (through Server Manager or in a separate snap-in console), and press F1. You can also view information about Hyper-V on the Web, for example, information about creating a virtual machine (https://go.microsoft.com/fwlink/?LinkId=137805).
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To configure a virtual machine for high availability |
- Be sure that you have installed the Hyper-V role and have reviewed the steps in Checklist: Create a Clustered Virtual Machine. This procedure is a step in that checklist.
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Click Services and Applications.
- If you have already created the virtual machine, skip to step 6. Otherwise, use the New Virtual Machine Wizard to create a virtual machine and configure it for high availability:
- In the Action pane, click Virtual machines, point to Virtual machine, and then click a node. The virtual machine will initially be created on that node, and then be clustered so that it can move to another node or nodes as needed.
- If the Before You Begin page of the New Virtual Machine Wizard appears, click Next.
- Specify a name for the virtual machine, and then select Store the virtual machine in a different location and specify a disk in shared storage or, if Cluster Shared Volumes is enabled, a Cluster Shared Volume (a volume that appears to be on the system drive of the node, under the \ClusterStorage folder).
- Follow the instructions in the wizard. You can specify details (such as the amount of memory, the network, and the virtual hard disk file) now, and you can also add or change configuration details later.
- When you click Finish, the wizard creates the virtual machine and also configures it for high availability. Skip the remaining step in this procedure.
- If you have already created the virtual machine and only want to configure it for high availability, first make sure that the virtual machine is not running. Then, use the High Availability Wizard to configure the virtual machine for high availability:
- In the Action pane, click Configure a Service or Application.
- If the Before You Begin page of the High Availability Wizard appears, click Next.
- On the Select Service or Application page, click Virtual Machine and then click Next.
- Select the virtual machine that you want to configure for high availability, and complete the wizard.
- After the High Availability wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report.
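This configuration can also be scripted. A minimal Windows PowerShell sketch, assuming an existing virtual machine named VM1 (hypothetical) that is not running and whose files are on storage available to the cluster:

Import-Module FailoverClusters

# Configure an existing, stopped virtual machine for high availability.
Add-ClusterVirtualMachineRole -VMName "VM1"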
28.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- If you decide to change the settings of a clustered virtual machine, be sure to see Modify the Virtual Machine Settings for a Clustered Virtual Machine.
- For each clustered virtual machine, you can also specify the action that the cluster performs before taking the virtual machine offline. Taking the virtual machine offline is necessary when moving the virtual machine (but not necessary for live migration or quick migration). To specify the setting, make sure that after selecting the clustered virtual machine in the console tree (on the left), you right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties. Click the Settings tab and select an option. The actions are described in Understanding Hyper-V and Virtual Machines in the Context of a Cluster in the section called “Live migration, quick migration, and moving of virtual machines,” in the description about the moving of virtual machines.
- If this is the first virtual machine that you are configuring for high availability, it might be appropriate to review your cluster network settings now. If the clustered servers are connected to a network that is not to be used for network communication in the cluster (for example, a network that is intended only for iSCSI or only for backup), then under Networks, right-click that network, click Properties, and then select Do not allow the cluster to use this network.
29. Test the Failover of a Clustered Service or Application
You can perform a basic test to confirm that a clustered service or application can fail over successfully to another node. For information about performing a similar test for a clustered virtual machine, see Test the Failover of a Clustered Virtual Machine.
This procedure is one of the steps in the following checklists:
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To test the failover of a clustered service or application |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the service or application for which you want to test failover.
- Under Actions (on the right), click Move this service or application to another node.
As the service or application moves, the status is displayed in the results pane (center pane).
- Optionally, repeat step 4 to move the service or application to an additional node or back to the original node.
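A minimal Windows PowerShell equivalent of this test, assuming a hypothetical clustered service named FS1 and a node named Node2:

Import-Module FailoverClusters

# Move the clustered service or application to another node, then
# display its state and current owner.
Move-ClusterGroup -Name "FS1" -Node "Node2"
Get-ClusterGroup -Name "FS1"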
29.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
30. Test the Failover of a Clustered Virtual Machine
You can perform a basic test to confirm that a clustered virtual machine can fail over successfully to another node. For information about performing a similar test for a clustered service or application, see Test the Failover of a Clustered Service or Application.
This procedure is one of the steps in the following checklist:
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To test the failover of a clustered virtual machine |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the virtual machine for which you want to test failover.
- Under Actions (on the right), click Move virtual machine(s) to another node.
As the virtual machine moves, the status is displayed in the results pane (center pane).
- Optionally, repeat step 4 to move the virtual machine to an additional node or back to the original node.
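This test can be scripted as well. A minimal sketch, assuming a hypothetical clustered virtual machine group named VM1 and a node named Node2; on Windows Server 2008 R2, Move-ClusterVirtualMachineRole performs a live migration, while Move-ClusterGroup performs a quick migration or move:

Import-Module FailoverClusters

# Live migrate the clustered virtual machine to another node.
Move-ClusterVirtualMachineRole -Name "VM1" -Node "Node2"

# A quick migration of the same virtual machine would instead be:
# Move-ClusterGroup -Name "VM1" -Node "Node2"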
30.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- If you decide to change the settings of a clustered virtual machine, be sure to see Modify the Virtual Machine Settings for a Clustered Virtual Machine.
31. Modifying the Settings for a Clustered Service or Application
This topic describes failover and failback settings for a clustered service or application. It also provides links to information about other settings that you can modify for a clustered service or application.
- Modifying failover and failback settings, including preferred owners
- List of topics about modifying settings
31.1. Modifying failover and failback settings, including preferred owners
You can adjust the failover settings, including preferred owners and failback settings, to control how the cluster responds when the application or service fails. You can configure these settings on the property sheet for the clustered service or application (on the General tab or the Failover tab). The following table provides examples that illustrate how these settings work.
Settings | Effect |
Example 1: General tab, Preferred owner: Node1; Failover tab, Failback setting: Allow failback (Immediately) | If the service or application fails over from Node1 to Node2, then when Node1 is again available, the service or application will fail back to Node1. |
Example 2: Failover tab, Maximum failures in the specified period: 2; Failover tab, Period (hours): 6 | In a six-hour period, if the application or service fails no more than twice, it will be restarted or failed over each time. If the application or service fails a third time in the six-hour period, it will be left in the failed state. The default value for the maximum number of failures is n-1, where n is the number of nodes. You can change the value, but we recommend a relatively low value, so that if multiple node failures occur, the application or service will not be moved between nodes indefinitely. |
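These settings correspond to properties of the clustered group and can also be set from Windows PowerShell. A minimal sketch of the two examples above, assuming a hypothetical group named FS1 and nodes Node1 and Node2; the property names are those exposed by the FailoverClusters module, and current values can be checked with Get-ClusterGroup "FS1" | Format-List *:

Import-Module FailoverClusters

# Example 1: make Node1 the preferred owner and allow failback.
Set-ClusterOwnerNode -Group "FS1" -Owners Node1,Node2
$group = Get-ClusterGroup -Name "FS1"
$group.AutoFailbackType = 1      # 1 = allow failback, 0 = prevent failback

# Example 2: at most two failures in a six-hour period.
$group.FailoverThreshold = 2     # maximum failures in the specified period
$group.FailoverPeriod = 6        # period, in hours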
31.2. List of topics about modifying settings
The following topics provide details about settings that you can modify for a clustered service or application:
- Add Storage for a Clustered Service or Application (For information about adding a disk to the cluster itself, so that the disk is available for adding to a clustered service or application, see Add Storage to a Failover Cluster).
- Add a Resource to a Clustered Service or Application
- Modify the Failover Settings for a Clustered Service or Application
- Modify the Virtual Machine Settings for a Clustered Virtual Machine
- Create a Shared Folder in a Clustered File Server
- Configure the Print Settings for a Clustered Print Server
32. Add Storage for a Clustered Service or Application
You can add storage to an existing clustered service or application. However, if the storage has not already been added to the failover cluster itself, you must first add the storage to the cluster before you can add the storage to a clustered service or application within the cluster. For information about adding storage to a cluster, see Add Storage to a Failover Cluster.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To add storage for a clustered service or application |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the service or application that you want to add storage to.
- Under Actions (on the right), click Add storage.
- Select the disk or disks that you want to add.
If a disk does not appear in the list, it might not have been added to the cluster yet. For more information, see Add Storage to a Failover Cluster.
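A minimal Windows PowerShell sketch of the same task, assuming a hypothetical disk resource named "Cluster Disk 4" that is already in the cluster's Storage group and a clustered service named FS1:

Import-Module FailoverClusters

# Move an available disk resource into the clustered service or application.
Move-ClusterResource -Name "Cluster Disk 4" -Group "FS1"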
32.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
33. Add a Resource to a Clustered Service or Application
To customize the way your clustered service or application works, you can add a resource.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To add a resource to a clustered service or application |
- Ensure that the software or feature that is needed for the resource is installed on all nodes in the cluster.
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the service or application that you want to add a resource to.
- Under Actions (on the right), click Add a resource.
- Click the resource that you want to add, or click More resources and then click the resource that you want to add. If a wizard appears for the resource you chose, provide the information requested by the wizard.
- In the center pane, right-click the resource that you added and click Properties.
- If the property sheet includes a Parameters tab, click the tab and then configure the parameters that are needed by the resource.
- Click the Dependencies tab, and then configure the dependencies for the resource. Click OK.
- In the console tree, select the service or application (not the individual resource), and then under Actions (on the right), click Show Dependency Report.
- Review the dependencies between the resources. For many resources, you must configure the correct dependencies before the resource can be brought online. If you need to change the dependencies, close the dependency report and then repeat step 9.
- Right-click the resource that you just added, and then click Bring this resource online.
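A minimal Windows PowerShell sketch of adding a resource and configuring its parameters and dependencies, assuming hypothetical names (an existing group App1, an existing Network Name resource App1-Name, and example address values):

Import-Module FailoverClusters

# Add an IP Address resource to the clustered service or application.
Add-ClusterResource -Name "App1-IP" -Group "App1" -ResourceType "IP Address"

# Set the parameters that the resource needs.
Get-ClusterResource "App1-IP" |
    Set-ClusterParameter -Multiple @{ "Address"="192.168.1.51"; "SubnetMask"="255.255.255.0" }

# Make an existing resource depend on the new one, then bring it online.
Set-ClusterResourceDependency -Resource "App1-Name" -Dependency "[App1-IP]"
Start-ClusterResource -Name "App1-IP"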
33.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
34. Modify the Failover Settings for a Clustered Service or Application
You can adjust the failover, failback, or preferred owner settings to control the way the cluster responds when a particular application or service fails. For examples that illustrate the way the settings work, see Modifying the Settings for a Clustered Service or Application.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To modify the failover settings for a clustered service or application |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications.
- Right-click the service or application that you want to modify the failover settings for, and then click Properties.
- To configure failback, on the General tab, change settings for preferred owners, then click the Failover tab and choose options under Failback.
You must configure a preferred owner if you want failback to occur (that is, if you want a particular service or application to fail back to a particular node when possible).
- To configure the number of times that the cluster service should attempt to restart or fail over a service or application in a given time period, click the Failover tab and specify values under Failover.
For more information about failover settings for a clustered service or application, see Modifying the Settings for a Clustered Service or Application.
34.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
35. Modify the Virtual Machine Settings for a Clustered Virtual Machine
When you change the settings of a clustered virtual machine, we recommend that you use Failover Cluster Manager instead of Hyper-V Manager, as described in this procedure.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To modify the virtual machine settings for a clustered virtual machine |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications.
- In the console tree, click the virtual machine that you want to modify the settings for.
- In the center pane, right-click the virtual machine resource and then click Settings. (If you do not see Settings in the menu, collapse the virtual machine resource and then right-click it.)
The Settings interface appears. This is the same interface that you see in Hyper-V Manager.
- Configure the settings for the virtual machine.
35.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- If you use Hyper-V Manager instead of Failover Cluster Manager to configure settings for a virtual machine, be sure to follow the steps in Refresh the Configuration of a Virtual Machine.
36. Create a Shared Folder in a Clustered File Server
Before you perform this procedure, review the steps in Checklist: Create a Clustered File Server. This procedure describes a step in that checklist.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To create a shared folder in a clustered file server |
- In Control Panel, open Windows Firewall, click Allow a program or feature through Windows Firewall, click Change settings, select an exception for Remote Volume Management, and then click OK.
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then select the clustered file server.
- Under Actions, click Add a shared folder.
The Create a Shared Folder Wizard appears. This is the same wizard that you would use to share a folder on a nonclustered server.
- Follow the instructions in the wizard to specify the settings for the shared folder, including path, name, offline settings, and permissions.
36.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- In a clustered file server, when you bring the associated File Server resource online or take it offline, all shared folders in that resource go offline or online at the same time. You cannot change the online or offline status of one of the shared folders without affecting all of the shared folders.
37. Configure the Print Settings for a Clustered Print Server
Before you perform this procedure, review the steps in Checklist: Create a Clustered Print Server. This procedure describes the last step in that checklist.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To configure the print settings for a clustered print server |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications.
- Right-click the clustered print server that you want to configure the print settings for, and then click Manage Printers.
An instance of the Failover Cluster Manager interface appears with Print Management in the console tree.
- Under Print Management, click Print Servers and locate the clustered print server you want to configure.
Always perform management tasks on the clustered print server. Do not manage the individual cluster nodes as print servers.
- Right-click the clustered print server, and then click Add Printer. Follow the instructions in the wizard to add a printer.
This is the same wizard you would use to add a printer on a nonclustered server.
- When you have finished configuring settings for the clustered print server, to close the instance of the Failover Cluster Manager interface with Print Management in the console tree, click File and then click Exit.
37.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- The failover cluster software automatically replicates important files (such as print drivers) to each of the nodes in the cluster. It supports this replication process by using a disk resource (in the cluster storage) as a place to store these files. This disk resource is configured automatically by the High Availability Wizard as part of a new clustered print server. Do not delete the disk resource that is part of the clustered print server.
38. Migrating Settings to a Failover Cluster Running Windows Server 2008 R2
The following topics provide information about migrating settings to a failover cluster running Windows Server 2008 R2:
- Understanding the Process of Migrating to a Cluster Running Windows Server 2008 R2
- Migrate Resource Groups to a Failover Cluster Running Windows Server 2008 R2
39. Understanding the Process of Migrating to a Cluster Running Windows Server 2008 R2
You can use a wizard to migrate the settings of many types of resources to a cluster running Windows Server 2008 R2. From the third page of the migration wizard, you can view a pre-migration report that explains whether each resource is eligible for migration and describes additional steps to perform after running the wizard. After the wizard finishes, it provides a report that describes additional steps that may be required to complete the migration. The wizard supports the migration of settings to a cluster running Windows Server 2008 R2 from a cluster running any of the following operating systems:
- Windows Server 2003
- Windows Server 2008
- Windows Server 2008 R2
For information about the specific steps for running the Migrate a Cluster Wizard, see Migrate Resource Groups to a Failover Cluster Running Windows Server 2008 R2.
Caution | |
If new storage is used, you must handle copying or moving of data or folders on your shared volumes during a migration. The wizard for migrating clustered resource groups does not copy data from one location to another. |
This topic contains the following subsections:
- Identifying which clustered services or applications can be migrated to a cluster running Windows Server 2008 R2
- Migration scenario A: Migrating a multiple-node cluster to a cluster with new hardware. Follow this scenario if you want to create a multiple-node cluster running Windows Server 2008 R2 and then migrate settings to it from a multiple-node cluster running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2. With this scenario, you use different physical servers and hardware for the new cluster than were used for the old cluster.
- Migration scenario B: Migrating a two-node cluster to a cluster with the same hardware. Follow this scenario for an in-place migration of a two-node cluster running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2 to a two-node cluster running Windows Server 2008 R2. With this scenario, you use the same physical servers and hardware for the new two-node cluster as you used for the old one. This scenario is not a rolling upgrade (an upgrade in which two nodes in the same cluster temporarily run different operating systems). Rolling upgrades to a failover cluster running Windows Server 2008 R2 are not possible. Note that the hardware for the new cluster must meet the requirements for a cluster running Windows Server 2008 R2. For more information, see Understanding Requirements for Failover Clusters.
39.1. Identifying which clustered services or applications can be migrated to a cluster running Windows Server 2008 R2
This section lists the clustered services or applications (clustered resources) that can be migrated to a cluster running Windows Server 2008 R2.
Important | |
You cannot use the Migrate a Cluster Wizard to migrate settings for virtualized servers, mail servers, database servers, and print servers, or any other resources that are not listed in the following subsections. Other migration tools exist for some of these applications. For information about migrating mail server applications, see https://go.microsoft.com/fwlink/?LinkId=91732 and https://go.microsoft.com/fwlink/?LinkId=91733. |
39.1.1.1. Resources for which the Migrate a Cluster Wizard performs most or all of the migration steps
After you use the Migrate a Cluster Wizard to migrate the settings of the following resources to a failover cluster running Windows Server 2008 R2, few or no additional steps are needed before the resources can be brought online.
Caution | |
If new storage is used, you must handle the copying or moving of data or folders during a migration from your shared volumes. The wizard for migrating the settings of clustered resource groups does not copy data from one location to another. |
- File Server or File Share resources: You can migrate settings for a clustered file server (or, from a Windows Server 2003 cluster, a File Share resource group) and for the associated Physical Disk, IP Address, and Network Name resources. When you migrate from a cluster running Windows Server 2003, the Migrate a Cluster Wizard automatically translates all File Share resource groups to a single clustered file server (with multiple File Share resources within it) in Windows Server 2008 R2. Therefore, some resources might look different after the migration. The following table provides details:
Resource as Seen in a Server Cluster Running Windows Server 2003 | Migrated Resource as Seen in a Failover Cluster Running Windows Server 2008 R2 |
One File Share resource | One File Server resource |
Multiple File Share resources | Multiple File Share resources within a single clustered file server (resource group) |
File Share resource with DFS root | Distributed File System resource and File Share resource (both within a clustered DFS Server) |
- Physical Disk: You can migrate settings for Physical Disk resources other than the quorum resource. You do not need to migrate the quorum resource. When you run the Create a Cluster Wizard, the cluster software automatically chooses the quorum configuration that will provide the highest availability for your new failover cluster. You can change the quorum configuration settings if necessary for your specific environment. For information about changing settings (including quorum configuration settings) for a failover cluster, see Modifying Settings for a Failover Cluster.
- IP Address: You can migrate IP Address settings other than the cluster IP address. IP addresses are eligible for migration only within the same subnet.
- Network Name: You can migrate Network Name settings other than the cluster name. If Kerberos authentication is enabled for the Network Name resource, the wizard will prompt you for the password for the Cluster service account that is used by the old cluster.
39.1.1.2. Resources for which the Migrate a Cluster Wizard might not perform all of the migration steps
After you use the Migrate a Cluster Wizard to migrate the settings of the following resource groups to a failover cluster running Windows Server 2008 R2, some additional steps might be needed before the resources can be brought online, depending on your original configuration. The migration report indicates what steps, if any, are needed for these resource groups:
- DHCP Service
- Distributed File System Namespace (DFS-N)
- Distributed Transaction Coordinator (DTC)
- Internet Storage Name Service (iSNS)
- Message Queuing
- NFS Service
- WINS Service
- Generic Application
- Generic Script
- Generic Service
The wizard provides a report that describes the additional steps that are needed. Generally, the steps you must take include:
- Installing server roles or features that are needed in the new cluster (all nodes).
- Copying or installing any associated applications, services, or scripts on the new cluster (all nodes).
- Ensuring that any data is copied.
- Providing static IP addresses if the new cluster is on a different subnet.
- Updating drive path locations for applications if the new cluster uses a different volume letter.
The resource settings are migrated, as are the settings for the IP Address and Network Name resources that are in the resource group. If there is a Physical Disk resource in the resource group, the settings for the Physical Disk resource are also migrated.
39.2. Migration scenario A: Migrating a multiple-node cluster to a cluster with new hardware
For this migration scenario, there are three phases:
- Install two or more new servers, run validation, and create a new cluster. For this phase, while the old cluster continues to run, install Windows Server 2008 R2 and Failover Clustering on at least two servers. Create the networks the servers will use, and connect the storage. Next, run the complete set of cluster validation tests to confirm that the hardware and hardware settings can support a failover cluster. Finally, create the new cluster. At this point, you have two clusters. Additional information about connecting the storage: If the new cluster is connected to old storage, make at least two logical unit numbers (LUNs) or disks accessible to the servers, and do not make those LUNs or disks accessible to any other servers. (These LUNs or disks are necessary for validation and for the disk witness, which is similar to, although not the same as, the quorum resource in Windows Server 2003.) If the new cluster is connected to new storage, make as many disks or LUNs accessible to it as you think it will need. The steps for creating a cluster are listed in Checklist: Create a Failover Cluster.
- Migrate settings to the new cluster and determine how you will make any existing data available to the new cluster. When the Migrate a Cluster Wizard completes, all migrated resources will be offline. Leave them offline at this stage. The old cluster will remain online and continue serving clients. If the new cluster will reuse old storage, plan how you will make the storage available to it, but leave the old cluster connected to the storage until you are ready to make the transition. If the new cluster will use new storage, copy the appropriate folders and data to the storage.
- Make the transition from the old cluster to the new. The first step in the transition is to take clustered services and applications offline on the old cluster. If the new cluster uses old storage, follow your plan for making LUNs or disks inaccessible to the old cluster and accessible to the new cluster. Then, regardless of which storage the new cluster uses, bring clustered services and applications online on the new cluster.
39.3. Migration scenario B: Migrating a two-node cluster to a cluster with the same hardware
For this migration scenario, there are four phases:
- Install a new server and run selected validation tests. For this phase, allow one existing server to keep running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2 and the Cluster service, while you begin the migration process. Evict the other server from the old cluster, and then install Windows Server 2008 R2 and the Failover Clustering feature on it. On that server, run all tests that the Validate a Configuration Wizard will run. The wizard will recognize that this is a single node without storage and limit the tests that it runs. Tests that require two nodes (for example, tests that compare the nodes or that simulate failover) will not run. Note that the tests that you run at this stage do not provide complete information about whether the storage will work in a cluster running Windows Server 2008 R2. As described later in this section, you will run the Validate a Configuration Wizard later with all tests included.
- Make the new server into a single-node cluster and migrate settings to it. Create a new single-node cluster, and use the Migrate a Cluster Wizard to migrate settings to it, but keep the clustered resources offline on the new cluster.
- Bring the new cluster online, and make existing data available to it. Take the services and applications in the old cluster offline. If the new cluster will use the old storage, leave the data on the old storage, and make the disks or LUNs accessible to the new cluster. If the new cluster will use new storage, copy the folders and data to appropriate LUNs or disks in the new storage, and make sure that those LUNs or disks are visible to the new cluster (and not visible to any other servers). Confirm that the settings for the migrated services and applications are correct. Bring the services and applications in the new cluster online and make sure that the resources are functioning and can access the storage.
- Bring the second node into the new cluster. Destroy the old cluster and on that server, install Windows Server 2008 R2 and the Failover Clustering feature. Connect that server to the networks and storage used by the new cluster. If the appropriate disks or LUNs are not already accessible to both servers, make them accessible. Run the Validate a Configuration Wizard, specifying both servers, and confirm that all tests pass. Finally, add the second server to the new cluster.
40. Migrate Resource Groups to a Failover Cluster Running Windows Server 2008 R2
You can use a wizard to migrate the settings of many types of resource groups to a cluster running Windows Server 2008 R2. You can migrate these settings from a cluster running Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2. After the settings have been migrated, the wizard also provides a report that describes any additional steps that may be required to complete the migration.
Caution | |
If new storage is used, you must handle all copying or moving of data or folders on your shared volumes during a migration. The wizard for migrating clustered resource groups does not copy data from one location to another. |
Before you start the wizard, be sure that you know the name or IP Address of the cluster or cluster node from which you want to migrate resource groups.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To migrate resource groups to a failover cluster running Windows Server 2008 R2 |
- On the cluster running Windows Server 2008 R2, open the failover cluster snap-in.
- If the cluster to which you want to migrate settings is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Under Configure, click Migrate Services and Applications.
- Read the first page of the Migrate a Cluster Wizard, and then click Next.
- Specify the name or IP Address of the cluster or cluster node from which you want to migrate resource groups, and then click Next.
- Click View Report. Read the report, which explains whether each resource is eligible for migration.
The wizard also provides a report after it finishes, describing any additional steps that might be needed before you bring the migrated resource groups online.
- Follow the instructions in the wizard to complete the following:
- Choose the resource group or groups whose settings you want to migrate. Some types of resource groups are eligible for migration and some are not. For more information, see Understanding the Process of Migrating to a Cluster Running Windows Server 2008 R2.
- Specify whether the resource groups to be migrated will use new storage, or the same storage used in the old cluster. If the resource groups will use new storage, you can specify the disk that each resource group should use after migration. Note that if new storage is used, you must handle all copying or moving of data or folders—the wizard does not copy data from one location to another.
- After the wizard runs and the Summary page appears, click View Report. This report contains important information about any additional steps that might be needed before you bring the migrated resource groups online.
40.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- For information about modifying the settings of a service or application after migration, see Modifying the Settings for a Clustered Service or Application.
41. Modifying Settings for a Failover Cluster
The default settings that are created by wizards for failover clustering work well for many clusters. However, you can change a cluster’s settings—for example, add storage, modify network settings, or enable Cluster Shared Volumes. The following topics provide information about changing the overall settings for your cluster:
- Understanding Cluster Shared Volumes in a Failover Cluster
- Understanding Quorum Configurations in a Failover Cluster
- Add Storage to a Failover Cluster
- Modify Network Settings for a Failover Cluster
- Enable Cluster Shared Volumes in a Failover Cluster
- Select Quorum Options for a Failover Cluster
42. Understanding Cluster Shared Volumes in a Failover Cluster
On a failover cluster that uses Cluster Shared Volumes, multiple clustered virtual machines that are distributed across multiple cluster nodes can all access their Virtual Hard Disk (VHD) files at the same time, even if the VHD files are on a single disk (LUN) in the storage. This means that the clustered virtual machines can fail over independently of one another, even if they use only a single LUN.
In contrast, in a failover cluster on which Cluster Shared Volumes is not enabled, a single disk (LUN) can only be accessed by a single node at a time. This means that clustered virtual machines can only fail over independently if each virtual machine has its own LUN, which makes the management of LUNs and clustered virtual machines more difficult.
This topic contains the following sections:
- Benefits of using Cluster Shared Volumes in a failover cluster
- Restrictions on using Cluster Shared Volumes in a failover cluster
42.1. Benefits of using Cluster Shared Volumes in a failover cluster
Cluster Shared Volumes provides the following benefits in a failover cluster:
- The configuration of clustered virtual machines is much simpler than before.
- You can reduce the number of LUNs (disks) required for your virtual machines, instead of having to manage one LUN per virtual machine, which was previously the recommended configuration (because the LUN was the unit of failover). Many virtual machines can use a single LUN and can fail over without causing the other virtual machines on the same LUN to also fail over.
- You can make better use of disk space, because you do not need to place each Virtual Hard Disk (VHD) file on a separate disk with extra free space set aside just for that VHD file. Instead, the free space on a Cluster Shared Volume can be used by any VHD file on that volume.
- You can more easily track the paths to VHD files and other files used by virtual machines. You can specify the path names, instead of identifying disks by drive letters (limited to the number of letters in the alphabet) or identifiers called GUIDs (which are hard to use and remember). With Cluster Shared Volumes, the path appears to be on the system drive of the node, under the \ClusterStorage folder. However, this path is the same when viewed from any node in the cluster.
- If you use a few Cluster Shared Volumes to create a configuration that supports many clustered virtual machines, you can perform validation more quickly than you could with a configuration that uses many LUNs to support many clustered virtual machines. With fewer LUNs, validation runs more quickly. (You perform validation by running the Validate a Configuration Wizard in the snap-in for failover clusters.)
- There are no special hardware requirements beyond what is already required for storage in a failover cluster (although Cluster Shared Volumes require NTFS).
- Resiliency is increased, because the cluster can respond correctly even if connectivity between one node and the SAN is interrupted, or part of a network is down. The cluster will re-route the Cluster Shared Volumes communication through an intact part of the SAN or network.
42.2. Restrictions on using Cluster Shared Volumes in a failover cluster
The following restrictions apply when using Cluster Shared Volumes in a failover cluster:
- The Cluster Shared Volumes feature is only supported for use with Hyper-V (a server role in Windows Server 2008 R2) and other technologies specified by Microsoft. For information about the roles and features that are supported for use with Cluster Shared Volumes, see https://go.microsoft.com/fwlink/?LinkId=137158.
- No files should be created or copied to a Cluster Shared Volume by an administrator, user, or application unless the files will be used by the Hyper-V role or other technologies specified by Microsoft. Failure to adhere to this instruction could result in data corruption or data loss on shared volumes. This instruction also applies to files that are created or copied to the \ClusterStorage folder, or subfolders of it, on the nodes.
- For Hyper-V to function properly, the operating system (%SystemDrive%) of each server in your cluster must be set so that it boots from the same drive letter as all other servers in the cluster. In other words, if one server boots from drive letter C, all servers in the cluster should boot from drive letter C.
- The NTFS file system is required for all volumes enabled as Cluster Shared Volumes.
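Cluster Shared Volumes can also be enabled and populated from Windows PowerShell. A minimal sketch, assuming a hypothetical NTFS-formatted disk resource named "Cluster Disk 5" that is already in cluster storage; the EnableSharedVolumes property and cmdlet names are those of the Windows Server 2008 R2 FailoverClusters module:

Import-Module FailoverClusters

# Enable Cluster Shared Volumes for the cluster (a one-time step).
(Get-Cluster).EnableSharedVolumes = "Enabled"

# Add a cluster disk as a Cluster Shared Volume, then list the shared
# volumes and their paths under \ClusterStorage.
Add-ClusterSharedVolume -Name "Cluster Disk 5"
Get-ClusterSharedVolume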
43. Understanding Quorum Configurations in a Failover Cluster
This topic contains the following sections:
- How the quorum configuration affects the cluster
- Quorum configuration choices
- Illustrations of quorum configurations
- Why quorum is necessary
For information about how to configure quorum options, see Select Quorum Options for a Failover Cluster.
43.1. How the quorum configuration affects the cluster
The quorum configuration in a failover cluster determines the number of failures that the cluster can sustain. If an additional failure occurs, the cluster must stop running. The relevant failures in this context are failures of nodes or, in some cases, of a disk witness (which contains a copy of the cluster configuration) or file share witness. It is essential that the cluster stop running if too many failures occur or if there is a problem with communication between the cluster nodes. For a more detailed explanation, see Why quorum is necessary later in this topic.
Important | |
In most situations, use the quorum configuration that the cluster software identifies as appropriate for your cluster. Change the quorum configuration only if you have determined that the change is appropriate for your cluster. |
Note that full function of a cluster depends not just on quorum, but on the capacity of each node to support the services and applications that fail over to that node. For example, a cluster that has five nodes could still have quorum after two nodes fail, but the level of service provided by each remaining cluster node would depend on the capacity of that node to support the services and applications that failed over to it.
43.2. Quorum configuration choices
You can choose from among four possible quorum configurations:
- Node Majority (recommended for clusters with an odd number of nodes). Can sustain failures of half the nodes (rounding up) minus one. For example, a seven-node cluster can sustain three node failures.
- Node and Disk Majority (recommended for clusters with an even number of nodes). Can sustain failures of half the nodes (rounding up) if the disk witness remains online. For example, a six-node cluster in which the disk witness is online could sustain three node failures. Can sustain failures of half the nodes (rounding up) minus one if the disk witness goes offline or fails. For example, a six-node cluster with a failed disk witness could sustain two (3-1=2) node failures.
- Node and File Share Majority (for clusters with special configurations). Works in a similar way to Node and Disk Majority, but instead of a disk witness, this cluster uses a file share witness. Note that if you use Node and File Share Majority, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. For more information, see “Additional considerations” in Start or Stop the Cluster Service on a Cluster Node.
- No Majority: Disk Only (not recommended). Can sustain failures of all nodes except one (if the disk is online). However, this configuration is not recommended because the disk might be a single point of failure.
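Each of these configurations can also be selected from Windows PowerShell. A minimal sketch, assuming hypothetical witness names:

Import-Module FailoverClusters

# Show the current quorum configuration.
Get-ClusterQuorum

# Node Majority (odd number of nodes).
Set-ClusterQuorum -NodeMajority

# Node and Disk Majority, with a disk witness (hypothetical disk name).
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

# Node and File Share Majority, with a file share witness (hypothetical path).
Set-ClusterQuorum -NodeAndFileShareMajority "\\FileServer\WitnessShare"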
43.3. Illustrations of quorum configurations
The following illustrations show how three of the quorum configurations work. A fourth configuration is described in words, because it is similar to the Node and Disk Majority configuration illustration.
Note | |
In the illustrations, for all configurations other than Disk Only, notice whether a majority of the relevant elements are in communication (regardless of the number of elements). When they are, the cluster continues to function. When they are not, the cluster stops functioning. |
Node Majority Quorum Configuration
In a cluster with the Node Majority configuration, only nodes are counted when calculating a majority.
Node and Disk Majority Quorum Configuration
In a cluster with the Node and Disk Majority configuration, the nodes and the disk witness are counted when calculating a majority.
Node and File Share Majority Quorum Configuration
In a cluster with the Node and File Share Majority configuration, the nodes and the file share witness are counted when calculating a majority. This is similar to the Node and Disk Majority quorum configuration described above, except that the witness is a file share that all nodes in the cluster can access instead of a disk in cluster storage.
In a cluster with the Disk Only configuration, the number of nodes does not affect how quorum is achieved. The disk is the quorum. However, if communication with the disk is lost, the cluster becomes unavailable.
43.4. Why quorum is necessary
When network problems occur, they can interfere with communication between cluster nodes. A small set of nodes might be able to communicate together across a functioning part of a network but not be able to communicate with a different set of nodes in another part of the network. This can cause serious issues. In this “split” situation, at least one of the sets of nodes must stop running as a cluster.
To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster must use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster will know how many “votes” constitute a majority (that is, a quorum). If the number drops below the majority, the cluster stops running. Nodes will still listen for the presence of other nodes, in case another node appears again on the network, but the nodes will not begin to function as a cluster until the quorum exists again.
For example, in a five node cluster that is using a node majority, consider what happens if nodes 1, 2, and 3 can communicate with each other but not with nodes 4 and 5. Nodes 1, 2, and 3 constitute a majority, and they continue running as a cluster. Nodes 4 and 5, being a minority, stop running as a cluster. If node 3 loses communication with other nodes, all nodes stop running as a cluster. However, all functioning nodes will continue to listen for communication, so that when the network begins working again, the cluster can form and begin to run.
44. Add Storage to a Failover Cluster
You can add storage to a failover cluster after exposing that storage to all cluster nodes (by changing LUN masking or zoning). You do not need to add the storage to the cluster if the storage is already listed for that cluster under Storage in the Failover Cluster Manager snap-in.
If you are only adding storage to a particular clustered service or application (not adding entirely new storage to the failover cluster as a whole), see Add Storage for a Clustered Service or Application.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To add storage to a failover cluster |
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Right-click Storage, and then click Add a disk.
- Select the disk or disks you want to add.
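A minimal Windows PowerShell sketch of the same task, which lists the disks that are visible to the cluster nodes but not yet added, and then adds them:

Import-Module FailoverClusters

# List the disks that can be added to the cluster, then add them all.
Get-ClusterAvailableDisk
Get-ClusterAvailableDisk | Add-ClusterDisk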
44.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- After you click Add a disk, if you do not see a disk that you expect to see, review the following:
- The list of disks that are shown when you click Storage in the Failover Cluster Manager snap-in. If the disk is already in that list, you do not need to add it to the cluster.
- The configuration of the storage interfaces, including the storage interfaces that run on the cluster nodes. The disk must be available to all nodes in the cluster before you can add it to the set of storage for the cluster.
- The disks shown in Disk Management (check each node in the cluster). If the disk that you want to add does not appear at all in Disk Management (on any node), there might be an issue with the storage configuration that is preventing the operating system from recognizing or mounting the disk. Note that disks currently in use by the cluster will appear in Disk Management on one node only (the node that is the current owner of that disk). If the disk that you want to add appears in Disk Management but does not appear after you click Add a disk, confirm that the disk is configured as a basic disk, not a dynamic disk. Only basic disks can be used in a failover cluster.
To open Disk Management, click Start, click Administrative Tools, click Computer Management, and then click Disk Management. (If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.)
45. Modify Network Settings for a Failover Cluster
For each of the networks physically connected to the servers (nodes) in a failover cluster, you can specify whether the network is used by the cluster, and if so, whether the network is used by the nodes only or also by clients. Note that in this context, the term “clients” includes not only client computers accessing clustered services and applications, but also remote computers that you use to administer the cluster.
If you use a network for iSCSI (storage), do not use it for network communication in the cluster.
For background information about network requirements for a failover cluster, see Understanding Requirements for Failover Clusters.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To modify network settings for a failover cluster
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Networks.
- Right-click the network that you want to modify settings for, and then click Properties.
- If needed, change the name of the network.
- Select one of the following options:
- Allow cluster network communication on this network. If you select this option and you want the network to be used by the nodes only (not clients), clear Allow clients to connect through this network. Otherwise, make sure it is selected.
- Do not allow cluster network communication on this network. Select this option if you are using a network only for iSCSI (communication with storage) or only for backup. (These are among the most common reasons for selecting this option.)
45.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- IP addresses and the associated network names used for the cluster itself or for clustered services or applications are called access points. For a brief description of access points, see Understanding Access Points (Names and IP Addresses) in a Failover Cluster.
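These settings can also be inspected and changed from Windows PowerShell. A minimal sketch; the network name is hypothetical, and the Role values in the comment correspond to the options described above:

Import-Module FailoverClusters
# Show each cluster network and how the cluster uses it
# (Role: 0 = not used by the cluster, 1 = cluster communication only, 3 = cluster and client)
Get-ClusterNetwork | Format-Table Name, Role, Address
# Example: reserve a hypothetical iSCSI network for storage traffic only
(Get-ClusterNetwork "iSCSI Network").Role = 0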
46. Enable Cluster Shared Volumes in a Failover Cluster
You can enable Cluster Shared Volumes for a failover cluster by using the Failover Cluster Manager snap-in. When you enable Cluster Shared Volumes, all nodes in the cluster can use shared volumes.
Important: The Cluster Shared Volumes feature has specific restrictions that must be followed. For a list of both the benefits and the restrictions, see Understanding Cluster Shared Volumes in a Failover Cluster.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To enable Cluster Shared Volumes in a failover cluster
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- Under Configure (center pane), click Enable Cluster Shared Volumes.
- In the dialog box, select the checkbox.
46.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- Cluster Shared Volumes can only be enabled once per cluster.
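After Cluster Shared Volumes is enabled, disks that are already in cluster storage can be added to it from Windows PowerShell. A minimal sketch; the disk name is hypothetical:

Import-Module FailoverClusters
# Move an existing cluster disk into Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 3"
# Confirm that the volume is listed
Get-ClusterSharedVolume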
47. Select Quorum Options for a Failover Cluster
If you have special requirements or make changes to your cluster, you might want to change the quorum options for your cluster.
Important: In most situations, use the quorum configuration that the cluster software identifies as appropriate for your cluster. Change the quorum configuration only if you have determined that the change is appropriate for your cluster.
For important conceptual information about quorum configuration options, see Understanding Quorum Configurations in a Failover Cluster.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To select quorum options for a cluster
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- With the cluster selected, in the Actions pane, click More Actions, and then click Configure Cluster Quorum Settings.
- Follow the instructions in the wizard to select the quorum configuration for your cluster. If you choose a configuration that includes a disk witness or file share witness, follow the instructions for specifying the witness.
- After the wizard runs and the Summary page appears, if you want to view a report of the tasks that the wizard performed, click View Report.
47.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
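Quorum settings can also be viewed and changed from Windows PowerShell. A minimal sketch; the witness disk and file share are hypothetical, and, per the Important note above, change the configuration only when you have determined that the change is appropriate:

Import-Module FailoverClusters
# Show the current quorum configuration
Get-ClusterQuorum
# Example: Node and Disk Majority with a hypothetical witness disk
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
# Example: Node and File Share Majority with a hypothetical witness share
Set-ClusterQuorum -NodeAndFileShareMajority "\\fileserver\witness"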
48. Managing a Failover Cluster
The following topics contain information about managing and troubleshooting a failover cluster:
- Understanding Backup and Recovery Basics for a Failover Cluster
- Bring a Clustered Service or Application Online or Take It Offline
- Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node
- Refresh the Configuration of a Virtual Machine
- Pause or Resume a Node in a Failover Cluster
- Run a Disk Maintenance Tool Such as Chkdsk on a Clustered Disk
- Start or Stop the Cluster Service on a Cluster Node
- Use Validation Tests for Troubleshooting a Failover Cluster
- View Events and Logs for a Failover Cluster
- View Reports of the Actions of Failover Cluster Wizards
49. Understanding Backup and Recovery Basics for a Failover Cluster
This topic outlines some basic guidelines for backing up and restoring a failover cluster. For more information about backing up and restoring a failover cluster, see https://go.microsoft.com/fwlink/?LinkId=92360.
49.1. Backing up a failover cluster
When you back up a failover cluster, you can back up the cluster configuration, the data on clustered disks, or both.
49.1.1. Backing up the cluster configuration
When you back up the cluster configuration, take note of the following:
- For a backup to succeed in a failover cluster, the cluster must be running and must have quorum. In other words, enough nodes must be running and communicating (perhaps with a disk witness or file share witness, depending on the quorum configuration) that the cluster has achieved quorum. For more information about quorum in a failover cluster, see Understanding Quorum Configurations in a Failover Cluster.
- Before putting a cluster into production, test your backup and recovery process.
- If you choose to use Windows Server Backup (the backup feature included in Windows Server 2008 R2), you must first add the feature. You can do this in Initial Configuration Tasks or in Server Manager by using the Add Features Wizard.
- When you perform a backup (using Windows Server Backup or other backup software), choose options that will allow you to perform a system recovery from your backup. For more information, see the Help or other documentation for your backup software. (See the command-line sketch after this list.)
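For the Windows Server Backup option mentioned above, a minimal command-line sketch (the backup target and volume are hypothetical; run from an elevated prompt on a node that has the feature installed):

# Back up the critical volumes, including system state, to a remote share
wbadmin start backup -backupTarget:\\backupserver\clusterbackups -include:C: -allCritical -quiet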
49.1.2. Backing up data on clustered disks
When you back up data through a cluster node, notice which disks are Online on that node at that time. Only disks that are Online and owned by that cluster node at the time of the backup are backed up.
49.2. Restoring to a failover cluster from backup
When you restore to a failover cluster from backup, you can restore the cluster configuration, the data on clustered disks, or both.
49.2.1. Restoring the cluster configuration from backup
The Cluster service keeps track of which cluster configuration is the most recent, and it replicates that configuration to all cluster nodes. (If the cluster has a disk witness, the Cluster service also replicates the configuration to the disk witness). Therefore, when you restore a single cluster node from backup, there are two possibilities:
- Restoring the node to normal function, but not rolling back the cluster configuration: This is called a “non-authoritative restore.” In this scenario, the reason you use the backup is only to restore a damaged node to normal function. When the restored node begins functioning and joins the cluster, the latest cluster configuration automatically replicates to that node.
- Rolling back the cluster configuration to the configuration stored in the backup: This is called an “authoritative restore.” In this scenario, you have determined that you want to use the cluster configuration that is stored in the backup, not the configuration currently on the cluster nodes. By specifying appropriate options when you restore the backup, you can cause the cluster to treat the restored configuration as the “most recent.” In this case, the Cluster service will not overwrite the restored configuration, but instead will replicate it across all nodes. For details about how to perform an authoritative restore, see the Help or other documentation for your backup software.
49.2.2. Restoring data to clustered disks from backup
When you restore backed-up data through a failover cluster node, notice which disks are Online on that node at that time. Data can be written only to disks that are Online and owned by that cluster node when the backup is being restored.
50. Bring a Clustered Service or Application Online or Take It Offline
Sometimes during maintenance or diagnosis that involves a service or application in a failover cluster, you might need to bring that service or application online or take it offline. Bringing an application online or taking it offline does not trigger failover, and the Cluster service handles the process in an orderly fashion. For example, if a particular disk is required by a particular clustered application, the Cluster service ensures that the disk is available before the application starts.
For information about related actions, such as pausing a cluster node, see Managing a Failover Cluster.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To bring a clustered service or application online or take it offline
- In the Failover Cluster Manager snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to manage.
- Under Services and Applications, expand the console tree.
- Check the status of the service or application that you want to bring online or take offline by clicking the service or application and viewing the Status column (in the center pane).
- Right-click the service or application that you want to bring online or take offline.
- Click the appropriate command: Bring this service or application online or Take this service or application offline.
50.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- Note that in a clustered file server, shared folders are associated with a File Server resource. When you bring that resource online or take it offline, all shared folders in the resource go online or offline at the same time. You cannot change the online or offline status of one of the shared folders without affecting all of the shared folders in the File Server resource.
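In Windows PowerShell, the equivalent operations act on the clustered service or application as a group. A minimal sketch; the group name is hypothetical:

Import-Module FailoverClusters
# Check the state of every clustered service or application
Get-ClusterGroup
# Bring a group online, or take it offline
Start-ClusterGroup "FileServer1"
Stop-ClusterGroup "FileServer1"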
51. Live Migrate, Quick Migrate, or Move a Virtual Machine from Node to Node
Failover clusters in Windows Server 2008 R2 provide several ways to move virtual machines from one cluster node to another. You can live migrate, quick migrate, or move a virtual machine to another node.
To live migrate, quick migrate, or move a virtual machine to another node
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the clustered instance containing the virtual machine you want to migrate or move.
- Under Actions (on the right), click one of the following:
- Live migrate virtual machine to another node
- Quick migrate virtual machine(s) to another node
- Move virtual machine(s) to another node
For more information about these choices, see Understanding Hyper-V and Virtual Machines in the Context of a Cluster.
51.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- You cannot use live migration to move multiple virtual machines simultaneously. On a given server running Hyper-V, only one live migration (to or from the server) can be in progress at a given time.
- If you decide to change the settings of a clustered virtual machine, be sure to see Modify the Virtual Machine Settings for a Clustered Virtual Machine.
- For live migration and quick migration, we recommend that you make the hardware and system settings of the nodes as similar as possible to minimize potential problems.
- For each clustered virtual machine, you can also specify the action that the cluster performs before taking the virtual machine offline. This setting does not affect live migration, quick migration, or unplanned failover. It affects only moving (or taking the resource offline through the action of Windows PowerShell or an application). To specify the setting, make sure that after selecting the clustered virtual machine in the console tree (on the left), you right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties. Click the Settings tab and select an option. The actions are described in Understanding Hyper-V and Virtual Machines in the Context of a Cluster in the section called “Live migration, quick migration, and moving of virtual machines,” in the description for moving of virtual machines.
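In Windows PowerShell, these choices map to Move-ClusterVirtualMachineRole. A minimal sketch; the virtual machine and node names are hypothetical:

Import-Module FailoverClusters
# Live migrate a clustered virtual machine to another node
Move-ClusterVirtualMachineRole -Name "SQLVM1" -Node "Node2" -MigrationType Live
# Or use quick migration instead
Move-ClusterVirtualMachineRole -Name "SQLVM1" -Node "Node2" -MigrationType Quick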
52. Refresh the Configuration of a Virtual Machine
To refresh the configuration of a virtual machine
- In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster that you want to configure.
- Expand Services and Applications, and then click the virtual machine for which you want to refresh the configuration.
- In the Actions pane, scroll down, click More Actions, and then click Refresh virtual machine configuration.
- Click Yes to view the details of this action.
52.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
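The Windows PowerShell equivalent mentioned above is Update-ClusterVirtualMachineConfiguration. A minimal sketch; the name is hypothetical:

Import-Module FailoverClusters
# Refresh the cluster's view of a virtual machine's configuration
Update-ClusterVirtualMachineConfiguration -Name "Virtual Machine SQLVM1"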
53. Pause or Resume a Node in a Failover Cluster
When you pause a node, existing groups and resources stay online, but additional groups and resources cannot be brought online on the node. Pausing a node is usually done when applying software updates: the recommended sequence is to move all services and applications off the node, pause the node, and then apply the software updates.
If you need to perform extensive diagnosis or maintenance on a cluster node, it might not be workable to simply pause the node. In that case, you can stop the Cluster service on that node. For more information, see Start or Stop the Cluster Service on a Cluster Node.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To pause or resume a node in a failover cluster by using the Windows interface
- In the Failover Cluster Manager snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and select or specify the cluster you want.
- If the console tree is collapsed, expand the tree under the cluster you want to manage.
- Expand the console tree under Nodes.
- Right-click the node you want to pause or resume, and then either click Pause or Resume.
53.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
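The Windows PowerShell equivalents are Suspend-ClusterNode and Resume-ClusterNode. A minimal sketch; the node name is hypothetical:

Import-Module FailoverClusters
# Pause a node before maintenance
Suspend-ClusterNode -Name "Node1"
# Resume it when maintenance is complete
Resume-ClusterNode -Name "Node1"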
54. Run a Disk Maintenance Tool Such as Chkdsk on a Clustered Disk
To run a disk maintenance tool such as Chkdsk on a disk or volume that is configured as part of a clustered service, application, or virtual machine, you must use maintenance mode. When maintenance mode is on, the disk maintenance tool can finish running without triggering a failover. If you have a disk witness, you cannot use maintenance mode for that disk.
Maintenance mode works somewhat differently on a volume in Cluster Shared Volumes than it does on other disks in cluster storage, as described in Additional considerations, later in this topic.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To run a disk maintenance tool such as Chkdsk on a clustered disk
- In the Failover Cluster Manager snap-in, if the cluster is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and select or specify the cluster you want.
- If the console tree is collapsed, expand the tree under the cluster that uses the disk on which you want to run a disk maintenance tool.
- In the console tree, click Storage.
- In the center pane, click the disk on which you want to run the disk maintenance tool.
- Under Actions, click More Actions, and then click the appropriate command:
- If the disk you clicked is under Cluster Shared Volumes and contains multiple volumes, click Maintenance, and then click the command for the appropriate volume. If prompted, confirm your action.
- If the disk you clicked is under Cluster Shared Volumes and contains one volume, click Maintenance, and then click Turn on maintenance mode for this volume. If prompted, confirm your action.
- If the disk you clicked is not under Cluster Shared Volumes, click Turn on maintenance mode for this disk.
- Run the disk maintenance tool on the disk or volume.
When maintenance mode is on, the disk maintenance tool can finish running without triggering a failover.
- When the disk maintenance tool finishes running, with the disk still selected, under Actions, click More Actions, and then click the appropriate command:
- If the disk you clicked is under Cluster Shared Volumes and contains multiple volumes, click Maintenance, and then click the command for the appropriate volume.
- If the disk you clicked is under Cluster Shared Volumes and contains one volume, click Maintenance, and then click Turn off maintenance mode for this volume.
- If the disk you clicked is not under Cluster Shared Volumes, click Turn off maintenance mode for this disk.
54.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- Maintenance mode works somewhat differently on a volume in Cluster Shared Volumes than it does on other disks in cluster storage:
For Cluster Shared Volumes | For disks not in Cluster Shared Volumes |
Changes the state of a volume. | Changes the state of a disk (LUN). |
Takes dependent resources offline (which interrupts client access). | Leaves dependent resources online. |
Removes access through the \ClusterStorage\volume path, still allowing the owner node to access the volume through its identifier (GUID). Also, suspends direct access from other nodes, allowing access only through the owner node. | Leaves access to the disk unchanged. |
- Maintenance mode will remain on until one of the following occurs:
- You turn it off.
- The node on which the resource is running restarts or loses communication with other nodes (which causes failover of all resources on that node).
- For a disk that is not in Cluster Shared Volumes, the disk resource goes offline or fails.
- You can see whether a disk is in maintenance mode by looking at the status in the center pane when Storage is selected in the console tree.
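For a disk that is not in Cluster Shared Volumes, maintenance mode can also be toggled from Windows PowerShell. A minimal sketch; the disk resource name is hypothetical:

Import-Module FailoverClusters
# Turn maintenance mode on for a clustered disk
Suspend-ClusterResource -Name "Cluster Disk 2"
# ... run Chkdsk or another disk maintenance tool here ...
# Turn maintenance mode off again
Resume-ClusterResource -Name "Cluster Disk 2"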
55. Start or Stop the Cluster Service on a Cluster Node
You might need to stop and restart the Cluster service on a cluster node during some troubleshooting or maintenance operations. When you stop the Cluster service on a node, services or applications on that node will fail over, and the node will stop functioning in the cluster until the Cluster service is restarted.
If you want to leave a particular node functioning so that it supports the services or applications it currently owns, and at the same time prevent additional services and applications from failing over to that node, pause the node (do not stop the Cluster service). For more information, see Pause or Resume a Node in a Failover Cluster.
Membership in the local Administrators group on each clustered server, or equivalent, is the minimum required to complete this procedure. Also, the account you use must be a domain account. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To start or stop the Cluster service on a cluster node by using the Windows interface
- In the Failover Cluster Manager snap-in, if the cluster you want to manage is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster you want to manage.
- To minimize disruption to clients, before stopping the Cluster service on a node, move the applications that are currently owned by that node to another node. To do this, expand the console tree under the cluster that you want to manage, and then expand Services and Applications. Click each service or application and (in the center pane) view the Current Owner. If the owner is the node on which you want to stop the Cluster service, right-click the service or application, click Move this service or application to another node, and then choose the node. (For an explanation of the Best possible command option, see “Additional considerations” in this topic.)
- Expand the console tree under Nodes.
- Right-click the node that you want to start or stop, and then click More Actions.
- Click the appropriate command:
- To start the service, click Start Cluster Service.
- To stop the service, click Stop Cluster Service.
55.1.1.1. Additional considerations
- You can also perform the task described in this procedure by using Windows PowerShell. For more information about using Windows PowerShell for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=135119 and https://go.microsoft.com/fwlink/?LinkId=135120.
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- On a cluster with more than two nodes, from the options next to Move this service or application to another node, you can choose Best possible. This option has no effect if you have not configured a Preferred owners list for the service or application you are moving (in this case, the node will be chosen randomly). If you have configured a Preferred owners list, Best possible will move the service or application to the first available node on the list.
- In the center pane of the Failover Cluster Manager snap-in, you can view information about the state of a node. To specifically check whether the Cluster service is running on a node, right-click the node and click More Actions. On a node that is started, Start Cluster Service is dimmed, and on a node that is stopped, Stop Cluster Service is dimmed.
- If you are using the Node and File Share Majority quorum option, at least one of the available cluster nodes must contain a current copy of the cluster configuration before you can start the cluster. Otherwise, you must force the starting of the cluster through a particular node. The cluster will then use the copy of the cluster configuration that is on that node and replicate it to all other nodes. To force the cluster to start, on a node that contains a copy of the cluster configuration that you want to use, open the Failover Cluster Manager snap-in, click the cluster, and then under Actions (on the right), click Force Cluster Start. (Under most circumstances, this command is not available in the Windows interface.)
Note: When you use this command on a given node, the copy of the cluster configuration that is on that node will be treated as the authoritative copy of the configuration and will be replicated to all other nodes.
- The Cluster service performs essential functions on each cluster node, including managing the cluster configuration, coordinating with the instances of the Cluster service running on other nodes, and performing failover operations.
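A minimal Windows PowerShell sketch of the drain-then-stop sequence described above; the node name is hypothetical:

Import-Module FailoverClusters
# Move every group currently owned by the node to another node
Get-ClusterGroup | Where-Object { $_.OwnerNode.Name -eq "Node1" } | Move-ClusterGroup
# Stop, and later restart, the Cluster service on that node
Stop-ClusterNode -Name "Node1"
Start-ClusterNode -Name "Node1"
# Start-ClusterNode also has a -FixQuorum switch, which corresponds to Force Cluster Start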
56. Use Validation Tests for Troubleshooting a Failover Cluster
The Validate a Configuration Wizard can be useful when troubleshooting a failover cluster. By running tests related to the symptoms you see, you can learn more about what to do to correct the issue.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To use validation tests for troubleshooting a failover cluster
- Decide whether you want to run all or only some of the available validation tests. You can select or clear the following tests individually or by category:
- Cluster Configuration tests: Validate important cluster configuration settings. For more information, see Understanding Cluster Validation Tests: Cluster Configuration.
- Inventory tests: Provide an inventory of the hardware, software, and settings (such as network settings) on the servers, and information about the storage. For more information, see Understanding Cluster Validation Tests: Inventory.
- Network tests: Validate that networks are set up correctly for clustering. For more information, see Understanding Cluster Validation Tests: Network.
- Storage tests: Validate that the storage on which the failover cluster depends is behaving correctly and supports the required functions of the cluster. For more information, see Understanding Cluster Validation Tests: Storage.
- System Configuration tests: Validate that the system software and configuration settings are compatible across servers. For more information, see Understanding Cluster Validation Tests: System Configuration.
- In the Failover Cluster Manager snap-in, if the cluster that you want to troubleshoot is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If you want to test disks that you have configured as Cluster Shared Volumes, perform the following steps:
- Expand the console tree and click Cluster Shared Volumes.
- In the center pane, right-click a disk that you want to test and then click Take this resource offline.
- Repeat the previous step for any other disks that you want to test.
- Right-click the cluster that you want to troubleshoot, and then click Validate This Cluster.
- Follow the instructions in the wizard to specify the tests, run the tests, and view the results.
- If you took Cluster Shared Volumes offline in a previous step, perform the following steps:
- Click Cluster Shared Volumes.
- In the center pane, right-click a disk that is offline and then click Bring this resource online.
- Repeat the previous step for any other disks that you previously took offline.
56.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- To view the results of the tests after you close the wizard, choose one of the following:
- Open the folder systemroot\Cluster\Reports (on a clustered server).
- In the console tree, right-click the cluster, and then click View Validation Report. This displays the most recent validation report for that cluster.
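The same tests can be run from Windows PowerShell with Test-Cluster, selecting test categories with -Include or -Ignore. A minimal sketch; the node and cluster names are hypothetical:

Import-Module FailoverClusters
# Run only the network tests against two servers
Test-Cluster -Node "Node1","Node2" -Include "Network"
# Run everything except the storage tests against an existing cluster
Test-Cluster -Cluster "Cluster1" -Ignore "Storage"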
57. View Events and Logs for a Failover Cluster
When you view events through the Failover Cluster Manager snap-in, you can see events for all nodes in the cluster instead of seeing events for only one node at a time.
For information about viewing the reports from the failover cluster wizards, see View Reports of the Actions of Failover Cluster Wizards.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To view events and logs for a failover cluster by using the Windows interface
- In the Failover Cluster Manager snap-in, if the cluster is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
- If the console tree is collapsed, expand the tree under the cluster for which you want to view events.
- In the console tree, right-click Cluster Events and then click Query.
- In the Cluster Events Filter dialog box, select the criteria for the events that you want to display.
To return to the default criteria, click the Reset button.
- Click OK.
- To sort the events, click a heading, for example, Level or Date and Time.
- To view a specific event, click the event and view the details in the Event Details pane.
57.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
- You can also use Event Viewer to open a log related to failover clustering. To locate the log, in Event Viewer, expand Applications and Services Logs, expand Microsoft, expand Windows, and then expand FailoverClustering. The log file is stored in systemroot\system32\winevt\Logs.
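Two related Windows PowerShell options, sketched below; the destination folder is hypothetical:

Import-Module FailoverClusters
# Generate the text-format cluster log from each node into one folder
Get-ClusterLog -Destination "C:\Temp\ClusterLogs"
# Query the failover clustering operational channel for recent errors
Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" |
    Where-Object { $_.LevelDisplayName -eq "Error" } |
    Select-Object -First 20 -Property TimeCreated, Message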
58. View Reports of the Actions of Failover Cluster Wizards
Each of the failover cluster wizards in Windows Server 2008 R2 produces a report of its actions. This topic describes how to view the report of a particular wizard by clicking a button on the last page of that wizard or by opening a report from a specific folder in Windows Server 2008 R2.
Membership in the local Administrators group, or equivalent, is the minimum required to complete this procedure. Review details about using the appropriate accounts and group memberships at https://go.microsoft.com/fwlink/?LinkId=83477.
To view a report of the actions of a failover cluster wizard
- Choose one of the following options:
- If the wizard for which you want to view a report is displayed, proceed through the wizard until you are on the Summary page, and then click View Report.
- If you ran the wizard but it is not displayed now, use Windows Explorer or another method to navigate to the following location and open the report you want: systemroot\Cluster\Reports
58.1.1.1. Additional considerations
- To open the failover cluster snap-in, click Start, click Administrative Tools, and then click Failover Cluster Manager. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.
59. Add Resource Type Dialog Box
Item | Details |
Resource DLL path and file name | Specify the path and file name of the resource DLL that the Cluster service should use when it communicates with your service or application. A resource DLL monitors and controls the associated service or application in response to commands and requests from the Cluster service and associated software components. For example, the resource DLL saves and retrieves service or application properties in the cluster database, brings the resource online and takes it offline, and checks the health of the resource. |
Resource type name | Specify the name that the Cluster service uses for the resource type. This name stays the same regardless of the regional and language options that are currently selected. |
Resource type display name | Specify the name that is displayed for the resource type in the snap-in. |
60. Resources for Failover Clusters
For more information about failover clusters, see the following resources on the Microsoft Web site:
- For a list of links to a variety of topics about failover clusters, see https://go.microsoft.com/fwlink/?LinkId=68633.
- For information about hardware compatibility for Windows Server 2008 R2, see https://go.microsoft.com/fwlink/?LinkId=139145.
- For design and deployment information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137832.
- For operations information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137835.
- For troubleshooting information for failover clusters, see https://go.microsoft.com/fwlink/?LinkId=137836.
60.2. Server overview
Attached is an initial list of servers that will be deployed.
61. Network Infrastructure
61.1. WAN logical Network diagram
61.1.1. Topology
Diagram here
61.1.1.1. On-prem Network
On-prem will have only production and supporting subnets (e.g. the SQL Cluster infra subnet). At the time of writing this document, only the infra subnet is provisioned. Other subnets will be added later, when the project sees a need for them.
61.1.1.2. IPs
All SQL CLUSTER traffic goes out via these SQL CLUSTER public IPs:
IP1
IP2
IP3
61.1.2. Firewalls
61.1.2.1. Application firewall rules
Please see section 8.1 Application orders.
61.1.2.2. Predefined security groups
Name here
61.1.2.3. SQL Cluster solution firewall rules
The diagram below shows the ports required for the SQL Cluster components and session hosts to communicate with each other over the network, as well as the custom ports for the applications residing on the session hosts themselves.
Figure 7. Ports required for solution
NOTE: To simplify future deployment and to have more flexibility, we will allow each subnet to access the Microsoft Office online services for Office activation, Outlook connectivity, and OneDrive synchronization, limiting access to TVG’s tenant only.
61.1.3. Subnet segmentation
The general purpose of dividing sessions into multiple segments/subnets is to minimize the risk of lateral movement between servers when one of the servers is compromised by malicious activity. For that purpose, we are segregating session hosts and SQL Cluster infrastructure components into the subnets below. Common applications are grouped together to form a unified communications channel.
A white fill means that the subnet will be implemented when needed.
Subnet name | CIDR range | Information | On-prem(p) / SQL CLUSTER (a) | SQL CLUSTER Node name |
std-users | 10.0.0.0/24 | Services | | |
app-admins-p | 10.0.0.0/24 | Prod applications, e.g. RDP, SSH | | |
61.1.4. Domains
All servers and users will reside in the TVG.net domain. If other projects implement additional Active Directory forests or domains and require the SQL Cluster to access them, careful consideration and planning should be carried out. At a high level this is supported by the SQL Cluster, but with some limitations if the other systems are not implemented within the supported level.
62. SQL Cluster configuration
62.1. Applications and Groups
pa-SQL-globalaccess-regular – access to the SQL Cluster environment for TVG employees and/or contractors.
pa-SQL-globalaccess-restricted – access with restrictions. No copying in/out files or text.
pa-SQL-globalaccess-full – Full copying in and out of environments.
NOTE: Further restrictions can be achieved using AD groups, depending on security requirements:
- Full, bi-directional clipboard redirection
- Full, client-to-session only clipboard redirection
- Full, session-to-client only clipboard redirection
- Selective formats, bi-directional clipboard redirection
- Selective formats, client-to-session only clipboard redirection
- Selective formats, session-to-client only clipboard redirection
62.2. OU segmentation, GPOs and policies
All machines in the same subnet/segment/group should be placed in the same OU and have the same GPOs applied. The proposed OU structure is:
All SQL Cluster OUs should have the general policies and GPOs applied according to company policies, plus SQL Cluster specific policies, e.g. logging users off after idle time (exact settings are subject to change and are therefore not documented in this design document).
62.3. Disaster Recovery
Because ADFS and the SQL Cluster network will not be available for a disaster recovery case at this time, we will use the test environment on VMware with the on-prem configuration for disaster recovery.
63. Migration
63.1. Migration from SQL
High-level steps for the migration from Windows 2008 to Windows 2019 and the SQL Cluster:
- Migrate the SQL Cluster databases to the new SQL Cluster:
- Identify all applications, and the users who use them, in the SQL environment
- Migrate the existing databases one by one from the legacy SQL Cluster to the new SQL platform
64. System Management
64.1. Application orders
When a new application order comes in for publishing, the form below must be filled in by the requestor or application owner.
64.2. Management and control
SQL Cluster Network administration is done via the https:/url URL, where authentication is done through Azure AD.
ADFS and cluster management will be done through the URL or by connecting directly to the console from the RDP platform.
64.3. Backup schedule
All backups should be done according to TVG’s standards, at standard times outside working hours. The table below shows what should be backed up, and when, for each component.
SQL Cluster component | Backup required | Schedule |
SQL Cluster | Yes | Incremental once a week. Full once a month. Retention 6 months. |
SQL Cluster Cluster Management | Yes | Incremental once a day. Full once a week. Retention 3 months. |
SQL Cluster Network Connector | Yes | – |
SQL Cluster Configuration | Yes | – |
File servers | Yes | Incremental once a day. Full once a week. Retention 3 months. |
64.4. System recovery and resilience
The list below describes how resilient each component is and what is done to make the system recoverable.
SQL Cluster Network – the SQL Cluster vendor, as a PaaS provider, will provide 99.5% uptime for its services. In case of an SQL Cluster Network failure, DR plans will commence and users will be redirected to an alternate path for connecting to applications.
SQL Cluster s – are deployed for a DR-only scenario. Nevertheless, they are deployed in a highly available configuration: two instances on two different physical hosts, with redundant network controllers in active-passive mode.
SQL Cluster Cluster Management – these servers are deployed for the DR scenario as well. There are two servers which are load balanced with s. If one of the servers goes offline, the other will serve all users until the first one is recovered.
SQL Cluster Network Connectors – are the main components of the SQL Cluster Network. They do not require any load balancing and are stateless, which means they can easily be scaled out if the need arises. Automatic scaling will be done in case of failure on the SQL CLUSTER side. On the on-prem network, if one of the connectors fails, the other one will take over the full load.
SQL Cluster session hosts – multiple hosts will be deployed per application.
File Servers – file servers are deployed in a highly available configuration within clusters.
SQL DB – resiliency and recovery are solved within the leveraged platform.
64.5. Scalability
Below is an explanation of how each component can scale and what actions should be taken to do so:
SQL Cluster – as it is a DR-only component, it is deployed to serve users after a disaster and does not need scaling.
SQL Cluster Cluster Management – deployed only for DR and does not need scaling, but it is deployed to support all incoming users.
SQL Cluster Network Connectors – are the main components of the SQL Cluster Network. They do not require any load balancing and are stateless, which means they can easily be scaled out if the need arises. On the SQL CLUSTER side, autoscaling is enabled to do this automatically in case of failure or performance degradation. Connectors can be scaled out/in or up/down.
SQL Cluster scaling – scaling to more nodes is done manually, depending on application usage. Failover on the SQL CLUSTER is done automatically depending on the load, through the SQL Cluster feature called Automatic Failover.
File Servers – file servers are deployed in the SQL CLUSTER as a service (FSx) in a highly available configuration within clusters. To scale the file servers, add more storage.
64.6. Licensing
When the SQL Cluster Network licenses are bought, they will reside on Volume Licensing (LTSA) and will be managed and provided for TVG to consume.
Additional ADFS licenses should be installed on the current LTSA or VLS server to satisfy the increasing demand due to employee onboarding onto the SQL Cluster solution.
65. Appendix I
66. <ClusteredInstance> Properties: General Tab
Item | Details |
Preferred owners | Select nodes from the list, and then use the buttons to list them in order.
If you want this service or application to be moved to a particular node whenever that node is available:
· Select the check box for that node and use the buttons to place it at the top of the list.
· On the Failover tab, ensure that failback is allowed for this service or application.
Note that in the Preferred owners list, even if you clear the check box for a node, the service or application could still fail over to that node for either of two reasons:
· You have not specified any preferred owners.
· No node that is a preferred owner is currently online.
To ensure that a clustered service or application never fails over to a particular node: |
67. <Resource> Properties: Advanced Policies tab
Item | Details |
Possible owners | Clear the check box for a node only if you want to prevent this resource (and the clustered service or application that contains this resource) from failing over to that node. Otherwise, leave the boxes checked for all nodes.
Note that if you leave the box for only one node checked, this resource (and the clustered service or application that contains this resource) cannot fail over. |
Basic resource health check interval | Specify how often you want the cluster to perform a basic check to see whether the resource appears to be online. We recommend that you use the standard time period for the resource type unless you have a reason to change it.
This health check is also known as the Looks Alive poll. |
Thorough resource health check interval | Specify how often you want the cluster to perform a more thorough check, looking for indications that the resource is online and functioning properly. We recommend that you use the standard time period for the resource type unless you have a reason to change it.
This health check is also known as the Is Alive poll. |
68. <Resource> Properties: Dependencies Tab
When you specify resource dependencies, you control the order in which the Cluster service brings resources online and takes them offline. A dependent resource is:
- Brought online after the resource or resources it depends on.
- Taken offline before the resource or resources it depends on.
Note: After specifying dependencies, to see a report, right-click the clustered service or application (not the resource), and then click Show Dependency Report.
This tab contains the following controls:
Item | Details |
Resource | Click the box, and then select a resource from the list. The list contains the other resources in this clustered service or application that have not already been selected. |
AND/OR | For the second or later lines:
· Specify AND if both the resource in that line and one or more previously listed resources must be online before the dependent resource is brought online. · Specify OR if either the resource in that line or another previously listed resource must be online before the dependent resource is brought online. (The dependent resource can also be brought online if both are online.) Notice that gray brackets appear to show how the listings are grouped, based on the AND/OR settings. |
Insert | When you select a line in the list and click Insert, a line is added to the list before the selected line. |
Delete | When you select a line in the list and click Delete, the line is deleted. |
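Dependencies can also be expressed from Windows PowerShell using the same AND/OR logic. A minimal sketch; the resource names are hypothetical:

Import-Module FailoverClusters
# Make a network name depend on either of two IP address resources
Set-ClusterResourceDependency -Resource "FileServer Network Name" -Dependency "[IP Address 10.0.0.10] or [IP Address 10.0.0.11]"
# Show the dependency expression that is now in place
Get-ClusterResourceDependency -Resource "FileServer Network Name"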
69. <Resource> Properties: General Tab
If a disk that you are using in a failover cluster has failed, you can use Repair (on the General tab of the Properties sheet) to assign a different disk.
Item | Details |
Repair button (for a disk) | When you click this button, you can stop using a failed disk and assign a different disk to this service or application. The disk that you assign must be one that can be used for clustering but is not yet clustered.
For more information about disks that can be used for clustering, see “List Potential Cluster Disks” in Understanding Cluster Validation Tests: Storage.
Before using the Repair button, expose a LUN to each node of the cluster, but do not add that LUN to the cluster (through the Failover Cluster Manager snap-in). You can restore the data to the disk before or after using the Repair button. |
70. <Resource> Properties: Policies Tab
Item | Details |
Maximum restarts in the specified period | Specify the number of times that you want the Cluster service to try to restart the resource during the period you specify. If the resource cannot be started after this number of attempts in the specified period, the Cluster service will take actions as specified by other fields of this tab.
For example, if you specify 3 for Maximum restarts in the specified period and 15:00 for the period, the Cluster service attempts to restart the resource three times in a given 15 minute period. If the resource still does not run, instead of trying to restart it a fourth time, the Cluster service will take the actions that you specified in the other fields of this tab. |
Period (mm:ss) | Specify the length of the period (minutes and seconds) during which the Cluster service counts the number of times that a resource has been restarted. For an example of how the period works with the maximum number of restarts, see the previous cell in this table. |
If restart is unsuccessful, fail over all resources in this service or application | Use this box to control the way the Cluster service responds if the maximum restarts fail:
· Select this box if you want the Cluster service to respond by failing the clustered service or application over to another node. · Clear this box if you want the Cluster service to respond by leaving this clustered service or application running on this node (even if this resource is in a failed state). |
If all the restart attempts fail, begin restarting again after the specified period (hh:mm) | Select this box if you want the Cluster service to go into an extended waiting period after attempting the maximum number of restarts on the resource. Note that this extended waiting period is measured in hours and minutes. After the waiting period, the Cluster service will begin another series of restarts. This is true regardless of which node owns the clustered service or application at that time. |
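These policies are also exposed as common properties of the cluster resource in Windows PowerShell. A minimal sketch; the resource name is hypothetical, and RestartPeriod is expressed in milliseconds:

Import-Module FailoverClusters
# Inspect the current restart policy
Get-ClusterResource "SQL Server" | Format-List RestartThreshold, RestartPeriod
# Allow at most 3 restarts in a 15-minute (900,000 ms) period
(Get-ClusterResource "SQL Server").RestartThreshold = 3
(Get-ClusterResource "SQL Server").RestartPeriod = 900000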
71. Virtual Machine <Resource Name> Properties: Settings Tab
On this tab, one of the settings you can specify is the action that the cluster will perform when taking the virtual machine offline. This setting does not affect live migration, quick migration, or unplanned failover. It affects only moving (or taking the resource offline through the action of Windows PowerShell or an application).
72. Appendix II
72.1. Appendix A glossary of terms
Reference | Description |
DR | Disaster Recovery |
IP | Internet Protocol |
FAS | Federated Authentication Service |
LAN | Local Area Network |
POP | Point of Presence |
Personal data | Documents and files a user creates |
User data | Settings a user creates while working |
SLA | Service Level Agreement |
VPN | Virtual Private Network |
WAN | Wide Area Network |
NPS | Network Policy Server |
GW | Gateway |
SQLH | Remote Desktop Session Host |
WEM | Workspace Environment Management |
BAPL | Business Application Presentation Layer |
NTP | Network Time Protocol |
NS | |
CAL | Client access license |
72.2. Appendix B bibliography
Reference | Description |
v0.1 | Initial Draft |
72.3. Appendix C assumptions risks and constraints
72.3.1. Assumptions
ID | Assumption | Criticality (H/M/L) |
Business requirements | How users connect and what features they need | |
Security requirements | Network segmentation | |
72.3.2. Constraints
ID | Constraint | Impact Action |
Business requirements | There are no business representatives on the project to maintain and introduce business requirements. | |
Security | There is a lack of security guidelines and an overall company policy. | |
72.3.3. Givens
ID | Givens |
72.3.4. Risks
ID | Risk | Impact Mitigation |
Requirements | A lot of functional requirements were assumed by the architect, as there is no business owner involvement. | |
72.4. Appendix D document distribution
Distribution | Version 0.1 |
Recipient or location | x |
Table 13. Document Distribution