
volumes, or LUNs presented to the Windows systems must be presented as basic Windows disks.
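
A quick way to confirm how a LUN has been presented is the built-in diskpart utility. The following is a minimal sketch; the disk number (1) is hypothetical and depends on how the LUN enumerates on the node:

    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> convert basic

In the list disk output, an asterisk in the Dyn column flags a dynamic disk; convert basic reverts an empty dynamic disk to basic, but all volumes on it must be deleted first.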

All storage drivers must be digitally signed and certified for use with Windows Server 2008 R2. Many storage devices certified for Windows Server 2003 or even Windows Server 2008 might not work with Windows Server 2008 R2 and either simply cannot be used for failover cluster shared storage or might require a firmware and driver upgrade to be supported. One main reason for this is that all failover shared storage must comply with the SCSI-3 Architecture Model (SAM-2). This includes any and all legacy and serial attached SCSI controllers, Fibre Channel host bus adapters, and iSCSI hardware- and software-based initiators and targets. If the cluster attempts to perform an action on a LUN or shared disk and the attempt causes an interruption in communication to the other nodes in the cluster or to any other system connected to the shared storage device, data corruption can occur, and the entire cluster and each storage area network (SAN)-connected system might lose connectivity to the storage.
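
As a quick spot check of driver signing on a node (not a substitute for verifying Windows Server 2008 R2 certification), the built-in driverquery utility can list each driver's signing status; the findstr patterns here are illustrative:

    driverquery /si
    driverquery /si | findstr /i "scsi raid stor"

Any storage controller driver reported as not signed should be replaced with a signed, certified version before the storage is used for failover clustering.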

When LUNs are presented to failover cluster nodes, each LUN must be presented to each node in the cluster. Also, when the shared storage is accessed by the cluster and other systems, the LUNs must be masked or presented only to the cluster nodes and the shared storage device controllers to ensure that no other systems can access or disrupt the cluster communication. There are strict requirements for shared storage support, especially with failover clusters. SANs or other types of shared storage must meet the following requirements and recommendations:


- All Fibre, SAS, and iSCSI host bus adapters (HBAs) and Ethernet cards used with iSCSI software initiators must obtain the "Designed for Microsoft Windows" logo for Windows Server 2008 R2 and have suitable signed device drivers.

- SAS, Fibre, and iSCSI HBAs must use StorPort device drivers to provide targeted LUN resets and other functions inherent to the StorPort driver specification. SCSIport was at one point supported for two-node clusters, but if a StorPort driver is available, it should be used to ensure support from the hardware vendors and Microsoft.

- All shared storage HBAs and back-end storage devices, including iSCSI targets, Fibre, and SAS storage arrays, must support SCSI-3 standards and must also support persistent bindings or reservations of LUNs.


- All shared storage HBAs must be deployed with matching firmware and driver versions. Failover clusters using shared storage require a very stable infrastructure, and applying the latest storage controller driver to outdated HBA firmware can cause undesirable results and might disrupt access to data.

- All nodes in the cluster should contain the same HBAs and use the same version of drivers and firmware. Each cluster node should be an exact duplicate of each other node when it comes to hardware selection, configuration, and driver and firmware revisions; a quick way to compare driver versions across nodes is shown in the sketch following this list. This allows for a more reliable configuration and simplifies management and standardization.

- When iSCSI software initiators are used to connect to iSCSI software- or hardware-based targets, the network adapter used for iSCSI communication should be connected to a dedicated switch, should not be used for any cluster communication, and cannot be a teamed network adapter, as teamed adapters are not supported with iSCSI.
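
The following is a minimal PowerShell sketch for comparing storage controller driver versions across nodes, assuming remote WMI access is permitted; the node names are hypothetical:

    # Compare storage controller drivers on each prospective cluster node.
    $nodes = 'NODE1', 'NODE2'
    foreach ($node in $nodes) {
        Get-WmiObject -Class Win32_PnPSignedDriver -ComputerName $node |
            Where-Object { $_.DeviceClass -eq 'SCSIADAPTER' } |
            Select-Object @{n='Node';e={$node}}, DeviceName, DriverVersion, DriverDate
    }

Any mismatch in DriverVersion or DriverDate between nodes should be resolved, along with the corresponding HBA firmware, before the cluster is validated.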

For Microsoft to officially support failover clusters and shared storage, in addition to the hardware meeting the requirements listed previously, the entire configuration of the server brand and model, local disk configuration, HBA or network card controller firmware and driver versions, iSCSI software initiator software, storage array, and storage array controller firmware or SAN operating system version must be tested as a whole system before it will be considered a "Windows Server 2008 R2 Failover Cluster Supported Configuration." The point to keep in mind is that if a company really wants to consider using failover clusters, it should research and find a suitable solution that meets its budget. If a tested and supported solution cannot be found within its price range, the company should consider alternative solutions that can restore systems within a few minutes or a few hours. The truth is that failover clusters are not for everyone, they are not for the faint of heart, and they are not within every organization's information technology budget from an implementation, training, and support standpoint. Administrators who want to test failover cluster configurations to gain knowledge and experience can leverage several low-cost shared storage alternatives, including the Windows iSCSI initiator and a software-based iSCSI target, but they must remember that the configuration might not be supported by Microsoft if a problem is encountered or data loss results.
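
For such a lab configuration, the built-in iscsicli command line can connect the Windows iSCSI initiator to a software target. A minimal sketch follows; the portal address and target IQN are hypothetical:

    iscsicli QAddTargetPortal 192.168.10.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2009-01.com.example:lab-target-1
    iscsicli SessionList

QAddTargetPortal registers the target portal, ListTargets shows the IQNs it exposes, and QLoginTarget establishes the session, after which the LUN appears in Disk Management as a new disk.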

Serial Attached SCSI (SAS) Storage Arrays

Serial Attached SCSI (SAS) storage arrays provide organizations with affordable, entry-level, hardware-based direct attached storage suitable for Windows Server 2008 R2 clusters. SAS storage arrays are commonly limited to four hosts, but some models support extenders to add additional hosts as required. One of the major issues with direct attached storage is that replication of the data within the storage is usually not achievable without involving one of the host systems or software provided by the hardware vendor.

Fibre Channel Storage Arrays

Using Fibre Channel (FC) HBAs, Windows Server 2008 R2 can access both shared and nonshared disks residing on a SAN connected to a common FC switch. This allows both the shared storage and operating system volumes to be located on the SAN, if desired, to provide diskless servers. In many cases, however, diskless servers might not be desired: if the operating system performs many paging actions, the cache on the storage controllers can be used up very quickly and can cause delays in disk read and write operations for dedicated cluster storage. If diskless servers are desired, the SAN must support this option and be configured to present the operating system's dedicated LUNs to only a single host exclusively. The LUNs defined for shared cluster storage must be zoned and presented to every node in the cluster, and to no other systems. The LUN zoning or masking in many cases is configured on the Fibre Channel switch that connects the cluster nodes and the shared storage device. This is a distinct difference between direct attached storage and FC or iSCSI shared storage: both FC and iSCSI require a common fiber or Ethernet switch and network to establish and maintain connections between the hosts and the storage.

A properly configured FC zone for a cluster includes the World Wide Port Name (WWPN) of each cluster host's FC HBAs and the WWPN of the HBA controller(s) from the shared storage device. If either the server or the storage device utilizes multiple HBAs to connect to one or more FC switches to provide failover or load-balancing functionality, this is known as Multipath I/O (MPIO), and a qualified driver for MPIO management and communication must be used. Also, the specific MPIO failover and/or load-balancing functions in use must be verified as approved for Windows Server 2008 R2. Consult the shared storage vendor, including the Fibre Channel switch vendor, for documentation and supported configurations, and check the cluster Hardware Compatibility List (HCL) on the Microsoft website to find approved configurations.
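
Collecting the WWPNs for a zone from the cluster node side can be done with a short PowerShell sketch, assuming the FC HBA driver implements the standard HBA API WMI classes (most Fibre Channel miniport drivers do):

    # List the WWPN of each local FC port as colon-separated hex pairs.
    Get-WmiObject -Namespace root\WMI -Class MSFC_FibrePortHBAAttributes |
        ForEach-Object {
            ($_.Attributes.PortWWN | ForEach-Object { '{0:X2}' -f $_ }) -join ':'
        }

The storage-side WWPNs come from the array's management interface, and the zone itself is defined on the FC switch.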


iSCSI Storage

When organizations want to utilize iSCSI storage for Windows Server 2008 R2 failover clusters, security and network isolation are highly recommended. iSCSI utilizes an initiator on the host that requires access to the LUNs or iSCSI targets. Targets are located or hosted on iSCSI target portals. Using the target portal interface, the target must be configured to be accessed by multiple initiators in a cluster configuration. Both iSCSI initiators and target portals come in software- and hardware-based models, but both models utilize IP networks for communication between the initiators and the targets. The targets need to be presented to Windows as basic disks. When standard network cards will be used for iSCSI communication on Windows Server 2008 R2 systems, the built-in Windows Server 2008 R2 iSCSI initiator can be used, provided that the iSCSI target supports the authentication and security options selected, if any are used.


Regardless of whether the Microsoft iSCSI initiator or other software- or hardware-based initiators or targets are chosen, iSCSI communication should be deployed on isolated network segments, preferably with dedicated network switches and network interface cards. Furthermore, the LUNs presented to the failover cluster should be masked and secured from any systems that are not nodes participating in the cluster, by using authentication and IPSec communication when possible. Within the Windows Server 2008 R2 operating system, the iSCSI HBA or designated network card should not be used for any cluster communication and cannot be deployed using network teaming software; otherwise, the configuration will not be supported by Microsoft.
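
To confirm that sessions are established and that iSCSI traffic is actually riding the dedicated segment, a quick sketch can help; 3260 is the default iSCSI port, and the addresses shown should belong to the dedicated iSCSI adapter:

    iscsicli SessionList
    netstat -an | findstr ":3260"

Any connection to port 3260 sourced from an adapter carrying cluster or client traffic indicates that the isolation requirement is not being met.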

Hopefully by now, it is very clear that Microsoft only wants to support organizations that deploy failover clusters on tested and approved complete systems. In many cases, however, failover clusters can still be deployed and will function, as the Create a Cluster Wizard allows a cluster to be deployed even when it is not in a supported configuration.

NOTE

When deploying a failover cluster, pay close attention to the results of the Validate a Cluster Wizard to ensure that the system has passed all storage tests and that a supported configuration is deployed.
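
The same validation can be scripted with the FailoverClusters PowerShell module included with Windows Server 2008 R2; the node names here are hypothetical:

    Import-Module FailoverClusters
    # Runs the full validation suite and writes a validation report.
    Test-Cluster -Node NODE1, NODE2

Reviewing the report that Test-Cluster produces before running the Create a Cluster Wizard confirms whether the storage tests passed.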

Multipath I/O

Windows Server 2008 R2 supports Multipath I/O to external storage devices, such as SANs and iSCSI targets, when multiple HBAs are used in the local system or by the shared storage. Multipath I/O can be used to provide failover access to disk storage in case of a controller or HBA failure, and some drivers also support load balancing across HBAs in both standalone and failover cluster deployments. Windows Server 2008 R2 provides a built-in Multipath I/O driver for iSCSI that can be leveraged when the manufacturer conforms to the necessary specifications. The iSCSI initiator built in to Windows Server 2008 R2 is very user friendly and simplifies adding iSCSI targets by reconnecting to new targets by default. Multipath I/O (MPIO) support is also installed by default, which is a change from previous releases of the iSCSI initiator software.
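
A minimal sketch for adding the MPIO feature and reviewing claimed disks follows; mpclaim becomes available once the feature is installed, and a reboot is typically required:

    Import-Module ServerManager
    Add-WindowsFeature Multipath-IO
    # After the reboot, show the disks currently claimed by MPIO:
    mpclaim -s -d

Where the storage vendor requires its own device-specific module (DSM), it should be installed per the vendor's documentation rather than relying on the Microsoft DSM alone.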

Volume Shadow Copy for Shared Storage Volumes

The Volume Shadow Copy Service (VSS) is supported on shared storage volumes. Volume Shadow Copy can take a point-in-time snapshot of an entire volume, enabling administrators and users to recover data from a previous version. Furthermore, failover clusters and the entire Windows Server Backup architecture utilize VSS to store backup data. Many of today's services and applications that are certified to work on Windows Server 2008 R2 failover clusters are VSS compliant; careful consideration should be given when choosing an alternative backup system, unless the system is provided by the shared storage manufacturer and certified to work in conjunction with VSS, Windows Server 2008 R2, and the service or application running on the failover cluster.
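
To verify VSS on a node, the built-in vssadmin utility can be used; a minimal sketch, where S: is a hypothetical shared storage drive letter:

    vssadmin list providers
    vssadmin list shadows /for=S:

The list providers output shows which VSS providers (Microsoft's or the storage vendor's) are registered, and list shadows shows any existing snapshots for the volume.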

Failover Cluster Node Operating System Selection

Windows Server 2008 R2 is available only as a 64-bit operating system, and failover cluster nodes must run either the Enterprise or Datacenter Edition. If any 32-bit services or applications are deployed on a Windows Server 2008 R2 failover cluster, performance of those applications might suffer, and they should be performance tested thoroughly before being placed on production failover clusters. Also, verify that these 32-bit applications are indeed supported on Windows Server 2008 R2 failover clusters and not just on Windows Server 2008 failover clusters or Windows Server 2003 server clusters.
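
A quick sketch for confirming the edition and architecture on each prospective node:

    Get-WmiObject Win32_OperatingSystem | Select-Object Caption, OSArchitecture

Caption should report Enterprise or Datacenter Edition, and OSArchitecture should report 64-bit, on every node.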


Deploying Failover Clusters

The Windows Server 2008 R2 Failover Clustering feature is not installed on a system by default and must be installed before failover clusters can be deployed. Remote management from administrative workstations can be accomplished by using the Remote Server Administration Tools feature, which includes the Failover Cluster Manager snap-in, but the Failover Clustering feature itself needs to be installed on all nodes that will participate in the failover cluster.
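
A minimal sketch of installing the feature from an elevated PowerShell session follows:

    Import-Module ServerManager
    # On each cluster node:
    Add-WindowsFeature Failover-Clustering
    # On a management server that only needs the snap-in:
    Add-WindowsFeature RSAT-Clustering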

Even before installing the Failover Clustering feature, several steps should be taken on
