Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden. Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage are trademarks of Dell Inc.; Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation, and Windows Server is a trademark of Microsoft Corporation.
Contents
• Cabling the Mouse, Keyboard, and Monitor
• Power Cabling for the PowerEdge Cluster SE500W Solution
• Preparing the PERC RAID Adapter for Clustering
• Enabling the Cluster Mode Using the PERC RAID Adapter
• Setting the SCSI Host Adapter IDs
• Configuring and Managing Virtual Disks
• Windows 2000 and Windows Server 2003 Dynamic Disks and Volumes
• Naming and Formatting Drives on the Shared Storage System
• Creating the Quorum Resource
• Configuring Cluster Networks Running Windows 2000
• Configuring Cluster Networks Running Windows Server 2003
• Installing and Configuring Microsoft Windows 2000
• Configurations Using Non-Dell Products
• Completing the Upgrade
• Replacing a Cluster-Enabled Dell PERC RAID Adapter
• Replacing a Cluster Node
• Changing the Cluster Service Account Password in Windows Server 2003
Intended Audience This guide was developed for experienced IT professionals who need to install, cable, and configure a PowerEdge Cluster SE500W solution in an enterprise environment and for trained service technicians who perform cluster upgrade and maintenance procedures. Obtaining More Information See "Obtaining Technical Assistance"...
Virtual Servers and Resource Groups In a cluster environment, you do not access a physical server; you access a virtual server, which is managed by MSCS. Each virtual server has its own IP address, name, and hard drive(s) in the shared storage system.
See "Quorum Disk (Quorum Resource)" and the MSCS online documentation for more information. NOTE: PowerEdge Cluster SE500W solutions do not support the Majority Node Set (MNS) Quorum resource type. Shared Storage Systems Cluster nodes can share access to external storage systems; however, only one of the nodes can own any RAID volume in the external storage system at any time.
NOTE: MSCS and Network Load Balancing (NLB) features cannot coexist on the same node, but can be used together in a multitiered cluster. For more information, see the Dell PowerEdge Clusters website at www.dell.com/ha or the Microsoft website at www.microsoft.com.
PERC 4/DC or PERC 4e/DC adapter(s) for the cluster’s shared storage. NOTE: The PowerEdge Cluster SE500W supports up to two PERC 4/DC or PERC 4e/DC adapters in a single cluster node. Dell does not support use of PERC 4/DC and PERC 4e/DC adapters together in the PowerEdge Cluster SE500W solution.
Figure 1-1 shows a sample configuration of the PowerEdge Cluster SE500W components and their interconnections. See the Dell PowerEdge Cluster SE500W Platform Guide for system-specific configuration information. Figure 1-1. Maximum Configuration of the PowerEdge Cluster SE500W Solution PowerEdge systems (2)
The network adapters installed in each cluster node must be identical and supported by the server platform. Cluster storage PowerEdge Cluster SE500W configurations support up to four PowerVault 22xS storage systems per cluster. Dell strongly recommends that you use hardware-based RAID or...
The Product Information Guide provides important safety and regulatory information. Warranty information may be included within this document or as a separate document. • The Dell PowerEdge Cluster SE500W Systems Platform Guide provides information about the systems that support the PowerEdge Cluster SE500W configuration. •...
Cabling Your Cluster Hardware Dell™ PowerEdge™ Cluster SE500W configurations require cabling for the storage systems, cluster interconnects, client network connections, and power connections. Cabling for the Cluster SE500W Solution The cluster systems and components are interconnected to provide four independent functions as listed in Table 2-1, each of which is described in more detail throughout this section.
Cabling One PowerVault 22xS Shared Storage System to a Cluster SE500W NOTE: See "Configuring the PowerVault 22xS Storage System for Cluster Mode" for more information about configuring the storage systems. NOTICE: Do not turn on the systems or the storage system(s) until the split-bus module on the back of the PowerVault system has been set to cluster mode and all cabling is complete.
3 Connect the VHDCI connector of the second SCSI cable (see Figure 2-2) to the channel 0 connector on the cluster-enabled PERC RAID adapter in the second PowerEdge system, and then tighten the retaining screws. 4 Connect the SCSI connector B (see Figure 2-1) on the back of the PowerVault 22xS storage system to the 68-pin connector on the second SCSI cable (see Figure 2-2), and tighten the retaining screws.
"Cabling One PowerVault 22xS Shared Storage System to a Cluster SE500W." Repeat the process for channel 1 on the controller in each node using a second PowerVault 22xS storage system. See Figure 2-3.
NOTICE: If you have dual storage systems that are attached to a second controller, Dell supports disk mirroring between channels on the second controller. However, Dell does not support mirroring disks on one cluster-enabled PERC RAID adapter to disks on another cluster-enabled PERC RAID adapter.
Figure 2-4 shows an example of network adapter cabling in which dedicated network adapters in each node are connected to the public network and the remaining network adapters are connected to each other (for the private network). Figure 2-4. Example of Network Cabling Connection public network adapter Cabling Your Public Network...
Ethernet cable in a point-to-point connection can impact node-to-node communications. See Microsoft Knowledge Base articles 239924, 242430, 254651, and 258750 at www.microsoft.com for more information. This issue has been corrected in Windows Server 2003.

NIC Teaming
Network Interface Card (NIC) teaming combines two or more NICs to provide load balancing and/or fault tolerance.
Cabling the Mouse, Keyboard, and Monitor
If you are installing a PowerEdge Cluster SE500W configuration in a Dell rack, your cluster requires a switch box to enable the mouse, keyboard, and monitor for your cluster nodes. See the rack installation documentation included with your rack for instructions on cabling each cluster node's Keyboard Video Mouse (KVM) connections to the mouse/keyboard/monitor switch box.
Figure 2-5. Power Cabling Example With Three Power Supplies in the Systems (primary power supplies on two AC power strips, or on two AC PDUs [not shown])

CAUTION: The arrangement of the cluster components in this illustration is intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.
Figure 2-6. Power Cabling Example With One Power Supply in the Systems (primary power supplies on one AC power strip, or on one AC PDU [not shown])

CAUTION: The arrangement of the cluster components in this illustration is intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.
Figure 2-7. Power Cabling Example With Two Power Supplies in the Systems (primary power supplies on one AC power strip, or on one AC PDU [not shown])

CAUTION: The arrangement of the cluster components in this illustration is intended only to demonstrate the power distribution of the components. Do not stack components as in the configuration shown.
Any additional peripheral components • RAID controllers for internal drives (optional) 3 Ensure that the following components are installed in each Dell PowerVault™ 22xS system in the cluster. See "Installing and Configuring the Shared Storage System." • Two enclosure management modules (EMMs) •...
7 Install and configure the storage management software. See the documentation included with your Array Manager software or available at the Dell Support website (located at support.dell.com) for more information. 8 Configure the hard drives on the shared storage system(s).
Windows operating system. NOTE: Dell strongly recommends that you use the "PowerEdge Cluster SE500W Solution Data Form" during the installation of your cluster to ensure that all installation steps are completed. The data form is located in "Cluster Data Form."...
See the documentation for your specific RAID controller for more information on RAID configurations. NOTE: If you are not going to use a hardware-based RAID controller, Dell recommends using the Windows Disk Management tool or Dell OpenManage Array Manager or Dell OMSM to provide software-based redundancy for the Windows system partitions.
8 Shut down both nodes and connect each node to shared storage. See "Cabling Your Cluster Hardware." 9 Turn on one node and configure shared storage using Dell Storage Management or the PERC RAID adapter BIOS utility. See "Installing and Configuring the Shared Storage System."...
Microsoft SQL Server, Enterprise Edition requires at least one static IP address for the virtual server. (Microsoft SQL Server does not use the cluster's IP address.) Also, each IIS Virtual Root or IIS Server instance configured for failover needs a unique static IP address.
Windows Server 2003.

Configuring IP Addresses for the Private Network
Dell recommends using static IP address assignments for the network adapters used for the private network (cluster interconnect). The IP addresses in Table 3-2 are used as examples only.
NOTE: Dell recommends that you do not configure Default Gateway, NetBIOS, WINS, and DNS on your private network. If you are running Windows 2000 Advanced Server or Windows Server 2003, disable NetBIOS on your private network. If multiple cluster interconnect network adapters are connected to a network switch, ensure that each of the private network's adapters has a unique address.
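As a sanity check on a private-network plan like the example in Table 3-2, the uniqueness and same-subnet requirements can be sketched in Python. The helper function and addresses below are illustrative only and are not part of any Dell or Microsoft tooling:

```python
import ipaddress

def check_private_network(addresses, netmask="255.255.255.0"):
    """Validate an example private-network (cluster interconnect) plan:
    every adapter address must be unique, and all addresses must fall
    in a single subnet."""
    if len(set(addresses)) != len(addresses):
        raise ValueError("duplicate IP address on the private network")
    nets = {ipaddress.ip_interface(f"{a}/{netmask}").network for a in addresses}
    if len(nets) != 1:
        raise ValueError("private-network adapters are not on the same subnet")
    return nets.pop()

# Example addresses in the spirit of Table 3-2 (illustrative only).
net = check_private_network(["10.0.0.1", "10.0.0.2"])
print(net)  # 10.0.0.0/24
```

A duplicate address, such as assigning 10.0.0.1 to both adapters, raises an error instead of silently passing.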
2 At the prompt, type: ipconfig /all 3 Press <Enter>. All known IP addresses for each local server appear on the screen. 4 Issue the ping command from each remote system. Ensure that each local server responds to the ping command.
If a PERC RAID adapter driver CD was not shipped with your system, go to the Dell Support website at support.dell.com to download the latest Windows driver for the PERC RAID adapter.
RAID level you want to use, the number of hard drives installed in your system, and the number of application programs you want to run in your cluster environment. For information about installing hard drives in the PowerVault 22xS storage system, see the Dell PowerVault 220S and 221S System Installation and Troubleshooting Guide.
13 available hard drives in cluster mode. For more information about SCSI ID assignments and cluster mode operation, see your Dell PowerVault 220S and 221S Systems Installation and Troubleshooting Guide. See Table 3-3 for a description of the split-bus module modes and functions.
Table 3-3. Split-bus Module Modes and Functions

Mode | Function
Joined-bus mode | The enclosure operates as a single SCSI bus.
Split-bus mode | The enclosure operates as two independent SCSI buses.
Cluster mode | The enclosure operates as a single SCSI bus shared by two host adapters, with one SCSI ID reserved for the second adapter (up to 13 hard drives available).

The split-bus module has only one LED indicator (see Figure 3-1 for location), which is illuminated when the module is receiving power.

Enclosure Management Module (EMM)
The EMM serves two primary functions in your storage system:
•
This operation may change the configuration of disks and can cause loss of data! Ensure: 1.Peer server is powered up for its controller NVRAM to be updated. Otherwise, disk configuration should be read from disk and saved to controller's NVRAM.
This warning message alerts you to the possibility of data loss if certain precautions are not taken to protect the integrity of the data on your cluster. NOTICE: To prevent data loss, your cluster must meet the conditions in the following bulleted list before you attempt any data-destructive operation on your shared hard drives.
NOTE: Dell recommends that you use a RAID level other than RAID 0 (which is commonly called striping). RAID 0 configurations provide very high performance, but do not provide the level of availability required for the quorum resource. See the documentation for your storage system for more information about setting up RAID levels for the system.
Drive letters A through D are reserved for the local system. The number of drive letters required by individual servers in a cluster may vary. Dell recommends that the shared drives be named in reverse alphabetical order beginning with the letter z.
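The reverse-alphabetical naming convention above can be sketched as follows; the helper function is hypothetical and for illustration only:

```python
import string

RESERVED = set("ABCD")  # drive letters A through D are reserved for the local system

def shared_drive_letters(count):
    """Assign shared-drive letters in reverse alphabetical order,
    beginning with Z, skipping the letters reserved for the local system."""
    letters = [c for c in reversed(string.ascii_uppercase) if c not in RESERVED]
    if count > len(letters):
        raise ValueError("not enough drive letters available")
    return letters[:count]

print(shared_drive_letters(4))  # ['Z', 'Y', 'X', 'W']
```

Starting at Z keeps shared-disk letters well clear of local drives, which Windows assigns from the start of the alphabet.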
5 In the dialog box, create a partition the size of the entire drive (the default) and then click OK. NOTE: The MSCS software allows only one node to access a logical drive at a time. If a logical drive is partitioned into multiple disks, only one node is able to access all the partitions for that logical drive.
NOTE: In Windows Server 2003, mapping a network drive to the same drive letter as a cluster disk resource renders the cluster disk inaccessible from Windows Explorer on the host. Ensure that mapped network drives and cluster disks are never assigned the same drive letter.
To properly identify the quorum resource, Dell recommends that you assign the drive letter Q to the quorum resource partition. Dell does not recommend using the remainder of the virtual disk for other cluster resources. If you do use the space for cluster resources, be aware that when you create two volumes (partitions) on a single virtual disk, they will both fail over together if a server fails.
Selecting Cluster Service copies the required files to the system. If your operating system was preinstalled by Dell, or if you did not select Cluster Service when you installed the operating system, you can install the Cluster Service later by using Add/Remove Windows Components in the Control Panel.
In the Action box of the Open Connection to Cluster dialog box, select Add nodes to cluster. In the Cluster or server name box, type the name of the cluster, or click Browse to select an available cluster from the list, and then click OK.
Adding Cluster Nodes Using the Advanced Configuration Option If you are adding additional nodes to the cluster using the Add Nodes Wizard and the nodes are not configured with identical internal storage devices, the wizard may generate one or more errors while checking cluster feasibility in the Analyzing Configuration menu.
The Cluster Group contains a network name and an IP address resource, which are used to manage the cluster. Because the Cluster Group is dedicated to cluster management, and for best cluster performance, Dell recommends that you do not install applications in this group.
In general, if a resource is offline, it can be brought online by right-clicking the resource and selecting Bring Online from the pull-down menu. See the documentation and online help for Windows 2000 Advanced Server or Windows Server 2003 for information about troubleshooting resource failures.
The Windows 2000 Administration Tools can only be installed on systems running Windows 2000. Similarly, the Windows Server 2003 Administration Tools can only be installed on systems running Windows XP (with Service Pack 1 or later) or Windows Server 2003.
When you use the Cluster Administrator provided with the Windows NT 4.0 operating system on a system running Windows NT 4.0, Cluster Administrator may generate error messages if the software detects Windows 2000 or Windows Server 2003 cluster resources. Dell strongly recommends using client systems running Windows 2000 or Windows Server 2003 with the appropriate Administrator Pack for cluster administration and monitoring.
Using MSCS
This section provides an overview of Microsoft Cluster Service (MSCS) and information about the following:
• Cluster objects
• Cluster networks
• Network interfaces
• Cluster nodes
• Groups
• Cluster resources
• Failover and failback
For information about specific MSCS procedures, see the MSCS online help.
This tracking system allows you to view the state of all cluster network interfaces from a cluster management application, such as Cluster Administrator. Cluster Nodes A cluster node is a system in a server cluster that has a working installation of the Windows operating system and the Cluster Service. Cluster nodes have the following characteristics: •...
• Every node in the cluster is aware of another system joining or leaving the cluster. • Every node in the cluster is aware of the resources that are running on all nodes in the cluster. • All nodes in the cluster are grouped under a common cluster name, which is used when accessing and managing the cluster.
• Can be brought online and taken offline
• Can be managed in a server cluster
• Can be hosted (owned) by only one node at a time
To manage resources, the Cluster Service communicates with a resource dynamic-link library (DLL) through a Resource Monitor.
Dependent Resources A dependent resource requires—or depends on—another resource to operate. For example, if a Generic Application resource requires access to clustered physical storage, it would depend on a physical disk resource. A resource can specify one or more resources on which it is dependent; it can also specify a list of nodes on which it is able to run.
Configuring Resource Dependencies Groups function properly only if resource dependencies are configured correctly. The Cluster Service uses the dependencies list when bringing resources online and offline. For example, if a group in which a physical disk and a file share are located is brought online, the physical disk containing the file share must be brought online before the file share.
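The online-ordering rule described above amounts to a dependency-respecting sort: each resource is brought online only after everything it depends on. A minimal Python sketch, with hypothetical resource names:

```python
def online_order(resources, depends_on):
    """Return an order for bringing a group's resources online such that
    every resource comes after the resources it depends on (for example,
    a physical disk before the file share stored on it)."""
    order, seen = [], set()

    def visit(res, path=()):
        if res in path:
            raise ValueError(f"circular dependency involving {res}")
        if res in seen:
            return
        for dep in depends_on.get(res, []):
            visit(dep, path + (res,))
        seen.add(res)
        order.append(res)

    for res in resources:
        visit(res)
    return order

deps = {"File Share": ["Physical Disk"], "Network Name": ["IP Address"]}
print(online_order(["File Share", "Network Name", "Physical Disk", "IP Address"], deps))
# ['Physical Disk', 'File Share', 'IP Address', 'Network Name']
```

Taking a group offline simply reverses this order, which is why a misconfigured dependencies list can leave resources stuck.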
Resource Parameters
The Parameters tab in the Properties dialog box is available for most resources. Table 5-3 lists each resource and its configurable parameters.

Table 5-3. Resources and Configurable Parameters

Resource: File share
Configurable parameters:
• Share permissions and number of simultaneous users
• Share name (clients will detect the name in their browse or explore lists)
• Share comment
• Shared file path
Using the Quorum Disk for Cluster Integrity The quorum disk is also used to ensure cluster integrity by performing the following functions: • Maintaining the cluster node database • Ensuring cluster unity When a node joins or forms a cluster, the Cluster Service must update the node's private copy of the cluster database.
Adjusting the Threshold and Period Values If the resource DLL reports that the resource is not operational, the Cluster Service attempts to restart the resource. You can specify the number of times the Cluster Service can attempt to restart a resource in a given time interval. If the Cluster Service exceeds the maximum number of restart attempts (Threshold value) within the specified time period (Period value), and the resource is still not operational, the Cluster Service considers the resource to be failed.
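The Threshold/Period decision can be modeled as a sliding-window count over recent restart attempts. The function and values below are illustrative only; the actual behavior is governed by the Cluster Service:

```python
def resource_state(restart_times, now, threshold, period):
    """Model the restart policy described above: if the number of restart
    attempts within the last `period` seconds exceeds `threshold`, the
    Cluster Service considers the resource failed rather than restarting it."""
    recent = [t for t in restart_times if now - t <= period]
    return "failed" if len(recent) > threshold else "restarting"

# Hypothetical policy: a Threshold of 3 restarts within a 900-second Period.
print(resource_state([10, 200, 400, 850], now=900, threshold=3, period=900))  # failed
print(resource_state([10, 850], now=900, threshold=3, period=900))  # restarting
```

Raising the Threshold or shortening the Period makes the policy more tolerant of transient faults; lowering them fails the resource over sooner.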
Share subdirectories — Publishes several network names—one for each file folder and all of its immediate subfolders. This method is an efficient way to create large numbers of related file shares on a single file server. For example, you can create a file share for each user with files on the cluster node.
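The share-subdirectories method above can be illustrated with a short Python sketch that derives one share name per immediate subfolder; the folder layout and helper function are hypothetical:

```python
import os
import tempfile

def subdirectory_shares(root):
    """Sketch of the "share subdirectories" approach described above:
    derive one share per immediate subfolder of a root folder."""
    shares = {}
    for entry in sorted(os.scandir(root), key=lambda e: e.name):
        if entry.is_dir():
            shares[entry.name] = entry.path  # share name -> shared file path
    return shares

# Hypothetical layout: one folder per user with files on the cluster node.
root = tempfile.mkdtemp(prefix="users_")
for user in ("alice", "bob"):
    os.mkdir(os.path.join(root, user))
shares = subdirectory_shares(root)
print(sorted(shares))  # ['alice', 'bob']
```

This is why the method scales: adding a user folder yields a new published name without defining each file-share resource by hand.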
Configuring Active and Passive Cluster Nodes Active nodes process application requests and provide client services. Passive nodes are backup nodes that ensure that client applications and services are available if a hardware or software failure occurs. Cluster configurations may include both active and passive nodes. NOTE: Passive nodes must be configured with appropriate processing power and storage capacity to support the resources that are running on the active nodes.
After failover, the Cluster Administrator can reset the following recovery policies: • Application dependencies • Application restart on the same cluster node • Workload rebalancing (or failback) when a failed cluster node is repaired and brought back online Failover Process The Cluster Service attempts to fail over a group when any of the following conditions occur: •...
The Cluster Service continues trying to fail over a group until it succeeds or until the maximum number of attempts within a predetermined time span has been exceeded. A group's failover policy specifies the maximum number of failover attempts that can occur in an interval of time. The Cluster Service discontinues the failover process when it exceeds the number of attempts specified in the group's failover policy.
• "Cabling Your Cluster Hardware"
• "Preparing Your Systems for Clustering"

Dell certifies and supports only Cluster SE500W solutions that are configured with the Dell products described in this guide. For a description of the PowerEdge cluster components, see the Platform Guide.
After you install the required hardware and network adapter upgrades, you can set up and cable the system hardware. The final phase for upgrading to a Cluster SE500W solution is to install and configure Windows 2000 Advanced Server, or Windows Server 2003 with MSCS.
11 Go to "Upgrading Node 2." NOTE: After you upgrade node 1, your cluster is running two separate operating systems. Dell recommends that you do not modify your cluster configuration—such as adding or removing cluster nodes or resources—until you upgrade both cluster nodes.
The cluster group is moved and restarted on node 1. 5 Repeat step 4 for the remaining cluster groups. 6 Insert the Microsoft Windows Server 2003 Enterprise Edition CD into the CD drive. 7 Double-click Install Windows Server 2003 Enterprise Edition.
• Replacing a cluster node

Adding a Network Adapter to a Cluster Node
This procedure assumes that Windows 2000 Advanced Server, or Windows Server 2003 with the latest Windows Service Pack, and MSCS are installed on both cluster nodes.
NOTE: The IP addresses used in the following sections are examples only and do not represent actual addresses to use.
3 Boot to the Windows operating system. Windows Plug and Play detects the new network adapter and installs the appropriate drivers. NOTE: If Plug and Play does not detect the new network adapter, the network adapter is not supported. Update the network adapter drivers (if required). After the drivers are installed, click the Start button, select Control Panel, and then double-click Network Connections.
Uninstalling MSCS From Clusters Running Windows 2000 Advanced Server 1 Take all resource groups offline or move them to another cluster node. 2 Stop Cluster Service on the node that you want to uninstall.
Removing Nodes From Clusters Running Windows Server 2003 1 Take all resource groups offline or move them to another cluster node. 2 Click the Start button, select Programs→ Administrative Tools, and then double-click Cluster Administrator. 3 In Cluster Administrator, right-click the icon of the node you want to uninstall and then select Stop Cluster Service.
1 Open a command prompt window. 2 Select the cluster folder directory by typing one of the following: cd \2000\cluster (for Windows 2000 Advanced Server), or cd \windows\cluster (for Windows Server 2003) 3 Start the cluster in manual mode (on one node only) with no quorum logging by typing...
3 Install the correct network adapter drivers, assign the appropriate IP addresses, and install the PERC RAID adapter driver. 4 Shut down the replacement node. 5 Connect the SCSI cables from each PERC RAID adapter to the Dell™ PowerVault™ 22xS storage system. Maintaining Your Cluster...
11 Use Cluster Administrator to verify that the node rejoins the cluster, and check the Windows Event Viewer to ensure that errors were not encountered.
12 Reinstall any cluster applications (such as Microsoft SQL Server or Exchange Server) onto the new node, if required.
Reformatting a Cluster Disk
NOTE: Ensure that all client systems are disconnected from the cluster disk before you perform this procedure.
1 Click the Start button and select Programs→ Administrative Tools→ Cluster Administrator.
2 In the Cluster Administrator left window pane, expand the Groups directory.
3 In the Groups directory, right-click a cluster resource group that contains the disk to be reformatted, and select Take Offline.
Adding New Physical Drives to an Existing Shared Storage System
The Dell™ PowerEdge™ Cluster SE500W solution consists of two systems that share an external SCSI storage enclosure (a PowerVault 22xS storage system). Each system contains a PERC RAID adapter with cluster-enabled firmware. The following procedure describes how to add storage to an existing shared storage system in the cluster configuration.
Rebuilding Operation in Dell OpenManage Utilities For the rebuild operation, see your Dell OpenManage™ Array Manager or Dell OMSM documentation. If the cluster node is rebooted or power to the node is lost while a PERC RAID adapter is rebuilding a shared array, the controller terminates the rebuild operation and identifies the hard drive as failed.
9 Verify that the selected file is correct. 10 Click Download Firmware to begin the download process. This process takes several minutes to complete. 11 When the message Firmware Downloaded Successfully appears, click OK. 12 Repeat steps 3 through 9 for each channel that has an enclosure attached. 13 To verify the firmware upgrade for each channel, right-click the channel number, select Properties, and view the version information.
Troubleshooting
This appendix provides troubleshooting information for Dell™ PowerEdge™ Cluster SE500W configurations. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Table A-1. General Cluster Troubleshooting
Problem: The RAID drives in the Dell PowerVault™ 22xS storage system ...
Probable cause: The SCSI cables are loose or ...
Table A-1. General Cluster Troubleshooting (continued)
Probable causes:
• Enclosure management modules (EMMs) are not installed.
• The PERC RAID adapter drivers are not installed in your Microsoft operating system.

Problem: The option to change the SCSI IDs is not visible in the PERC 3/DC BIOS.
Probable cause: Cluster mode is not enabled. Enabling cluster mode will permit you to ...
Table A-1. General Cluster Troubleshooting (continued)
Problem: You are prompted to configure one network instead of two during MSCS installation.
Probable causes: The TCP/IP configuration is incorrect; the private (point-to-point) network is disconnected.

Problem: Client systems are dropping off of the network while the cluster is failing over.
Probable cause: With MSCS, the service provided by the recovery is ...
For more information, see KB883398 at support.microsoft.com. Dell strongly recommends that you use Windows 2000 Professional, Windows 2000 Server, or Windows 2000 Advanced Server for remote administration of a cluster running Windows 2000 Advanced Server.
NOTE: For support and troubleshooting information for Dell PowerVault™ 220S and 221S systems, refer to support.dell.com/support/edocs/stor-sys/spv22xs/. For support and troubleshooting information for PERC cards, refer to support.dell.com/support/edocs/storage/RAID/.
Abbreviations and Acronyms
A: ampere(s)
AC: alternating current
ACM: advanced cooling module
API: application programming interface
BBS: bulletin board service
BDC: backup domain controller
BIOS: basic input/output system
bps: bits per second
BTU: British thermal unit
C: Celsius
cm: centimeter(s)
DC: direct current
DFS: distributed file system
DHCP: dynamic host configuration protocol
DLL: dynamic link library
DNS: domain naming system
ESD: electrostatic discharge
Gb/s: gigabits per second
GUI: graphical user interface
HBA: host bus adapter
HSSDC: high-speed serial data connector
HVD: high-voltage differential
Hz: hertz
ID: identification
IIS: Internet Information Server
I/O: input/output
IP: Internet Protocol
Kb: kilobit(s)
KB: kilobyte(s)
KVM: Keyboard Video Mouse
lb: pound(s)
LAN: local area network
LED: light-emitting diode
NLB: Network Load Balancing
NTFS: NT File System
NVRAM: nonvolatile random-access memory
OMSM: OpenManage Enhanced Storage Management
PAE: physical address extension
PCB: printed circuit board
PDC: primary domain controller
PDU: power distribution unit
PERC: PowerEdge™ Expandable RAID Controller
PERC 4/DC: PERC fourth generation, dual channel
PERC 4e/DC: PERC fourth generation express, dual channel
PCI: Peripheral Component Interconnect
Make a copy of the appropriate section of the data form to use for the installation or upgrade, complete the requested information on the form, and have the completed form available if you need to call Dell for technical assistance. If you have more than one cluster, complete a copy of the form for each cluster.
Figures
• Figure 1-1. Maximum Configuration of the PowerEdge Cluster SE500W Solution
• Figure 2-1. PowerVault 22xS Back Panel
• Figure 2-2. Cabling a Clustered System With One PowerVault 22xS Storage System
• Figure 2-3. Cabling Two PowerVault 22xS Storage Systems to a PERC RAID Adapter
• Figure 2-4. Example of Network Cabling

Tables
• Table 1-1. Windows Operating System Features
• Table 1-2. Cluster Storage Requirements
• Table 1-3. Cluster Node Requirements
• Table 2-1. Cluster Cabling Components
• Table 2-2. Network Connections
• Table 2-3. Private Network Hardware Components and Connections
• Table 3-1. Applications and Hardware Requiring IP Address Assignments
• Table 3-2.