HP Integrity NonStop BladeSystem Planning

Guide

HP Part Number: 545740-002

Published: May 2008

Edition: J06.03 and subsequent J-series RVUs

© Copyright 2008 Hewlett-Packard Development Company, L.P.

Legal Notice

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Export of the information contained in this publication may require authorization from the U.S. Department of Commerce.

Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft Corporation.

Intel, Pentium, and Celeron are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Java is a U.S. trademark of Sun Microsystems, Inc.

Motif, OSF/1, UNIX, X/Open, and the "X" device are registered trademarks, and IT DialTone and The Open Group are trademarks of The Open Group in the U.S. and other countries.

Open Software Foundation, OSF, the OSF logo, OSF/1, OSF/Motif, and Motif are trademarks of the Open Software Foundation, Inc.

OSF MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THE OSF MATERIAL PROVIDED HEREIN, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

OSF shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.

© 1990, 1991, 1992, 1993 Open Software Foundation, Inc. The OSF documentation and the OSF software to which it relates are derived in part from materials supplied by the following:

© 1987, 1988, 1989 Carnegie-Mellon University. © 1989, 1990, 1991 Digital Equipment Corporation. © 1985, 1988, 1989, 1990 Encore Computer Corporation. © 1988 Free Software Foundation, Inc. © 1987, 1988, 1989, 1990, 1991 Hewlett-Packard Company. © 1985, 1987, 1988, 1989, 1990, 1991, 1992 International Business Machines Corporation. © 1988, 1989 Massachusetts Institute of Technology. © 1988, 1989, 1990 Mentat Inc. © 1988 Microsoft Corporation. © 1987, 1988, 1989, 1990, 1991, 1992 SecureWare, Inc. © 1990, 1991 Siemens Nixdorf Informationssysteme AG. © 1986, 1989, 1996, 1997 Sun Microsystems, Inc. © 1989, 1990, 1991 Transarc Corporation.

OSF software and documentation are based in part on the Fourth Berkeley Software Distribution under license from The Regents of the University of California. OSF acknowledges the following individuals and institutions for their role in its development: Kenneth C.R.C. Arnold, Gregory S. Couch, Conrad C. Huang, Ed James, Symmetric Computer Systems, Robert Elz. © 1980, 1981, 1982, 1983, 1985, 1986, 1987, 1988, 1989 Regents of the University of California.


About This Document

This guide describes the HP Integrity NonStop™ BladeSystem and provides examples of system configurations to assist you in planning for installation of a new HP Integrity NonStop™ NB50000c BladeSystem.

Supported Release Version Updates (RVUs)

This publication supports J06.03 and all subsequent J-series RVUs until otherwise indicated in a replacement publication.

Intended Audience

This guide is written for those responsible for planning the installation, configuration, and maintenance of a NonStop BladeSystem and the software environment at a particular site. Appropriate personnel must have completed HP training courses on system support for NonStop BladeSystems.

New and Changed Information in This Edition

This is a new manual.

Document Organization

Notation Conventions

General Syntax Notation

This list summarizes the notation conventions for syntax presentation in this manual.

{ } Braces. A group of items enclosed in braces is a list from which you are required to choose one item. The items in the list can be arranged either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines. For example:

LISTOPENS PROCESS { $appl-mgr-name }

{ $process-name }

ALLOWSU { ON | OFF }


HP Encourages Your Comments

HP encourages your comments concerning this document. We are committed to providing documentation that meets your needs. Send any errors found, suggestions for improvement, or compliments to:

pubs.comments@hp.com

Include the document title, part number, and any comment, error found, or suggestion for improvement you have concerning this document.


1 NonStop BladeSystem Overview

NOTE: This document describes products and features that are not yet available on systems running J-series RVUs. These products and features include:

• CLuster I/O Modules (CLIMs)
• The Cluster I/O Protocols (CIP) subsystem
• Serial attached SCSI (SAS) disk drives and their enclosures

The Integrity NonStop BladeSystem provides an integrated infrastructure with consolidated server, network, storage, power, and management capabilities. The NonStop BladeSystem implements the BladeSystem c-Class architecture and is optimized for enterprise data center applications. The NonStop NB50000c BladeSystem is introduced as part of the J06.03 RVU.

NonStop NB50000c BladeSystem

The NonStop NB50000c BladeSystem combines the NonStop operating system and HP Integrity NonStop BL860c Server Blades in a single footprint as part of the "NonStop Multicore Architecture (NSMA)" (page 16).

The characteristics of an Integrity NonStop NB50000c BladeSystem are:

1. When CLIMs are also included in the configuration, the maximum number of IOAMs might be smaller. Check with your HP representative to determine your system's maximum for IOAMs.


Figure 1-1, "Example of a NonStop NB50000c BladeSystem," shows the front view of an example NonStop NB50000c BladeSystem with eight server blades in a 42U modular cabinet with the optional HP R12000/3 UPS and the HP AF434A extended runtime module (ERM).

Figure 1-1 Example of a NonStop NB50000c BladeSystem

NonStop Multicore Architecture (NSMA)

The NonStop BladeSystem employs the HP NonStop Multicore Architecture (NSMA) to achieve full software fault tolerance by running the NonStop operating system on NonStop Server Blades. With the NSMA's multicore microprocessor architecture, a set of cores consisting of instruction processing units (IPUs) shares the same memory map (except in low-level software). The NSMA extends the traditional NonStop logical processor to a multiprocessor and includes:

• No hardware lockstep checking
• Itanium fault detection
• High-end scalability
• Application virtualization
• Cluster programming transparency

The NonStop NB50000c BladeSystem can be configured with 2 to 16 processors, communicates with other NonStop BladeSystems using Expand, and achieves ServerNet connectivity using a ServerNet mezzanine PCI Express (PCIe) interface card installed in the server blade.

NonStop NB50000c BladeSystem Hardware

A large number of enclosure combinations is possible within the modular cabinets of a NonStop NB50000c BladeSystem. The applications and purpose of any NonStop BladeSystem determine the number and combinations of hardware within the cabinet.

Standard hardware for a NonStop BladeSystem includes:

• "c7000 Enclosure"
• "NonStop Server Blade" (page 19)
• "Storage CLuster I/O Module (CLIM)" (page 19)
• "SAS Disk Enclosure" (page 20)
• "IP CLuster I/O Module (CLIM)" (page 19)
• "IOAM Enclosure" (page 20)
• "Fibre Channel Disk Module (FCDM)" (page 20)
• "Maintenance Switch" (page 20)
• "System Console" (page 21)

Optional Hardware for a NonStop BladeSystem includes:

• "UPS and ERM (Optional)" (page 21)
• "Enterprise Storage System (Optional)" (page 22)
• "Tape Drive and Interface Hardware (Optional)" (page 23)

All NonStop BladeSystem components are field-replaceable units that can only be serviced by service providers trained by HP.

Because many enclosure combinations are possible, calculate the total power consumption, heat dissipation, and weight of each modular cabinet based on the hardware configuration that you order from HP. For site preparation specifications for the modular cabinets and the individual enclosures, see Chapter 3 (page 37).

c7000 Enclosure

The three-phase c7000 enclosure provides integrated processing, power, and cooling capabilities along with connections to the I/O infrastructure. The c7000 enclosure features include:

• Up to 8 NonStop Server Blades per c7000 enclosure, populated in pairs

• Two Onboard Administrator (OA) management modules that provide detection, identification, management, and control services for the NonStop BladeSystem.
• The HP Insight Display, which provides information about the health and operation of the enclosure. The HP Insight Display is the visual interface located at the bottom front of the OA; for more information, see the HP BladeSystem Onboard Administrator User Guide.
• Two Interconnect Ethernet switches that download Halted State Services (HSS) bootcode via the maintenance LAN.
• Two ServerNet switches that provide ServerNet connectivity between processors, between processors and I/O, and between systems (through connections to cluster switches). There are two types of ServerNet switches: Standard I/O and High I/O.
• Six power supplies that implement Dynamic Power Saving Mode. This mode is enabled by the OA module; when enabled, it monitors the total power consumed by the c7000 enclosure in real time and automatically adjusts to changes in power demand.
• Ten Active Cool fans that use the parallel, redundant, scalable, enclosure-based cooling (PARSEC) architecture, in which fresh, cool air flows over all the blades (in the front of the enclosure) and all the interconnect modules (in the back of the enclosure).

Figure 1-2 shows all of these c7000 features, except the HP Insight Display:

Figure 1-2 c7000 Enclosure Features

For information about the LEDs associated with the c7000 enclosure components, see the HP BladeSystem c7000 Enclosure Setup and Installation Guide.


NonStop Server Blade

The NonStop BL860c Server Blade is a two-socket, full-height server blade featuring the Intel® Itanium® dual-core processor. Each server blade contains a ServerNet interface mezzanine card with PCI Express x4 to PCI-X bridge connections to provide ServerNet fabric connectivity. Other features include four integrated Gigabit Ethernet ports for redundant network boot paths and 12 DIMM slots providing a maximum of 48 GB of memory per server blade.

IP CLuster I/O Module (CLIM)

The IP CLIM is a rack-mounted server that is part of some NonStop BladeSystem configurations. The IP CLIM functions as a ServerNet Ethernet adapter providing HP standard Gigabit Ethernet Network Interface Cards (NICs) to implement one of the IP CLIM configurations (either IP CLIM A or IP CLIM B):

IP CLIM A Configuration (5 Copper Ports)

• Slot 1 contains a NIC that provides four copper Ethernet ports
• Eth01 port (between slots 1 and 2) provides one copper Ethernet port
• Slot 3 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections

IP CLIM B Configuration (3 Copper/2 Fiber Ports)

• Slot 1 contains a NIC that provides three copper Ethernet ports
• Slot 2 contains a NIC that provides one fiber-optic Ethernet port
• Slot 3 contains a ServerNet PCIe interface card, which provides the ServerNet fabric connections
• Slot 4 contains a NIC that provides one fiber-optic Ethernet port

For an illustration of the IP CLIM slots, see "Ethernet to Networks" (page 70).

NOTE: Both the IP and Storage CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, see the Cluster I/O Protocols Configuration and Management Manual.

Storage CLuster I/O Module (CLIM)

The Storage CLuster I/O Module (CLIM) is part of some NonStop BladeSystem configurations. The Storage CLIM is a rack-mounted server and functions as a ServerNet I/O adapter providing:

• Dual ServerNet fabric connections
• A Serial Attached SCSI (SAS) interface for the storage subsystem via a SAS Host Bus Adapter (HBA) supporting SAS disk drives and SAS tapes
• A Fibre Channel (FC) interface for ESS and FC tape devices via a customer-ordered FC HBA. A Storage CLIM can have 0, 2, or 4 FC ports.

The Storage CLIM contains 5 PCIe HBA slots with these characteristics:


Connections to FCDMs are not supported.

For an illustration of the Storage CLIM HBA slots, see "Storage CLIM Devices" (page 57).

SAS Disk Enclosure

The SAS disk enclosure is a rack-mounted disk enclosure and is part of some NonStop BladeSystem configurations. The SAS disk enclosure supports up to 25 SAS disk drives, the 3 Gbps SAS protocol, and a dual SAS domain from Storage CLIMs to dual-port SAS disk drives. It supports connections to SAS disk drives only; connections to FCDMs are not supported. For more information about the SAS disk enclosure, see the manual for your SAS disk enclosure model (for example, the HP StorageWorks 70 Modular Smart Array Enclosure Maintenance and Service Guide).

The SAS disk enclosure contains:

• Twenty-five 2.5-inch disk drive slots with size options:
  • 72 GB, 15K rpm
  • 146 GB, 10K rpm
• Two independent I/O modules:
  • SAS Domain A
  • SAS Domain B
• Two fans
• Two power supplies

IOAM Enclosure

The IOAM enclosure is part of some NonStop BladeSystem configurations. The IOAM enclosure uses Gigabit Ethernet 4-port ServerNet adapters (G4SAs) for networking connectivity and Fibre Channel ServerNet adapters (FCSAs) for Fibre Channel connectivity between the system and Fibre Channel disk modules (FCDMs), ESS, and Fibre Channel tape.

Fibre Channel Disk Module (FCDM)

The Fibre Channel disk module (FCDM) is a rack-mounted enclosure that can only be used with NonStop BladeSystems that have IOAM enclosures. The FCDM connects to an FCSA in an IOAM enclosure and contains:

• Up to 14 Fibre Channel arbitrated loop disk drives (enclosure front)
• Environmental monitoring unit (EMU) (enclosure rear)
• Two fans and two power supplies
• Fibre Channel arbitrated loop (FC-AL) modules (enclosure rear)

You can daisy-chain together up to four FCDMs with 14 drives in each one.

Maintenance Switch

The HP ProCurve 2524 maintenance switch provides communication between the system console running HP NonStop Open System Management (OSM) and the components of the NonStop BladeSystem: the Onboard Administrator, the c7000 enclosure Interconnect Ethernet switches, the Storage and IP CLIMs, the IOAM enclosures, and the optional UPS. For a general description of the maintenance switch, refer to the NonStop NS14000 Planning Guide. Details about the use or implementation of the maintenance switch that are specific to a NonStop BladeSystem are presented here.


The NonStop BladeSystem requires multiple connections to the maintenance switch. The following describes the required connections for each hardware component.

BladeSystem Connections to Maintenance Switch

• One connection per Onboard Administrator on the NonStop BladeSystem
• One connection per Interconnect Ethernet switch on the NonStop BladeSystem
• One connection to the optional UPS module
• One connection for the system console running OSM

CLIM Connections to Maintenance Switch

• One connection to the iLO port on a CLIM
• One connection to an eth0 port on a CLIM

IOAM Enclosure Connections to Maintenance Switch

• One connection to each of the two ServerNet switch boards in one I/O adapter module (IOAM) enclosure.
• At least two connections to any two Gigabit Ethernet 4-port ServerNet adapters (G4SAs), if the NonStop BladeSystem maintenance LAN is implemented through G4SAs.

System Console

A system console is a personal computer (PC) purchased from HP that runs maintenance and diagnostic software for NonStop BladeSystems. When supplied with a new NonStop BladeSystem, system consoles have factory-installed HP and third-party software for managing the system. You can install software upgrades from the HP NonStop System Console Installer DVD image.

Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the NonStop BladeSystem's 19-inch rack. Other PCs are installed outside the rack and require separate provisions or furniture to hold the PC hardware.

For more information on the system console, refer to "System Consoles" (page 89).

UPS and ERM (Optional)

An uninterruptible power supply (UPS) is optional but recommended where a site UPS is not available. HP supports the HP model R12000/3 UPS because it utilizes the power fail support provided by OSM. For information about the requirements for installing a UPS, see "Uninterruptible Power Supply (UPS)" (page 32).

There are two different versions of the R12000/3 UPS:

• For North America and Japan, the HP AF429A is used; it has an IEC309 560P9 (60A) input connector and requires 208V three-phase power (120V phase-to-neutral).
• For International, the HP AF430A is used; it has an IEC309 532P6 (32A) input connector and requires 400V three-phase power (230V phase-to-neutral).

Cabinet configurations that include the HP UPS can also include extended runtime modules (ERMs). An ERM is a battery module that extends the overall battery-supported system run time.


Up to four ERMs can be used for even longer battery-supported system run time. HP supports the HP AF434A ERM.

WARNING! UPSs and ERMs must be mounted in the lowest portion of the NonStop BladeSystem to avoid tipping and stability issues.

NOTE: The R12000/3 UPS has two output connectors. For I/O racks, only the output connector to the rack-level PDU is used. For processor racks, one output connector goes to the c7000 chassis and the other to the rack PDU. For power feed setup instructions, see "NonStop BladeSystem Power Distribution" (page 37) and "Power Feed Setup for the NonStop BladeSystem" (page 38).

For the R12000/3 UPS power and environmental requirements, refer to Chapter 3 (page 37). For planning, installation, and emergency power-off (EPO) instructions, refer to the HP 3 Phase UPS User Guide. This guide is available at:

http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01079392/c01079392.pdf

For other UPSs, refer to the documentation shipped with the UPS.

Enterprise Storage System (Optional)

An Enterprise Storage System (ESS) is a collection of magnetic disks, their controllers, and a disk cache in one or more standalone cabinets. ESS connects to the NonStop BladeSystem via the Storage CLIM's Fibre Channel HBA ports (direct connect), Fibre Channel ports on the IOAM enclosures (direct connect), or through a separate storage area network (SAN) using a Fibre Channel SAN switch (switched connect). For more information about these connection types, see your service provider.

NOTE: The Fibre Channel SAN switch power cords might not be compatible with the modular cabinet PDU. Contact your service provider to order replacement power cords for the SAN switch that are compatible with the modular cabinet PDU.

Cables and switches vary, depending on whether the connection is direct, switched, or a combination:

1. Customer must order the FC HBA ports on the Storage CLIM.

Figure 1-3 shows an example of connections between two Storage CLIMs and an ESS via separate Fibre Channel switches:


Figure 1-3 Connections Between Storage CLIMs and ESS

For fault tolerance, the primary and backup paths to an ESS logical device (LDEV) must go through different Fibre Channel switches.

Some storage area procedures, such as reconfiguration, can cause the affected switches to pause. If the pause is long enough, I/O failure occurs on all paths connected to that switch. If both the primary and the backup paths are connected to the same switch, the LDEV goes down.
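This rule lends itself to a simple configuration check. The following Python sketch is illustrative only; the path table and LDEV names are hypothetical, and real path data would come from your configuration records:

# Verify that no LDEV has its primary and backup paths on the same
# Fibre Channel switch (a shared switch is a single point of failure).
ldev_paths = {
    "$DATA01": {"primary": "fc-switch-1", "backup": "fc-switch-2"},  # OK
    "$DATA02": {"primary": "fc-switch-1", "backup": "fc-switch-1"},  # not OK
}

for ldev, paths in ldev_paths.items():
    if paths["primary"] == paths["backup"]:
        print(ldev, "routes both paths through", paths["primary"],
              "- a pause on that switch could take the LDEV down")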

Refer to the documentation that accompanies the ESS.

Tape Drive and Interface Hardware (Optional)

For an overview of tape drives and the interface hardware, see "Fibre Channel Ports to Fibre Tape Devices" (page 57) or "SAS Ports to SAS Tape Devices" (page 57).

For a list of supported tape devices, ask your service provider to refer to the NonStop BladeSystem Hardware Installation Manual.

Preparation for Other Server Hardware

This guide provides the specifications only for the NonStop BladeSystem modular cabinets and enclosures identified earlier in this section. For site preparation specifications for other HP hardware that will be installed with the NonStop BladeSystems, consult your HP account team. For site preparation specifications relating to hardware from other manufacturers, refer to the documentation for those devices.

Management Tools for NonStop BladeSystems

NOTE: For information about changing the default passwords for NonStop BladeSystem components and associated software, see "Changing Customer Passwords" (page 71).

This subsection describes the management tools available on your NonStop BladeSystem:

• "OSM Package" (page 24)
• "Onboard Administrator (OA)" (page 24)
• "Integrated Lights Out (iLO)" (page 24)
• "Cluster I/O Protocols (CIP) Subsystem" (page 24)
• "Subsystem Control Facility (SCF) Subsystem" (page 24)

OSM Package

The HP Open System Management (OSM) product is the required system management tool for NonStop BladeSystems. OSM works together with the Onboard Administrator (OA) and Integrated Lights Out (iLO) management interfaces to manage c7000 enclosures. A new client-based component, the OSM Certificate Tool, facilitates communication between OSM and the OA.

For more information on the OSM package, including a description of the individual applications, see the OSM Migration and Configuration Guide and the OSM Service Connection User's Guide.

Onboard Administrator (OA)

The Onboard Administrator (OA) is the enclosure management processor, subsystem, and firmware base that supports the c7000 enclosure and NonStop Server Blades. The OA software is integrated with OSM and the Integrated Lights Out (iLO) management interface.

Integrated Lights Out (iLO)

iLO allows you to perform activities on the NonStop BladeSystem from a remote location and provides anytime access to system management information, such as hardware health, event logs, and configuration, to help you troubleshoot and maintain the NonStop Server Blades.

Cluster I/O Protocols (CIP) Subsystem

The Cluster I/O Protocols (CIP) subsystem provides a configuration and management interface for I/O on NonStop BladeSystems. The CIP subsystem has several tools for monitoring and managing the subsystem. For more information about these tools and the CIP subsystem, see the Cluster I/O Protocols (CIP) Configuration and Management Manual.

Subsystem Control Facility (SCF) Subsystem

The Subsystem Control Facility (SCF) also provides monitoring and management of the CIP subsystem on the NonStop BladeSystem. See the Cluster I/O Protocols (CIP) Configuration and Management Manual for more information about using these two subsystems with NonStop BladeSystems.

Component Location and Identification

This subsection includes these topics:

• "Terminology" (page 25)
• "Rack and Offset Physical Location" (page 26)
• "ServerNet Switch Group-Module-Slot Numbering" (page 26)
• "NonStop Server Blade Group-Module-Slot Numbering" (page 27)
• "CLIM Enclosure Group-Module-Slot-Port-Fiber Numbering" (page 27)
• "IOAM Enclosure Group-Module-Slot Numbering" (page 27)
• "Fibre Channel Disk Module Group-Module-Slot Numbering" (page 29)


Terminology

These are terms used in locating and describing components:

On NonStop BladeSystems, locations of the modular components are identified by:

• Physical location:
  • Rack number
  • Rack offset
• Logical location: group, module, and slot (GMS) notation as defined by their position on the ServerNet rather than the physical location

OSM uses GMS notation in many places, including the Tree view and Attributes window, and it uses rack and offset information to create displays of the server and its components.


Rack and Offset Physical Location

Rack name and rack offset identify the physical location of components in a NonStop BladeSystem. The rack name is located on an external label affixed to the rack, which includes the system name plus a 2-digit rack number.

Rack offset is labeled on the rails in each side of the rack. These rails are measured vertically in units called U, with one U measuring 1.75 inches (44 millimeters). The rack is 42U with U1 located at the bottom and 42U at the top. The rack offset is the lowest number on the rack that the component occupies.
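As a worked example of the offset arithmetic, this Python sketch converts a rack offset to the height of a component's bottom edge above the rack base. It assumes nothing beyond the 1U = 1.75 inch unit height stated above:

# One rack unit (U) is 1.75 inches; U1 sits at the bottom of the 42U rack.
U_INCHES = 1.75

def offset_height_inches(rack_offset_u):
    # The component's bottom edge is (offset - 1) full units above the base.
    return (rack_offset_u - 1) * U_INCHES

print(offset_height_inches(22))  # a component at offset 22U starts 36.75 in. up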

ServerNet Switch Group-Module-Slot Numbering

• Group (100-101):
  • Group 100 is the first c7000 processor enclosure, containing logical processors 0-7.
  • Group 101 is the second c7000 processor enclosure, containing logical processors 8-15. (A short example mapping processors to groups follows this list.)
• Module (2-3):
  • Module 2 is the X fabric.
  • Module 3 is the Y fabric.
• Slot (5 or 7):
  • Slot 5 contains the double-wide ServerNet switch for the X fabric.
  • Slot 7 contains the double-wide ServerNet switch for the Y fabric.

NOTE: There are two types of c7000 ServerNet switches: Standard I/O and High I/O. For more information and illustrations of the ServerNet switch ports, refer to "I/O Connections (Standard and High I/O ServerNet Switch Configurations)" (page 55).

• Port (1-18):
  • Ports 1 and 2 support the inter-enclosure links. Port 1 is marked GA; Port 2 is marked GB.
  • Ports 3 through 8 support the I/O links (IP CLIM, Storage CLIM, and IOAM).

NOTE: IOAMs must use Ports 4 through 7. These ports support 4-way IOAM links.

  • Ports 9 and 10 support the cross links between the two ServerNet switches in the same enclosure.
  • Ports 11 and 12 support the links to a cluster switch. SH on Port 11 stands for short haul; LH on Port 12 stands for long haul.
  • Ports 13 through 18 are not supported.
• Fiber (1-4):
  These fibers support up to 4 ServerNet links on ports 3-8 of the c7000 enclosure ServerNet switch.
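As promised above, the group numbering reduces to a simple calculation. This Python sketch is illustrative only and assumes the two-enclosure numbering described in this list (eight logical processors per group):

# Map a logical processor (0-15) to its c7000 enclosure group, and a
# fabric letter to its module number, per the numbering above.
FABRIC_MODULE = {"X": 2, "Y": 3}

def processor_group(cpu):
    if not 0 <= cpu <= 15:
        raise ValueError("logical processors are numbered 0-15")
    return 100 + cpu // 8

print(processor_group(5), FABRIC_MODULE["X"])   # 100 2
print(processor_group(12), FABRIC_MODULE["Y"])  # 101 3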


NonStop Server Blade Group-Module-Slot Numbering

These tables show the default numbering for the NonStop Server Blades of a NonStop BladeSystem when the server blades are powered on and functioning:

GMS Numbering For the Logical Processors:

* In the OSM Service Connection, the term Enclosure is used for the group and the term Bay is used for the slot.

CLIM Enclosure Group-Module-Slot-Port-Fiber Numbering

This table shows the valid values for GMSPF numbering for the X1 ServerNet switch connection point to a CLIM:

IOAM Enclosure Group-Module-Slot Numbering

A NonStop BladeSystem supports IOAM enclosures, identified as group 110 through 115:


This illustration shows the slot locations for the IOAM enclosure:


Fibre Channel Disk Module Group-Module-Slot Numbering

This table shows the default numbering for the Fibre Channel disk module:

The form of the GMS numbering for a disk in a Fibre Channel disk module is:

This example shows the disk in bay 03 of the Fibre Channel disk module that connects to the FCSA in the IOAM group 111, module 2, slot 1, FSAC 1:


System Installation Document Packet

To keep track of the hardware configuration, internal and external communications cabling, IP addresses, and connected networks, assemble and retain an Installation Document Packet as the system's records. This packet can include:

• "Technical Document for the Factory-Installed Hardware Configuration"
• "Configuration Forms for the ServerNet Adapters and CLIMs"

Technical Document for the Factory-Installed Hardware Configuration

Each new NonStop BladeSystem includes a document that describes:

• The cabinet included with the system
• Each hardware enclosure installed in the cabinet
• Cabinet U location of the bottom edge of each enclosure
• Each ServerNet cable, with:
  • Source and destination enclosure, component, and connector
  • Cable part number
  • Source and destination connection labels

This document is called a technical document and serves as the physical location and connection map for the system.

Configuration Forms for the ServerNet Adapters and CLIMs

To add configuration forms for ServerNet adapters or CLIMs to your Installation Document Packet, copy the necessary forms from the adapter manuals or the CLuster I/O Module (CLIM) Installation and Configuration Guide. Follow any planning instructions in these manuals.


2 Site Preparation Guidelines

This section describes power, environmental, and space considerations for your site.

Modular Cabinet Power and I/O Cable Entry

Power and I/O cables can enter the NonStop BladeSystem from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and the routing of the AC power feeds at the site. NonStop BladeSystem cabinets can be ordered with the AC power cords for the PDUs exiting either:

• Top: Power and I/O cables are routed from above the modular cabinet.
• Bottom: Power and I/O cables are routed from below the modular cabinet.

For information about modular cabinet power and cable options, refer to "AC Input Power for Modular Cabinets" (page 44).

Emergency Power-Off Switches

Emergency power off (EPO) switches are required by local codes or other applicable regulations when computer equipment contains batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes. Systems that have these batteries also have internal EPO hardware for connection to a site EPO switch or relay. In an emergency, activating the EPO switch or relay removes power from all electrical equipment in the computer room (except that used for lighting and fire-related sensors and alarms).

EPO Requirement for NonStop BladeSystems

NonStop BladeSystems without an optional UPS (such as an HP R12000/3 UPS) installed in the modular cabinet do not contain batteries capable of supplying more than 750 volt-amperes (VA) for more than five minutes, so they do not require connection to a site EPO switch.

EPO Requirement for HP R12000/3 UPS

The rack-mounted HP R12000/3 (12 kVA) UPS can optionally be installed in a modular cabinet; it contains batteries and has a remote EPO (REPO) port. For site EPO switches or relays, consult your HP site preparation specialist or electrical engineer regarding requirements.

If an EPO switch or relay connector is required for your site, contact your HP representative or refer to the HP 3 Phase UPS User Guide for connector and wiring information for the 12 kVA model. This guide is available at:

http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01079392/c01079392.pdf

Electrical Power and Grounding Quality

Proper design and installation of a power distribution system for a NonStop BladeSystem requires specialized skills, knowledge, and understanding of appropriate electrical codes and the limitations of the power systems for computer and data processing equipment. For power and grounding specifications, refer to "AC Input Power for Modular Cabinets" (page 44).

Power Quality

This equipment is designed to operate reliably over a wide range of voltages and frequencies, described in "Enclosure AC Input" (page 45). However, damage can occur if these ranges are exceeded. Severe electrical disturbances can exceed the design specifications of the equipment. Common sources of such disturbances are:

• Fluctuations occurring within the facility's distribution system
• Utility service low-voltage conditions (such as sags or brownouts)
• Wide and rapid variations in input voltage levels
• Wide and rapid variations in input power frequency
• Electrical storms
• Large inductive sources (such as motors and welders)
• Faults in the distribution system wiring (such as loose connections)

Computer systems can be protected from the sources of many of these electrical disturbances by using:

• A dedicated power distribution system
• Power conditioning equipment
• Lightning arresters on power cables to protect equipment against electrical storms

For steps to take to ensure proper power for the servers, consult with your HP site preparation specialist or power engineer.

Grounding Systems

The site building must provide a power distribution safety ground/protective earth for each AC service entrance to all NonStop BladeSystem equipment. This safety grounding system must comply with local codes and any other applicable regulations for the installation locale.

For proper grounding/protective earth connection, consult with your HP site preparation specialist or power engineer.

Power Consumption

In a NonStop BladeSystem, the power consumption and inrush currents per connection can vary because of the unique combination of enclosures housed in the modular cabinet. Thus, the total power consumption for the hardware installed in the cabinet should be calculated as described in "Enclosure Power Loads" (page 46).

Uninterruptible Power Supply (UPS)

Modular cabinets do not have built-in batteries to provide power during power failures. To support system operation and ride-through during a power failure, NonStop BladeSystems require either an optional UPS (HP supports the HP model R12000/3 UPS) installed in each modular cabinet or a site UPS. This support can include a planned orderly shutdown at a predetermined time in the event of an extended power failure. A timely and orderly shutdown prevents an uncontrolled and asymmetric shutdown of the system resources from depleted UPS batteries.

OSM provides this ride-through support during a power failure. When OSM detects a power failure, it triggers a ride-through timer. To set this timer, you must configure the ride-through time in SCF. For this information, refer to the SCF Reference Manual for the Kernel Subsystem. If AC power is not restored before the configured ride-through time period ends, OSM initiates an orderly shutdown of I/O operations and processors. For additional information, see "AC Power Monitoring" (page 95).


NOTE: Retrofitting a system in the field with a UPS and ERMs will likely require moving all installed enclosures in the rack to provide space for the new hardware. One or more of the enclosures that formerly resided in the rack might be displaced and therefore have to be installed in another rack that would also need a UPS and ERMs installed. Additionally, lifting equipment might be required to lift heavy enclosures to their new location.

For information and specifications on the R12000/3 UPS, see Chapter 3 (page 37) and refer to the HP 3 Phase UPS User Guide. This guide is available at:

http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01079392/c01079392.pdf

If you install a UPS other than the HP model R12000/3 UPS in each modular cabinet of a NonStop BladeSystem, these requirements must be met to ensure that the system can survive a total AC power failure:

• The UPS output voltage can support the HP PDU input voltage requirements.
• The UPS phase output matches the PDU phase input. For NonStop BladeSystems, 3-phase output UPSs and 3-phase input HP PDUs are supported. For details, refer to Chapter 3 (page 37).
• The UPS output can support the targeted system in the event of an AC power failure. Calculate each cabinet load to ensure that the UPS can support a proper ride-through time in the event of a total AC power failure, as in the sketch following this list. For more information, refer to "Enclosure Power Loads" (page 46).
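A first-pass feasibility check can be sketched in a few lines of Python. The 12000 VA rating matches the R12000/3 model designation; the per-enclosure loads below are placeholders, so substitute the apparent-power values for your configuration from "Enclosure Power Loads" (page 46):

# Confirm the cabinet's total apparent power fits within the UPS rating
# before relying on the UPS for ride-through.
UPS_RATING_VA = 12000  # e.g., HP R12000/3 (12 kVA)

loads_va = [4400, 600, 600, 500]  # hypothetical: c7000, two CLIMs, disk shelf
total_va = sum(loads_va)
print(total_va, "VA -", "within rating" if total_va <= UPS_RATING_VA
      else "exceeds UPS rating")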

NOTE: A UPS other than the HP model R12000/3 UPS will not be able to utilize the power fail support of the Configure a Power Source as UPS OSM action.

If your applications require a UPS that supports the entire system, or even a UPS or motor generator for all computer and support equipment in the site, you must plan the site's electrical infrastructure accordingly.

Cooling and Humidity Control

Do not rely on an intuitive approach to cooling design, or simply try to achieve an energy balance, that is, summing the total power dissipation from all the hardware and sizing a comparable air conditioning capacity. Today's high-performance NonStop BladeSystems use semiconductors that integrate multiple functions on a single chip with very high power densities. These chips, plus high-power-density mass storage and power supplies, are mounted in ultra-thin system and storage enclosures and then deployed into computer racks in large numbers. This higher concentration of devices results in localized heat, which increases the potential for hot spots that can damage the equipment.

Additionally, variables in the installation site layout can adversely affect air flows and create hot spots by allowing hot and cool air streams to mix. Studies have shown that above 70°F (21°C), every increase of 18°F (10°C) reduces long-term electronics reliability by 50%.
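The rule of thumb quoted above can be written out explicitly. A minimal Python sketch, assuming reliability halves per full 18°F step above 70°F:

# Relative long-term electronics reliability at ambient temperature t_f (F),
# per the study result quoted above (halves every 18 F above 70 F).
def relative_reliability(t_f):
    if t_f <= 70:
        return 1.0
    return 0.5 ** ((t_f - 70) / 18.0)

print(relative_reliability(88))   # 0.5  (one 18 F step above 70 F)
print(relative_reliability(106))  # 0.25 (two steps)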

Cooling airflow through each enclosure in the NonStop BladeSystem is front-to-back. Because of high heat densities and hot spots, an accurate assessment of air flow around and through the system equipment and specialized cooling design is essential for reliable system operation. For an airflow assessment, consult with your HP cooling consultant or your heating, ventilation, and air conditioning (HVAC) engineer.


NOTE: Failure of site cooling with the NonStop BladeSystem continuing to run can cause rapid heat buildup and excessive temperatures within the hardware. Excessive internal temperatures can result in full or partial system shutdown. Ensure that the site's cooling system remains fully operational when the NonStop BladeSystem is running.

Because each modular cabinet houses a unique combination of enclosures, use the "Heat Dissipation Specifications and Worksheet" (page 50) to calculate the total heat dissipation for the hardware installed in each cabinet. For air temperature levels at the site, refer to "Operating Temperature, Humidity, and Altitude" (page 50).

Weight

Because modular cabinets for NonStop BladeSystems house a unique combination of enclosures, total weight must be calculated based on what is in the specific cabinet, as described in "Modular Cabinet and Enclosure Weights With Worksheet" (page 49).

Flooring

NonStop BladeSystems can be installed either on the site's floor with the cables entering from above the equipment or on raised flooring with power and I/O cables entering from underneath. Because cooling airflow through each enclosure in the modular cabinets is front-to-back, raised flooring is not required for system cooling.

The site floor structure and any raised flooring (if used) must be able to support the total weight of the installed computer system as well as the weight of the individual modular cabinets and their enclosures as they are moved into position. To determine the total weight of each modular cabinet with its installed enclosures, refer to "Modular Cabinet and Enclosure Weights With Worksheet" (page 49).

For your site's floor system, consult with your HP site preparation specialist or an appropriate floor system engineer. If raised flooring is to be used, the design of the NonStop BladeSystem modular cabinet is optimized for placement on 24-inch floor panels.

Dust and Pollution Control

NonStop BladeSystems do not have air filters. Any computer equipment can be adversely affected by dust and microscopic particles in the site environment. Airborne dust can blanket electronic components on printed circuit boards, inhibiting cooling airflow and causing premature failure from excess heat, humidity, or both. Metallically conductive particles can short circuit electronic components. Tape drives and some other mechanical devices can experience failures resulting from airborne abrasive particles.

For recommendations to keep the site as free of dust and pollution as possible, consult with your heating, ventilation, and air conditioning (HVAC) engineer or your HP site preparation specialist.

Zinc Particulates

Over time, fine whiskers of pure metal can form on electroplated zinc, cadmium, or tin surfaces such as aged raised flooring panels and supports. If these whiskers are disturbed, they can break off and become airborne, possibly causing computer failures or operational interruptions. This metallic particulate contamination is a relatively rare but possible threat. Kits are available to test for metallic particulate contamination, or you can request that your site preparation specialist or HVAC engineer test the site for contamination before installing any electronic equipment.

Space for Receiving and Unpacking the System

Identify areas that are large enough to receive and to unpack the system from its shipping cartons and pallets. Be sure to allow adequate space to remove the system equipment from the shipping pallets using supplied ramps. Also be sure adequate personnel are present to remove each cabinet from its shipping pallet and to safely move it to the installation site.

WARNING! A fully populated cabinet is unstable when moving down the unloading ramp from its shipping pallet. Arrange for enough personnel to stabilize each cabinet during removal from the pallet and to prevent the cabinet from falling. A falling cabinet can cause serious or fatal personal injury.

Ensure sufficient pathways and clearances for moving the NonStop BladeSystem equipment safely from the receiving and unpacking areas to the installation site. Verify that door and hallway width and height as well as floor and elevator loading will accommodate not only the system equipment but also all required personnel and lifting or moving devices. If necessary, enlarge or remove any obstructing doorway or wall.

All modular cabinets have small casters to facilitate moving them on hard flooring from the unpacking area to the site. Because of these small casters, rolling modular cabinets along carpeted or tiled pathways might be difficult. If necessary, plan for a temporary hard floor covering in affected pathways for easier movement of the equipment.

For physical dimensions of the NonStop BladeSystem equipment, refer to "Dimensions and Weights" (page 47).

Operational Space

When planning the layout of the NonStop BladeSystem site, use the equipment dimensions, door swing, and service clearances listed in "Dimensions and Weights" (page 47). Because location of the lighting fixtures and electrical outlets affects servicing operations, consider an equipment layout that takes advantage of existing lighting and electrical outlets.

Also consider the location and orientation of current or future air conditioning ducts and airflow direction, and eliminate any obstructions to equipment intake or exhaust air flow. Refer to "Cooling and Humidity Control" (page 33).

Space planning should also include the possible addition of equipment or other changes in space requirements. Depending on the current or future equipment installed at your site, layout plans can also include provisions for:

• Channels or fixtures used for routing data cables and power cables
• Access to air conditioning ducts, filters, lighting, and electrical power hardware
• Communications cables, patch panels, and switch equipment
• Power conditioning equipment
• Storage area or cabinets for supplies, media, and spare parts


3 System Installation Specifications

This section provides specifications necessary for system installation planning.

NOTE: All specifications provided in this section assume that each enclosure in the modular cabinet is fully populated. The maximum current for each AC service depends on the number and type of enclosures installed in the modular cabinet. Power, weight, and heat loads are less when enclosures are not fully populated; for example, a Fibre Channel disk module with fewer disks.

Modular Cabinets

The modular cabinet is an EIA standard 19-inch, 42U rack for mounting modular components. The modular cabinet comes equipped with front and rear doors and includes a rear extension that makes it deeper than some industry-standard racks. The "Power Distribution Units (PDUs)" (page 42) are mounted along the rear extension without occupying any U-space in the cabinet and are oriented inward, facing the components within the rack.

NonStop BladeSystem Power Distribution

There are two power configurations for NonStop BladeSystems:

• North America/Japan (NA/JPN): requires 208V three-phase power (120V phase-to-neutral) with loads wired phase-to-phase
• International (INTL): requires 400V three-phase power with loads wired phase-to-neutral (230V)

Both power configurations require 200V to 240V distribution and careful attention to phase load balancing. For more information, see "Phase Load Balancing" (page 45).

The NonStop BladeSystem's three-phase, c7000 enclosure contains an AC Input Module that provides 2N redundant power distribution for the power configurations. This power module comes with a pair of power cords that provide direct AC power feeds to the c7000 enclosure:


One c7000 power feed is from the main power source and the other is from a backup UPS grid. For the R12000/3 UPS installed in a rack, the backup power source for the c7000 is one of the UPS's dedicated three-phase outputs. There is no power sharing between the c7000 and the rack PDU feed. Two three-phase rack PDUs power all the other components in the NonStop BladeSystem except the c7000. One PDU is connected to the main power input grid; the other, to the backup grid. For racks with an integral UPS, this backup is one of the dedicated three-phase outputs of the UPS. For c7000 power setup details, see "Power Feed Setup for the NonStop BladeSystem" (page 38).

There are two different versions of the rack-level PDU. For more details, see "Power Distribution Units (PDUs)" (page 42) and "AC Input Power for Modular Cabinets" (page 44).

Power Feed Setup for the NonStop BladeSystem

Power setup depends on your power configuration type:

• "North America/Japan Power Setup With Rack-Mounted UPS"
• "North America/Japan Power Setup Without Rack-Mounted UPS" (page 39)
• "International Power Setup With Rack-Mounted UPS" (page 40)
• "International Power Setup Without Rack-Mounted UPS" (page 41)

North America/Japan Power Setup With Rack-Mounted UPS

To set up the power feed connections as shown in Figure 3-1:

1. Connect one 3-phase 60A power feed to the rack-mounted UPS IEC309 560P9 (60A, 5 wire/4 pole) input connector.
2. Connect one 3-phase 30A power feed to the AF504A PDU NEMA L15-30P (30A, 4 wire/3 pole) input connector.
3. Connect one 3-phase 30A power feed to the c7000 enclosure's NEMA L15-30P (30A, 4 wire/3 pole) input connector.


Figure 3-1 North America/Japan 3-Phase Power Setup With Rack-Mounted UPS

North America/Japan Power Setup Without Rack-Mounted UPS

To set up the power feed connections as shown in Figure 3-2:

1. Connect two 3-phase 30A power feeds to the two AF504A PDU NEMA L15-30P (30A, 4 wire/3 pole) input connectors.
2. Connect two 3-phase 30A power feeds to the two NEMA L15-30P (30A, 4 wire/3 pole) input connectors within the c7000 enclosure.


Figure 3-2 North America/Japan Power Setup

International Power Setup With Rack-Mounted UPS

To set up the power feed connections as shown in Figure 3-3 (page 41):

1. Connect one 3-phase 32A power feed to the rack-mounted UPS IEC309 532P6 (32A, 5 wire/4 pole) input connector.
2. Connect one 3-phase 16A power feed to the AF508A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connector.
3. Connect one 3-phase 16A power feed to the c7000 enclosure's IEC309 516P6 (16A, 5 wire/4 pole) input connector.


Figure 3-3 International 3-Phase Power Setup With UPS

International Power Setup Without Rack-Mounted UPS

To set up the power feed connections as shown in Figure 3-4:

1. Connect two 3-phase 16A power feeds to the two AF508A PDU IEC309 516P6 (16A, 5 wire/4 pole) input connectors.
2. Connect two 3-phase 16A power feeds to the two IEC309 516P6 (16A, 5 wire/4 pole) input connectors within the c7000 enclosure.


Figure 3-4 International Power Setup Without Rack-Mounted UPS

Power Distribution Units (PDUs)

Two power distribution units (PDUs) are installed to provide redundant power outlets for the components mounted in the modular cabinet. The PDUs are oriented inward, facing the components within the rack. Each PDU is 60 inches long and has 39 AC receptacles, three circuit breakers, and an AC power cord. The PDU is oriented with the AC power cord exiting the modular cabinet at either the top or bottom rear corners of the cabinet, depending on the site's power feed needs.

For information about specific PDU input and output characteristics for PDUs factory-installed in modular cabinets, refer to ???AC Input Power for Modular Cabinets??? (page 44).

Each PDU in a modular cabinet has:

• 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
• 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type
• 3 circuit breakers

These PDU options are available to receive power from the site AC power source:

• 208 V AC, three-phase delta for North America and Japan
• 400 V AC, three-phase wye for International

Each PDU distributes site three-phase power to 39 single-phase 200 to 240 V AC outlets for connecting the power cords from the components mounted in the modular cabinet.

The AC power feed cables for the PDUs are mounted to exit the modular cabinet at either the top or bottom rear corners of the cabinet depending on what is ordered for the site's power feed.


Figure 3-5 shows the power feed cables on PDUs with AC feed at the bottom of the cabinet and the AC power outlets along the PDU. These power outlets face in toward the components in the cabinet.

Figure 3-5 Bottom AC Power Feed

Figure 3-6 shows the power feed cables on PDUs with AC feed at the top of the cabinet:

Figure 3-6 Top AC Power Feed


AC Input Power for Modular Cabinets

This subsection provides information about AC input power for modular cabinets and covers these topics:

• "North America and Japan: 208 V AC PDU Power"
• "International: 400 V AC PDU Power"
• "Branch Circuits and Circuit Breakers"
• "Enclosure AC Input" (page 45)
• "Enclosure Power Loads" (page 46)

??????Enclosure Power Loads??? (page 46)

Power can enter the NonStop BladeSystem from either the top or the bottom rear of the modular cabinets, depending on how the cabinets are ordered from HP and the AC power feeds are routed at the site. NonStop BladeSystem cabinets can be ordered with the AC power cords for the PDU installed either:

• Top: Power and I/O cables are routed from above the modular cabinet.
• Bottom: Power and I/O cables are routed from below the modular cabinet.

For information on the modular cabinets, refer to "Modular Cabinets" (page 37). For information on the PDUs, refer to "Power Distribution Units (PDUs)" (page 42).

North America and Japan: 208 V AC PDU Power

The cabinet includes two power distribution units (PDU). The PDU power characteristics are:

• 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
• 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

International: 400 V AC PDU Power

The cabinet includes two power distribution units (PDU). The PDU power characteristics are:

• 36 AC receptacles per PDU (12 per segment) - IEC 320 C13 10A receptacle type
• 3 AC receptacles per PDU (1 per segment) - IEC 320 C19 16A receptacle type

Branch Circuits and Circuit Breakers

Modular cabinets for the NonStop BladeSystem contain two PDUs.


In cabinets without the optional rack-mounted UPS, each of the two PDUs requires a separate branch circuit of these ratings:

1. Category D circuit breaker is required.

CAUTION: Be sure the hardware configuration and resultant power loads of each cabinet within the system do not exceed the capacity of the branch circuit according to applicable electrical codes and regulations.

Branch circuit requirements vary by the input voltage and the local codes and applicable regulations regarding maximum circuit and total distribution loading.

Select circuit breaker ratings according to local codes and any applicable regulations for the circuit capacity. Note that circuit breaker ratings vary if your system includes the optional rack-mounted HP Model R12000/3 Integrated UPS.

These ratings apply to systems with the optional rack-mounted HP Model R12000/3 Integrated UPS:

1. The UPS input requires a dedicated (unshared) branch circuit that is suitably rated for your specific UPS.

For further information and specifications on the R12000/3 (12 kVA) UPS, refer to the HP 3 Phase UPS User Guide. This guide is available at:

http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01079392/c01079392.pdf

Enclosure AC Input

Enclosures (c7000, IP CLIM, IOAM enclosure, and so forth) require:

Phase Load Balancing

Each PDU is wired such that there are three load segments, with groups of outlets alternating between load segments going up and down the PDU. Refer to "Power Distribution Units (PDUs)" (page 42). Factory-installed enclosures, other than the c7000, are connected to the PDUs on alternating load segments to facilitate phase load balancing. The c7000 has its own three-phase input, with each phase (International) or pair of phases (North America/Japan) associated with one of the c7000 power supplies. When the c7000 is operating in Dynamic Power Saving Mode, the minimum number of power supplies are enabled to redundantly power the enclosure. This mode increases power supply efficiency, but leaves the phases or phase pairs associated with the disabled power supplies unloaded. For multiple-cabinet installations, in order to balance phase loads when Dynamic Power Saving Mode is enabled, HP recommends rotating the phases from one cabinet to the next. For example, if the first cabinet is wired A-B-C, the next cabinet should be wired B-C-A, and the next C-A-B, and so on, as in the sketch below.
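The recommended rotation can be generated mechanically. This Python sketch simply rotates the phase sequence one position per cabinet; the cabinet count is arbitrary:

# Produce the recommended phase wiring order for each cabinet in a row,
# rotating A-B-C -> B-C-A -> C-A-B to balance loads across phases.
def phase_rotation(cabinet_index, phases=("A", "B", "C")):
    n = cabinet_index % len(phases)
    return phases[n:] + phases[:n]

for i in range(3):
    print("Cabinet", i + 1, "wired", "-".join(phase_rotation(i)))
# Cabinet 1 wired A-B-C
# Cabinet 2 wired B-C-A
# Cabinet 3 wired C-A-B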

Enclosure Power Loads

The total power and current load for a modular cabinet depends on the number and type of enclosures installed in it. Therefore, the total load is the sum of the loads for all enclosures installed. For examples of calculating the power and current load for various enclosure combinations, refer to "Calculating Specifications for Enclosure Combinations" (page 51).

In normal operation, the AC power is split equally between the two PDUs in the modular cabinet. However, if one of the two AC power feeds fails, the remaining AC power feed and PDU must carry the power for all enclosures in that cabinet.

Power and current specifications for each type of enclosure are:

1. See "Power Feed Setup for the NonStop BladeSystem" (page 38) for c7000 enclosure power feed requirements.
2. Total apparent power is the sum of the two AC power lines feeding the enclosure. Electrical load is shared equally between the two lines.
3. Decrease the apparent power VA specification by 508 VA for each empty NonStop Server Blade slot. For example, a c7000 that has only four NonStop Server Blades installed (four empty slots) would be rated 4400 VA minus (4 x 508 VA) = 2368 VA apparent power. (A sketch of this calculation follows these notes.)
4. Measured with 14 disk drives installed and active.
5. Maintenance switch has only one AC plug.
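The derating rule in note 3 can be expressed directly. A minimal Python sketch, assuming the 4400 VA fully populated figure and eight blade slots per c7000:

# Apparent power for a c7000, derated 508 VA per empty server blade slot.
C7000_FULL_VA = 4400
VA_PER_EMPTY_SLOT = 508
MAX_BLADES = 8

def c7000_apparent_power_va(blades_installed):
    return C7000_FULL_VA - (MAX_BLADES - blades_installed) * VA_PER_EMPTY_SLOT

print(c7000_apparent_power_va(4))  # 2368 VA, matching the example in note 3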


Dimensions and Weights

This subsection provides information about the dimensions and weights for modular cabinets and enclosures installed in a modular cabinet and covers these topics:

• "Plan View of the 42U Modular Cabinet"

• "Service Clearances for the Modular Cabinets"

• "Unit Sizes"

• "42U Modular Cabinet Physical Specifications" (page 48)

• "Enclosure Dimensions" (page 48)

• "Modular Cabinet and Enclosure Weights With Worksheet" (page 49)

Plan View of the 42U Modular Cabinet

Service Clearances for the Modular Cabinets

Aisles: 6 feet (182.9 centimeters)

Front: 3 feet (91.4 centimeters)

Rear: 3 feet (91.4 centimeters)

Unit Sizes


42U Modular Cabinet Physical Specifications

Enclosure Dimensions


Modular Cabinet and Enclosure Weights With Worksheet

The total weight of each modular cabinet is the sum of the weight of the cabinet itself plus the weights of all enclosures installed in it. Use this worksheet to determine the total weight:

1. Modular cabinet weight includes the PDUs and their associated wiring and receptacles.

For examples of calculating the weight for various enclosure combinations, refer to "Calculating Specifications for Enclosure Combinations" (page 51).

Modular Cabinet Stability

Cabinet stabilizers are required when fewer than four cabinets are bayed together.

NOTE: Cabinet stability is of special concern when equipment is routinely installed, removed, or accessed within the cabinet. Stability is addressed through the use of leveling feet, baying kits, fixed stabilizers, and/or ballast.

For information about best practices for cabinets, your service provider can consult:

• HP 10000 G2 Series Rack User Guide

• Best practices for HP 10000 Series and HP 10000 G2 Series Racks


Environmental Specifications

This subsection provides information about environmental specifications and covers these topics:

• "Heat Dissipation Specifications and Worksheet"

• "Operating Temperature, Humidity, and Altitude"

• "Nonoperating Temperature, Humidity, and Altitude" (page 51)

• "Cooling Airflow Direction" (page 51)

• "Typical Acoustic Noise Emissions" (page 51)

• "Tested Electrostatic Immunity" (page 51)

Heat Dissipation Specifications and Worksheet

1. Decrease the BTU/hour specification by 1730 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 with only four NonStop Server Blades installed (four empty slots) would be rated 13700 BTU/hour minus (4 empty slots x 1730 BTU/hour) = 6780 BTU/hour.

2. Measured with 10 Fibre Channel ServerNet adapters installed and active.

3. Measured with 14 disk drives installed and active.

4. The maintenance switch has only one plug.

Operating Temperature, Humidity, and Altitude


1. Operating and recommended ranges refer to the ambient air temperature and humidity measured 19.7 in. (50 cm) from the front of the air intake cooling vents.

2. For each 1000-foot (305 m) increase in altitude above 10,000 feet (up to a maximum of 15,000 feet), subtract 1.5°F (0.83°C) from the upper limit of the operating and recommended temperature ranges, as in the sketch below.
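This minimal sketch applies the altitude derating in note 2. It is illustrative only; the sea-level upper limit is an input because the temperature table itself is not reproduced here:

def derated_upper_limit_f(altitude_ft, sea_level_upper_limit_f):
    """Subtract 1.5 degrees F per 1000 ft above 10,000 ft, capped at 15,000 ft."""
    capped_altitude = min(altitude_ft, 15000.0)
    excess_kft = max(0.0, (capped_altitude - 10000.0) / 1000.0)
    return sea_level_upper_limit_f - 1.5 * excess_kft

# Example: at 12,000 ft the upper limit drops by 2 x 1.5 = 3.0 degrees F.
print(derated_upper_limit_f(12000, 95.0))  # 92.0 (95.0 is a hypothetical limit)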

Nonoperating Temperature, Humidity, and Altitude

• Temperature:

  • Up to 72-hour storage: -40°F to 150°F (-40°C to 66°C)

  • Up to 6-month storage: -20°F to 131°F (-29°C to 55°C)

  • Reasonable rate of change with noncondensing relative humidity during the transition from warm to cold

• Relative humidity: 10% to 80%, noncondensing

• Altitude: 0 to 40,000 feet (0 to 12,192 meters)

Cooling Airflow Direction

NOTE: Because the front door of the enclosure must be adequately ventilated to allow air to enter the enclosure and the rear door must be adequately ventilated to allow air to escape, do not block the ventilation apertures of a NonStop BladeSystem.

Each NonStop BladeSystem includes 10 Active Cool fans that provide high-volume, high-pressure airflow at even the slowest fan speeds. Airflow for each NonStop BladeSystem enters through a slot in the front of the c7000 enclosure and is pulled into the interconnect bays. Ducts allow the air to move from the front to the rear of the enclosure, where it is pulled into the interconnects and the center plenum. The air is then exhausted out the rear of the enclosure.

Blanking Panels

If the NonStop BladeSystem is not completely filled with components, the gaps between these components can cause adverse changes in the airflow, negatively impacting cooling within the rack. You must cover any gaps with blanking panels. In high-density environments, air gaps in the enclosure and between adjacent enclosures should be sealed to prevent recirculation of hot air from the rear of the enclosure to the front.

Typical Acoustic Noise Emissions

70 dB(A) (sound pressure level at operator position)

Tested Electrostatic Immunity

• Contact discharge: 8 kV

• Air discharge: 20 kV

Calculating Specifications for Enclosure Combinations

Power and thermal calculations assume that each enclosure in the cabinet is fully populated. The power and heat load is less when enclosures are not fully populated, such as a Fibre Channel disk module with fewer disk drives.

AC current calculations assume that one PDU delivers all power. In normal operation, the power is split equally between the two PDUs in the cabinet. However, calculate the power load assuming delivery from only one PDU so that the system can continue to operate if one of the two AC power sources or PDUs fails.

"Example of Cabinet Load Calculations" (page 52) lists the weight, power, and thermal calculations for a system with:

• One c7000 enclosure with 8 NonStop Server Blades

• Two IP or Storage CLIMs

• Two SAS disk enclosures

• One IOAM enclosure

• Two Fibre Channel disk modules

• One rack-mounted system console with keyboard/monitor units

• One maintenance switch

• One 42U high cabinet

For a total thermal load for a system with multiple cabinets, add the heat outputs for all the cabinets in the system.

Table 3-1 Example of Cabinet Load Calculations

1. Decrease the apparent power VA specification by 508 VA for each empty NonStop Server Blade slot. For example, a c7000 with only four NonStop Server Blades installed (four empty slots) would be rated 4400 VA minus (4 empty slots x 508 VA) = 2368 VA apparent power.

2. Decrease the BTU/hour specification by 1730 BTU/hour for each empty NonStop Server Blade slot. For example, a c7000 with only four NonStop Server Blades installed (four empty slots) would be rated 13700 BTU/hour minus (4 empty slots x 1730 BTU/hour) = 6780 BTU/hour.
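As a minimal sketch of the worksheet approach described above, the following sums per-enclosure apparent power and heat ratings for one cabinet. Only the fully populated c7000 figures (4400 VA, 13700 BTU/hour) appear in the surrounding text; the other entries are hypothetical placeholders to be filled in from the specification tables in this chapter:

def cabinet_totals(ratings):
    """Sum apparent power (VA) and heat (BTU/hour) for all enclosures in a cabinet.
    Per the text above, size for one PDU carrying the entire load."""
    total_va = sum(va for va, btu in ratings)
    total_btu = sum(btu for va, btu in ratings)
    return total_va, total_btu

ratings = [
    (4400, 13700),  # c7000, fully populated (values from notes 1 and 2 above)
    # (va, btu) pairs for CLIMs, SAS disk enclosures, the IOAM enclosure, and
    # so on go here, taken from the specification tables in this chapter.
]
print(cabinet_totals(ratings))  # (4400, 13700) with only the c7000 entered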


4 System Configuration Guidelines

This chapter provides configuration guidelines for a NonStop BladeSystem and includes these main topics:

• "Internal ServerNet Interconnect Cabling"

• "ServerNet Fabric and Supported Connections" (page 54)

• "NonStop BladeSystem Port Connections" (page 56)

NonStop BladeSystems use a flexible modular architecture. Therefore, various configurations of the system's modular components are possible within the configuration restrictions stated in this section and Chapter 5 (page 77).

Internal ServerNet Interconnect Cabling

This subsection includes:

• "Dedicated Service LAN Cables"

• "Length Restrictions for Optional Cables"

• "Cable Product IDs" (page 54)

Dedicated Service LAN Cables

The NonStop BladeSystem uses Category 5, unshielded twisted-pair Ethernet cables for the internal dedicated service LAN and for connections between the application LAN equipment and IP CLIM or IOAM enclosure.

Length Restrictions for Optional Cables

NOTE: For product IDs, see "Cable Types, Connectors, Lengths, and Product IDs" (page 93).

Maximum allowable lengths of optional cables connecting to components outside the modular cabinet are:

Although considerable cable length can exist between the modular enclosures in the system, HP recommends that the cable length between each of the enclosures be kept as short as possible.

Cable Product IDs

For product IDs, see "Cable Types, Connectors, Lengths, and Product IDs" (page 93).

ServerNet Fabric and Supported Connections

This subsection includes:

• "ServerNet Cluster Connections"

• "ServerNet Fabric Cross-Link Connections" (page 55)

• "Interconnections Between c7000 Enclosures" (page 55)

• "I/O Connections (Standard and High I/O ServerNet Switch Configurations)" (page 55)

• "Connections to IOAM Enclosures" (page 56)

• "Connections to CLIMs" (page 56)


The ServerNet X and Y fabrics for the NonStop BladeSystem are provided by the double-wide ServerNet switch in the c7000 enclosure. Each c7000 enclosure requires two ServerNet switches for fault tolerance, and each switch has four ServerNet connection groups:

• ServerNet Cluster Connections

• ServerNet Fabric Cross-Link Connections

• Interconnections between c7000 enclosures

• I/O Connections (Standard I/O and High I/O options)

The I/O connectivity to each of these groups is provided by one of two ServerNet switch options: either Standard I/O or High I/O.

ServerNet Cluster Connections

At J06.03, only standard ServerNet cluster connections via cluster switches are supported, using connections to both types of ServerNet-based cluster switches (6770 and 6780). There are two small form-factor pluggable (SFP) ports on each c7000 enclosure ServerNet switch: a single-mode fiber (SMF) port (port 12) and a multimode fiber (MMF) port (port 11) for the two ServerNet connection styles. Only one of these ports can be used at a time, and only one connection per fabric (from the appropriate ServerNet switch for that fabric in group 100) to the system's cluster fabric is supported.

ServerNet cluster connections on NonStop BladeSystems follow the ServerNet cluster and cable length rules and restrictions. For more information, see these manuals:

• ServerNet Cluster Supplement for NonStop BladeSystems

• For 6770 switches and star topologies: ServerNet Cluster Manual

• For 6780 switches and the layered topology: ServerNet Cluster 6780 Planning and Installation Guide


ServerNet Fabric Cross-Link Connections

A pair of small form-factor pluggable (SFP) modules with standard LC-Duplex connectors is provided for the ServerNet fabric cross-link connection. Connections are made to ports 9 and 10 (labeled X1 and X2) on the c7000 enclosure ServerNet switch.

Interconnections Between c7000 Enclosures

A single c7000 enclosure can contain eight NonStop Server Blades. Two c7000 enclosures are interconnected to create a 16-processor system. These interconnections are provided by two quad optic ports, ports 1 and 2 (labeled GA and GB), located on the c7000 enclosure ServerNet switches in interconnect bays 5 and 7. The GA port on the first c7000 enclosure is connected to the GA port on the second c7000 enclosure (same fabric), and likewise the GB port to the GB port. These connections provide eight ServerNet cross-links between the two sets of eight NonStop processors and the ServerNet routers on the c7000 enclosure ServerNet switch.

I/O Connections (Standard and High I/O ServerNet Switch Configurations)

There are two types of c7000 enclosure ServerNet switches: Standard I/O and High I/O. Each pair of ServerNet switches in a c7000 enclosure must be identical, either Standard I/O or High I/O. However, you can mix switch types between enclosures.

The main difference between the Standard I/O and High I/O switches is the number and type of quad optics modules that are installed for I/O connectivity.

The Standard I/O ServerNet switch has three quad optic modules, ports 3, 4, and 8 (labeled GC, EA, and EE), for a total of 12 ServerNet links, as shown in Figure 4-1:

Figure 4-1 ServerNet Switch Standard I/O Supported Connections

The High I/O ServerNet switch has six quad optic modules, ports 3, 4, 5, 6, 7, and 8 (labeled GC, EA, EB, EC, ED, and EE), for a total of 24 ServerNet links, as shown in Figure 4-2. If both c7000 enclosures in a 16-processor system contain High I/O ServerNet switches, there are a total of 48 ServerNet connections for I/O.

Figure 4-2 ServerNet Switch High I/O Supported Connections

Connections to IOAM Enclosures

The NonStop BladeSystem supports connections to IOAM enclosures. The IOAM enclosure requires 4-way ServerNet links. If you want four IOAMs connected to the first enclosure, only the ServerNet High I/O switch provides this number of connections, which are available on quad optic ports 4, 5, 6, and 7 (labeled EA, EB, EC, and ED), as illustrated in Figure 4-2.

The NonStop BladeSystem supports a maximum of six IOAMs in a 16-processor system. In a 16-processor system, the connection points are asymmetrical between the ServerNet switches: only ports EA and EC support connections to IOAM enclosures on the second ServerNet switch. With the Standard I/O ServerNet switch, only one IOAM module can be attached per c7000 enclosure. Additionally, if a Standard I/O ServerNet switch in the first c7000 enclosure is used for one IOAM enclosure, the second c7000 enclosure supports only one more IOAM enclosure, regardless of its ServerNet switch type (Standard I/O or High I/O).

Connections to CLIMs

The NonStop BladeSystem supports a maximum of 24 CLIM modules per system. A CLIM uses either one or two ServerNet connections to a fabric. The Storage CLIM typically uses two connections per fabric to achieve high disk performance. The IP CLIM typically uses one connection per ServerNet fabric. For I/O connections, a breakout cable is used on the back panel of the c7000 enclosure ServerNet switch to convert to standard LC-Duplex style connections.

NonStop BladeSystem Port Connections

This subsection includes:

• "Fibre Channel Ports to Fibre Channel Disk Modules"

• "Fibre Channel Ports to Fibre Tape Devices" (page 57)

• "SAS Ports to SAS Disk Enclosures" (page 57)

• "SAS Ports to SAS Tape Devices" (page 57)

Fibre Channel Ports to Fibre Channel Disk Modules

Fibre Channel disk modules (FCDMs) can only be connected to the FCSA in an IOAM enclosure. FCDMs are directly connected to the Fibre Channel ports on an IOAM enclosure with this exception:

Up to four FCDMs (or up to four daisy-chained configurations, each containing four FCDMs) can be connected to the FCSA ports on an IOAM enclosure in a NonStop BladeSystem.


Fibre Channel Ports to Fibre Tape Devices

Fibre Channel tape devices can be directly connected to the Fibre Channel ports on a Storage CLIM or an FCSA in an IOAM enclosure. With a Fibre Channel tape drive connected to the system, you can use the BACKUP and RESTORE utilities to save data to and restore data from tape.

SAS Ports to SAS Disk Enclosures

SAS disk enclosures can be connected directly to the two HBA SAS ports on a Storage CLIM with this exception:

Daisy-chain configurations are not supported.

SAS Ports to SAS Tape Devices

SAS tape devices have one SAS port that can be directly connected to the HBA SAS port on a Storage CLIM. Each SAS tape enclosure supports two tape drives. With a SAS tape drive connected to the system, you can use the BACKUP and RESTORE utilities to save data to and restore data from tape.

Storage CLIM Devices

This subsection includes:

??????Factory-Default Disk Volume Locations for SAS Disk Devices??? (page 58)

??????Configuration Restrictions for Storage CLIMs??? (page 58)

??????Configurations for Storage CLIM and SAS Disk Enclosures??? (page 58)

The NonStop BladeSystem uses the rack-mounted SAS disk enclosure; its SAS disk drives are controlled through the Storage CLIM. This illustration shows the ports on a Storage CLIM:

NOTE: Both the Storage and IP CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about the CIP subsystem, see the Cluster I/O Protocols Configuration and Management Manual.

This illustration shows the locations of the hardware in the SAS disk enclosure as well as the I/O modules on the rear of the enclosure for connecting to the Storage CLIM.


SAS disk enclosures connect to Storage CLIMs via SAS cables. For details on cable types, see "Cable Types, Connectors, Lengths, and Product IDs" (page 93).

Factory-Default Disk Volume Locations for SAS Disk Devices

This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate disk enclosures:

Configuration Restrictions for Storage CLIMs

The maximum number of logical unit numbers (LUNs) for each CLIM, including SAS disks, ESS LUNs, and tapes, is 512. Each primary, backup, mirror, and mirror backup path is counted in this maximum.

Use only the supported configurations as described below.

Configurations for Storage CLIM and SAS Disk Enclosures

These subsections show the supported configurations for SAS Disk enclosures with Storage CLIMs:

• "Two Storage CLIMs, Two SAS Disk Enclosures" (page 58)

• "Two Storage CLIMs, Four SAS Disk Enclosures" (page 59)

Two Storage CLIMs, Two SAS Disk Enclosures

This illustration shows example cable connections for the configuration of two Storage CLIMs and two SAS disk enclosures:


Figure 4-3 Two Storage CLIMs, Two SAS Disk Enclosure Configuration

This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of two Storage CLIMs and two SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and OSS are configured as mirrored SAS disk volumes:

* For an illustration of the factory-default slot locations for a SAS disk enclosure, see "Factory-Default Disk Volume Locations for SAS Disk Devices" (page 58).

Two Storage CLIMs, Four SAS Disk Enclosures

This illustration shows example cable connections for the configuration of two Storage CLIMs and four SAS disk enclosures:


Figure 4-4 Two Storage CLIMs, Four SAS Disk Enclosure Configuration

This table lists the Storage CLIM, LUN, and bay identification for the factory-default system disk locations in the configuration of two Storage CLIMs and four SAS disk enclosures. In this case, $SYSTEM, $DSMSCM, $AUDIT, and OSS are configured as mirrored SAS disk volumes:

Fibre Channel Devices

This subsection describes Fibre Channel devices and covers these topics:

• "Factory-Default Disk Volume Locations for FCDMs" (page 61)

• "Configurations for Fibre Channel Devices" (page 62)

• "Configuration Restrictions for Fibre Channel Devices" (page 62)

• "Recommendations for Fibre Channel Device Configuration" (page 62)

• "Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module" (page 63)

The rack-mounted Fibre Channel disk module (FCDM) can only be used with NonStop BladeSystems that have IOAM enclosures. An FCDM and its disk drives are controlled through the Fibre Channel ServerNet adapter (FCSA). For more information on the FCSA, see the Fibre-Channel ServerNet Adapter Installation and Support Guide. For more information on the Fibre Channel disk module (FCDM), see "Fibre Channel Disk Module (FCDM)" (page 20). For examples of cable connections between FCSAs and FCDMs, see "Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module" (page 63).


This illustration shows an FCSA with indicators and ports:

This illustration shows the locations of the hardware in the Fibre Channel disk module as well as the Fibre Channel port connectors at the back of the enclosure:

Fibre Channel disk modules connect to Fibre Channel ServerNet adapters (FCSAs) via Fibre Channel arbitrated loop (FC-AL) cables. This drawing shows the two Fibre Channel arbitrated loops implemented within the Fibre Channel disk module:

Factory-Default Disk Volume Locations for FCDMs

This illustration shows where the factory-default locations for the primary and mirror system disk volumes reside in separate Fibre Channel disk modules:

FCSA location and cable connections vary according to the various controller and Fibre Channel disk module combinations.

Configurations for Fibre Channel Devices

Storage subsystems in NonStop S-series systems used a fixed hardware layout. Each enclosure could have up to four controllers for storage devices and up to 16 internal disk drives. The controllers and disk drives always had a fixed logical location, with standardized location IDs of group-module-slot. Only the group number changed, as determined by the enclosure position in the ServerNet topology.

However, NonStop BladeSystems have no fixed boundaries for the Fibre Channel hardware layout. Up to 60 FCSAs (120 ServerNet-addressable controllers) and 240 Fibre Channel disk enclosures can be configured, with identification depending on the ServerNet connection of the IOAM and the slot housing the FCSA.

Configuration Restrictions for Fibre Channel Devices

These configuration restrictions apply and are enforced by the Subsystem Control Facility (SCF):

• Primary and mirror disk drives cannot connect to the same Fibre Channel loop. Loss of the Fibre Channel loop makes both the primary volume and the mirrored volume inaccessible. This configuration inhibits fault tolerance.

  Disk drives in different Fibre Channel disk modules on a daisy chain connect to the same Fibre Channel loop.

• The primary path and backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.

• The mirror path and mirror backup Fibre Channel communication links to a disk drive should not connect to FCSAs in the same module of an IOAM enclosure. In a fully populated system, loss of one FCSA can make up to 56 disk drives inaccessible on a single Fibre Channel communications path. This configuration is allowed, but only if you override an SCF warning message.

Recommendations for Fibre Channel Device Configuration

These recommendations apply to FCSA and Fibre Channel disk module configurations:

• Primary Fibre Channel disk module connects to the FCSA F-SAC 1.

• Mirror Fibre Channel disk module connects to the FCSA F-SAC 2.

• FC-AL port A1 is the incoming port from an FCSA or from another Fibre Channel disk module.

• FC-AL port A2 is the outbound port to another Fibre Channel disk module.

• FC-AL port B2 is the incoming port from an FCSA or from a Fibre Channel disk module.


• FC-AL port B1 is the outbound port to another Fibre Channel disk module.

• In a daisy-chain configuration, the ID expander harness determines the enclosure number. Enclosure 1 is always at the bottom of the chain.

• FCSAs can be installed in slots 1 through 5 in an IOAM.

• G4SAs can be installed in slots 1 through 5 in an IOAM.

• In systems with two or more cabinets, primary and mirror Fibre Channel disk modules reside in separate cabinets to prevent application or system outage if a power outage affects one cabinet.

• With primary and mirror Fibre Channel disk modules in the same cabinet, the primary Fibre Channel disk module resides in a lower U than the mirror Fibre Channel disk module.

• Fibre Channel disk drives are configured with dual paths.

• Where possible, FCSAs and Fibre Channel disk modules are configured with four FCSAs and four Fibre Channel disk modules for maximum fault tolerance. If FCSAs are not in groups of four, the remaining FCSAs and Fibre Channel disk modules can be configured in other fault-tolerant configurations, such as two FCSAs and two Fibre Channel disk modules or four FCSAs and three Fibre Channel disk modules.

• In systems with one IOAM enclosure:

  • With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in module 2 of the IOAM enclosure, and the backup FCSA resides in module 3. (See the example configuration in "Two FCSAs, Two FCDMs, One IOAM Enclosure" (page 64).)

  • With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in module 2 of the IOAM enclosure, and FCSA 3 and FCSA 4 reside in module 3. (See the example configuration in "Four FCSAs, Four FCDMs, One IOAM Enclosure" (page 64).)

• In systems with two or more IOAM enclosures:

  • With two FCSAs and two Fibre Channel disk modules, the primary FCSA resides in IOAM enclosure 1, and the backup FCSA resides in IOAM enclosure 2. (See the example configuration in "Two FCSAs, Two FCDMs, Two IOAM Enclosures" (page 65).)

  • With four FCSAs and four Fibre Channel disk modules, FCSA 1 and FCSA 2 reside in IOAM enclosure 1, and FCSA 3 and FCSA 4 reside in IOAM enclosure 2. (See the example configuration in "Four FCSAs, Four FCDMs, Two IOAM Enclosures" (page 66).)

• Daisy-chain configurations follow the same configuration restrictions and rules that apply to configurations that are not daisy-chained. (See "Daisy-Chain Configurations" (page 67).)

• Fibre Channel disk modules containing mirrored volumes must be installed in separate daisy chains.

• Daisy-chained configurations require that all Fibre Channel disk modules reside in the same cabinet and be physically grouped together.

• Daisy-chain configurations require an ID expander harness with terminators for proper Fibre Channel disk module and disk drive identification.

• If three Fibre Channel disk modules remain after all possible groups of four FCSAs and four Fibre Channel disk modules have been connected, connect those three Fibre Channel disk modules to the four remaining FCSAs. (See the example configuration in "Four FCSAs, Three FCDMs, One IOAM Enclosure" (page 69).)

Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module

These subsections show various example configurations of FCSA controllers and Fibre Channel disk modules with IOAM enclosures.

NOTE: Although it is not a requirement for fault tolerance to house the primary and mirror disk drives in separate FCDMs, the example configurations show FCDMs housing only primary or mirror drives, mainly for simplicity in keeping track of the physical locations of the drives.

• "Two FCSAs, Two FCDMs, One IOAM Enclosure"

• "Four FCSAs, Four FCDMs, One IOAM Enclosure"

• "Two FCSAs, Two FCDMs, Two IOAM Enclosures" (page 65)

• "Four FCSAs, Four FCDMs, Two IOAM Enclosures" (page 66)

• "Daisy-Chain Configurations" (page 67)

• "Four FCSAs, Three FCDMs, One IOAM Enclosure" (page 69)

Two FCSAs, Two FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the two FCSAs and the primary and mirror Fibre Channel disk modules:

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of two FCSAs, two Fibre Channel disk modules, and one IOAM enclosure:

* For an illustration of the factory-default slot locations for a Fibre Channel disk module, see "Factory-Default Disk Volume Locations for FCDMs" (page 61).

Four FCSAs, Four FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the four FCSAs and the two sets of primary and mirror Fibre Channel disk modules:


This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of four FCSAs, four Fibre Channel disk modules, and one IOAM enclosure:

1. For an illustration of the factory-default slot locations for a Fibre Channel disk module, see "Factory-Default Disk Volume Locations for FCDMs" (page 61).

Two FCSAs, Two FCDMs, Two IOAM Enclosures

This illustration shows example cable connections between the two FCSAs split between two IOAM enclosures and one set of primary and mirror Fibre Channel disk modules:

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of two FCSAs, two Fibre Channel disk modules, and two IOAM enclosures:

1. For an illustration of the factory-default slot locations for a Fibre Channel disk module, see "Factory-Default Disk Volume Locations for FCDMs" (page 61).

Four FCSAs, Four FCDMs, Two IOAM Enclosures

This illustration shows example cable connections between the four FCSAs split between two IOAM enclosures and two sets of primary and mirror Fibre Channel disk modules:


This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in the configuration of four FCSAs, four Fibre Channel disk modules, and two IOAM enclosures:

* For an illustration of the factory-default slot locations for a Fibre Channel disk module, see "Factory-Default Disk Volume Locations for FCDMs" (page 61).

Daisy-Chain Configurations

When planning for possible use of daisy-chained disks, consider:

1. See "Fibre Channel Devices" (page 60).

This illustration shows an example of cable connections between the two FCSAs and four Fibre Channel disk modules in a single daisy-chain configuration:

A second equivalent configuration, including an IOAM enclosure, two FCSAs, and four Fibre Channel disk modules with an ID expander, is required for fault-tolerant mirrored disk storage. Installing each mirrored disk in the same corresponding FCDM and bay number as its primary disk is not required, but it is recommended to simplify the physical management and identification of the disks.

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default system disk locations in a daisy-chained configuration:


* For an illustration of the factory-default slot locations for a Fibre Channel disk module, see "Factory-Default Disk Volume Locations for FCDMs" (page 61).

Four FCSAs, Three FCDMs, One IOAM Enclosure

This illustration shows example cable connections between the four FCSAs and three Fibre Channel disk modules with the primary and mirror drives split within each Fibre Channel disk module:

This table lists the FCSA group-module-slot-port (GMSP) and disk group-module-shelf-bay (GMSB) identification for the factory-default disk volumes for the configuration of four FCSAs, three Fibre Channel disk modules, and one IOAM enclosure:

This illustration shows the factory-default locations for the configurations of four FCSAs and three Fibre Channel disk modules where the primary system file disk volumes are in Fibre Channel disk module 1:

This illustration shows the factory-default locations for the configurations of four FCSAs with three Fibre Channel disk modules where the mirror system file disk volumes are in Fibre Channel disk module 3:

Ethernet to Networks

Depending on your configuration, the Ethernet ports in an IP CLIM or a G4SA installed in an IOAM enclosure provide Gigabit connectivity between NonStop BladeSystems and Ethernet LANs. The Ethernet port is an end node on the ServerNet and uses either fiber-optic or copper cable for connectivity to user application LANs, as well as for the dedicated service LAN.

For information on the Ethernet ports on a G4SA installed in an IOAM enclosure, see the Gigabit Ethernet 4-Port Adapter (G4SA) Installation and Support Guide.

The IP CLIM has two types of Ethernet configurations: IP CLIM A and IP CLIM B.

This illustration shows the Ethernet ports and ServerNet fabric connections on an IP CLIM with the IP CLIM A configuration:


This illustration shows the Ethernet ports and ServerNet fabric connections on an IP CLIM with the IP CLIM B configuration:

Both the IP and Storage CLIMs use the Cluster I/O Protocols (CIP) subsystem. For more information about managing your CLIMs using the CIP subsystem, see the Cluster I/O Protocols Configuration and Management Manual.

Managing NonStop BladeSystem Resources

This subsection provides procedures and information for managing your NonStop BladeSystem resources and includes these topics:

• "Changing Customer Passwords"

• "Default Naming Conventions" (page 73)

• "Possible Values of Disk and Tape LUNs" (page 75)

Changing Customer Passwords

NonStop BladeSystems are shipped with default user names and default passwords for the Administrator for certain components and software. Once your system is set up, you should change these passwords to your own passwords.


Table 4-1 Default User Names and Passwords

Change the Onboard Administrator (OA) Password

To change the OA password:

1. Log in to the OA. (You can use the Launch OA URL action on the processor blade from the OSM Service Connection.)

2. Click the + (plus sign) in front of the enclosure information on the left.

3. Click the + (plus sign) in front of Users/Authentication.

4. Click Local Users; all users are displayed on the right side.

5. Select Administrator and click Edit.

6. Enter the new password, then confirm it. Click Update User.

7. Keep track of your OA password.

8. Change the password for each OA.

Change the CLIM iLO Password

To change the CLIM iLO password:

1. In OSM, right-click the CLIM and select Actions.

2. On the next screen, in the Available Actions drop-down window, select Invoke iLO and click Perform Action.

3. Select the Administration tab.

4. Select User Administration.

5. Select the Admin local user.

6. Select View/Modify.

7. Change the password.

8. Click Save User Information.

9. Keep track of your CLIM iLO password.

10. Change the iLO password for each CLIM.

Change the Maintenance Interface (Eth0) Password

To change the maintenance interface (eth0) password:


1. From the NonStop host system, enter the climcmd command to change the password:

   climcmd {clim-name | ip-address | host-name} passwd

   You are prompted for the new password twice. For example:

   $SYSTEM STARTUP 3> climcmd c1002531 passwd
   comForte SSH client version T9999H06_11Feb2008_comForte_SSH_0078
   Enter new UNIX password: hpnonstop
   Retype new UNIX password: hpnonstop
   passwd: password updated successfully
   Termination Info: 0

2. Change the maintenance interface (eth0) password for each CLIM.

The user name and password for the eth0:0 maintenance provider are the standard NonStop host system ones, for example, super.super, and so on. Other than standard procedures for setting up NonStop host system user names and passwords, nothing further is required for the eth0:0 maintenance provider passwords.

Change the NonStop Server Blade MP (iLO) Password

To change the NonStop Server Blade MP (iLO) password:

1. Log in to the iLO. (You can use the Launch iLO URL action on the processor blade from the OSM Service Connection.)

2. Select the Administration tab.

3. Click Local Accounts in the left side window.

4. Select the user on the right-hand side and click the Add/Edit button below.

5. On the new page, enter the new password in the password and confirmation fields, and click Submit.

6. Keep track of your NonStop Server Blade MP (iLO) password.

7. Change the password for each NonStop Server Blade MP.

Change the Remote Desktop Password

You must change the Remote Desktop Administrator's password to enable connections to the NonStop system console. To change the password for the Administrator account (to which you are logged on):

1. Press Ctrl+Alt+Del; the Windows Security dialog appears.

2. Click Change Password.

3. In the Change Password window:

   a. Enter the old password.

   b. Enter the new password.

   c. Click OK.

Default Naming Conventions

The NonStop BladeSystem implements default naming conventions in the same manner as Integrity NonStop NS-series systems.

With a few exceptions, default naming conventions are not necessary for the modular resources that make up a NonStop BladeSystem. In most cases, users can name their resources at will and use the appropriate management applications and tools to find the location of the resource.

However, default naming conventions for certain resources simplify creation of the initial configuration files and automatic generation of the names of the modular resources.


Preconfigured default resource names are:


Possible Values of Disk and Tape LUNs

The possible values of disk and tape LUN numbers depend on the type of the resource.

• For a SAS disk, the LUN number is calculated as base LUN + offset.

  base LUN is the base LUN number for the SAS enclosure. Its value can be 100, 200, 300, 400, 500, 600, 700, 800, or 900, and should be numbered sequentially for each of the SAS enclosures attached to the same CLIM.

  offset is the bay (slot) number of the disk in the SAS enclosure.

• For an ESS disk, the LUN number is calculated as base LUN + offset.

  base LUN is the base LUN number for the ESS port. Its value can be 1000, 1500, 2000, 2500, 3000, 3500, 4000, or 4500, and should be numbered sequentially for each of the ESS ports attached to the same CLIM.

  offset is the LUN number of the ESS LUN.

• For a physical Fibre Channel tape, the value of the LUN number can be 1, 2, 3, 4, 5, 6, 7, 8, or 9, and should be numbered sequentially for each of the physical tapes attached to the same CLIM.

• For a VTS tape, the LUN number is calculated as base LUN + offset.

  base LUN is the base LUN number for the VTS port. Its value can be 5000, 5010, 5020, 5030, 5040, 5050, 5060, 5070, 5080, or 5090, and should be numbered sequentially for each of the VTS ports attached to the same CLIM.

  offset is the LUN number of the VTS LUN.
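These rules are simple arithmetic; the following minimal sketch (illustrative only, not an HP tool) computes SAS and ESS LUN numbers from the base-LUN series given above:

def sas_disk_lun(enclosure_number, bay):
    """SAS base LUNs run 100, 200, ... 900, assigned sequentially per SAS
    enclosure on the same CLIM; the offset is the disk's bay (slot) number."""
    if not 1 <= enclosure_number <= 9:
        raise ValueError("base LUN series covers enclosures 1 through 9")
    return 100 * enclosure_number + bay

def ess_disk_lun(port_number, ess_lun):
    """ESS base LUNs run 1000, 1500, ... 4500, assigned sequentially per ESS
    port on the same CLIM; the offset is the ESS LUN number."""
    if not 1 <= port_number <= 8:
        raise ValueError("base LUN series covers ports 1 through 8")
    return 1000 + 500 * (port_number - 1) + ess_lun

print(sas_disk_lun(1, 5))   # first SAS enclosure, bay 5  -> 105
print(ess_disk_lun(2, 12))  # second ESS port, ESS LUN 12 -> 1512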



5 Hardware Configuration in Modular Cabinets

This chapter shows locations of hardware components within the 42U modular cabinet for a NonStop BladeSystem. A number of physical configurations are possible because of the flexibility inherent to the NonStop Multicore Architecture and ServerNet network.

NOTE: Hardware configuration drawings in this chapter represent the physical arrangement of the modular enclosures but do not show PDUs. For information about PDUs, see "Power Distribution Units (PDUs)" (page 42).

Maximum Number of Modular Components

This table shows the maximum number of each modular component that can be installed in a NonStop BladeSystem. These values might not reflect the system you are planning and are provided only as an example, not as exact values.

1. The IOAM maximum requires ServerNet High I/O switches.

2. The CLIM maximum requires ServerNet High I/O switches.

Enclosure Locations in Cabinets

This table provides details about the location of NonStop BladeSystem enclosures and components within a cabinet. The enclosure location refers to the U location on the rack where the lower edge of the enclosure resides, such as the bottom of a system console at 20U.


Typical Configuration

Figure 5-1 (page 79) shows the U locations in the 42U modular cabinet of some of the hardware components that can be installed in the modular cabinet.


Figure 5-1 42U Configuration

These options can be installed in locations marked Configurable Space in the configuration drawings:

• Maintenance switch: 1U required, preferably at the top of the cabinet when there is no UPS or at the bottom of the cabinet when a UPS is present.

• Console: 2U required, with recommended installation at cabinet offset U20 when there is no UPS or U21 when a UPS is present.

• Fibre Channel disk module: 3U required.

A second cabinet is required when:

• A second c7000 enclosure is needed for additional NonStop Server Blades or other components.

• Additional SAS disk enclosures and FCDMs are needed for storage, but space does not exist in the cabinet.

• Space for optional components exceeds the capacity of the cabinet.


6 Maintenance and Support Connectivity

Local monitoring and maintenance of the NonStop BladeSystem occurs over the dedicated service LAN. The dedicated service LAN provides connectivity between the system console and the maintenance infrastructure in the system hardware. Remote support is provided by OSM, which runs on the system console and communicates over the HP Instant Support Enterprise Edition infrastructure or an alternative remote access solution.

Only components specified by HP can be connected to the dedicated LAN. No other access to the LAN is permitted.

The dedicated service LAN uses a ProCurve 2524 Ethernet switch for connectivity between the c7000 enclosure, CLIMs, IOAM enclosures, and the system console.

The HP ISEE call-out and call-in access is provided by the hpVPN Cisco 831 router, which connects to the customer's internet access. Alternatively, call-out and call-in access is provided by a modem.

NOTE: Your account representative must place a separate order for the ISEE VPN router with the assistance of the ISEE team.

An important part of the system maintenance architecture, the system console is a personal computer (PC) purchased from HP to run maintenance and diagnostic software for NonStop BladeSystems. Through the system console, you can:

• Monitor system health and perform maintenance operations using the HP NonStop Open System Management (OSM) interface

• View manuals and service procedures

• Run HP Tandem Advanced Command Language (TACL) sessions using terminal-emulation software

• Install and manage system software using the Distributed Systems Management/Software Configuration Manager (DSM/SCM)

• Make remote requests to and receive responses from a system using remote operation software

Dedicated Service LAN

A NonStop BladeSystem requires a dedicated LAN for system maintenance through OSM. Only components specified by HP can be connected to a dedicated LAN. No other access to the LAN is permitted.

This subsection includes:

• "Basic LAN Configuration"

• "Fault-Tolerant LAN Configuration" (page 83)

• "IP Addresses" (page 84)

• "Ethernet Cables" (page 88)

• "SWAN Concentrator Restrictions" (page 88)

• "Dedicated Service LAN Links Using G4SAs" (page 88)

• "Dedicated Service LAN Links Using IP CLIMs" (page 89)

• "Initial Configuration for a Dedicated Service LAN" (page 89)

Basic LAN Configuration

A basic dedicated service LAN that does not provide a fault-tolerant configuration requires connection of these components to the ProCurve 2524 maintenance switch installed in the modular cabinet, as shown in Figure 6-1:


• One connection for each system console running OSM

• One connection to each of the two Onboard Administrators (OAs) in each c7000 enclosure

• One connection to each of the two Interconnect Ethernet switches in each c7000 enclosure

• One connection to the maintenance interface (eth0) for each IP and Storage CLIM

• One connection to the iLO interface for each IP CLIM and Storage CLIM

• One connection to each of the ServerNet switch boards in each IOAM enclosure, and optionally, two connections to two G4SAs in the system (if the NonStop maintenance LAN is implemented using G4SAs)

• UPS (optional) for power-fail monitoring

Figure 6-1 Example of a Basic LAN Configuration With One Maintenance Switch


Fault-Tolerant LAN Configuration

HP recommends that you use a fault-tolerant LAN configuration. A fault-tolerant configuration includes these connections to two maintenance switches, as shown in Figure 6-2 (page 84):

• A system console to each maintenance switch

• One connection from one Onboard Administrator (OA) in the c7000 enclosure to one maintenance switch, and another connection from the other Onboard Administrator to the second maintenance switch

• One connection from one Interconnect Ethernet switch in the c7000 enclosure to one maintenance switch, and another connection from the other Interconnect Ethernet switch to the second maintenance switch

• For every CLIM pair, connect the iLO and eth0 ports of the primary CLIM to one maintenance switch, and the iLO and eth0 ports of the backup CLIM to the second maintenance switch:

  For IP CLIMs, the primary and backup CLIMs are defined based on the CLIM-to-CLIM failover configuration.

  For Storage CLIMs, the primary and backup CLIMs are defined based on the disk path configuration.

• A Storage CLIM to one maintenance switch and another Storage CLIM to the other maintenance switch

• One of the two IOAM enclosure ServerNet switch boards to each maintenance switch (optional)

• If CLIMs are used to configure the maintenance LAN, connect the CLIM that configures $ZTCP0 to one maintenance switch, and connect the other CLIM that configures $ZTCP1 to the second maintenance switch

• If G4SAs are used to configure the maintenance LAN, connect the G4SA that configures $ZTCP0 to one maintenance switch, and connect the other G4SA that configures $ZTCP1 to the second maintenance switch


Figure 6-2 Example of a Fault-Tolerant LAN Configuration With Two Maintenance Switches

IP Addresses

NonStop BladeSystems require Internet protocol (IP) addresses for these components that are connected to the dedicated service LAN:

• c7000 enclosure ServerNet switches

• IOAM enclosure ServerNet switch boards

• Maintenance switches

• System consoles

• OSM Service Connection

• UPS (optional)


NOTE: Factory-default IP addresses for G4SAs are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual.

These components have default IP addresses that are preconfigured at the factory. You can change these preconfigured IP addresses to addresses appropriate for your LAN environment:




Ethernet Cables

Ethernet connections for a dedicated service LAN require Category 5 unshielded twisted-pair (UTP) cables. For supported cables, see Appendix A (page 93).

SWAN Concentrator Restrictions

• Isolate any ServerNet wide area networks (SWANs) on the system. The system must be equipped with at least two LANs: one LAN for SWAN concentrators and one for the dedicated service LAN.

• Most SWAN concentrators are configured redundantly using two or more subnets. Those subnets must also be isolated from the dedicated service LAN.

• Do not connect SWANs on a subnet containing a DHCP server.

Dedicated Service LAN Links Using G4SAs

You can implement system-up service LAN connectivity using G4SAs or IP CLIMs. The values in this table show the identification for G4SAs in slot 5 of both modules of an IOAM enclosure, connected to the maintenance switch:


NOTE: For a fault-tolerant dedicated service LAN, two G4SAs are required, with each G4SA connected to a separate maintenance switch. These G4SAs can reside in modules 2 and 3 of the same IOAM enclosure or in module 2 of one IOAM enclosure and module 3 of a second IOAM enclosure. When the G4SA provides connection to the dedicated service LAN, use the slower 10/100 Mbps PIF A rather than one of the high-speed 1000 Mbps Ethernet ports of PIF C or D.

Dedicated Service LAN Links Using IP CLIMs

You can implement system-up service LAN connectivity using IP CLIMs if the system has at least two IP CLIMs. The values in this table show the identification for the CLIMs in a NonStop BladeSystem connected to the maintenance switch. In this table, a CLIM named C1002581 is connected to the first fiber and eighth port of the ServerNet switch in group 100, module 2, interconnect bay 5 of a c7000 enclosure:

NOTE: For a fault-tolerant dedicated service LAN, two IP CLIMs are required, with each IP CLIM connected to a separate maintenance switch.

Initial Configuration for a Dedicated Service LAN

New systems are shipped with an initial set of IP addresses configured. For a listing of these initial IP addresses, see "IP Addresses" (page 84).

Factory-default IP addresses for the G4SAs are in the LAN Configuration and Management Manual. IP addresses for SWAN concentrators are in the WAN Subsystem Configuration and Management Manual.

HP recommends that you change these preconfigured IP addresses to addresses appropriate for your LAN environment. You must change the preconfigured IP addresses on:

• A backup system console if you want to connect it to a dedicated service LAN that already includes a primary system console or other system console

• Any system console if you want to connect it to a dedicated service LAN that already includes a primary system console

Keep track of all the IP addresses in your system so that no IP address is assigned twice.

System Consoles

New system consoles are preconfigured with the required HP and third-party software. When upgrading to the latest RVU, you can install software upgrades from the HP NonStop System Console Installer DVD image.

Some system console hardware, including the PC system unit, monitor, and keyboard, can be mounted in the cabinet. Other PCs are installed outside the cabinet and require separate provisions or furniture to hold the PC hardware.

System consoles communicate with NonStop BladeSystems over a dedicated service local area network (LAN) or a secure operations LAN. A dedicated service LAN is required for use of OSM Low-Level Link and Notification Director functionality, which includes configuring primary and backup dial-out points (referred to as the primary and backup system consoles, respectively). HP recommends that you also configure the backup dedicated service LAN with a backup system console.

System Console Configurations

Several system console configurations are possible:

• "One System Console Managing One System (Setup Configuration)"

• "Primary and Backup System Consoles Managing One System"

• "Multiple System Consoles Managing One System" (page 91)

• "Managing Multiple Systems Using One or Two System Consoles" (page 91)

• "Cascading Ethernet Switch or Hub Configuration" (page 91)

One System Console Managing One System (Setup Configuration)

The one system console on the LAN must be configured as the primary system console. This configuration can be called the setup configuration and is used during initial setup and installation of the system console and the server.

The setup configuration is an example of a secure, stand-alone network as shown in Figure 6-1 (page 82). A LAN cable connects the primary system console to the maintenance switch, and additional LAN cables connect the switches and Ethernet ports. The maintenance switch or an optional second maintenance switch allows you to later add a backup system console and additional system consoles.

NOTE: Because the system console and maintenance switch are single points of failure that could disrupt access to OSM, this configuration is not recommended for operations that require high availability or fault tolerance.

When you use this configuration, you do not need to change the preconfigured IP addresses.

Primary and Backup System Consoles Managing One System

This configuration is recommended. It is similar to the setup configuration, but for fault-tolerant redundancy, it includes a second maintenance switch, a backup system console, and a second modem (if a modem-based remote solution is used). The maintenance switches provide a dedicated LAN in which all systems use the same subnet. Figure 6-2 (page 84) shows a fault-tolerant configuration without modems.

NOTE: A subnet is a network division within the TCP/IP model. Within a given network, each subnet is treated as a separate network. Outside that network, the subnets appear as part of a single network. The terms subnet and subnetwork are used interchangeably.

If a remote maintenance LAN connection is required, use the second network interface card (NIC) in the NonStop system console to connect to the operations LAN, and access the other devices in the maintenance LAN using Remote Desktop via the console.

Because this configuration uses only one subnet, you must:

• Enable Spanning Tree Protocol (STP) in switches or routers that are part of the operations LAN.


NOTE: Do not perform the next two bulleted items if your backup system console is shipped with a new NonStop BladeSystem. In this case, HP has already configured these items for you.

• Change the preconfigured DHCP configuration of the backup system console before you add it to the LAN.

• Change the preconfigured IP address of the backup system console before you add it to the LAN.

CAUTION: Networks with more than one path between any two systems can cause loops that result in message duplication and broadcast storms that can bring down the network. If a second connection is used, refer to the documentation for the ProCurve 2524 maintenance switch and enable STP in the maintenance switches. In networks with two or more physical paths between two systems, STP keeps only one path active at any given moment and blocks all other redundant paths.

Multiple System Consoles Managing One System

Two maintenance switches provide fault tolerance and extra ports for adding system consoles. You must change the preconfigured IP addresses of the second and subsequent system consoles before you can add them to the LAN. Only two system consoles should run the DHCP, DNS, BOOTP, FTP, and TFTP servers. These services should not be running on other consoles in the same maintenance LAN.

Managing Multiple Systems Using One or Two System Consoles

If you want to manage more than one system from a console (or from a fault-tolerant pair of consoles), you can daisy chain the maintenance switches together. This configuration requires an IP address scheme to support it. Contact your HP service provider to design this configuration.

Cascading Ethernet Switch or Hub Configuration

Additional Ethernet switches or hubs can be connected (cascaded) to the maintenance switches already installed. Primary and backup system consoles and the server must be on the same subnet.

You must change the preconfigured IP addresses of the second and subsequent system consoles before you can add them to the LAN.


A Cables

Cable Types, Connectors, Lengths, and Product IDs

Available cables and their lengths are:

NOTE: ServerNet cluster connections on NonStop BladeSystems follow the ServerNet cluster and cable length rules and restrictions. For more information, see these manuals:

• ServerNet Cluster Supplement for NonStop BladeSystems

• For 6770 switches and star topologies: ServerNet Cluster Manual

• For 6780 switches and the layered topology: ServerNet Cluster 6780 Planning and Installation Guide

Cable Length Restrictions

Maximum allowable lengths of cables connecting the modular system components are:

Although a considerable distance can exist between the modular enclosures in the system, HP recommends placing all cabinets adjacent to each other and bolting them together, keeping the cable length between each of the enclosures as short as possible.


B Operations and Management Using OSM Applications

OSM client-based components are installed on new system console shipments and also delivered by an OSM installer on the HP NonStop System Console (NSC) Installer DVD image. The NSC DVD image also delivers all other client software required for managing and servicing NonStop servers. For installation instructions, see the NonStop System Console Installer Guide.

OSM server-based components are incorporated in a single OSM server-based SPR, T0682 (OSM Service Connection Suite), that is installed on NonStop BladeSystems running the HP NonStop operating system.

For information on how to install, configure, and start OSM server-based processes and components, see the OSM Migration and Configuration Guide. The OSM components are:

System-Down OSM Low-Level Link

In NonStop BladeSystems, the maintenance entity (ME) in the c7000 ServerNet switch or IOAM enclosures provides dedicated service LAN services via the OSM Low-Level Link for OS coldload, system management, and hardware configuration when hardware is powered up but the OS is not running.

AC Power Monitoring

NonStop BladeSystems require one of the following to support system operation through power transients or an orderly shutdown of I/O operations and processors during a power failure:

• The optional, HP-supported model R12000/3 UPS (with one to four ERMs for additional battery power)
• A user-supplied UPS installed in each modular cabinet
• A user-supplied site UPS


If the HP R12000/3 UPS is installed, it is connected to the system's dedicated service LAN via the maintenance switch, where OSM monitors its power state (AC on or AC off). For OSM to provide AC power-fail support, the UPS must be installed, connected to the dedicated service LAN via the maintenance switch, and configured as described in the NonStop BladeSystems Hardware Installation Manual.

Then, you must perform these actions in the OSM Service Connection:

• Configure a Power Source as AC, located under Enclosure 100, to configure the power rail (either A or B) connected to AC power.
• Configure a Power Source as UPS, located under Enclosure 100, to configure the power rail (either A or B) connected to the UPS. While performing this action, you must enter the IP address of the UPS.
• (Optional but recommended) Verify Power Fail Configuration, located under the system object, to verify that power-fail support is properly configured for the NonStop BladeSystem.

If a power outage occurs, OSM starts a ride-through timer and outputs an EMS notification that the system is running on UPS batteries. The ride-through timer lets the system continue operating for a short period in case the outage is only a momentary transient, and the ERMs installed in each cabinet can extend the battery-supported runtime.

Use SCF to configure the system ride-through time so that an orderly shutdown executes before the UPS batteries are depleted. The time available for battery support depends on the charge in the batteries and the power that the system draws.
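As a rough, hypothetical illustration of sizing the ride-through time (the capacity, load, and shutdown figures below are invented assumptions, not R12000/3 specifications):

# Hypothetical sketch: estimate battery-supported runtime and a safe
# ride-through time. All values are illustrative assumptions.
battery_capacity_wh = 800.0   # assumed usable UPS battery energy (Wh)
erm_capacity_wh = 800.0       # assumed usable energy per ERM (Wh)
num_erms = 2                  # ERMs installed in the cabinet
system_load_w = 6000.0        # assumed steady-state system draw (W)

total_wh = battery_capacity_wh + num_erms * erm_capacity_wh
runtime_min = total_wh / system_load_w * 60.0

# Reserve time for an orderly shutdown of I/O operations and
# processors; the remaining margin bounds the ride-through time.
shutdown_min = 5.0
ride_through_min = max(runtime_min - shutdown_min, 0.0)

print(f"Estimated runtime on batteries: {runtime_min:.1f} min")
print(f"Upper bound for ride-through:   {ride_through_min:.1f} min")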

Additionally, if the site's air conditioning shuts down during a power failure, shut the system down before internal air temperatures rise to the point that triggers a thermal shutdown. A timely, orderly shutdown prevents an uncontrolled and asymmetric loss of system resources from depleted UPS batteries or thermal shutdown.

If a user-supplied rack-mounted UPS or a site UPS is used rather than the HP-supported model R12000/3 UPS, the system is not notified of the power outage. The user is responsible for detecting power transients and outages and for developing the appropriate actions, which might include a ride-through time based on the capacity of the site UPS and the power demands made on it.
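Because no notification reaches the system in this case, the detection and shutdown logic must be supplied by the user. A minimal sketch of such logic, assuming a hypothetical ups_on_battery() status probe and orderly_shutdown() hook (neither is a NonStop or OSM API):

import time

RIDE_THROUGH_SECONDS = 120  # chosen from the site UPS capacity and load

def monitor_power(ups_on_battery, orderly_shutdown, poll_seconds=5):
    """Poll a user-supplied UPS status probe; if an outage outlasts
    the ride-through window, start an orderly shutdown."""
    outage_started = None
    while True:
        if ups_on_battery():
            if outage_started is None:
                outage_started = time.monotonic()
            elif time.monotonic() - outage_started >= RIDE_THROUGH_SECONDS:
                orderly_shutdown()
                return
        else:
            outage_started = None  # power restored; reset the timer
        time.sleep(poll_seconds)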

The R12000/3 UPS and ERM installed in modular cabinets do not support any devices that are external to the cabinets. External devices can include tape drives, external disk drives, LAN routers, and SWAN concentrators. Any external peripheral devices that do not have UPS support will fail immediately at the onset of a power failure. Plan for UPS support of any external peripheral devices that must remain operational as system resources. This support can come from a site UPS or individual units as necessary.

This information relates to handling power failures:

• For ride-through time, see the SCF Reference Manual for the Kernel Subsystem.
• For the TACL SETTIME command, see the TACL Reference Manual.
• To set system time programmatically, see the Guardian Procedure Calls Reference Manual.


AC Power-Fail States

These states occur during a power failure when an optional HP model R12000/3 UPS is installed in each cabinet within the system:



C Default Startup Characteristics

Each NonStop BladeSystem ships with these default startup characteristics:

• $SYSTEM disks residing in either SAS disk enclosures or FCDM enclosures:

SAS Disk Enclosures

• Systems with only two or three Storage CLIMs and two SAS disk enclosures, with the disks in these locations:

• Systems with at least four Storage CLIMs and two SAS disk enclosures, with the disks in these locations:

FCDM Enclosures

• Systems with one IOAM enclosure, two FCDMs, and two FCSAs, with the disks in these locations:

• Systems with two IOAM enclosures, two FCDMs, and two FCSAs, with the disks in these locations:


• Systems with one IOAM enclosure, two FCDMs, and four FCSAs, with the disks in these locations:

• Systems with two IOAM enclosures, two FCDMs, and four FCSAs, with the disks in these locations:

• Configured system load paths

• Enabled command interpreter input (CIIN) function

If the automatic system load is not successful, additional load paths are available to the boot task. If a load attempt over one path fails, the system load task tries another path and keeps trying until the load succeeds or all possible paths have been used. These 16 paths are available for loading and are listed in the order of their use by the system load task:


The command interpreter input (CIIN) file is automatically invoked after the first processor is loaded. The CIIN file shipped with new systems contains the TACL RELOAD * command, which loads the remaining processors.
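The load-path retry described above amounts to a sequential scan over an ordered list of paths. A minimal Python sketch of that behavior, using invented path names and a hypothetical try_load() stand-in (not the actual 16 NonStop load paths or any NonStop API):

# Invented path names for illustration; the real list has 16 entries
# in the order the system load task uses them.
LOAD_PATHS = [
    "primary disk, primary path",
    "primary disk, backup path",
    "mirror disk, primary path",
    "mirror disk, backup path",
]

def try_load(path):
    """Stand-in for one system-load attempt over a single path."""
    raise NotImplementedError  # hypothetical; no real load logic here

def system_load():
    """Try each path in order; stop at the first success."""
    for path in LOAD_PATHS:
        if try_load(path):
            return True
    return False  # all paths exhausted; system load failed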

For default configurations of the Fibre Channel ports, Fibre Channel disk modules, and load disks, see "Example Configurations of the IOAM Enclosure and Fibre Channel Disk Module" (page 63). For default configurations of the HBA SAS ports, SAS disk enclosures, and load disks, see "Configurations for Storage CLIM and SAS Disk Enclosures" (page 58).


Index


N

Naming conventions, 73
NB50000c BladeSystem characteristics, 15
Noise emissions, 51
NonStop BladeSystem
   characteristics, 15
   components, 17
   management tools, 23
   overview, 15
   phase load balancing, 45
   power feed setup, 38
NonStop Multicore Architecture (NSMA)
   overview, 16
NonStop Server Blade, 25
   overview, 19

NSMA (see NonStop Multicore Architecture)

O

Onboard Administrator password, 72
Operating system load paths, 99
Operational space, 35
OSM, 90, 95
   description of, 24
OSM Certificate Tool, 95
OSM Console Tools, 95
OSM Low-Level Link, 95
OSM Notification Director, 95
OSM System Inventory Tool, 95
OutsideView, converting files, 95

P

Particulates, metallic, 34
Password
   changing for CLIM iLO, 72
   changing for CLIM Maintenance Interface (eth01), 72
   changing for Onboard Administrator (OA), 72
   changing for Remote Desktop, 72
   changing for server blade iLO (MP), 72
Passwords, changing, 71
Passwords, default, 71
Paths, operating system load, 99
PDU
   AC power feed, 42
   description, 42
   fuses, 43
   receptacles, 44
PDU, International, 44
PDU, North America and Japan, 44
PDUs, 42
Phase Load Balancing, 45
Port, 25
Power and thermal calculations, 51
Power configurations, 37
Power consumption, 32
Power distribution, 37
Power distribution units (PDUs), 31, 42, 44
Power feed setup
   INTL with UPS, 40
   INTL without UPS, 41
   NA/JPN with UPS, 38
   NA/JPN without UPS, 39
   NonStop BladeSystem, 38
Power feed, top or bottom, 31, 44
Power input, 44
Power quality, 31
Power receptacles, PDU, 44
Power-fail
   monitoring, 95
   states, 97
Primary and mirror disk drive location recommendations, 63

R

R12000/3 UPS, 21
Rack, 25
Rack offset, 25, 26
Raised flooring, 34
Receiving and unpacking space, 34
Receptacles, PDU, 44
Remote Desktop
   password for, 72
Restrictions
   cable length, 53, 94
   Fibre Channel device configuration, 62

S

Safety ground/protective earth, 32
SAS disk enclosure
   bay locations, 57
   connecting, 57
   front and back view, 57
   location in cabinet, 78
   LUN, 75
   overview, 20
SAS Tape
   connecting, 57
Server blade, 25
ServerNet cluster switch
   connections, 54
ServerNet switch
   cross-connections, 55
   High I/O configuration, 55
   Standard I/O configuration, 55
ServerNet switch, connection types, 54
ServerNet switches in c7000
   Standard I/O and High I/O configurations, 26
ServerNet switches in c7000 enclosure
   types, 55
Service clearances, 47
Service LAN, 81
Slot, bay, position, 25
Specifications
   assumptions, 37
   cabinet physical, 48
   enclosure dimensions, 48
   heat, 50
   nonoperating temperature, humidity, altitude, 51
   operating temperature, humidity, altitude, 50
   weight, 49
Startup characteristics, default, 99
Storage CLIM
   HBA slots, 19
   location in cabinet, 78
   overview, 19
Storage CLIM, illustration of ports and HBAs, 57
SWAN concentrator restriction, 88
System console
   configurations, 90
   description, 81
   location in cabinet, 78
   overview, 21
System disk location, 99

T

Tape drives, 23

Terminal Emulator File Converter, 95

Terminology, 25

Tools
   CIP Subsystem, 24
   Integrated Lights Out (iLO), 24
   Onboard Administrator (OA), 24
   OSM, 24
   SCF Subsystem, 24

U

U height, enclosures, 47

Uninterruptible power supply (UPS), 21, 32

UPS
   HP R12000/3, 21, 32, 45
   input rating, 45
   user-supplied rack-mounted, 33
   user-supplied site, 33

V

Virtual tape
   LUN, 75

W

Weight calculation, 34, 49
Weights, 47
Worksheet
   heat calculation, 50
   weight calculation, 49

Z

Zinc, cadmium, or tin particulates, 34
