Dell Compellent SC8000 Manual


















Provide a suitable power source with electrical overload protection. Make sure that there is a safe electrical earth connection to power supply cords. Check the grounding before applying power. The plugs on the power supply cords are used as the main disconnect device.

General Safety Precautions: To avoid injury and damage to the equipment, always follow general safety precautions.

Keep the area around the chassis clean and free of clutter. Before moving an enclosure, remove the PCMs to minimize weight. Do not remove bay covers or drives until ready to replace them. Always get assistance when lifting the controller.

Caution: Do not operate the controller without the cover, except when replacing cooling fans. Warning: The memory modules may be hot for several minutes after the controller has been powered down. Wait for the memory modules to cool before handling them. Caution: To ensure proper controller cooling, install memory module blanks in any memory socket that is not occupied.

Electrostatic Discharge Precautions: To avoid injury and damage to the equipment, always follow electrostatic discharge (ESD) precautions. ESD is generated when two objects with different electrical charges come into contact with each other. The resulting electrical discharge can damage electronic components and printed circuit boards.

Use a suitable ESD wrist or ankle strap. Open the Cover: Open the cover to gain access to the inside of the controller. Caution: Do not slide the controller into a rack with the cover open. Riser 2 contains a chassis intrusion detection switch that can easily be damaged if the controller is slid into the rack with the cover open.

Remove the Cooling Shroud: Remove the cooling shroud to gain access to the memory sockets. Note: For proper seating of the cooling shroud in the chassis, ensure that the cables inside the controller are routed along the chassis wall and secured using the cable securing bracket. Hold onto the touch points of the riser. Locate the slot where the IO card is installed.

For instructions, see the documentation accompanying the IO card. IO cards that are not in full contact with the connector cause unpredictable failures in the Storage Center.

Align the riser with the connector and the riser guides. Use the supports provided on the cooling shroud. IO cards that are not in full contact with the connector cause unpredictable failures in Storage Center. Close the Cover: Perform this procedure to close the cover. Apply labels to the front and back of each controller.

Prerequisites: You must have a label maker, or four blank labels and a writing utensil. Steps: Create two labels for each controller. Mounting can be done in one of two configurations; install the controllers at the bottom of the rack and the enclosures above the controllers. Steps: Mount the controller(s) in a rack. See the instructions included with the rail kit for detailed steps. Prerequisites: Make sure that the site has power from an independent source or from a rack power distribution unit with a UPS.

A 5U space in the lower 20U of the rack is also required. Steps: Mount the enclosure in the rack using mounting rails, and optionally secure it with brackets to prevent tipping. Warning: When unpacking the 5U enclosure, two people using lift straps are required to avoid injury. Do not remove drives from their DDICs. Caution: When the enclosure turns off, the disks continue to spin after losing power. To avoid damage to a disk, wait for it to stop spinning (approximately 10 seconds after shutdown) before pulling it from the enclosure.

If necessary, remove drive blanks from the enclosures to make room for the drives. Installing a Drive Carrier Module: Make sure that the drive carrier is oriented so that the drive faces up and the handle opens from the left. Activating the Anti-tamper Lock: When the enclosure is turned on and the drives spin up, make sure that the drive LEDs are green, indicating normal conditions.

When inserting a drive, you will hear a click as the latch engages and holds the handle closed. The following instructions show the installation of a Dell Enterprise Plus drive for reference only. Installing a Drive: Insert the hard drive carrier into the hard drive bay.

A series of interconnected enclosures is referred to as a chain. Each chain can contain 6 Gbps SAS drives up to a fixed maximum per chain. Subsequent enclosures are connected in series. To achieve redundancy, each SAS chain is made up of two paths, referred to as the A side and the B side.
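To make the A-side/B-side redundancy concrete, here is a minimal Python sketch (not Dell tooling; all names are illustrative assumptions) that models a chain with two paths and shows that a single path failure leaves the chain reachable.

```python
# Illustrative sketch: a SAS chain with redundant A-side and B-side paths.
from dataclasses import dataclass

@dataclass
class SasChain:
    enclosures: list            # e.g. ["Enclosure 1", "Enclosure 2"]
    a_side_ok: bool = True      # path through the A-side IO card and ports
    b_side_ok: bool = True      # path through the B-side IO card and ports

    def reachable(self) -> bool:
        # The chain stays online as long as at least one side is intact.
        return self.a_side_ok or self.b_side_ok

chain = SasChain(enclosures=["Enclosure 1", "Enclosure 2"])
chain.a_side_ok = False          # simulate a cable or IO card failure on side A
print(chain.reachable())         # True: side B keeps the chain operating
```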

SAS chains begin with an initiator port. Use a Cat 6 cable for this connection. Note: Storage Center currently runs the 10 Gb ports at 1 Gb speed. To label a cable: near the connector, align the label perpendicular to the cable and affix it starting with the top edge of the label. Wrap the label around the cable until it fully encircles the cable. Apply a matching label to the other end of the cable.

Locate the scenario that most closely matches the Storage Center you are configuring and follow the instructions, modifying them as necessary. Note: The cabling illustrations in this chapter refer to enclosures as Enclosure 1, Enclosure 2, and so on. The numbers shown in the illustrations may not match the Index numbers assigned by System Manager.

Storage Center assigns an Index only after the system is powered up and at least one drive is assigned to a Disk Folder. When cabling a second chain of enclosures, use a second IO card to improve redundancy. When you configure these ports using the IO port wizard, set the port Purpose to Unknown.

Each enclosure is on a separate chain. When two chains are used, IPC is not required. A cabling chain can contain up to two enclosures of 84 SAS drives each (168 drives total). Each chain must contain the same type of enclosure; do not mix enclosure types in a chain. Enclosure chains are cabled as shown in the cabling illustrations.

If an IO card, cable, enclosure, or controller fails, at least one side of a chain continues operating. IPC is optional in this configuration. When connecting different types of enclosures, each type must reside on its own chain, and all enclosures must be 6 Gb enclosures. Note: The maximum storage space is 1 petabyte, regardless of the number or type of enclosures.
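As a rough illustration of the chain rules above (at most two 84-drive enclosures per chain, no mixed enclosure types), here is a hypothetical validation sketch in Python; the names and constants mirror the text rather than any Dell tool.

```python
# Hypothetical check of the chain composition rules described above.
MAX_ENCLOSURES_PER_CHAIN = 2
DRIVES_PER_ENCLOSURE = 84

def validate_chain(enclosure_types: list[str]) -> int:
    """Return the total drive count for a valid chain, or raise ValueError."""
    if len(enclosure_types) > MAX_ENCLOSURES_PER_CHAIN:
        raise ValueError("A chain supports at most two enclosures")
    if len(set(enclosure_types)) > 1:
        raise ValueError("Do not mix enclosure types in a chain")
    return len(enclosure_types) * DRIVES_PER_ENCLOSURE

print(validate_chain(["5U-84", "5U-84"]))   # 168 drives total
```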

Chain 1, Side B: Connect controller port 3 to port C on the bottom of the last enclosure. Storage Center assigns an Index only after the system is powered up and at least one drive is assigned to a Disk Folder. When you add a second chain, you should also have a second IO card in the controller (for example, Controller B: slot 6, port 3 and Controller B: slot 6, port 1).

Note: Cables that connect one enclosure to another do not need to be labeled. For other cables, wrap the label around the cable until it fully encircles the cable, then apply a matching label to the other end of the cable.

Depending on how the Storage Center is configured, the following types of redundancy are available. If a path fails, the server continues to use the remaining active path(s). In virtual port mode, all ports are active, and if one port fails the load is distributed among the remaining ports within the same fault domain. Both transport types can be configured to use the same mode or different modes to meet the needs of the network infrastructure. Note: Dell Compellent strongly recommends using virtual port mode unless the network environment does not meet the requirements for virtual port mode.

Virtual Port Mode Virtual port mode provides port and controller redundancy by connecting multiple active ports to each Fibre Channel or Ethernet switch.

Servers target only the virtual WWNs. During normal conditions, all ports process IO. When a failure is resolved and the ports are rebalanced, each virtual port returns to its preferred physical port.

Improved redundancy: Ports can fail over individually instead of by controller. Ports that belong to the same fault domain can fail over to each other because they have the same connectivity. The following table summarizes the failover behaviors for this configuration.

Controller A fails: Virtual ports on controller A fail over by moving to physical ports on controller B. Controller B fails: Virtual ports on controller B fail over by moving to physical ports on controller A. A single port fails: The virtual port associated with the failed physical port moves to another physical port in the fault domain.
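The following Python sketch is a toy model of that failover behavior, with port names and the balancing rule assumed for illustration only; it is not Storage Center code.

```python
class FaultDomain:
    """Toy model: virtual ports move to a surviving physical port that has
    the same connectivity (i.e. belongs to the same fault domain)."""

    def __init__(self, name, physical_ports):
        self.name = name
        self.all_ports = list(physical_ports)
        self.failed = set()
        # Each physical port initially hosts its own virtual port.
        self.placement = {f"virtual:{p}": p for p in physical_ports}

    def fail(self, physical_port):
        """Simulate a port failure and relocate any virtual port it hosted."""
        self.failed.add(physical_port)
        survivors = [p for p in self.all_ports if p not in self.failed]
        if not survivors:
            raise RuntimeError("No surviving ports in fault domain " + self.name)
        for vport, pport in self.placement.items():
            if pport in self.failed:
                # Pick the least-loaded surviving port (simple balancing).
                self.placement[vport] = min(
                    survivors,
                    key=lambda p: list(self.placement.values()).count(p))

domain = FaultDomain("Domain 1", ["A-slot5-port1", "B-slot5-port1"])
domain.fail("A-slot5-port1")     # e.g. a port on controller A goes down
print(domain.placement)          # its virtual port now rides on controller B
```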

The fault domain determines which ports are allowed to fail over to each other. Requirements for Legacy Mode: The following requirements must be met to configure a Storage Center in legacy mode. Multipathing: If multiple active paths are available to a server, the server must be configured for MPIO to use more than one path simultaneously. Fibre Channel zoning: FC switches must be zoned to meet the legacy mode zoning requirements.

Controller A fails: In fault domain 1, primary port P1 fails over to reserved port R1. Controller B fails: In fault domain 2, primary port P2 fails over to reserved port R2.

A single port fails: The port does not fail over because there was no controller failure. If a second path is available, MPIO software on the server provides fault tolerance by sending IO to the functioning port in the other fault domain. Note: Some ports may be unused or dedicated to replication; however, ports that are used must be in these zones. Port Zoning Guidelines: When port zoning is configured, only specific switch ports are visible.

If a storage device is moved to a different port that is not part of the zone, it is no longer visible to the other ports in the zone (a small sketch of this visibility rule follows the note below). Steps: Connect the FC fabrics and apply cable labels to both ends of each cable. Note: The cabling instructions and diagrams in this section are valid for a particular IO card configuration. If the controllers you are installing contain different IO cards, adjust the cabling accordingly.
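Here is the promised sketch of port zoning visibility. The switch and port names are made up for illustration; the point is only that two ports can see each other when some zone contains both.

```python
# Illustrative sketch of port (hard) zoning: a zone is a set of switch ports,
# and only members of the same zone can see each other.
zones = {
    "zone_controllerA_fabric1": {("switch1", 0), ("switch1", 4), ("switch1", 8)},
    "zone_controllerB_fabric1": {("switch1", 1), ("switch1", 5), ("switch1", 9)},
}

def visible(port_a, port_b):
    """Two ports can see each other only if some zone contains both."""
    return any(port_a in members and port_b in members
               for members in zones.values())

print(visible(("switch1", 0), ("switch1", 4)))   # True: same zone
# Move the device from port 4 to port 6, which is not in the zone:
print(visible(("switch1", 0), ("switch1", 6)))   # False: no longer visible
```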

In this configuration, two fault domains are spread across both controllers. The reserved paths provide redundancy for a single controller failure, but this configuration is vulnerable to switch failure. In a second configuration, four fault domains are spread across both controllers. To provide redundancy, the primary port and the corresponding reserve port in a fault domain must connect to the same fabric.

In this configuration, there is one fault domain because there is a single FC fabric. Each controller is connected to the fabric by at least two FC connections to provide port redundancy. If a single port fails, its virtual port fails over to another port. If a controller fails, the virtual ports on the failed controller move to ports on the other controller. In configurations with two fabrics, connect fault domain 2 (shown in blue) to fabric 2.

Note: To prevent a port or cable failure from blocking access to volumes mapped to a controller, connect two additional fault domains so that each controller has two primary paths to the iSCSI network. In this configuration, four fault domains are spread across both controllers. To provide redundancy, the primary port and the corresponding reserve port in a fault domain must connect to the same network. In virtual port mode, there is one fault domain for each iSCSI network.

If a single port fails, its virtual port fails over to another port in the same fault domain. If a controller fails, the virtual ports on the failed controller move to ports on the other controller in the same fault domain.

Hardware Configuration: All hardware must be installed and cabled before beginning the software setup process. If server connectivity is through Fibre Channel (FC), the FC switches must be configured and zoned before configuring the controllers. Required Software Versions: Storage Center hardware has minimum software version requirements.

The ETH 0 port supports system login and access to the management software. Use the controller with the lowest serial number as Controller 1. Configure the serial connection using the settings shown in the connection table. The controller may reboot more than once; this is normal. Configure an IP address for the management interface (eth0), specify one or more DNS servers, and set the domain name to which the Storage Center belongs.

Because eth1 is directly connected to the other controller, no default gateway is required. Do not change the eth1 address unless both controllers were factory configured with the same address or the customer requests the change. The eth1 address cannot reside in the same subnet as eth0.
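The subnet rule above is easy to check with Python's standard ipaddress module; the sketch below uses example addresses only, not values from any particular site.

```python
# Sanity check for the rule that eth1 must not share a subnet with eth0.
import ipaddress

def same_subnet(addr_a: str, addr_b: str) -> bool:
    """True if the two interface addresses (CIDR notation) overlap."""
    net_a = ipaddress.ip_interface(addr_a).network
    net_b = ipaddress.ip_interface(addr_b).network
    return net_a.overlaps(net_b)

eth0 = "10.10.1.20/24"     # example management address
eth1 = "169.254.1.101/30"  # example inter-controller address
assert not same_subnet(eth0, eth1), "eth1 must not be in the same subnet as eth0"
```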

The Storage Center Startup Wizard appears. Enclosure firmware updates may not install with the OS update, since Storage Center does not recognize some new hardware until after the OS is installed. Check the Update Details window for Enclosure updates marked Deferrable, and install them now. Do not interrupt the update after it has started. Firmware upgrades should have installed when the first controller was updated.

Steps: Use a supported web browser to connect to the eth0 IP address or host name of controller 1. Note: The messages that appear may differ depending on the browser used; click Continue to this website in Internet Explorer or add a security exception in Firefox. The Storage Center Login page appears. The wizard contains the following pages. When displayed for a new user, the EULA does not require a customer name or title. The Load License page appears, and the Startup Wizard displays a message when the license is successfully loaded.

If standard drives are detected, the Create Disk Folder page appears. See About Secure Data for details. If a key management server has not been configured or is unavailable, you can manage FIPS SEDs into a Secure Data folder; however, the disks remain in a pending state until the key management server is available.

The key resides on the disk, providing encryption for data written to the disk and decryption for data as it is read from the disk.

Destroying the key makes any data on the disk immediately and permanently unreadable, a process referred to as a crypto erase. This allows the disk to be reused, although all previous data is lost. Note: Because disks that contain user data cannot be moved from a Secure Data folder, Storage Center does not crypto erase disks that contain user data.
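A crypto erase can be illustrated conceptually with any symmetric cipher: once the key is gone, the ciphertext is useless. The sketch below is not the SED's actual on-disk mechanism and assumes the third-party cryptography package is installed.

```python
# Conceptual illustration of a crypto erase using Fernet symmetric encryption.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                    # stands in for the media key
stored = Fernet(key).encrypt(b"user data")     # data as it sits on the disk

key = None                                     # "destroy" the key (crypto erase)
try:
    Fernet(Fernet.generate_key()).decrypt(stored)  # a new key cannot recover it
except InvalidToken:
    print("Data is unreadable without the original key")
```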

When power is removed from the drive, the drive cannot be unlocked without access to the authority credential stored in the key management server. By default, the Startup Wizard selects all available disks.

If enclosures or disks are missing, the issue might be fixed by following the Troubleshooting Enclosures procedure. By default, all disks are selected; to select all disks again, click Select All. The Startup Wizard then displays a prompt to select disks to designate as hot spares. A hot spare disk is held in reserve until a disk fails, at which point the hot spare replaces the failed disk.

The hot spare disk must be as large as or larger than the largest disk of its type in the disk folder. For redundancy, there must be at least one hot spare for each enclosure. In general, System Manager uses the following best practice when designating hot spares: for 2U enclosures, one spare disk for every disk class (10K, 7K, and so on); a small sketch of this selection rule follows below. The Startup Wizard displays a summary of the disk folder that will be created.
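The sketch below applies the hot spare guidelines above (one spare per enclosure and disk class, at least as large as the largest disk of that class). The data structures and example disks are hypothetical.

```python
# Hedged sketch of a hot-spare picker following the guidelines above.
from collections import defaultdict

disks = [
    # (enclosure, disk_class, capacity_gb, disk_id)
    ("Enclosure 1", "10K", 1800, "01-00"),
    ("Enclosure 1", "10K", 1800, "01-01"),
    ("Enclosure 1", "SSD", 960,  "01-12"),
    ("Enclosure 2", "10K", 1800, "02-00"),
]

groups = defaultdict(list)
for enclosure, disk_class, capacity, disk_id in disks:
    groups[(enclosure, disk_class)].append((capacity, disk_id))

spares = {}
for group, members in groups.items():
    # Reserve the largest disk in the group so the spare can replace any peer.
    spares[group] = max(members)[1]

print(spares)   # one spare per enclosure/class, each >= the largest disk
```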

The wizard displays options for redundancy and datapage size. Note: The default managed disk folder settings are appropriate for most sites.

If you are considering changing the default settings, contact Dell Technical Support Services for advice. Selecting a smaller datapage size reduces the amount of space that System Manager can present to servers. Caution: Before using either the smaller or the 4 MB datapage setting, contact Dell Technical Support Services for advice on balancing resources and to understand the impact on performance.

Select this option only for data that is backed up some other way. The disk folder summary appears, followed by the Add Controller page. Proceed based on whether you use IPv6 for controller addressing. Note: The information displayed in the figure is for illustration only; the values are unique to each Storage Center. A dimmed box indicates that the HSN was in the license file, making the Controller ID box unavailable.

For Controller 2, use the IP address reported by the controller show console command. This address is used for communication between controllers.

The Startup Wizard displays a message that data and configuration information on the second controller will be lost and asks for confirmation. Wait for the process to complete and for the controller to reboot, which can take a few minutes. When complete, the Time Settings page appears. Note: Accurate time synchronization is critical for replications.

Dell Compellent recommends using NTP to set the system time. For more information, see the Dell support site. The System Setup page appears.
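If you want a quick way to confirm that an NTP source is reachable and sane before relying on it, a minimal SNTP query in Python (standard library only) looks roughly like the sketch below; the server name and timeout are placeholders.

```python
# Minimal SNTP query sketch to spot-check a time source.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800            # seconds between 1900 and 1970 epochs

def sntp_time(server="pool.ntp.org", timeout=5.0):
    packet = b"\x1b" + 47 * b"\0"        # SNTP client request, version 3
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    transmit_ts = struct.unpack("!I", data[40:44])[0]   # seconds field only
    return transmit_ts - NTP_EPOCH_OFFSET

offset = sntp_time() - time.time()
print(f"Clock offset from NTP server: {offset:+.2f} seconds")
```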

This is typically the serial number of Controller 1. The management IP address is distinct from the controller 1 and controller 2 addresses. If either controller fails, the management IP address remains available.




