Wil Mayers edited this page Jan 7, 2018 · 6 revisions

Dell PowerEdge C6420 (Skylake/Kabylake)

Hardware Overview

  • One 2U C6400 chassis can support 2 x 2U high nodes or 4 x 1U high nodes
  • Ships with two 1600 or 2000W PSUs (single PSU is not supported)
  • Minimum of 1 node required; can ship with blanks for other node slots
  • Nodes are dual-socket Scalable Xeon Silver/Gold/Platinum; supports 2 CPUs only (max 160W in certain configs)
  • Chassis supports up to 12 x 6TB 3.5" disks or up to 24 x 1.2TB 2.5" disks (or SSDs); supports 1 NVMe per node
  • 16 DIMM slots; requires 1 DIMM to boot; max of 2TB RAM (16 x 128GB); PCB printing matches slot enumeration
  • Nodes support a range of network options, and have 1 x PCIe Gen3 x16 slot, 1 x PCIe Gen3 x8 mezzanine slot, and 1 x onboard NIC slot

Profile 'Cluster Slave'


BIOS configuration

  1. If you want to boot from hardware RAID0/1 on your C6420, set that up first.
  • Press F2 during boot to enter BIOS menu
  • Choose 'Device Settings' from the main BIOS menu
  • Choose the RAID card to configure (usually the only one listed)
  • Choose the RAID card from the next menu (usually LSI SAS2 MPT or PERC7)
  • Choose 'Controller Management'
  • Choose 'Create configuration'
  • Select your RAID level (usually RAID1 for system disks)
  • Choose 'Select Physical disks' and choose the drives to include
  • Select 'Apply changes' and confirm when prompted
  • Press ESC to exit to the main menu, saving settings when prompted
  • Continue with BIOS settings as below
  2. Power on and press F2 to enter BIOS; note the firmware revisions during boot
  3. Select 'System BIOS' from the main menu
  4. Select system defaults:
  • Press TAB to move to the FINISH button
  • Press the LEFT arrow key to move to 'default' and press ENTER to select
  • Select FINISH to return to the main BIOS menu
  5. From the main menu, select 'System BIOS' again.
  6. Make the following changes to the default settings:
Processor Settings -> Logical Processor = Disabled
Boot settings -> Boot Mode = BIOS
  • Choose the network interface card (NIC) you want to PXE boot from. This will be either:
    1. The onboard 10Gb SFP+ port (listed as the onboard NIC XE), or
    2. A 1Gb port installed in the PCIe mezzanine slot (listed as the Mezzanine NIC)
    The NIC you want to PXE boot from must be selected as the first boot device:
Boot settings -> Boot BIOS settings -> BIOS Boot sequence -> Choose the NIC to PXE boot from as device 1
Boot settings -> Boot BIOS settings -> BIOS Boot sequence -> Choose the boot HDD as device 2
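If a node later needs re-provisioning, the same effect can be achieved one boot at a time over IPMI, without re-entering the BIOS. A minimal sketch; the iDRAC address and credentials are placeholders (root/calvin is Dell's factory default and should be changed), and with DRYRUN=1 the commands are only printed, so the sequence can be reviewed safely:

```shell
#!/bin/bash
# One-off PXE boot over IPMI -- avoids editing the BIOS boot sequence.
# IDRAC_IP and the credentials below are placeholder values.
IDRAC_IP="10.11.0.101"
CREDS=(-U root -P calvin)     # Dell factory default; change in production
DRYRUN="${DRYRUN:-1}"         # 1 = print the commands instead of running them

run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Set the boot device for the NEXT boot only to PXE, then power-cycle.
run ipmitool -I lanplus -H "$IDRAC_IP" "${CREDS[@]}" chassis bootdev pxe
run ipmitool -I lanplus -H "$IDRAC_IP" "${CREDS[@]}" chassis power cycle
```

Set DRYRUN=0 to execute the commands for real once the printed output looks right.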

Make the following settings:

Serial communication -> serial port address = COM2
Serial communication -> redirect after boot = Disabled
System Performance -> Select the "Performance" profile
System Security -> Set AC power recovery to "OFF"
Miscellaneous Settings -> F1/F2 Prompt on Error = Disabled

If your C6420 has 1Gb network ports installed and you will not be using the onboard 10Gb ports, you can disable them with this setting:

Integrated devices -> Embedded NIC 1+2 -> Disabled (OS)

Press TAB to highlight the FINISH button and press return; save and exit to main menu

  7. From the main BIOS menu, select 'iDRAC settings'. Make the following changes:
Network -> IPMI settings -> Enable IPMI-over-LAN = Enabled
Thermal -> Thermal Profile = Maximum Performance

Select BACK to return to the main iDRAC settings page, and select FINISH to exit, saving the changed settings.
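With IPMI-over-LAN enabled, the BMC can be driven remotely from an admin host. A minimal sketch of two useful checks; the address and credentials are placeholders (root/calvin is the Dell factory default), and the commands are printed rather than executed so the example is safe to run anywhere:

```shell
# Build the common ipmitool invocation once; paste the printed lines on
# a host that can reach the iDRAC network. Address/credentials are
# placeholder values -- substitute your own.
IDRAC_IP="10.11.0.101"
IPMI="ipmitool -I lanplus -H $IDRAC_IP -U root -P calvin"
echo "$IPMI chassis power status"   # confirms IPMI-over-LAN is answering
echo "$IPMI sol activate"           # serial-over-LAN console, using the
                                    # serial settings configured earlier
```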

  8. From the main BIOS menu, write down the Dell service tag printed at the bottom of the screen.
  9. Select the FINISH button from the main menu, exit and reboot.

Upgrading firmware

Three firmware payloads are required which must be compatible with each other. Firmware must be applied in the following order:

  • Chassis manager (CM); once per C6400 chassis (run the update from any one node)
  • iDRAC (integrated Dell remote access controller); once per node
  • BIOS; once per node

Other component firmware may also be required:

  • Hard disks; usually ship with recent firmware when new, but update may be needed if drives are replaced
  • NIC; usually ship with latest firmware, but may need to be updated for new features/better performance
  • Infiniband; Mellanox HBAs regularly ship with old firmware and should always be updated from Linux

Upgrading using Linux binaries

Firmware for iDRAC and BIOS can be applied from Linux using a BIN file, or from a bootable DOS environment.

  • Linux firmware upgrade is safe for BIOS (the ROM is loaded to the BMC first, then applied on the next reboot), but can damage the machine if a BMC or FCB firmware update is interrupted. Take precautions to ensure that power will not be interrupted during the firmware update procedure.
  • All DOS firmware upgrades may damage the machine if interrupted while running.
  • Download new firmware from Dell support site; www.dell.com/support; enter service tag and select Linux OS
  • Execute the BMC and BIOS update packages on the nodes.
  • Multiple nodes can be done in parallel.
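Applying the node packages across a chassis can be scripted from the admin host. A minimal sketch, assuming nodes named node01..node04 and that the downloaded packages have already been copied to /root on each node; the .BIN filenames are hypothetical examples, and the loop prints the commands rather than running them:

```shell
# Print the update command for each node; remove the leading 'echo' (and
# append ' &' to run nodes in parallel) once the output looks right.
# Node names and .BIN filenames are placeholder examples -- use the
# packages downloaded for your service tag. -q runs the Dell Update
# Package unattended.
for node in node01 node02 node03 node04; do
    echo ssh "$node" "cd /root && ./iDRAC_C6420.BIN -q && ./BIOS_C6420.BIN -q"
done
```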

Updating chassis management (CM) devices

For updating chassis managers, follow this process from ONE node in the chassis ONLY:

  • Download the firmware package and unzip it; it usually contains a single file called "cm.sc"
  • Use the following ipmitool command on the nodes to confirm the current CM version installed:
[root@node001[mycluster] ~]# ipmitool raw 0x30 0x12
 01 e5 1b 01 22 01 00 00 00 01 02 00 01 2d 37 ff
 ff 08 c2 00 00 00 08 01 08 10 64 23 fa 01
  • Look at bytes 4 and 5 on the first line to find the current version (e.g. version 1.22 in the above)
  • Put the firmware package on an NFS share that the iDRACs have access to
  • SSH into ONE of the iDRACs in the chassis and apply the update, using the following commands. N.B. these commands assume your update file is called "cm.sc" and your NFS server is 10.11.0.51 sharing a filesystem called "/export/data1":
racadm update -f cm.sc -l 10.11.0.51:/export/data1
  • The update will take just a few seconds to start and output the following if it succeeded:
RAC1066: Firmware update for cm.sc initiated successfully.
  • Wait at least 10 minutes to allow the chassis to update itself. Fans will be tested during this period, so a large cluster will be noisy.
  • Use ipmitool to check that the new version has been installed; e.g.
[root@node001[mycluster] ~]# ipmitool raw 0x30 0x12
 01 e5 1b 01 2d 01 00 00 00 01 02 00 01 2d 37 ff
 ff 08 c2 00 00 00 08 01 08 10 64 23 fa 01
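The version bytes can be picked out of the raw response automatically rather than counting by eye. A sketch using the sample pre-update output from above; on a live node, replace the hard-coded string with the real command output:

```shell
# On a live node use:  raw=$(ipmitool raw 0x30 0x12)
# The string below is the sample pre-update response from this page.
raw="01 e5 1b 01 22 01 00 00 00 01 02 00 01 2d 37 ff ff 08 c2 00 00 00 08 01 08 10 64 23 fa 01"
set -- $raw                       # split the response into bytes $1, $2, ...
version="$((10#$4)).$5"           # bytes 4 and 5 carry the CM version
echo "CM firmware version: $version"   # -> CM firmware version: 1.22
```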

Disk drive zoning on C6400 chassis

The C6400 chassis can be fitted with a switchable SAS backplane, which allows control over which disk drives are assigned to which nodes in the chassis. This option is more expensive and can be confusing, as drive slots do not necessarily map to the nodes you'd expect. The SAS zoning utility is used to configure the backplane and assign drive slots to chassis node sled slots; it requires SAS disk drives and SAS RAID cards in the C6420 servers. It is not possible to assign the same drive slot to multiple nodes. The SAS zoning tool is available from the Dell website, under the support area for the C6420 server.


Hardware support

  • Call Dell on 01344-860456 with the service tag for the machine.
  • Each C6420 in a chassis has its own service tag, and the chassis has a separate tag
  • PSU faults can be opened against the chassis tag or a node tag
  • Use this to convert tags to express service codes
  • Service tags are located on the rear pull handle of each C6420 node, and on the left-hand side of the C6400 chassis
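Express service codes are the alphanumeric service tag read as a base-36 number, so the conversion can also be done locally. A sketch in bash ("1234567" is a made-up example tag):

```shell
# Convert a Dell service tag to its express service code. The tag below
# is a made-up example; bash's base#value arithmetic treats upper- and
# lowercase letters interchangeably in bases up to 36, so real tags with
# letters work too.
tag="1234567"
expcode=$((36#$tag))
echo "Service tag $tag -> express service code $expcode"
```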

Fault finding may require a DSET report to be generated. Use the latest available DSET revision, and use "RHEL6" or "RHEL7" as the OS type. DSET will not install or run properly if the LSI MegaCLI monitor is installed, so use RPM to remove it first and YUM to reinstall it afterwards if necessary.
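The remove/run/reinstall sequence can be sketched as below. The package name "MegaCli" and the DSET binary name are assumptions; check what is actually installed first, and note that the function only prints the steps so they can be reviewed before use:

```shell
# Print the MegaCLI shuffle around a DSET run. 'MegaCli' and './dset.bin'
# are assumed names -- confirm with 'rpm -qa | grep -i mega' and the DSET
# download you actually have.
dset_with_megacli_removed() {
    local pkg="${1:-MegaCli}"      # assumed package name
    echo "rpm -e $pkg"             # remove the MegaCLI monitor first
    echo "./dset.bin"              # run DSET (select RHEL6 or RHEL7 as OS type)
    echo "yum -y install $pkg"     # reinstall the monitor afterwards if needed
}
dset_with_megacli_removed
```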


Known issues

  • Early samples show unreliable iDRAC processors, particularly when trying to save settings, update fan profiles, etc.
  • Software BIOS setting tools produce inconsistent results; nodes can lose settings when disconnected from AC
  • Disk drive status LEDs do not always do what you expect, particularly with different RAID controllers installed.
  • C6420 nodes ship with DisplayPort video connectors; each chassis should (but sometimes doesn't) ship with an adapter to standard VGA. Adapters that shipped with early samples are poor quality, and only survive 5-10 connections before suffering from bent pins.
  • Advice differs on whether individual nodes can be serviced without powering down the entire chassis
  • Chassis is deep (>900mm) and will not fit in shallow racks with Infiniband cabling
  • Onboard iDRAC port is shared with an operating-system-accessible NIC. There is no hardware configuration option to prevent this.
  • There is currently no 1Gb NIC option for the onboard NIC slot; these are 10Gb SFP+ only
  • OPA connectors for -F model CPUs are exposed via the onboard NIC slot (when available)