Frequently Asked Questions (FAQ)
Using a Hot-Spare
A Hot-Spare helps ensure RAID system reliability and uptime. It gives the RAID controller a drive that can be used automatically to rebuild RAID data in the event of a drive problem or failure.
If you have a RAID5 system, consider migrating to RAID6 instead of simply assigning a hot-spare. This provides additional reliability, as a second set of parity information is available. There are instances where this is not practical - for example, if your system includes two RAID5 arrays, or perhaps a RAID5 and a RAID1, and the number of additional drives you can fit is limited. If you can only fit one additional drive, the use of a Global hot-spare is recommended (a quick capacity comparison is sketched below).
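To make the capacity point concrete, here is a minimal sketch (Python, assuming four identical 500GB drives) showing that a three-drive RAID5 plus a hot-spare and a four-drive RAID6 offer the same usable space:

```python
# Usable capacity: RAID5 + hot-spare vs RAID6, using the same four drives.
size_gb = 500

raid5_plus_spare = (3 - 1) * size_gb   # 3-drive RAID5 (one drive of parity),
                                       # 4th drive sits idle as the hot-spare
raid6 = (4 - 2) * size_gb              # 4-drive RAID6 (two drives of parity)

print(raid5_plus_spare, raid6)         # 1000 1000 - identical capacity, but
                                       # RAID6 tolerates two drive failures
```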
The instructions below are based on a system with an Intel or LSI hardware RAID controller or module, and the Intel RAID Web Console 2 (RWC 2) or LSI MegaRAID Storage Manager (MSM).
Tip: If you have recently replaced a failed hard drive and the RAID array has not automatically started the rebuild process, follow the instructions below. This is likely to happen on an Intel SRCSASRB or Intel SRCSATAWB controller as these do not automatically rebuild onto unassigned drives unless configured to do so.
- Log in to RWC2 / MSM (use your Windows Administrator or Domain Administrator password to do this)
- Go to the Physical Tab
- Confirm the slot which is available for fitting a spare drive (slots are usually numbered from 0, starting at the bottom left of the chassis, going up and then across from left to right). If unsure, check with Stone Support.
Typical Drive Slot Layout
- Fit the spare drive.
- Check that the drive is recognised in the Physical drive list.
- Right-click on the drive. You can then assign the drive as a global hot-spare (suitable for most situations) or dedicate the drive as a hot-spare for a specific virtual drive.
These instructions assume that you have already fitted the hot-spare drive and rebooted the system, so the configuration is performed using the RAID BIOS. If your system isn't running Windows you can also use this method; if you are running VMware, consider setting up your VMware hosts for Remote RAID Web Console configuration.
- Enter the RAID BIOS when prompted (usually CTRL+G on Intel controllers or CTRL+H on LSI controllers)
- Press Start (after selecting the right adapter if you have more than one adapter in the system).
- The system should immediately go into the Logical View, allowing you to see the Physical drives.
- Under Logical view, find the Unconfigured drive - normally marked in blue - and then click on it.
- Then you can use "Make Global HSP" to turn the drive into a global hot-spare, or "Make Dedicated HSP" to dedicate the drive to a specific array.
- Once you have selected the radio button, click on Go to make the change.
- Systems with an Intel or LSI Hardware RAID Controller or Module.
Use the steps below to configure these systems' BIOSes for the best acoustic performance.
- Enter the BIOS by tapping the F2 key from power on
- Go into the Advanced menu by using the right arrow key
- Then go into System Acoustics and Performance Configuration
- Configure the settings in this menu as shown below. Note the 300m or less altitude setting.
Other common causes of fan problems:
- Lack of ventilation: If the system is in a small confined space without adequate airflow the fans will ramp up as the system tries to prevent overheating
- Lid not closed properly: The servers have an intrusion sensor and may ramp up the fans if the lid is not closed properly, depending on model
- Redundant Power Supply failure or Redundant Power Supply not plugged in: Having one of the system's multiple power supplies in a failed state, or disconnected from the mains, will cause the fans to ramp up
- System repair, reconfiguration or upgrade: A replacement system board, system processors or upgrade options may require a BIOS package re-flash so that the hardware is managed correctly.
- System fault: Check the front system warning symbol. This should be solid green while the server is on. When the system is switched on, it will be orange for a few seconds but should turn green. If it does not, contact Stone warranty service.
- For example, a faulty fan may cause the other fans in the system to ramp up and become noisier. Usually it is not the fans which are now loud that are at fault, but the fan(s) that the BIOS has detected are running slow. Use the system SEL log viewer to determine the cause of an orange or flashing green warning symbol.
- Stone server and workstation systems including the S5520HC, S5520SC, S5520UR and others.
The aim of this article is to assist users who need to install Windows Server. It covers some of the common issues that are faced and shows you how to deploy Windows or Windows Server even on the most difficult of systems.
Downloading Mass Storage Drivers
When installing Windows Server it is recommended to have the latest drivers for your RAID card, RAID module or integrated controller. If Windows does not find a hard disk to install to during setup, you will need to use the Load Driver option to supply the drivers.
The drivers are available from the Stone Driver Finder (depending on model) or from the original component manufacturer's web site, such as Intel.
Note that your system may include a motherboard with several integrated controller (and driver) options. Some systems also include an add-in RAID controller or module for which more updated drivers or software will be available on a different page to the motherboard.
If you aren't sure what is included in your Stone system, or what you need, please contact Stone support for help.
Tip: If you download drivers in a ZIP file, make sure you extract the ZIP file to the pen drive, for use with Windows Setup. Windows Setup won't look inside ZIP files for drivers. It's also worth knowing that some component manufacturers don't supply Windows Server specific drivers. Instead, they provide drivers for the equivalent desktop operating system that shares the same platform underneath. For example, Server 2016 might need to use Windows 10 x64 drivers, and Server 2012R2 might need to use Windows 8.1 x64 drivers, depending on how the component manufacturer packages them.
Plan the Right BIOS Boot Mode for your System i.e. EFI / Legacy
Legacy mode is (as of 2017) still the default BIOS mode for many new servers. However, there are more and more situations where UEFI mode (also known as plain EFI) is required:
- If you plan to use a TPM 2.0 module, for example as part of a Bitlocker deployment.
- If your organisation stipulates that Secure Boot must be enabled as part of a corporate policy. (Note that not all Server BIOSes support Secure Boot.)
- If your boot volume is greater than 2TB in size.
All three of these situations require UEFI mode. It's worth pointing out that you can install Windows in Legacy mode on a volume greater than 2TB; however, you can only use the first 2TB of the disk. This means that, on a 3TB disk for example, roughly 1TB is wasted. These limits are caused by the historical limits of Master Boot Record (MBR) partitioning, which has a 2TB ceiling. GUID Partition Table (GPT) partitioning does not have this limit, but GPT volumes can only be booted using a UEFI BIOS.
You can install Windows onto a separate, smaller volume in Legacy BIOS mode, and then access the entire 3TB volume as a secondary volume. This is possible because Windows can still create a secondary volume using GPT and access more than 2TB; it just won't be able to boot from it.
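The 2TB figure comes straight from the arithmetic of MBR partitioning, which stores sector counts in 32-bit fields. A quick sketch (Python, assuming a marketed decimal "3TB" drive and traditional 512-byte sectors):

```python
# MBR's 32-bit sector fields cap the addressable size of a boot disk.
SECTOR_SIZE = 512          # bytes, the traditional sector size
MAX_SECTORS = 2**32        # largest count a 32-bit field can hold

mbr_limit = MAX_SECTORS * SECTOR_SIZE
print(f"MBR addressable limit: {mbr_limit / 1e12:.2f} TB")      # ~2.20 TB

disk = 3e12                # a "3TB" drive as marketed (decimal terabytes)
print(f"Unusable on a 3TB disk: {(disk - mbr_limit) / 1e12:.2f} TB")  # ~0.80 TB
```

The unaddressable ~0.8TB is the "roughly 1TB" of wasted space described above.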
The following are not reasons for UEFI booting your server:
- Deploying virtual TPM, or vTPM services to guest operating systems
- If your boot volume is less than 2TB in size (secondary data volumes that are not used for booting can be any size, including over 2TB).
BIOS Boot Facilities
- Supports GPT Partitioning (boot volume): Legacy - No; UEFI - Yes
- Supports MBR Partitioning: Legacy - Yes; UEFI - Partial (less compatible)
- Supports Secure Boot: Legacy - No; UEFI - Yes (BIOS dependent)
- Boot from USB (NTFS): Legacy - Yes; UEFI - BIOS dependent; Rufus provides a bootloader
- Supports TPM 1.2: Legacy - Yes; UEFI - Yes
- Supports TPM 2.0: Legacy - No; UEFI - Yes
- Supports Server 2016: Legacy - Yes; UEFI - Yes
BIOS Disk Limits
- Can boot from a disk > 2TB: Legacy - Yes, but only the first 2TB is accessible due to MBR limits; UEFI - Yes, as UEFI can boot GPT disks
- Second disk can be > 2TB: Legacy - Yes, as the second disk can be GPT but not booted from; UEFI - Yes
Note: Read below for other BIOS settings, and on setting up your RAID volume, before making your final BIOS UEFI/Legacy mode selection.
Changing the BIOS Mode
Intel E3 Platforms such as S1200V3RPL
- Go into the BIOS using the F2 key as the system starts
- Go to the Boot Options Screen
- For EFI Mode:
- Under EFI Optimized Boot, set this to Enabled
- Under Use Legacy Video for EFI OS, set this to Enabled
- For Legacy Mode:
- Under EFI Optimized Boot, set this to Disabled
Note: With EFI Optimised Boot set to Enabled, only EFI / UEFI devices can be booted. Disabling EFI Optimised Boot allows the booting of both EFI and Legacy devices. If you want to be sure which mode your server is booting in for the installation of Windows, prepare your USB installation media accordingly.
Intel E5 Platforms such as S2600WTTx
Setup > Boot Maintenance Manager > Advanced Boot Options > Boot Mode
- For EFI Mode:
- Under Boot Mode, set this to EFI
- (Depending on the BIOS, if offered, under Use Legacy Video for EFI OS, set this to Enabled)
- For Legacy Mode:
- Under Boot Mode, set this to Legacy
Other Recommended BIOS Settings
- Disable Quiet boot from the main BIOS. This makes sure you see all of the diagnostic messages as the system starts.
Changing the Quiet Boot BIOS Mode - Intel E3 Platforms such as S1200V3RPL
- This is available on the Main page, which loads as soon as you go into BIOS setup.
- Change Quiet boot to Disabled
Changing the Quiet Boot BIOS Mode - Intel E5 Platforms such as S2600WTTx
- Select Setup Menu, then Main
- Change Quiet boot to Disabled
Upgrading the Firmware on your RAID Controller
This is normally done in the factory when your server or workstation is assembled. It may be a good idea to upgrade the firmware on your RAID controller if you are upgrading or reinstalling an older system, especially if you are installing a newer operating system, for example Server 2016 instead of Server 2012R2. In this situation, also consider installing the latest motherboard firmware update package.
Firmware updates are usually easiest to perform through the EFI Shell.
Contact Stone support for further help.
Setup your Boot Volume RAID Array
Again, this is normally done in the factory. However, example steps for creating a RAID 1 Array with two drives for the 12Gbit Series controllers from Intel/Broadcom/LSI are shown below.
- Ensure Quiet Boot is turned off in the Main BIOS
- Use CTRL + R when prompted to go into the RAID BIOS Console (CTRL + G on older controllers)
Note: On some systems you might need to turn off UEFI BIOS mode to get access to the RAID BIOS Console. If you don't see the option for the RAID BIOS console during POST, turn off EFI Optimised Boot / BIOS UEFI mode, complete the work in the RAID BIOS console, and then put the BIOS back to the previous setting.
- On the VD Mgmt page, confirm that there are no virtual drives present - look for "No Configuration Present !". The system should also show Virtual Drives: 0 on the right.
- Whilst the controller or "No Configuration Present" is highlighted, press F2 to bring up the operations menu. Then highlight Create Virtual Drive and press Enter.
- Under RAID Level, ensure RAID 1 is selected
- Use the arrow keys to select the drives one by one. On each drive, use the space bar to add the drive that you want to the array. It should have a cross "X" in the ID checkbox.
- Highlight Advanced and press Enter
- Use the Down Arrow to highlight Initialize
- Press the spacebar to ensure the new virtual drive is initialised when created. Accept the warning message by highlighting OK using the arrow keys, and pressing Enter.
- Highlight OK again (in the Create Virtual Drive-Advanced menu) and press Enter, to accept the configuration.
- Highlight OK on the Create New VD menu, and press Enter.
- Wait for the virtual drive to be initialised. The default is for a fast initialisation, which should take less than a minute irrespective of drive size.
- Press ESC to exit the RAID BIOS console, confirming by highlighting OK and pressing Enter.
Prepare your Installation Media and Boot From It
Ideally, prepare your installation media for direct compatibility with how you intend to use the server.
- If you plan to use UEFI BIOS mode for Windows Server, prepare your installation media for UEFI mode. You may also need to disable Secure Boot.
- If you plan to use Legacy BIOS mode for Windows Server, prepare your installation media for Legacy BIOS MBR mode.
Although some tools like ISO2USB produce media that is usually bootable in both UEFI and Legacy modes, this isn't helpful here, as you can't be sure which mode the system has booted in.
This is especially important on some server platforms, such as the S1200V3RPL, which support EFI Optimised Mode Enabled (EFI only) and EFI Optimised Mode Disabled (EFI and Legacy support).
Preparing Installation Media for Legacy BIOS
- Set your main BIOS to the correct mode.
- Use Rufus to prepare your USB pen drive from your installation ISO, and ensure that you select MBR partition scheme for BIOS or UEFI (as this is the closest option)
- You must use the F6 Boot Menu option to boot the system from your pen drive and select the non-UEFI option. [Depending on the BIOS, you may be presented with a UEFI boot, which is not what you want when trying to install Windows in Legacy mode].
Tip: Instead of using the F6 boot menu, you can go into the BIOS Setup using F2, and use the Boot Manager menu to select the boot device.
Preparing Installation Media for UEFI BIOS
- Set your main BIOS to the correct mode.
- Temporarily disable Secure Boot in the BIOS.
- Use Rufus to prepare your USB pen drive from your installation ISO, and ensure that you select GPT partition scheme for UEFI.
- BIOS dependent - consider leaving Secure Boot disabled until you have completed Windows Setup. Not all Server BIOSes implement Secure Boot.
- You should use the F6 Boot Menu option to boot the system from your pen drive and select the UEFI option. [Depending on the BIOS, you may be presented with a non-UEFI boot, which is not what you want when trying to install Windows in UEFI mode].
Tip: ISO2USB is also not suitable for quite a lot of modern installation media, such as the latest distributions of Server 2012R2, Windows Server 2016 and Windows 10, because of the size of the files in the sources directory. These files can now be more than 4GB in size, which is greater than the maximum file size permitted by FAT32, and ISO2USB only supports FAT32. To get around this issue, use Rufus, as it supports NTFS.
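If you want to check media for this problem before writing it, the sketch below (Python; the folder path is hypothetical - point it at your own extracted ISO contents) flags any file too large for FAT32:

```python
# Flag files that exceed FAT32's maximum file size (2**32 - 1 bytes).
from pathlib import Path

FAT32_MAX = 2**32 - 1                  # 4,294,967,295 bytes (just under 4GiB)
media = Path(r"D:\ServerMedia")        # hypothetical extracted-ISO folder

for f in media.rglob("*"):
    if f.is_file() and f.stat().st_size > FAT32_MAX:
        size_gib = f.stat().st_size / 2**30
        print(f"{f} ({size_gib:.2f} GiB) will not fit on FAT32 - use NTFS")
```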
Complete the Installation of Windows
- Set up Windows, supplying the Mass Storage driver if required.
- Example of the Setup disk layout on a UEFI system for GPT partitioning below. An 80GB partition has been created for the installation. Windows Setup automatically creates the other partitions it needs. The remaining 7.3TB will be available for a single large data volume.
- Once Windows is installed, you can use the MSINFO32 program to confirm that the system is operating in the right BIOS mode.
- If you are using UEFI mode, you may wish to re-enable Secure Boot in the BIOS after the operating system is installed (depending on BIOS availability).
- If you have installed Windows in UEFI mode, you should also be able to use more than 2TB of space on your boot volume, depending on the size of your RAID array.
Installing Updated Drivers and Additional Management Software
- If you didn't need to load a Mass Storage driver to get Windows installed, you may still benefit from performance and reliability enhancements by installing the latest driver anyway. Some drivers such as Intel/Broadcom/LSI RAID can be updated using Device Manager. Other drivers, such as the Intel RSTe RAID, require you to run the Setup program.
- Again, if you have an Intel/Broadcom/LSI RAID controller or module, it is useful to install the RAID Web Console (Intel) or MegaRAID Storage Manager (Broadcom) software so that you can monitor the RAID array health and setup email alerting.
- Always install the latest Windows updates before putting the server into production use.
Below are some common problems and their solutions. If you are experiencing problems installing Windows on your Stone server or workstation, please do not hesitate to contact Stone support for further help.
I can't Access the RAID BIOS Console any More
- Probable Cause: If you see the CTRL + R option to go into the RAID BIOS console, but it is ignored, use the F6 boot menu option after pressing CTRL + R, and then choose the RAID Card BIOS or RAID Card Setup from the list of bootable devices.
I can't Boot the System from my Installation Media
- Probable Cause: Your BIOS must be configured to support the partitioning method on the pen drive.
- Solution 1: Use Device Manager on another PC to check the partitioning method of the volume on the pen drive. If it is partitioned using the GPT method, a BIOS configured for Legacy mode cannot boot from it.
- Solution 2: You may need to turn off Secure boot to enable the booting of some Rufus created pen drives.
I re-created the RAID Array but the Existing Partition Layout or Information is Still There
- Probable Cause: You skipped the initialisation phase when configuring the RAID array.
- Solution: Either create the RAID array again and use the Advanced Creation options > Initialize checkbox, or simply perform a fast initialisation of the existing RAID array. Note that this will delete everything on the virtual disk.
To Perform a RAID Array Fast Initialisation
Note: This procedure will delete everything on the virtual disk. If there is data you need to keep, ensure this is backed up and checked on a completely different virtual disk or controller.
- Press CTRL+R to go into the RAID BIOS Console.
- On the VD Mgmt page, highlight the virtual drive that you want to erase.
- Press F2 to bring up the operations menu.
- Choose Initialization, and then Fast Init.
- Accept the warning that you want to destroy the contents of the virtual drive, by highlighting OK and pressing Enter.
- Wait for initialisation to complete. It shouldn't take more than a minute. Use the F5 key to refresh progress.
- Exit the RAID BIOS Console by pressing ESC.
Windows says It Can't be Installed onto the Partition
- Probable Cause: The partitioning mode on the disk is incompatible with the mode that you booted the installation media in.
- Solution: Either boot the installation media in the opposite BIOS mode, or delete the partitioning information from the disk using Diskpart, or re-initialise the RAID array. Note that re-initialising the RAID array will delete everything on the array.
Drive 0 is split into More Than One Section of Unallocated Space
- Probable Cause: If your boot disk is larger than 2TB in size, you must use GPT partitioning. GPT partitioning requires UEFI booting.
Note: All of these solutions require reinstalling Windows and, most likely, the loss of any data on your boot volume/disk. Back up any data first.
I installed Windows but the RAID Controller is Not Listed As A Bootable Device
- Probable Cause 1: If using UEFI Boot mode, you may not see the RAID Controller as a bootable device.
- Solution: When in UEFI mode, the BIOS might not list the RAID Controller as a bootable device. Instead, it detects that a Windows Boot Manager partition exists on the virtual disk, and shows Windows Boot Manager in the BIOS boot order. If using UEFI BIOS mode, check for a Windows Boot Manager option and make sure it is at the top of the boot order.
- Probable Cause 2: Your BIOS isn't set to the right mode when compared to how Windows has been installed.
- If your BIOS is set to EFI Optimised Boot Enabled, or UEFI Boot, the likely cause is that your partitioning is in Legacy MBR mode. Whilst UEFI can boot some MBR partitions, this is less compatible. Follow the solution steps for the problem above.
- If your BIOS is set to Legacy Boot, the likely cause is that your partitioning is in GPT mode. A Legacy mode BIOS cannot boot GPT partitions.
Using Diskpart to Clear the Disk
Note: This procedure will delete everything on the virtual disk. If there is data you need to keep, ensure this is backed up and checked on a completely different virtual disk or controller.
- After booting from your installation media, get to the "Where do you want to install Windows?" stage. This lets you confirm whether any Mass Storage drivers are needed - if no drives are shown, load the Mass Storage driver.
- Press Shift and F10 together to get the setup Command Prompt.
- Type in Diskpart and press Enter
- At the DISKPART prompt, type list disk and press Enter
- Note the sizes of the disks. This example also shows that Disk 0 (the 8TB RAID Array) has been formatted using GPT partitioning, whilst the 29GB USB pen drive containing the installation media has not. This explains why Windows will not install - the installation has been started in Legacy mode, but the disk has been partitioned using GPT, which is only supported using UEFI mode.
- Select the disk to be erased, by typing in select disk and the disk number, for example, select disk 0.
- Use the clean command to remove all partitions, partitioning information and boot signatures.
- Close the command prompt.
- In Windows Setup, hit Refresh.
Note: This example shows the removal of GPT partitioning to allow a legacy mode installation to continue. However, as the disk is 8TB in size, in the real world you would want to proceed with a UEFI / GPT installation by making sure your installation media was setup properly and by booting the installation media in the right mode.
The System Boots to the EFI Shell after I Installed Windows
- Probable Cause 1: Incorrect BIOS Boot order
- Solution: Check the BIOS boot order. The EFI Shell should be one of the last BIOS boot entries. Move the Intel RAID controller, or the hard disk, or Windows Boot Manager to be above the EFI Shell in the boot order.
Note: When in UEFI mode, the BIOS might not list the RAID Controller as a bootable device. Instead, it detects that a Windows Boot Manager partition exists on the virtual disk, and shows Windows Boot Manager in the BIOS boot order.
- Probable Cause 2: Missing or offline RAID virtual drive.
- Solution: Check the RAID information displayed during POST to make sure the Virtual Drive is online. If drives or the entire array is missing, check drive seating or cabling.
I have Installed Windows and have my C Drive, but I can't make the D Data Drive fill the rest of the Disk
- Probable Cause: If you can't make the C drive go beyond 2TB, or cannot create a second data volume in the unallocated space on disk 0, the system has been installed with an MBR boot volume, and the system is likely running in Legacy mode.
- Use MSINFO32 to confirm the BIOS boot mode.
I enabled EFI Optimised Boot / UEFI boot, and can no longer get into the main BIOS Setup using F2.
- Probable Cause: Your motherboard BIOS does not correctly support your RAID controller.
- As a temporary fix, turn off or shut down the system and disconnect all the disks from the system, ideally by sliding them out by a few centimetres. This ensures you can put them back and reconnect them in the same order. Turn the system on to gain access to the BIOS.
- Check for an updated motherboard BIOS firmware.
- Check for updated RAID controller firmware.
- Be careful about changing the BIOS EFI Optimised / UEFI boot mode to Legacy, as this may prevent your operating system from booting. If you need to make the change to complete firmware updates, put the BIOS settings back before then attempting to boot Windows again.
I Deleted the device EFI Boot Option from the BIOS, How Can I add this Back In?
- Problem: You deleted an EFI Boot option from the BIOS, and the system no longer starts Windows.
- Solution: Use the Add EFI Boot Option facility.
Adding an EFI Boot Option
This allows you to specify a bootable EFI device, useful if you have changed your motherboard or otherwise lost your boot entries.
- Go into the BIOS using F2.
- Choose the option to Add an EFI Boot Entry
- Add the boot option label, for example, Windows Boot Manager.
- Select the File System - you should see the Intel RAID Controller listed as a PCI device; alternatively, you can also select any available USB devices here.
- Add in the Path for Boot Option - this is normally \EFI\BOOT\BOOTX64.EFI for x64 Windows.
- Select Save to add the entry.
Tip: If you don't have the option in your BIOS for "Add EFI Boot Option", then either your motherboard is in Legacy mode, or all available EFI bootable devices are already added as boot options.
- Stone Server and Workstation Products.
Managing Hard Drive Failures in Servers
When a hard drive fails in a server, it is normally recommended that you obtain a warranty replacement hard drive and do not re-use the old hard drive in the RAID array. When a hard drive has been marked as failed, it is normally due to a defect such as a large number of bad blocks or other malfunction. While the hard drive may come back online it should not be relied upon.
The only exception to the rule of not re-using a hard drive is where the drive failure has left the RAID array in a failed state. This should only happen when you are using a RAID 0 array (which is not recommended) or if the system has already suffered a drive failure. In this scenario, when the array has failed, it is recommended to attempt to bring the last failed drive back online. If you succeed in bringing the RAID online, please take a full backup as soon as possible but ensure that you do not overwrite your last previous full backup. A bare metal or system state backup should be used. When this has been completed, the drive should then be replaced and the RAID array rebuilt, and then the system restored.
In the event of any hard drive or RAID system failure please contact Stone Support for warranty service. If your system is outside of warranty, we may be able to offer an out of warranty chargeable repair.
Example of a Faulty Drive
The RAID Web Console example below shows a drive with media errors (bad blocks) as well as a SMART predictive failure count. This drive should be replaced. Note that some low end software RAID controllers (such as the Intel ESRT-2 controller) don't preserve media error counts or predictive failure counts between reboots.
General Recommendations: Always use RAID management software, such as the Intel RAID Web Console, to manage your RAID arrays where possible. You can then use features such as email alerting to give you prompt notice of any issues, and control the available hot-spare(s) and the rebuild process.
- Server and workstation systems running Firmware or Hardware RAID
Hardware RAID Controller
The Intel RS3DC080 is a hardware RAID controller, with an Avago / LSI Chipset. This is a PCI Express based device and is compatible with a number of Intel server motherboards.
This card is based on the first generation of Avago 12Gbit (12G) SAS controllers and is the successor to the RS25 range of products. It is backwards compatible with 6Gbit SAS and 6Gbit SATA devices.
The card has a warning buzzer as per previous products and two Mini-SAS HD SFF-8643 connectors for drives. Drive LED management is normally performed through the sideband connection in the data cable.
- PCI Express 3.0 x8 interface
- 1GB Cache
- Supports 8 drives directly connected
- Supports up to 128 drives through expanders
- LSI 3108 Chipset
- Supports SAS and SATA drives
- Supports RAID modes 0,1,5,6,10,50,60.
This card is supported by the same LSI Hardware Family Driver as the RS2BL080 and RMS25CB080, upgraded to support the new chipsets. Operating system migration between controllers is only possible if the new model of controller was already present when Windows was last booted, or if the Windows inbox driver supports the card. Only the Windows Server 2012 R2 or newer inbox LSI driver supports the RS3DC080 in emergency situations.
RAID Web Console 2 version 15 or later should be used to perform management of this controller from within Windows.
New Connector Standard
The mini-SAS HD SFF-8643 connector supports 12Gbit/second transfers. 12Gbit/second transfers also require compatible devices and backplanes; the use of older devices or backplanes will result in a lower connection speed. Cables are available to connect SFF-8643 cards or modules to older SFF-8086 6Gbit/second backplanes.
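As a rough guide to what these link speeds mean for throughput: SAS at 6 and 12Gbit/s uses 8b/10b encoding (10 bits on the wire carry 8 bits of data), so per-lane and per-connector bandwidth can be estimated as below (a Python sketch; real-world figures will be lower):

```python
# Approximate usable bandwidth per SAS lane and per 4-lane SFF-8643 connector.
def lane_mb_per_s(gbit_per_s: float) -> float:
    # 8b/10b encoding: 10 line bits carry 8 data bits; 8 bits per byte.
    return gbit_per_s * 1e9 * (8 / 10) / 8 / 1e6

for speed in (6, 12):
    per_lane = lane_mb_per_s(speed)
    print(f"{speed}Gbit/s: {per_lane:.0f} MB/s per lane, "
          f"{4 * per_lane:.0f} MB/s across a 4-lane connector")
# 6Gbit/s: 600 MB/s per lane, 2400 MB/s across a 4-lane connector
# 12Gbit/s: 1200 MB/s per lane, 4800 MB/s across a 4-lane connector
```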
Revised RAID BIOS
To get into the new RAID BIOS, unlike previous generations which used CTRL+G, use CTRL+R during system POST.
The new 12G generation of LSI / Avago cards comes with a revised RAID BIOS which reverts from a GUI to a more text-based look. Most operations that were possible in the previous generation are still possible. Note the instructions at the bottom of the screen on how to move between tabs (CTRL+N / CTRL+P) and how to open the properties of an element, for example by pressing F2 for Operations.
- ISRRAI-167 / ISRRAI-175 - Intel RAID Controller RS3DC080 8 Port 12GB SAS 6GB SATA PCIe 3
Hardware RAID Module
The Intel RMS25CB080 is a hardware RAID controller. Unlike PCI Express Slot based controllers, this controller is described as a "module" and connects to a proprietary interface on compatible Intel Server motherboards.
This module is a second generation SAS/SATA 6Gbit controller and is the successor to the RS2BL080 range of products. The RS2BL080 was a PCI Express Card based solution and the new module is designed to provide next generation performance at a lower price.
The module has a warning buzzer as per previous products and two SFF-8087 connectors for drives. There is no IPMB/I2C connector to connect to an enclosure or backplane; rather drive LED management is done through sideband capable cables such as HPBLEA-179.
- PCI Express 2.0 interface via Module connector (PCI Express 3.0 from revision G35316-610 or newer).
- 1GB Cache.
- Supports 8 drives directly connected.
- Supports up to 128 drives through expanders.
- LSI 2208 Chipset.
- Supports SAS and SATA drives.
- Supports RAID modes 0,1,5,6,10,50,60.
RAID Module Connector
Compatible motherboards include the Intel S1200V3RPL which has the module connector roughly in between the memory slots and the PCH. The approximate module location is shown below.
This module is supported by the same LSI Hardware Family Driver as the RS2BL080, upgraded to support the new chipsets. Operating system migration between controllers is only possible if the new model of controller was already present when Windows was last booted, or if the Windows inbox driver supports the card. Only the Windows Server 2012 or newer inbox LSI driver supports the RMS25CB080 in emergency situations.
RAID Web Console 2 version 13 or later should be used to perform management of this controller from within Windows.
- ISRRAI-151 - Intel RAID RMS25CB080 8 Port 6Gbps SAS/SATA RAID *SIO Module* 1Gb LSI2208
What is the difference between Enterprise and Consumer Desktop hard drives?
Enterprise and Consumer (or ordinary desktop) hard drives are different in several key ways.
We do not recommend the use of desktop hard drives in servers or mission critical systems at any time.
Desktop, or Consumer Hard Drives
These drives are suited to desktop workloads. They are not designed for heavy use or 24x7 access. Cost-optimised desktop drives are suited to single-user usage (i.e. one user using the machine, as opposed to servers, where multiple accesses can be requested at the same time) and lower power draw.
Desktop drives do not feature RAID protection features such as TLER (time limited error recovery).
Enterprise Hard Drives
Enterprise SATA and all SAS drives are designed for heavier usage and are built or tested with higher reliability in mind. Manufacturers of these drives usually quote higher reliability in the form of Mean Time Between Failures (MTBF) and certify them for 24x7 access.
Enterprise drives feature TLER. In the event of a media or other problem, the drive will only attempt to resolve the problem internally for at most 6 seconds. At the end of that time, the drive will hand over error management to the RAID controller. The drive will not go offline to complete "heroic recovery".
This prevents the situation where a desktop drive could retry almost indefinitely to read the data. That would keep the RAID controller in a timeout state where it could not easily determine the state of the drive, introducing performance problems and possibly eventually forcing the entire drive out of the RAID array even if just one sector is damaged.
Enterprise drives will attempt to stay online while the RAID controller can either remap the sector, or notify the user of a need to replace the drive at the nearest opportunity. Data remains protected especially if you are using RAID 6.
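The behaviour can be pictured with a toy model (Python; the controller timeout is an illustrative figure, not taken from a real controller):

```python
# Toy model: who gives up first when a sector read keeps failing?
TLER_LIMIT_S = 6.0           # enterprise drive reports the error after ~6s
CONTROLLER_TIMEOUT_S = 8.0   # illustrative RAID controller patience

def bad_sector_outcome(drive_retry_budget_s: float) -> str:
    if drive_retry_budget_s <= CONTROLLER_TIMEOUT_S:
        # TLER: the error comes back in time, the controller rebuilds the
        # data from parity/mirror and the drive stays in the array.
        return "controller handles the error; drive stays online"
    # Desktop-style "heroic recovery" outlasts the controller's patience,
    # so the whole drive risks being dropped over a single bad sector.
    return "controller timeout; drive may be forced out of the array"

print(bad_sector_outcome(TLER_LIMIT_S))      # enterprise drive with TLER
print(bad_sector_outcome(float("inf")))      # desktop drive, unbounded retries
```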
Note: Seagate Constellation, or Seagate NS, drives are examples of Seagate Enterprise drives. Western Digital RE and RE4 drives are examples of Western Digital's equivalent. WD Caviar Black drives feature a more reliable mechanism but are still designed for desktop use and do not feature TLER.
- All server and desktop systems capable of RAID functionality.
Updating RAID Controller or Module Firmware through RAID Web Console
If you use the RAID Web Console / MegaRAID Storage Manager software to perform a RAID Controller update, you may be given the option of an online firmware update. This is a new feature that has been made available in RAID Web Console 2 Version 15 and newer.
What Does This Feature Do?
Normally when you flash the firmware through RAID Web Console, the firmware is uploaded to the controller, and then saved into the flash memory. However, the firmware upgrade does not actually take effect until the system is restarted. The online firmware update attempts to restart the controller with the updated firmware without needing to reboot the system.
Is this Recommended?
In most situations, no. This should definitely not be attempted if your operating system is running from the RAID volumes hosted by the controller, or if you have any mission critical system functions hosted by that controller, including virtual machines, backup volumes or file shares. There will be a brief loss of controller access, and Windows Blue-screen crashes have been observed when using the online firmware update feature.
General Guidelines for Updating a System
- Uninstall any old RAID Web Console software.
- Upgrade the hardware driver, ideally with a reboot before proceeding to the next stage, although this is not usually necessary.
- Install the latest RAID Web Console software.
- Use the latest RAID Web Console to then flash the controller or module firmware.
- Do not choose the option of an online firmware update.
- Systems with an Intel or LSI Hardware RAID Controller or Module.
RAID Capacitors and Smart Cache Batteries
These batteries are used to protect the contents of a hardware RAID card's write cache from being lost in the event of a power outage.
- If you are using a software RAID controller, or a host bus adapter (HBA) that does not have a cache buffer, this article does not apply to you.
- If you have a hardware RAID controller, such as the Intel RS3DC080, which features a hardware cache buffer (typically 512MB to 2GB in size), then this article applies.
Some RAID capacitors or cache batteries can be particularly expensive. In addition, many cache batteries have a finite life and you should expect to replace these every 3-5 years or so.
Hardware RAID cards can be configured to use their buffer in one of two ways:
- Read only cache (also known as write-through, or write-through reads)
- Read and write cache (also known as write-back)
In read and write cache mode, the contents of the write cache will be lost, i.e. not committed to disk, in the event of an unexpected power outage. The optional write cache battery or capacitor can protect the contents of this write cache for a limited period of time, usually 48 hours or less. When power is restored, the RAID controller will commit the contents of the cache to disk.
However, your operating system will still have suffered from an unclean shutdown and there is no guarantee that your file system or application state will be any better than if a RAID cache battery was not fitted.
The above diagram shows that the RAID cache battery only protects a portion of the data flow from application to disk.
- Always have a fully tested and configured UPS system that will shut down your system when the battery reaches a low level. There must be enough run time at this point to shut down or pause any virtual machines, and then to shut down the host. Depending on your configuration, the total shutdown time is likely to be around 8 minutes (see the sketch after this list).
- If data security and reliability are of paramount importance, and performance is not an issue, always leave the write cache feature of your virtual drives off and disable the actual hard drive cache.
- In most situations, there are performance needs and the write cache feature of the RAID controller is beneficial. In this situation, in addition to the above UPS, a RAID cache battery gives a small additional level of protection whilst maintaining performance, but bear in mind it only protects one portion of the data flow from application to disk.
- If you are aware of a UPS fault, or do not have a UPS, and do not have a RAID battery or capacitor, then whilst the fault remains you should turn off the write cache feature of your virtual drives, using the Intel RAID Web Console.
- Always test UPS batteries regularly. UPS batteries last on average for between 3 to 5 years, depending on the number of charge/discharge cycles, and temperature.
- Always ensure you have adequate backups and a disaster recovery plan.
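To put the shutdown-time recommendation above into numbers, here is a minimal runtime-budget sketch (Python; every timing is illustrative and should be replaced with values measured on your own system):

```python
# Budgeting UPS runtime against the total time needed for a clean shutdown.
vm_shutdown_s   = 4 * 60    # pause or shut down guest virtual machines
host_shutdown_s = 3 * 60    # shut down the host operating system
margin_s        = 1 * 60    # safety margin

required_s = vm_shutdown_s + host_shutdown_s + margin_s
print(f"Runtime needed after the low-battery trigger: {required_s / 60:.0f} min")  # ~8

ups_runtime_s = 12 * 60     # runtime at load, from the UPS datasheet/sizing tool
assert ups_runtime_s > required_s, "UPS cannot cover a clean shutdown"
```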
How to Turn off Write Caching at the Virtual Drive Level
- Open Intel RAID Web Console / LSI MegaRaid Storage Manager
- Login to the system using your Windows Administrator username and password
- Go to the Logical tab
- Note the existing Write cache policy settings on the right hand side
- For each virtual drive, right-click on the virtual drive and then left-click on Set Virtual Drive Properties
- You have three options under Write Policy:
- Write Through - no write caching - this is the safest setting, but also the slowest
- Always Write Back - write caching enabled in all circumstances
- Write Back - write caching only enabled when a working battery or capacitor is fitted
- You can also disable the individual hard drive cache by changing the Disk Cache Policy to Disabled.
- Click OK to accept the changes.
Disabling Write Cache at the Operating System Level
Many applications or systems attempt to turn off disk caching to provide security in the event of a power loss. For example, a Windows Domain Controller will turn off write caching where it can.
However, when a Hardware RAID card is fitted, this does not guarantee that the hardware write cache either on the controller or the end disks will actually be disabled. Do not rely on this feature to provide data integrity in the event of an unplanned power loss.
We recommend that you turn off Automatic Windows Updates, both on host virtualisation servers and on the guest servers. Systems should be patched regularly as part of your system maintenance plan, rather than automatically. This ensures that the overall shutdown time when the UPS battery is low is relatively consistent, ensuring a clean shutdown.
- All Servers with a Hardware RAID Controller
What Is JBOD Mode?
JBOD stands for "just a bunch of drives". JBOD mode passes through physical disks so that the operating system or host can see each individual drive. This is the opposite of a normal RAID controller, which groups physical disks together to form a single, often larger or fault-tolerant, virtual drive.
JBOD mode is useful in some software storage solutions, such as Storage Spaces Direct, which require direct access to individual disk drives, rather than a RAID array.
Do All SAS Adapters Support JBOD Mode?
Not all adapters (HBAs) do, but many support it to a varying degree.
- Older 6G SAS RAID Adapters generally do not have a JBOD mode. Disks can only be presented as virtual drives, even if there is only one physical disk in the virtual drive.
- 12G IT/IR (initiator target / integrated RAID) adapters are usually lower-cost HBAs (host bus adapters) that support basic RAID levels of 0 and 1 (as virtual drives) using integrated RAID (IR) functionality. They often support initiator target mode too, which technically is a JBOD mode solution; however, many software storage vendors do not like the RAID facility being available on the same adapter at the same time. These cards often don't support turning off IR mode completely.
- 12G RAID Cards - most modern 12Gbit RAID cards support either RAID mode, with the option to present some drives individually (you can mark the drives you want to appear as a JBOD disk), or JBOD personality mode, where all drives are presented as individual physical disks by default.
How Do I Enable JBOD Mode on Intel 12G SAS Adapters?
This involves changing the personality of a 12Gbit RAID adapter to turn off all RAID functionality and enable only JBOD mode. The example below is from an Intel RMS3AC160 adapter, but the instructions apply to most other RAID modules and RAID PCIe cards in the same series.
Note: Enabling JBOD personality mode will destroy any RAID volumes on the disks. You should be able to move JBOD disks between JBOD personality adapters without data loss, but always make sure the HBA is in the right personality mode first, and ideally all adapters sharing disks should be at the same firmware level. In a new deployment, always deploy the latest HBA firmware before connecting the disks.
- Go into the RAID BIOS during the motherboard BIOS post. This is usually done by pressing CTRL + R.
- Use CTRL-N, or CTRL-P to navigate through the different tabs to go to the Controller Management (Ctrl Mgmt) tab.
- Note the option on this screen to "Enable JBOD". Enabling this option allows you to select individual drives from the PD Mgmt tab, and create or present the drive in JBOD mode. However, RAID is still enabled on the adapter, and the adapter's "personality" is still RAID.
- If you just upgraded the HBA firmware, it is recommended to use the option to reset the controller to defaults. To do this, tab to the Set Factory defaults option and press Enter.
- To enable JBOD mode, you need to go to the second settings screen. To do this, tab to the < Next > option.
- The Personality Mode setting is the default option on this screen.
- Press Enter to see the list of personalities. Use the arrow keys to select JBOD mode, and press Enter again.
- Tab until Apply is highlighted, and press Enter to save the setting.
- Press ESC to exit the RAID BIOS, and then use CTRL + ALT + DEL, or the system reset button, to reboot when prompted.
- When you reboot the system and go back into the RAID BIOS (using CTRL + R), you will see that the layout of the BIOS has changed slightly. The adapter is no longer described as a RAID adapter, but now as a JBOD adapter.
- You may need to reinstall the HBA driver in Windows after this change. The Windows Server 2016 inbox driver is suitable for all 12G adapters from Broadcom/Avago as of summer 2016.
- In the physical drives screen you can see all of the "J-Online" disks, meaning that these physical disks are being presented to the host as individual physical disks.
- It is possible to stop a drive being presented as a disk by highlighting the drive, using the F2 operations option, and choosing "Delete JBOD". This is not recommended because, at the time of writing, the drive cannot be re-added as a presentable drive in the RAID BIOS. However, on rebooting, the drive appears to be automatically re-presented.
- RAID Web Console shows the JBOD drives below. Drives can be removed from being presented by right clicking the drive and selecting "Delete EPD". You can re-add the drive by choosing "Make EPD". Data should be preserved on the physical disk.
Note: If you are using an Intel server platform that uses a motherboard such as the S2600WTTY or S2600WTTYR and cannot see or access the RAID BIOS options, first of all ensure that Quiet Boot is turned off in the main BIOS. If this does not help, ensure the BIOS is set for Legacy boot instead of UEFI boot (Setup Menu > Boot Maintenance Manager > Advanced Boot Options > Boot Mode).
- Intel (Avago/Broadcom/LSI) 12G SAS RAID Adapters
There are several different RAID levels available to servers (and even desktop PCs) with today's technology. These offer varying features to either enhance performance or reliability, or sometimes a combination of both.
Use the guide below to choose the RAID level that suits your needs. All quoted usable capacities are approximate.
- RAID 1: Improved read speeds. Acceptable for non-demanding uses.
- RAID 5: Improved read and write. Superseded by RAID 6.
- RAID 6: Good, with modern controllers. Suitable for most applications, can suffer from long rebuild times though.
- RAID 10: Good performance, good rebuild times but high cost.
- RAID 50: Capacity loss/expense, not used in most situations.
- RAID 60: Capacity loss/expense, not used in most situations.
- JBOD: Single drive performance. Used with an upstream RAID controller.
RAID 0 - Stripe
RAID 0, also known as striping, leads to improved performance as the workload is shared between two or more drives. However, it has no fault tolerance. If any of the member disks develops a problem, the RAID may fail and data corruption or loss is likely.
Example: Two 500GB drives in a RAID 0 gives a usable capacity of 1TB (1000GB).
RAID 1 - Mirror
The mirror has improved fault tolerance over RAID 0. Each drive is a duplicate copy. The system can benefit from improved read speeds with a controller which can read alternate blocks from each drive at the same time. Write performance is the same as a single drive, since write operations must be duplicated.
With RAID1 the capacity of the RAID is halved.
Example: Two 500GB drives in a RAID 1 gives a usable capacity of 500GB.
RAID 5 - Stripe with Parity
RAID 5 used to be the common RAID standard for servers. Write and read performance improves with each additional disk or spindle that you add to the RAID. You lose the capacity of one drive, since one drive's worth of space holds parity information. (Technically, the parity information is distributed across all of the drives; however, you do lose one drive's worth of overall capacity.) You need a minimum of 3 hard drives for a RAID 5.
The disadvantage of RAID5 is that if you suffer an outright hard drive failure, the remaining drives need to be in perfect working order with no bad blocks. RAID6 gets around this problem.
RAID 5 is computationally expensive as parity must be calculated. Therefore it is slower than RAID 0.
Example: Three 500GB drives in a RAID 5 gives a usable capacity of 1TB (1000GB).
RAID 6 - Stripe with double parity
RAID 6 takes RAID 5 one level further with two lots of distributed parity. You need a minimum of four hard drives and will lose the capacity of two. RAID6 has even greater computational requirements than RAID5, and for this reason we recommend a dedicated hardware RAID controller with offload capabilities.
RAID 6 can suffer two hard drive failures and preserve the data. Typically, if you have one outright hard drive failure, you can still rebuild the RAID array even if bad blocks are encountered on other drives during the rebuild, as long as the bad blocks do not appear in the same location on multiple drives. RAID 6 has improved reliability over RAID5 and is recommended for all general application and file storage requirements, including VMWare datastores.
Example: Four 500GB drives in a RAID 6 gives a usable capacity of 1TB (1000GB).
Example: Five 500GB drives in a RAID 6 gives a usable capacity of 1.5TB (1500GB).
When a drive fails in a RAID6 array, the entire RAID6 pack needs to be rebuilt. This involves reading the contents of all remaining working disks in order to write the contents of the newly fitted replacement disk. This can lead to degradation in overall system performance for an extended period whilst the rebuild is carried out.
In this situation, RAID 10 can give improved rebuild times and greater overall performance, but with the negative impact of increased cost or reduced capacity.
RAID 10, RAID 50 and RAID 60
These RAID levels take smaller arrays (RAID 1 mirrors, or RAID 5 or RAID 6 parity sets) and then stripe across them using RAID 0.
Because of the increased (doubled) hard drive count, performance can be impressive. However, high cost is a negative factor with these RAID levels. RAID50 can only withstand one drive fault within each RAID 5 set at a time, so if you need an ultimate-performance storage system we recommend RAID60.
Example: Six 500GB drives in a RAID 50 gives a usable capacity of 2TB (2000GB) (a RAID 0 stripe across two three-drive RAID 5 sets, 1TB usable each)
Example: Eight 500GB drives in a RAID 60 gives a usable capacity of 2TB (2000GB) (a RAID 0 stripe across two four-drive RAID 6 sets, 1TB usable each)
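As a cross-check of the capacity examples in this article, here is a minimal usable-capacity sketch (Python; it assumes identical drives, and the two-set layouts used in the RAID 50/60 examples above):

```python
# Approximate usable capacity for the RAID levels described in this article.
def usable_gb(level: str, drives: int, size_gb: float, sets: int = 2) -> float:
    if level == "0":   return drives * size_gb            # stripe, no redundancy
    if level == "1":   return drives * size_gb / 2        # mirror: capacity halved
    if level == "5":   return (drives - 1) * size_gb      # one drive of parity
    if level == "6":   return (drives - 2) * size_gb      # two drives of parity
    if level == "10":  return drives * size_gb / 2        # stripe of mirrors
    if level == "50":  return (drives - sets) * size_gb       # one parity drive per RAID 5 set
    if level == "60":  return (drives - 2 * sets) * size_gb   # two parity drives per RAID 6 set
    raise ValueError(f"unknown RAID level: {level}")

print(usable_gb("0", 2, 500))    # 1000.0 - two 500GB drives in RAID 0
print(usable_gb("1", 2, 500))    # 500.0
print(usable_gb("5", 3, 500))    # 1000.0
print(usable_gb("6", 4, 500))    # 1000.0
print(usable_gb("6", 5, 500))    # 1500.0
print(usable_gb("50", 6, 500))   # 2000.0 - two three-drive RAID 5 sets
print(usable_gb("60", 8, 500))   # 2000.0 - two four-drive RAID 6 sets
```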
JBOD
Just a Bunch of Drives refers to a collection of drives which are either not configured in a RAID - so the individual drives are accessible as separate volumes - or it can refer to storage array units which have no inbuilt RAID functionality but are designed to connect to an upstream RAID controller. For example, you may have a SAN array which has an inbuilt RAID controller. Capacity can be expanded on some models by adding a JBOD array. The JBOD has no RAID functionality, but the drives will be seen, managed and incorporated into a RAID configuration by the main controller in the upstream SAN array.
RAID 5 vs. RAID6
As above, RAID6 gives an additional level of protection against drive failures, due to the extra copy of parity information. However, this can come at the cost of performance as RAID6 places a greater load on the RAID controller.
Using a powerful hardware RAID controller (such as the second generation SAS 6G controllers) can provide good RAID6 performance, and in this situation RAID6 is strongly recommended. Some customers have historically used RAID5 plus a hot-spare disk; this can be turned into a more reliable RAID6 setup without loss of capacity or increase in cost.
Recommendation: Contact Stone support for more information on deciding which RAID level is right for you. Some specific applications may request RAID0 for application log files; if this is the case this should be a separate volume to your operating system, which should never use RAID 0.
- All products capable of RAID functionality