Installation, Drivers and Software

How to Set Up Remote Management on Servers with Management Modules (RMM4)

Selected models of servers come with, or can be upgraded to include, a remote management module such as the Intel RMM4. Use the instructions below to set up your remote management module.


You will need the following:




To connect to the management console, use a web browser and connect to the IP address or hostname you specified.

For example: https://192.168.1.100 or https://rmm-hostname/ (substitute the IP address or hostname you configured).



You will need to have Java installed to use the remote KVM (keyboard/video/mouse) and media redirection.

Getting More Help

For more information, see the Intel RMM4 user guide or contact Stone support.

Applies to:

How to Install Servers with Volumes Larger than 2TB

Booting from and Using UEFI / GPT Partitions

If your RAID or disk volume is larger than 2TB, you will need to use one of the following methods to use all of the capacity and to be able to boot from the volume:

For more information, please see the Intel GPT white paper, available here or attached.

Alternatively, this subject is covered in depth here.
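As a quick illustration (not taken from the white paper), a blank data disk can be initialised with a GPT partition table using diskpart. Note that clean and convert are destructive on a disk that holds data, and the disk number below is hypothetical:

```cmd
diskpart
list disk
select disk 1
clean
convert gpt
create partition primary
```

Booting from a GPT volume additionally requires the system to be installed in UEFI mode.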

Applies to:

Windows Deployment Services (WDS) May Not Import Realtek LAN Drivers


Newer versions of the Realtek LAN Driver, from around mid-2014 onwards, may fail to import into Windows Deployment Services, either WDS 2008R2 or WDS 2012.

WDS reports that the package addition failed. However, no actual error code or meaningful reason for the failure is displayed.


The root cause appears to be a problem with the WDS Jet database and either the size of the Realtek LAN driver or some particular information inside the driver INF. If you use the WDS PowerShell utilities to try to import the driver, an error code is returned indicating a database error with a record that is "too big".
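For reference, a hedged sketch of the PowerShell import that surfaces the underlying error; the driver path and group name below are hypothetical:

```powershell
# Server 2012: the WDS cmdlets live in the WDS module
Import-Module WDS
# Attempt to import the Realtek driver package; on affected driver versions this
# fails with a Jet database error indicating the record is "too big"
# (path and group name are examples only - adjust for your environment)
Import-WdsDriverPackage -Path "D:\Drivers\Realtek\rt64win7.inf" -GroupName "Realtek LAN"
```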


Server 2012

There is one full resolution for this issue and three alternatives, should your situation require them. However, with the release of the April 2016 Realtek driver, only the full resolution should be required.

Server 2008R2

As of April 2017, some x64 versions of the Realtek driver, even the WinPE version, do not import into WDS on Server 2008R2. There is no resolution apart from upgrading your server infrastructure or using the work-arounds below.

Other Work-Arounds

  1. If using MDT, import the driver directly into MDT and then rebuild your Lite-Touch image.
  2. Alternatively, if supported by the hardware you have, use an older version of the driver, such as this one from August 2013 for Windows 7.
  3. Or, manually add the driver to the boot WIM file for situations where you are capturing or deploying images.
  4. Finally, for network-based installs, while you can add the driver to the boot WIM file, you could instead manually deploy the Realtek LAN driver to the new installation. Note, however, that joining the machine to the network may not be possible until this has been done. This method is not suited to large deployments.

Overview of Adding a Driver to a WIM file

  1. Download and install the latest Windows Assessment and Deployment Kit (ADK) on your PC. It can be installed on the WDS server, but this is not normally recommended, especially if the server is a domain controller. As of 13/4/17, the latest ADK is 1703 for Windows 10 (the Windows 8.1 Update ADK is here).
  2. Copy the boot WIM file to your ADK PC. Always keep a backup copy of the original WIM file.
  3. From the Deployment Tools command prompt, mount the WIM file to a folder. The destination folder must exist, and should be empty:

Imagex /mountrw "d:\wimwork\boot.wim" 1 d:\output

  4. Now add all of the drivers that you want. Put all of the drivers in a Drivers folder as below:

Dism /Image:d:\output /Add-Driver /Driver:d:\drivers /Recurse /ForceUnsigned

  5. (The /ForceUnsigned switch will allow all drivers to be added, whether signed or not.)
  6. Now commit the changes back to the boot WIM file:

Imagex /unmount /commit d:\output

  7. Now you can copy the WIM file (d:\wimwork\boot.wim in the example) back to the WDS server and test to see if the driver addition was successful.
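If your ADK version does not include ImageX (newer kits ship DISM only), the same mount, add-driver and commit steps can be performed with DISM alone. A sketch using the same example paths as above:

```cmd
Dism /Mount-Wim /WimFile:d:\wimwork\boot.wim /Index:1 /MountDir:d:\output
Dism /Image:d:\output /Add-Driver /Driver:d:\drivers /Recurse /ForceUnsigned
Dism /Unmount-Wim /MountDir:d:\output /Commit
```

As before, the mount folder must exist and should be empty, and you should keep a backup of the original WIM file.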

Note: Always ensure that the finished, built machine has the latest Realtek network driver on it, before handing it over to the user. The temporary 2013 Realtek network driver should not be left running on deployed machines.

Applies to:

Intel RAID Users Guide

Intel RAID Manual

Intel have produced a manual covering the setup and basic maintenance of their RAID systems, primarily those based on the LSI RAID stack. This includes controllers such as the RMS25CB080 and RS2BL080.

The manual can be downloaded here, or attached.

Applies to:

Technical Product Specifications (TPS) for Common Intel Server Boards and Systems

Technical Product Specification Documents

Intel produce these TPS documents for each family of server or workstation motherboard. Use these documents to:


Note 1: Intel server board names often differ slightly according to the number of LAN ports fitted. For example, the S2400GP2 has two LAN ports and the S2400GP4 has four; both belong to the S2400GP family, which is covered by the S2400GP Technical Product Specification document.

Note 2: Always check the Intel web site for updated documents.

Intel Server Systems

Intel Server Systems include Intel server or workstation motherboards. The second part of the Intel Server System model code indicates which motherboard is fitted. For example, the R2000GZ series of Intel Server system indicates that it includes an S2600GZ motherboard. The R2000GZ series includes different 2U models with different drive bay configurations.

Spares / Accessories List and Configuration Guide for Server Systems

These guides are available for Intel Server systems to allow research into the spare parts and accessories that are available for that model family.

S2600GZ / GL - R1000GZ/GL Server System and R2000GZ/GL Server System - Spares / Accessories List and Configuration Guide

S2600WT - R1000WT Server System and R2000WT Server System - Spares / Accessories List and Configuration Guide

Applies to:

Stone Elysium SF Documentation, Drivers, Tools and Resources

Documentation, Drivers, Tools and Resources

Intel Technical Product Specifications

Intel Diagnostics

Stone Diagnostics and Instructions

Elysium Driver Package

BIOS Updates and Other Drivers:

Applies to:

Azure Stack HCI Trusted Enterprise Virtualisation Solution Brief - Stone Computers Elysium SF (Hybrid Acceleration Series)

Azure Stack HCI Trusted Enterprise Virtualisation Solution Brief

Virtualization-based security (VBS) is a key component of the security investments in Microsoft Azure Stack HCI to protect hosts and virtual machines from security threats.

The attached solution brief illustrates how you can leverage VBS with Microsoft Azure Stack HCI and the Stone Computers Elysium SF (Hybrid Acceleration Series).

Applies to:

Stone Equinox / Storage Spaces Direct (S2D) Hyper-V Cluster UPS Shutdown Script

Stone Equinox / Storage Spaces Direct (S2D) Hyper-V Cluster UPS Shutdown Script

Storage Spaces Direct (S2D) nodes provide disk resiliency to each other as part of the storage network.

When an extended power outage occurs that exceeds the available battery runtime, it is important that a managed shutdown process is used:

A software package is now available to provide this shutdown facility. It is compatible with all UPS software that supports shutdown via a script.


Script Installation Process


A. Install your UPS Software as normal

It's a good idea to install your UPS software first because, depending on the type of software installed, the installation script can configure it for you.

B. Manual creation of a domain service account. Optional: if you do not create an S2D management domain account called svcUPs yourself, the installation script will create it for you.
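If you prefer to create the account manually, a minimal sketch using the ActiveDirectory module is below. The password handling is a placeholder and the account options are assumptions; follow your own account policies:

```powershell
Import-Module ActiveDirectory
# Create the service account the installation script expects (name per the article)
New-ADUser -Name "svcUPs" -SamAccountName "svcUPs" `
    -AccountPassword (Read-Host -AsSecureString "Enter password") `
    -PasswordNeverExpires $true -Enabled $true
```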

Installation on each S2D Host

Note: If Eaton Intelligent Power Protector (IPP) or Eaton Intelligent Power Manager (IPM) is running, the script will offer to configure the Eaton settings for you. See Configuring the UPS Software.

Verifying Installation

Check for a scheduled task called UPShutdown in the \MANAGEMENT folder in Task Scheduler.

Configuring the UPS Software

Automatic Configuration

You should install your UPS software before running the UPS Shutdown script installer, as the script installer may be able to configure your software for you.

After installation, we recommend that you manually check your UPS software shutdown settings.

Software Supported for Automatic Configuration

Manual Configuration / Verifying your UPS Software Shutdown settings

Reminder: you must configure the UPS software to use the shutdown script (for example C:\_STONE\S2DUPSShutdown\StartShutdown.CMD) and a Shutdown Duration of at least 600 seconds.

Testing the System

You should arrange full system downtime to test the shutdown sequence. You can then test the software either from the UPS software or by running StartShutdown.CMD from an administrative command prompt.

Recommended UPS Hardware Installation

Shutdown Settings

All of the settings in ShutdownControl.txt / ShutdownControl.PS1 are shown below.


# Stage 1 - Shutting down Local VMs on this host Only
#         - a) Clustered VM roles on this node only
#         - b) Non-Clustered VM roles on this node
[int]$LocalVMsStaggeredDelay = 5 # maximum number of seconds in between starting each VM shutdown
[int]$MaximumTimetoStartShuttingdownLocalVMs = 120 # maximum number of seconds to start shutting down local VMs. $LocalVMsStaggeredDelay might be reduced if you have lots of VMs
[int]$SaveLocalVMs=$true # $true to save local VMs, or $false to shut them down

# if $SaveLocalVMs=$true
# VMs above this threshold will be shutdown, instead of being saved

# Stage 2 - Start shutting down the Global cluster
[int]$StartClusterShutdownDelay=120 # the amount of time before shutdown-cluster is issued
# The local UPS script on each node should have already started the shutdown the local VMs

# Stage 3 - Shutdown of this Node - in S2D, this needs to happen after all Cluster IO has ceased
$ClusterShutdownTimeout = 240 # this is the time in seconds, after Stage 2 has been started, that we will wait for the cluster shutdown job to complete before testing to see if we are ready to turn off the node

# Total Shutdown time should be:
# The greater of:
# $MaximumTimetoStartShuttingdownLocalVMs + time to save or shutdown last VMs
# $StartClusterShutdownDelay + $ClusterShutdownTimeout
# Then final optional checks for LocalClusteredVMs, LocalNon-ClusteredVMs, and then the Cluster itself
# Then issuing local Host Shutdown
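Putting the comment above into numbers, here is a worked example using the default values, with a hypothetical 60-second figure for saving the last VMs:

```powershell
$MaximumTimetoStartShuttingdownLocalVMs = 120
$TimeToSaveLastVMs                      = 60   # hypothetical; depends on VM memory sizes
$StartClusterShutdownDelay              = 120
$ClusterShutdownTimeout                 = 240

# Total shutdown time is the greater of the two paths, before the final
# checks and the local host shutdown are issued
$total = [Math]::Max($MaximumTimetoStartShuttingdownLocalVMs + $TimeToSaveLastVMs,
                     $StartClusterShutdownDelay + $ClusterShutdownTimeout)
# Here $total is 360 seconds, which is why a UPS Shutdown Duration of at
# least 600 seconds leaves comfortable headroom
```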

Applies to:
