Selected server models include, or can be upgraded to include, a remote management module such as the Intel RMM4. Use the instructions below to set up your remote management module.
You will need the following:
To connect to the management console, use a web browser and connect to the IP address or hostname you specified.
You will need to have Java installed to use the remote KVM (keyboard/video/mouse) and media redirection.
For more information, see the Intel RMM4 user guide or contact Stone support.
If your RAID or disk volume is larger than 2TB, you will need to use one of the following methods to use all of the capacity and to be able to boot from the volume:
For more information, see the Intel GPT white paper, available here or attached.
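As a minimal illustration only (the disk number, file system, and volume label below are assumptions, and booting from a GPT volume additionally requires UEFI firmware), a large data disk can be initialized as GPT with PowerShell:

```powershell
# Sketch: initialize hypothetical disk 1 with a GPT partition table,
# making the full capacity of a volume larger than 2TB addressable.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```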
Newer versions of the Realtek LAN driver, from around mid-2014 onwards, may fail to import into Windows Deployment Services, on either WDS 2008 R2 or WDS 2012.
WDS reports that the package addition failed. However, no actual error code or meaningful reason for the failure is displayed.
The root cause appears to be a problem with the WDS Jet database and either the size of the Realtek LAN driver or some particular information inside the driver INF. If you use the WDS PowerShell utilities to try to import the driver, an error code is returned indicating that a database error occurred because a record is "too big".
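For reference, the failing import can be reproduced with the WDS PowerShell cmdlet; the driver path below is a placeholder, not the actual Realtek INF name:

```powershell
# Placeholder path - point this at the extracted Realtek driver INF.
Import-WdsDriverPackage -Path "D:\Drivers\Realtek\rt64.inf"
# On an affected driver this fails with a Jet database error
# reporting that a record is "too big".
```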
There is one full resolution for this issue and three alternatives, should your situation require them. However, with the release of the April 2016 Realtek driver, only the full resolution should be required.
As of April 2017, some x64 versions of the Realtek driver, even the WinPE version, do not import into WDS on Server 2008 R2. There is no resolution for this other than upgrading your server infrastructure or using the workarounds below.
rem Mount the first image inside boot.wim as read/write
Imagex /mountrw "d:\wimwork\boot.wim" 1 d:\output
rem Inject every driver found under d:\drivers into the mounted image
Dism /Image:d:\output /Add-Driver /Driver:d:\drivers /Recurse /ForceUnsigned
rem Commit the changes and unmount the image
Imagex /unmount /commit d:\output
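ImageX is deprecated in newer deployment toolkits; assuming the same working paths as above, the mount and commit steps can also be performed entirely with DISM:

```cmd
Dism /Mount-Wim /WimFile:d:\wimwork\boot.wim /Index:1 /MountDir:d:\output
Dism /Image:d:\output /Add-Driver /Driver:d:\drivers /Recurse /ForceUnsigned
Dism /Unmount-Wim /MountDir:d:\output /Commit
```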
Intel have produced a manual covering the setup and basic maintenance of their RAID systems, primarily those based on the LSI RAID stack. This includes controllers such as the RMS25CB080 and RS2BL080.
The manual can be downloaded here or is attached.
Intel produce these TPS documents for each family of server or workstation motherboard. Use these documents to:
Intel Server Systems include Intel server or workstation motherboards. The second part of the Intel Server System model code indicates which motherboard is fitted. For example, the R2000GZ series of Intel Server system indicates that it includes an S2600GZ motherboard. The R2000GZ series includes different 2U models with different drive bay configurations.
These guides are available for Intel Server systems to allow research into the spare parts and accessories that are available for that model family.
S2600GZ / GL - R1000GZ/GL Server System and R2000GZ/GL Server System - Spares / Accessories List and Configuration Guide
S2600WT - R1000WT Server System and R2000WT Server System - Spares / Accessories List and Configuration Guide
Intel Technical Product Specifications
Stone Diagnostics and Instructions
Elysium Driver Package
BIOS Updates and Other Drivers:
Virtualization-based security (VBS) is a key component of the security investments in Microsoft Azure Stack HCI to protect hosts and virtual machines from security threats.
The attached solution brief illustrates how you can leverage VBS with Microsoft Azure Stack HCI and the Stone Computers Elysium SF (Hybrid Acceleration Series).
Storage Spaces Direct (S2D) nodes provide disk resiliency to each other as part of the storage network.
When an extended power outage occurs that exceeds the available battery runtime, it is important that a managed shutdown process is used:
A software package is now available to provide this shutdown facility. It is compatible with all UPS software that supports shutdown via a script.
A. Install your UPS Software as normal
It's a good idea to install your UPS software first; depending on the type of software installed, it will be configured for you by the installation script.
B. Manually create a domain service account. Optional: if you do not follow this step to create an S2D management domain account called svcUPs, the installation script will create it for you.
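If you prefer to create the account yourself, a sketch using the ActiveDirectory module looks like the following; OU placement, password policy, and any group memberships the script expects will vary by environment and are not shown:

```powershell
# Sketch: manually create the S2D management service account.
Import-Module ActiveDirectory
New-ADUser -Name "svcUPs" `
    -AccountPassword (Read-Host -AsSecureString "Enter password") `
    -Enabled $true `
    -PasswordNeverExpires $true
```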
Check for a scheduled Task called UPShutdown in the \MANAGEMENT folder, in Task Scheduler
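The scheduled task check can also be scripted; the one-liner below (task name and folder taken from the step above) returns the task if it was registered, and errors if it is absent:

```powershell
# Returns the UPShutdown task if the installer registered it.
Get-ScheduledTask -TaskPath "\MANAGEMENT\" -TaskName "UPShutdown"
```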
You should install your UPS software before running the UPS Shutdown script installer, as the script installer may be able to configure your software for you.
After installation, we recommend that you manually check your UPS software shutdown settings.
Software Supported for Automatic Configuration
Manual Configuration / Verifying your UPS Software Shutdown settings
You should arrange full system downtime to test the shutdown sequence. You can then test the software either from the UPS software or by running StartShutdown.CMD from an administrative command prompt.
All of the settings in ShutdownControl.txt / ShutdownControl.PS1 are shown below.
# Stage 1 - Shutting down Local VMs on this host Only
# - a) Clustered VM roles on this node only
# - b) Non-Clustered VM roles on this node
[int]$LocalVMsStaggeredDelay = 5 # maximum number of seconds in between starting each VM shutdown
[int]$MaximumTimetoStartShuttingdownLocalVMs = 120 # maximum number of seconds to start shutting down local VMs. $LocalVMsStaggeredDelay might be reduced if you have lots of VMs
[bool]$SaveLocalVMs = $true # $true to save local VMs, or $false to shut them down
# if $SaveLocalVMs=$true
# VMs above this threshold will be shut down instead of being saved
# Stage 2 - Start shutting down the Global cluster
[int]$StartClusterShutdownDelay = 120 # the delay, in seconds, before the cluster shutdown is issued
# The local UPS script on each node should already have started shutting down the local VMs
# Stage 3 - Shutdown of this Node - in S2D, this needs to happen after all Cluster IO has ceased
$ClusterShutdownTimeout = 240 # the time in seconds, after Stage 2 has started, to wait for the cluster shutdown job to complete before testing whether we are ready to turn off the node
# Total Shutdown time should be:
# The greater of:
# $MaximumTimetoStartShuttingdownLocalVMs + time to save or shutdown last VMs
# $StartClusterShutdownDelay + $ClusterShutdownTimeout
# Then a final set of optional checks for local clustered VMs, local non-clustered VMs, and then the cluster itself
# Then issuing local Host Shutdown
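As a worked check using the default values above (the time to save or shut down the last VMs is workload-dependent and is shown only as a placeholder):

```powershell
# Defaults from ShutdownControl.PS1 above
$MaximumTimetoStartShuttingdownLocalVMs = 120
$StartClusterShutdownDelay = 120
$ClusterShutdownTimeout = 240

# Path 1: 120 s + (time to save/shut down the last VMs, workload-dependent)
# Path 2: 120 s + 240 s
$StartClusterShutdownDelay + $ClusterShutdownTimeout   # 360 seconds
```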