Storage Class Memory (SCM)

Harshada Wadekar
3 min read · Sep 14, 2021

Storage Class Memory (SCM) is a new tier of memory/storage that sits between DRAM (Dynamic Random Access Memory) and NAND flash: below DRAM and above NAND flash. The figure below illustrates the position of SCM in the storage hierarchy. SCM, sometimes known as Persistent Memory (PMEM), offers fast non-volatile memory to the processor, with speeds slightly below DRAM but still vastly above those of even the fastest NAND flash storage, while offering capacities at the scale of NAND flash drives.

Position of SCM in the memory hierarchy

Importance

IT system architects have always wanted primary data access latencies to be as low as possible, while working within the capacity and cost constraints of DRAM. NAND SSDs shifted their interface from SATA and SAS (Serial Attached SCSI) to NVMe (Non-Volatile Memory Express), which uses the server's PCIe (Peripheral Component Interconnect Express) bus as its transport. SSDs using NVMe have access latencies as low as 100 microseconds, but that is still far slower than DRAM. A new technology was needed to cover this gap in the market, and SCM fills it.

SCM is a persistent memory that acts as a compromise between SSD and DRAM. Although both DRAM and NAND SSDs are built from solid-state chips and fall under the umbrella of solid-state storage, they play completely different roles. In the past, data storage and memory meant much the same thing; today, storage is long-term and memory (RAM) is short-term. SSDs are mainly used to store data, while RAM holds the data the processor is actively working on, retrieved from primary storage.

Use Cases for SCM

1. Tiering at the Storage Side: SCM can be integrated into the storage array alongside NAND flash drives by standardizing on NVMe as the bus technology for both. Here the SCM acts as an even faster “Tier 0” inside the array, handling the hottest data and relegating the rest to SSD. While there are still network delays between host and array, NVMe over Fabrics (NVMe-oF) exists to alleviate this issue.
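The promote-the-hottest-data idea behind Tier 0 can be sketched in a few lines of Python. This is a toy model, not any vendor's tiering engine: the class name, block IDs, and the frequency-based promotion policy are all illustrative assumptions.

```python
from collections import Counter

class TieredStore:
    """Toy model of storage-side tiering: the most frequently
    accessed blocks are promoted to a small, fast SCM 'Tier 0';
    everything else is served from the larger NAND flash tier."""

    def __init__(self, scm_capacity_blocks):
        self.scm_capacity = scm_capacity_blocks
        self.access_counts = Counter()   # per-block access frequency
        self.scm_tier = set()            # block IDs currently on SCM

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        # Promote the hottest blocks, up to the SCM tier's capacity;
        # the remainder stays (implicitly) on NAND flash.
        hottest = self.access_counts.most_common(self.scm_capacity)
        self.scm_tier = {block for block, _ in hottest}

    def tier_of(self, block_id):
        return "SCM" if block_id in self.scm_tier else "NAND"
```

A real array would rebalance continuously and weigh recency as well as frequency, but the principle is the same: only the hottest working set earns a place on the scarce, fast SCM tier.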

2. Caching at the Storage Side: SCM can be used as a read cache for the external storage array, exploiting its very strong random-read IOPS to accelerate the array's performance. However, the network delay between the compute side and the storage side means we cannot take full advantage of SCM this way.
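A minimal sketch of such a read cache, with the SCM device stood in for by an in-memory LRU structure; the class and its interface are hypothetical, chosen only to show the hit/miss/evict flow.

```python
from collections import OrderedDict

class SCMReadCache:
    """Toy model of an SCM read cache in front of a slower backing
    store: hits are served from the cache; on a miss the block is
    fetched from the backing store and cached, evicting the least
    recently used block once the cache is full."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store      # dict: block_id -> data (the array)
        self.cache = OrderedDict()        # stands in for the SCM device
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]          # slow path: NAND/array read
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the LRU block
        return data
```

Even in this toy form it shows why the network hop hurts: every hit still pays the compute-to-array round trip before the fast SCM media is ever touched.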

3. Persistent Memory at the Server Side: Also known as PMEM, this is the least mature use case but the one with the biggest potential.

It comes in two modes:
a. SCM can plug directly into a memory slot as an NVDIMM, extending the capacity of the existing DRAM, in a configuration known as Memory Mode.
b. Another approach uses SCM to create a second tier of memory next to DRAM, similar to the tiered-storage concept. In this case, called App Direct Mode or Direct Access (DAX), the application accesses the non-volatile SCM directly, achieving extremely low latencies.
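The DAX idea above, load/store access to persistent media without a block I/O path, can be approximated with an ordinary memory-mapped file. On a real system the file would live on a DAX-mounted persistent-memory filesystem (e.g. a path like /mnt/pmem, which is an assumption here); this sketch uses a temporary file so it runs anywhere.

```python
import mmap
import os
import tempfile

# Stand-in for a file on a DAX-mounted PMEM filesystem.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)             # size the region to one page

with mmap.mmap(fd, 4096) as pmem:
    pmem[0:5] = b"hello"           # byte-addressable store, no read-modify-write of a block
    pmem.flush()                   # on real PMEM: persistence point (cache-line flush)
    assert pmem[0:5] == b"hello"   # byte-addressable load

os.close(fd)
os.unlink(path)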

Conclusion

SCM technology seeks to take the best of both worlds: it is non-volatile persistent storage that is faster than SSD but slower than DRAM, and costs more than SSD but less than DRAM. Tech companies and customers can therefore use SCM either as a much faster SSD or as a bigger DRAM. SCM can also scale well beyond DRAM, reaching up to 15 or 20 times the capacity per die at lower cost, without adding more CPUs or memory slots and without incurring DRAM's heavy power costs. Its most important advantage, however, is granularity: SCM can be both byte-addressable (like DRAM) and block-addressable (like a NAND SSD), so the processing time spent converting between blocks and bytes is no longer the burden it once was. These properties have huge repercussions for SCM's place and use in the data center, and they can truly unlock its remarkable potential to handle the market's varied workloads and demands.
