SSDs used to be underestimated: low reliability, high cost, limited capacity. Hard disk drives and RAID arrays of varying complexity were therefore the usual way to get more speed and reliability. The situation has changed noticeably, and flash memory is becoming increasingly common. For personal computers, installing an SSD is already the standard advice. In the corporate segment, though, things are noticeably more complicated and more flexible solutions are required.
Flash memory is making inroads here as well: in 2024, SSDs are projected to account for over 42% of the server segment by volume, and that share will keep growing. So it's a good time to talk about SSDs for servers.
Emphasis on performance
The disk subsystem, even built on RAID, is the bottleneck of the system. HDDs are slow: despite significant growth in capacity, their speed has barely increased over the years. That makes them a poor fit for data caching, fast deployment of virtual machines, and any workload dominated by random access. Using RAID to boost speed doesn't deliver serious results either. So the use of SSDs in servers has become inevitable.
Thanks to flash memory, owners of server hardware can finally breathe easy: it has become much easier to deploy fast disk subsystems that are reliable, handle random access well, and no longer bottleneck the platform. Modern SSDs have gradually come to surpass HDDs in nearly everything.
HDDs are currently only suitable as bulk storage or in systems where the speed of the disk subsystem doesn't matter much. Even in such servers, it's preferable to place the operating system and the main applications on solid-state drives; otherwise booting takes forever and launching any application sends the employee off for a smoke break.
Of course, server SSDs can noticeably speed up a system, but they still cost noticeably more than hard disks, so the investment is substantial. So let's talk about the problem of choosing one.
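To see why random access is the killer, here's a rough back-of-the-envelope estimate in Python. The figures used (9 ms average seek, 7,200 RPM, roughly 100 µs SSD read latency) are typical ballpark assumptions for illustration, not measurements from any specific drive:

```python
# Rough estimate of random-read performance for a single 7,200 RPM HDD vs. a typical SSD.
# All figures below are ballpark assumptions for illustration, not measured values.

avg_seek_ms = 9.0                              # typical average seek time for a 7,200 RPM drive
rotational_latency_ms = 0.5 * 60_000 / 7_200   # half a revolution on average, ~4.17 ms

hdd_service_time_ms = avg_seek_ms + rotational_latency_ms
hdd_iops = 1_000 / hdd_service_time_ms         # one random request serviced at a time

ssd_latency_ms = 0.1                           # ~100 microseconds, no moving parts
ssd_iops = 1_000 / ssd_latency_ms

print(f"HDD: ~{hdd_iops:.0f} random read IOPS")                   # roughly 75 IOPS
print(f"SSD: ~{ssd_iops:.0f} random read IOPS at queue depth 1")  # roughly 10,000 IOPS
```

A couple of orders of magnitude per drive, and no amount of RAID striping over mechanical disks closes that gap for random workloads.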
Which SSD to choose for a server?
A solid-state drive is a storage device based on flash memory. There are different types, distinguished by how the cells are connected:
NOR is a two-dimensional matrix of conductors with one cell at each intersection;
NAND is the same matrix, except that the single transistor at each intersection is replaced by a column of cells connected in series.
Modern drives more often use the second option, because it is better in several ways:
higher recording density;
erasure is done for a whole block in one operation, whereas in NOR all the bytes of the block have to be zeroed first (see the sketch after this list);
lower power consumption.
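To make the block-erase point concrete, here's a minimal toy model of NAND behaviour. The page count per block is an illustrative assumption, not real device geometry; it only shows the "program pages individually, erase whole blocks at once" rule:

```python
# Toy model of NAND flash behaviour: pages are programmed individually,
# but erasure only happens for a whole block at once.
# PAGES_PER_BLOCK is an illustrative assumption, not real device geometry.

PAGES_PER_BLOCK = 64

class NandBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK   # None = erased state

    def program(self, page_index, data):
        if self.pages[page_index] is not None:
            # A programmed page cannot be overwritten in place:
            # the whole block must be erased first.
            raise RuntimeError("page already programmed; erase the block first")
        self.pages[page_index] = data

    def erase(self):
        # One operation wipes every page in the block.
        self.pages = [None] * PAGES_PER_BLOCK

block = NandBlock()
block.program(0, b"old data")
block.erase()                  # all 64 pages are cleared in one go
block.program(0, b"new data")  # the page is writable again
```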
In 2025, NAND is the only practical option. NOR-based SSDs are nowhere to be seen on the market, not even in roadmaps, so the rest of this discussion focuses on the current direction.
So, it’s worth looking at the existing classic NAND types (we’ll look at 3D NAND later):
Flash memory type | SLC | MLC | TLC | QLC |
Bits per cell | 1 | 2 | 3 | 4 |
P/E cycles | 100,000 | 3,000 | 1,000 | 500 |
Read time | 25 μs | 50 μs | 75 μs | 110 μs |
Write time | 200–300 μs | 600–900 μs | 900–1,350 μs | 1,500+ μs |
Erase time | 1.5–2 ms | 3 ms | 4.5 ms | 7 ms |
So, let’s decipher:
P/E cycles – the number of write/erase cycles a memory cell can endure; after that many cycles the cell is likely to “die”;
all time values are for one cell. μs – microseconds, ms – milliseconds.
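To connect per-cell endurance to drive-level endurance, a common rough approximation is TBW (terabytes written) equal to capacity times P/E cycles divided by write amplification. Here's a sketch using the cycle counts from the table; the 960 GB capacity and write-amplification factor of 2 are assumptions chosen purely for illustration:

```python
# Rough conversion from per-cell P/E (write/erase) cycles to drive-level endurance.
# Common approximation: TBW ~= capacity * P/E cycles / write amplification.
# The 960 GB capacity and write amplification of 2 are illustrative assumptions.

def approx_tbw(capacity_gb, pe_cycles, write_amplification=2.0):
    """Approximate terabytes that can be written before the cells wear out."""
    return capacity_gb * pe_cycles / write_amplification / 1_000  # GB -> TB

for name, cycles in [("SLC", 100_000), ("MLC", 3_000), ("TLC", 1_000), ("QLC", 500)]:
    print(f"{name}: ~{approx_tbw(960, cycles):,.0f} TBW for a 960 GB drive")
```

The exact numbers depend heavily on the controller and workload, but the ratio between the types is what matters here.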
Okay, now it’s time to talk about each type separately.
SLC
SLC stores one bit per cell, which reduces the load on the cell, so it lives much longer. In addition, the single-level structure means lower latency and higher read/write speeds.
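The reason comes down to charge levels: each extra bit per cell doubles the number of levels the controller has to tell apart, which shrinks the margins between them. A quick illustration:

```python
# Each NAND cell stores its bits as one of 2**bits charge levels.
# More levels mean narrower margins between them, so reads and writes
# get slower and more error-prone, and the cell wears out sooner.

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s) per cell -> {2 ** bits} charge levels to distinguish")
```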
You've probably already realized that SLC is the best type of NAND memory: reliable and fast. Expectedly, it is also the most expensive; the cost can exceed that of its “younger” brethren several times over.
For example, the Transcend 500TS64GSSD500 drive is built on SLC. A 64 GB model costs approximately 45,000 rubles. Not cheap, so it's a rare sight even in servers; there are more practical alternatives that are nearly as good. On top of that, the low recording density makes itself felt: with each bit requiring its own cell, the physical size of the device grows with capacity, so large capacities combined with compactness are out of the question.