A brief history of solid state storage systems, and an update

Persistent memory is nothing new to the data storage industry.  Mainframes used small amounts of persistent memory for decades, but it was extremely expensive and capacities were quite small, measured in bytes.  Once engineers figured out how to build persistent memory at larger capacities in the early 2000s, solid state drives (SSDs) started appearing on the market around 2009.  These drives gave a performance boost to most applications, but they did not harness the real speed and durability of memory, because SSDs were put in a hard disk drive (HDD) form factor and made to communicate like hard disk drives, which are much slower devices.  So even though the average SSD performed orders of magnitude faster, that performance was held back by the SCSI protocol stack in the operating system and by a translation layer inside the SSD that converts SCSI protocol commands into memory operations.

In some early systems, the firmware in storage controllers built to manage and communicate with HDDs would throttle performance, slowing the rate at which data was stored so as not to overwhelm the HDDs.  Later storage systems bypassed this firmware to raise performance and the rate at which data is written to SSDs.  While this was happening to traditional arrays, a whole new generation of all-flash arrays was being architected and built from the ground up to communicate only with NAND flash media.  Performance is better with these modern designs, but they still communicate through a SCSI protocol and translation layer today.
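To put rough numbers on that protocol-stack bottleneck, the queueing models alone differ by orders of magnitude.  The back-of-the-envelope sketch below uses the protocol maximums from the AHCI (SATA) and NVMe specifications; these are spec ceilings, not what any single drive actually sustains.

```python
# Back-of-the-envelope comparison of in-flight command capacity.
# Figures are protocol maximums (AHCI spec: 1 queue x 32 slots;
# NVMe spec: up to 65,535 I/O queues x 65,536 entries each),
# not measured limits of any particular device.
ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 65_535, 65_536

print(f"SATA/AHCI in-flight commands: {ahci_queues * ahci_depth}")
print(f"NVMe in-flight commands:      {nvme_queues * nvme_depth:,}")
```

Even a drive that uses a fraction of the NVMe ceiling has vastly more parallelism available to it than anything speaking through the legacy disk stack.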

The next generation of solid state storage technologies is coming onto the market: non-volatile memory (NVM), NVMe for PCIe-based NVM SSDs, and persistent memory NVDIMMs.  In some use cases these can be an order of magnitude faster than SCSI-based NAND flash SSDs.  More study and measurement is needed on how well PCIe NVM-based SSDs perform with today's application workloads.  Early studies have shown promise for certain applications and less for others, but we will talk about that in a later blog.
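In the meantime, for readers who want to experiment themselves, here is a minimal sketch of one way to time 4 KiB random reads against a raw block device on Linux, once through the legacy SCSI/SATA stack and once through NVMe.  The device path is an assumption (swap in /dev/sdX or /dev/nvme0n1 for your system), the script needs root to read a raw device, and O_DIRECT is Linux-specific.

```python
import os, time, random, mmap

DEV = "/dev/nvme0n1"   # hypothetical path; use /dev/sdX for a SATA/SCSI SSD
BLOCK = 4096
SAMPLES = 1000

# O_DIRECT bypasses the page cache, so we time the device plus its
# driver stack rather than DRAM.  It requires an aligned buffer;
# an anonymous mmap gives us page-aligned memory.
fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
buf = mmap.mmap(-1, BLOCK)

size = os.lseek(fd, 0, os.SEEK_END)
latencies = []
for _ in range(SAMPLES):
    offset = random.randrange(0, size // BLOCK) * BLOCK
    start = time.perf_counter()
    os.preadv(fd, [buf], offset)       # one 4 KiB read at a random offset
    latencies.append(time.perf_counter() - start)

os.close(fd)
latencies.sort()
print(f"median 4 KiB read: {latencies[len(latencies) // 2] * 1e6:.1f} us")
```

Running the same loop against a SATA SSD and an NVMe SSD in the same host gives a rough feel for how much of the latency budget the protocol stack itself consumes.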


Over 80% of the world's data is stored on secondary storage.  PCIe-based NVM technologies are coming onto the market as the new primary storage, and flash is becoming cheaper and denser (32TB, 64TB, and beyond).  This creates an opportunity to use NAND flash SSDs as secondary storage for backup/recovery, nearline, and active archive workloads, in some cases to enhance the use of tape libraries or even as an extension option for active archive.  Before we know it, all tiers of storage, from primary (hot) to archive (cold), will use some form of solid state technology.  This will change how the datacenter infrastructure looks and operates to access, store, and protect data.



The Network is the Computer

A storage infrastructure transformation is coming to the datacenter, and it is more than the simple consolidation of storage companies we are currently seeing.  The datacenter will look nothing like it does today; it will be gone, and a new way of storing, moving, and computing on data will take its place.  Scott McNealy's prediction that "the network is the computer" will finally arrive and have a seat at the table.  That means no more servers and no more storage as we know it today.  As the large companies consolidate, many little armies of startups, backed by large and unexpected forces, will create the disruption that is coming.  Things are about to get very ugly and beautiful at the same time.  Choose your next infrastructure refresh wisely; this is not for the faint of heart.

Software to support this intelligence is already emerging in the form of software defined storage (SDS) and software defined infrastructure (SDI).  This is forcing hardware designers to think differently about supporting these new distributed infrastructures.  Traditional enterprise applications on monolithic hardware platforms are being transformed into modern applications that take advantage of distributed architectures, such as container environments supporting microservices, machine learning, data analytics, and more.

It's not just cloud service providers (CSPs) going through this transformation; large enterprise companies in finance, oil and gas, life sciences, healthcare, retail, and other industries have, perhaps surprisingly, started too.

This blog will cover what my friends and I see in the industry, from new science supporting a new way of thinking to new software concepts for the next generation datacenter, as this transformation unfolds.  Stay tuned...
