Software-defined storage is the latest buzz in the storage industry. It raises questions such as: how is software-defined storage different from storage virtualization? Is it just old wine in a new bottle?
Let us evaluate.
Storage started evolving once it moved out of traditional computer systems and began to be deployed as an external device. JBOD was the simplest external storage in the early days. However, it came with a couple of disadvantages: disk size was fixed and there was no protection against disk failures. Disk failure protection was then developed inside the computer system as software, in the form of RAID. However, since the computer system itself was vulnerable to failures, the disks in a JBOD were still exposed to those failures, resulting in data corruption. The next solution pushed the RAID software down to HBA adapters and then directly inside JBODs, which were made more robust and failure tolerant. Thereafter, advanced features such as replication, snapshots, and deduplication made their way into JBODs, converting them into sophisticated high-end hardware storage arrays.
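The protection idea behind RAID mentioned above can be illustrated with a minimal sketch (the XOR-parity scheme used by RAID-4/5; function names here are our own, purely for illustration): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
# Minimal XOR-parity sketch (RAID-4/5 idea): parity = XOR of all data
# blocks, so one missing block is recoverable from the rest + parity.
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """XOR all blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(surviving: list[bytes], parity_block: bytes) -> bytes:
    """Rebuild the single missing data block from survivors + parity."""
    return parity(surviving + [parity_block])

disks = [b"AAAA", b"BBBB", b"CCCC"]   # data striped across three disks
p = parity(disks)

# Simulate losing disk 1 and rebuilding it from the other disks + parity
rebuilt = reconstruct([disks[0], disks[2]], p)
assert rebuilt == disks[1]
```

Real RAID implementations add striping, rotation of the parity block, and rebuild scheduling, but the recovery math is this XOR at its core.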
Though hardware storage arrays these days have many advanced features, those features apply only at the disk or logical unit (LU) level. Hardware arrays are not aware of the contents within, and this unawareness sometimes causes inefficiency. For example, LUs are often created larger than the application initially needs; the space an application requires typically grows gradually, so until the application fills the LU, the space is underutilized. Another example: a storage array capable of deduplication may not be able to apply its dedup algorithm efficiently because the contents of the storage are not known to it.
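To make the deduplication point concrete, here is a minimal sketch (hypothetical, not any vendor's implementation) of content-aware dedup: split data into chunks, hash each chunk, and store only one copy of each unique chunk. A content-blind array cannot do this, because it never sees where one object's data ends and another's begins.

```python
# Sketch of content-aware deduplication: fixed-size chunking + hashing.
import hashlib

CHUNK = 4  # tiny chunk size for illustration; real systems use 4-128 KiB

store: dict[str, bytes] = {}  # chunk hash -> chunk data (stored once)

def write(data: bytes) -> list[str]:
    """Store data, returning its 'recipe' (ordered list of chunk hashes)."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks stored once
        recipe.append(digest)
    return recipe

def read(recipe: list[str]) -> bytes:
    """Reassemble the original data from its recipe."""
    return b"".join(store[d] for d in recipe)

r1 = write(b"ABCDABCDXYZW")   # "ABCD" appears twice, stored only once
assert read(r1) == b"ABCDABCDXYZW"
assert len(store) == 2        # two unique chunks for three written chunks
```

Production deduplication adds variable-size chunking, reference counting, and hash-collision handling, but the save comes from exactly this hash-indexed chunk store.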
More recently, thin provisioning solved the space-efficiency problem, but not the other problems such as object-specific replication, deduplication, and snapshotting. In the recent past, cloud computing gave rise to the need for multi-tenant storage, and multi-tenant storage with space efficiency still remains a challenge for traditional storage arrays.
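The thin-provisioning idea can be sketched in a few lines (a simplified model, with a class name of our own invention): the LU advertises a large virtual size, but physical blocks are allocated only on first write, so an application that grows gradually consumes space gradually too.

```python
# Sketch of a thin-provisioned LU: allocate physical blocks on first write.
class ThinLU:
    def __init__(self, virtual_blocks: int):
        self.virtual_blocks = virtual_blocks      # advertised (virtual) size
        self.mapping: dict[int, bytes] = {}       # virtual block -> data

    def write(self, block: int, data: bytes) -> None:
        if not 0 <= block < self.virtual_blocks:
            raise IndexError("block outside virtual size")
        self.mapping[block] = data                # allocated on first write

    def read(self, block: int) -> bytes:
        return self.mapping.get(block, b"\x00" * 4)  # unwritten blocks read zeros

    @property
    def allocated(self) -> int:
        return len(self.mapping)                  # physical blocks in use

lu = ThinLU(virtual_blocks=1_000_000)  # presents as a huge LU
lu.write(0, b"boot")
lu.write(42, b"data")
assert lu.allocated == 2               # only two blocks physically consumed
```

The gap this leaves is exactly the one the article names: the mapping layer knows which blocks are used, but still knows nothing about what object they belong to.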
Software-defined storage has evolved to address these problems by breaking the LU as the operational boundary. With software-defined storage, storage arrays are able to receive content information and policies, and apply them as appropriate.
Let us take the example of a hypervisor. Virtual machines are the objects of the hypervisor ecosystem, and they undergo various transitions during their lifetime based on the applications they run. Naturally, the data associated with a VM also needs to take part in these transitions, such as replication, snapshotting, cloning, and migration. If VM data sits on software-defined storage, all these requirements can be set as data policies specific to that VM.
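A hypothetical sketch of what per-VM policies could look like in a software-defined storage layer (the class, policy keys, and actions below are illustrative inventions, not any product's API): each VM object carries its own policies, and the storage layer dispatches the matching action per object rather than per LU.

```python
# Sketch of policy-based, per-object storage management.
from dataclasses import dataclass, field

@dataclass
class VMStorageObject:
    name: str
    policies: dict = field(default_factory=dict)  # policy name -> setting

def apply_policies(vm: VMStorageObject, log: list) -> None:
    """Dispatch each of the VM's policies to a storage action."""
    actions = {
        "replicate": lambda v: log.append(f"replicating {vm.name} to {v}"),
        "snapshot":  lambda v: log.append(f"snapshot of {vm.name} every {v}"),
        "dedup":     lambda v: log.append(f"dedup {'on' if v else 'off'} for {vm.name}"),
    }
    for key, value in vm.policies.items():
        actions[key](value)

log: list = []
vm = VMStorageObject("web01", {"replicate": "site-B", "snapshot": "1h", "dedup": True})
apply_policies(vm, log)
assert len(log) == 3  # each policy triggered its own storage action
```

The point of the sketch is the granularity: the unit of management is the VM object and its policies, not a whole LU shared by many unrelated workloads.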
Software-defined storage really does seem to address issues that were unaddressed to date; hence it is not just old wine in a new bottle.
Write to us at smm@calsoftinc.com
Contributed by: Tejas Sumant |
Calsoft Inc.
