The Software Defined Storage Revolution is Here: Discover Your Options

Here are some of the highlights from this session I attended.

Three Different Types of SDS

  1. Retrofit SDS
    1. SDS in name only
    2. Not scale-out or pay-as-you-grow
    3. Can’t do Hyperconverged
  2. Vertically Integrated
    1. Ease of Use
    2. Managed by VM Admin
    3. Optimize resources at the single-platform level
    4. Homogeneous VM and heterogeneous workload
  3. General Purpose
    1. Flexibility, performance, efficiency paramount
    2. Managed by VM/Storage admin
    3. Optimize resources across Data Center
    4. Heterogeneous VM/OS, heterogeneous workload

VMware vSAN Design Point

  • VM-Centric management
  • Integrated with vCenter
  • x86 Server independent

vSAN Feature Progress

  • Native Security
  • Always-On Protection
  • Enhanced Stretched Clusters
  • Cloud Analytics
  • Intelligent Operations
  • Higher Performance
  • VUM Integration

vSAN Deployment Options

  • Dell EMC VxRail Appliance
  • VMware vSAN Ready Nodes

Dell EMC ScaleIO

  • Abstract – Pool – Automate
    • Abstracts the local storage out of each server, including HDD, SSD, and NVMe
    • Pools all the storage resources together, leaving no resources stranded
    • Automatically allocates and balances resources based upon each application's need (see the sketch after this list)
  • ScaleIO Data Client (SDC) and Data Server (SDS) run on any x86 server
    • The SDC serves the I/O requests of the resident host application
      • Block driver
      • Installed in the hypervisor kernel
    • Deploy ScaleIO your way
      • Traditional Two-layer server and storage
      • One layer co-resident
      • Modern and traditional mixed
    • Consume ScaleIO Your Way
      • Build
        • ScaleIO software
      • Buy and Build
        • ScaleIO Ready Node
      • Buy
        • VxRack System FLEX
  • ScaleIO is ideal for
    • Array Consolidation
    • Private Cloud
    • OpenStack/DevOps/Containers
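
To make the abstract-pool-automate idea concrete, here is a minimal Python sketch (my own illustration, not ScaleIO code; the Server and StoragePool classes are made up): each server contributes its local devices to one shared pool, and capacity is allocated from the least-used server first so nothing sits stranded.

```python
# Illustrative sketch of the "Abstract - Pool - Automate" idea (not ScaleIO code).

class Server:
    def __init__(self, name, devices_gb):
        self.name = name                    # e.g. "node-1"
        self.capacity_gb = sum(devices_gb)  # abstract local HDD/SSD/NVMe into one capacity number
        self.used_gb = 0

    @property
    def free_gb(self):
        return self.capacity_gb - self.used_gb

class StoragePool:
    """Pools every server's local capacity and balances allocations."""
    def __init__(self, servers):
        self.servers = servers

    def allocate(self, size_gb):
        # Spread the request in small slices, always filling the emptiest
        # server first, so no single server's resources sit stranded.
        remaining, placement = size_gb, {}
        while remaining > 0:
            target = max(self.servers, key=lambda s: s.free_gb)
            if target.free_gb <= 0:
                raise RuntimeError("pool exhausted")
            chunk = min(remaining, target.free_gb, 8)  # 8 GB slices for the demo
            target.used_gb += chunk
            placement[target.name] = placement.get(target.name, 0) + chunk
            remaining -= chunk
        return placement

pool = StoragePool([Server("node-1", [100, 100]), Server("node-2", [200])])
print(pool.allocate(48))  # {'node-1': 24, 'node-2': 24} - balanced across both nodes
```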

ScaleIO Fundamentals – ScaleIO Data Server (SDS) and ScaleIO Data Client (SDC)

The next part of ScaleIO that I want to cover is the ScaleIO Data Server (SDS) and the ScaleIO Data Client (SDC).  These components provide the storage in ScaleIO as well as access to it.  You can think of the SDS as the storage and the SDC as a host, but both can reside on the same server.

The ScaleIO Data Server (SDS) – A software daemon that enables a server in the ScaleIO cluster to contribute its local storage to ScaleIO Storage. An instance of the SDS runs on every server that contributes some or all of its local storage space (HDDs, SSDs, or PCIe flash cards) to the aggregated pool of storage within the ScaleIO virtual SAN. The SDS manages the capacity of a single server and performs the backend I/O.  You can have SDS-only nodes, where the node just serves out storage, or nodes that are both an SDS and an SDC.

The ScaleIO Data Client (SDC) – A lightweight block device driver that gives applications the ability to access ScaleIO shared block volumes. The SDC runs on the same server as the application.  The SDC communicates with other nodes over a TCP/IP-based protocol and is fully routable. When installed in a server, it presents ScaleIO volumes as block devices to any application on the server. Note that the SDC is a block device driver that can run alongside other block device drivers. The only I/Os in the stack that are intercepted by the SDC are those directed at volumes that belong to ScaleIO. There is no interference with I/Os directed at traditional SAN LUNs that are not part of ScaleIO. Users may modify the default ScaleIO configuration parameters to allow two SDCs to access the same data. The SDC is the only ScaleIO component that applications see in the data path.
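
To visualize the SDC's place in the data path, here is a toy Python sketch (my own illustration, not ScaleIO code; the device names and functions are made up, though ScaleIO volumes typically surface through the scini driver) of a filter that intercepts only I/O aimed at ScaleIO volumes and passes everything else through untouched.

```python
# Conceptual sketch of the SDC's data-path role (illustrative, not ScaleIO code).

SCALEIO_VOLUMES = {"/dev/scinia", "/dev/scinib"}  # volumes presented by the SDC

def send_to_sds(device, operation):
    # The SDC forwards intercepted I/O over TCP/IP to the SDS that owns the data.
    return f"SDC -> SDS over TCP/IP: {operation} {device}"

def native_block_driver(device, operation):
    return f"native driver: {operation} {device}"

def submit_io(device, operation):
    """Route a block I/O request the way the SDC does."""
    if device in SCALEIO_VOLUMES:
        # Only I/O aimed at ScaleIO volumes is intercepted.
        return send_to_sds(device, operation)
    # I/O for traditional SAN LUNs or local disks passes through untouched.
    return native_block_driver(device, operation)

print(submit_io("/dev/scinia", "read"))  # intercepted by the SDC
print(submit_io("/dev/sdb", "read"))     # no interference from the SDC
```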

On the next post I will go into some more of the fundamental terms and start getting deeper into the technology.

ScaleIO Fundamentals – Metadata Manager (MDM)

Over the past month, I have been working with ScaleIO and wanted to share some knowledge.  The SDDC is really making waves in the industry, and software-defined storage is a big part of that.  I am going to start with the fundamentals of ScaleIO and work my way up.  The Metadata Manager is a good starting point, as it is the control component.

Metadata Manager (MDM) – The Metadata Manager (MDM) configures and monitors the ScaleIO system. It contains all the metadata required for system operation. The MDM is responsible for data migration, rebuilds, and all system-related functions. No user data passes through the MDMs.  Three or more instances of the MDM run on different servers for high availability. The MDM can also be configured in Single Mode on a single server or in Cluster Mode for redundancy.

When you have an MDM cluster, you have the option of 3 members on 3 servers or 5 members on 5 servers. Below are the key MDM members in a cluster setup.

MDM – Server that has the MDM package installed.  MDMs have a unique ID and can have a unique name as well.

Master MDM – The MDM in the cluster that controls the SDSs and SDCs.  The Master contains and updates the MDM repository, the database that controls the SDS configuration and how data is spread across the SDSs in the system.

Slave MDM – An MDM in the cluster that takes over for the Master if needed.  A 3-node cluster has 1 Slave and a 5-node cluster has 2.

TieBreaker – An MDM whose only role is to help determine which MDM is the Master.  A 3-node cluster has 1 TieBreaker and a 5-node cluster has 2.  There will always be an odd number of MDMs in a cluster to make sure there is a majority when electing the Master.

Standby MDM – An MDM specified as a standby for a cluster.  When it is needed in the cluster, it can be promoted to a Manager MDM or a TieBreaker.

Manager MDM – An MDM that is a Master or a Slave in the cluster.  A Manager can also be a Standby MDM.
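
To see why the member count is always odd, here is a tiny Python sketch (my own illustration, not ScaleIO's actual election code) of the majority rule that governs electing a Master.

```python
# Toy illustration of the MDM cluster math (not ScaleIO's election code).

def has_quorum(total_mdms, reachable_mdms):
    """A Master can only be elected while a strict majority of MDMs is reachable."""
    return reachable_mdms > total_mdms // 2

# 3-node cluster: 1 Master + 1 Slave + 1 TieBreaker = 3 voters
print(has_quorum(3, 2))  # True  - survives one failure
print(has_quorum(3, 1))  # False - no majority, no Master

# 5-node cluster: 1 Master + 2 Slaves + 2 TieBreakers = 5 voters
print(has_quorum(5, 3))  # True  - survives two failures
print(has_quorum(5, 2))  # False
```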

The next blog will cover ScaleIO Data Client (SDC) and ScaleIO Data Server (SDS).

ScaleIO 3.0

The software-defined data center continues to evolve across storage, compute, and networking.  ScaleIO has become a leading SDS solution and is continuing to improve.  Below are some of the enhancements in ScaleIO 3.0.

Storage Efficiency

  • Space-efficient layout
    • 4K-granularity disk layout – optimized for SSD
    • Optimized for SSD and leverages NVDIMMs
    • Snapshots are now on equal footing with AFAs when it comes to efficiency, management, and performance
  • Compression
    • Variable-size compression algorithm based on LZ4
    • Optimized for SSD and leverages NVDIMMs
    • Can be turned on / off at a pool or volume level
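Since the notes say the algorithm is LZ4-based and can be toggled per pool or volume, here is a rough Python sketch of that idea using the lz4 package (pip install lz4). The Volume class is made up for illustration; it is not a ScaleIO API.

```python
# Minimal sketch of volume-level compression using the LZ4 family (pip install lz4).
import lz4.frame

class Volume:
    def __init__(self, name, compression=False):
        self.name = name
        self.compression = compression  # can be toggled on/off, as in ScaleIO 3.0
        self.blocks = {}

    def write(self, offset, data: bytes):
        self.blocks[offset] = lz4.frame.compress(data) if self.compression else data

    def read(self, offset) -> bytes:
        data = self.blocks[offset]
        return lz4.frame.decompress(data) if self.compression else data

vol = Volume("vol1", compression=True)
payload = b"A" * 4096                      # highly compressible 4 KiB block
vol.write(0, payload)
print(len(vol.blocks[0]), "bytes stored")  # far fewer than 4096
assert vol.read(0) == payload              # reads are transparent either way
```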

Volume Migration

  • Zero application impact
  • Non-disruptive storage-side volume migrations that move volumes between different Storage Pools and Protection Domains
  • An entire vTree can be non-disruptively migrated

Full Support for Dell EMC PowerEdge 14G and NVMe Drives

  • ScaleIO 3.0 takes full advantage of the latest Dell hardware
    • NVDIMMs
    • NVMe
  • ScaleIO Ready Nodes
    • R740XD and R640 Support
    • SSD, NVMe and HDD Options. Hybrid to be released later.

VMware vVols Support

  • Ties VMs more directly to storage volumes, which provides simplified management, less resource contention, and the ability to leverage storage-side features like snapshots
  • Requires ESXi 6.0 or newer
  • VASA service runs in a VM
    • Mapping database is stored in a ScaleIO Volume
    • HA is handled by VMware
    • The VASA service presents an API to VMware and talks directly to the ScaleIO MDM
    • Uses the existing ScaleIO SVM template and adds services to it

Snapshot enhancements

  • Snapshot count increased by 4x
    • From 31 to 127 for the current medium-granularity layout
    • Fine-granularity layout snapshot count is increased 8x – 255 snapshots
  • Volumes can be reverted to any snapshot within the vTree
  • Volumes can be refreshed in place
  • Snapshots can be deleted anywhere in the vTree without affecting other snapshots in the vTree
  • Automated snapshot management
    • Set a snapshot creation and retention policy and ScaleIO will automatically manage the snapshots (see the sketch after this list)
    • Snapshot management works with consistency groups (CG) as well
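
Here is a quick Python sketch of what a creation-and-retention policy boils down to. This is my own illustration of the concept; ScaleIO's scheduler internals are not documented in these notes, and the SnapshotPolicy class is made up.

```python
# Illustrative snapshot creation/retention loop (not ScaleIO's scheduler).
from collections import deque
from datetime import datetime, timedelta, timezone

class SnapshotPolicy:
    def __init__(self, interval: timedelta, retain: int):
        self.interval = interval
        self.retain = retain
        self.snapshots = deque()  # oldest first
        self.last_taken = None

    def tick(self, volume, now=None):
        """Call periodically; takes and prunes snapshots per the policy."""
        now = now or datetime.now(timezone.utc)
        if self.last_taken is None or now - self.last_taken >= self.interval:
            self.snapshots.append((volume, now))
            self.last_taken = now
        while len(self.snapshots) > self.retain:
            self.snapshots.popleft()  # delete the oldest snapshot

policy = SnapshotPolicy(interval=timedelta(hours=1), retain=24)
start = datetime(2018, 1, 1, tzinfo=timezone.utc)
for h in range(48):                  # simulate two days of hourly ticks
    policy.tick("vol1", now=start + timedelta(hours=h))
print(len(policy.snapshots))         # 24 - only the last day is retained
```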

AMS with OS/FW patching and Storage Only Support

  • AMS handles storage software, OS, and firmware
  • VMware 6.5 is now supported
  • All components upgrade in a rolling manner
    • ScaleIO Software
    • SVM Operating System
    • VMware ESX / RHEL OS
    • Firmware for hardware components

I will be working with ScaleIO more and more in the future, so expect more posts!

EMC World 2016 Series – ScaleIO 2.0

EMC World is less than 3 weeks away, and this will be my first year attending!  I wanted to do a short series on some of the recent announcements that will most likely be covered at EMC World.  The first is ScaleIO 2.0.

ScaleIO 2.0 – Enriched Security

  • IPv6 Support
    • ScaleIO can be used with either IPv4 or IPv6
    • Supports all IPv6 addresses in the same cluster
    • Enabled for both internal and external components
  • AD/LDAP and secured LDAP
    • Authenticate against Active Directory (see the LDAP sketch after this list)
    • Mapped to ScaleIO roles
    • You can do Native Authentication, LDAP or both
    • Local users can be removed
  • Software Component Validation
    • Certificates are generated at deployment
    • Certificate-based trust is established when a component is added; a challenge is issued when reconnecting
    • Central Management
  • Secure Connections (SSL)
    • The MDM uses an SSL certificate to communicate with all external components
    • Self-signed certificate
    • May be enabled and deployed automatically at install
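
As a rough idea of what AD/LDAP authentication mapped to roles looks like, here is a Python sketch using the ldap3 package (pip install ldap3). The server address, DNs, and group-to-role mapping are all assumptions for illustration, not ScaleIO's actual configuration.

```python
# Sketch of AD/LDAP authentication mapped to product roles (pip install ldap3).
# The server address, DNs, and group-to-role mapping below are assumptions.
from ldap3 import Server, Connection, ALL

GROUP_TO_ROLE = {
    "CN=ScaleIO-Admins,OU=Groups,DC=example,DC=com": "Administrator",
    "CN=ScaleIO-Monitors,OU=Groups,DC=example,DC=com": "Monitor",
}

def authenticate(username, password):
    server = Server("ldaps://dc.example.com", use_ssl=True, get_info=ALL)
    conn = Connection(server, user=f"EXAMPLE\\{username}", password=password)
    if not conn.bind():  # bad credentials -> bind fails
        return None
    conn.search("DC=example,DC=com",
                f"(sAMAccountName={username})",
                attributes=["memberOf"])
    groups = conn.entries[0].memberOf.values if conn.entries else []
    conn.unbind()
    # Map the first matching AD group to a role, as the slide describes.
    for group in groups:
        if group in GROUP_TO_ROLE:
            return GROUP_TO_ROLE[group]
    return None

# role = authenticate("jsmith", "secret")  # e.g. "Administrator"
```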

ScaleIO 2.0 Enhanced Resiliency

  • In-flight Checksum
    • Prevents invalid data from being written to the system
    • Calculate and validate checksum on application data
    • Checksums are calculated/validated when
      • IO enters or exits the SDC
      • IO is read or written to the SDS device
    • Protects against changes in transit
      • Disks maintain their own checksum
    • Enabled or disabled at Storage Pool Level
    • 16 bits per 512B block (see the checksum sketch after this list)
  • 5 Node MDM Cluster
    • 5 MDM cluster members
    • Three options
      • 5-node: 3 MDMs + 2 Tie Breakers
      • 3-Node: 2 MDMs + 1 Tie Breaker
      • 1-Node: Single Node
    • Option for multiple standby MDMs
    • MDMs now have a unique ID and Name
  • Read Flash Cache
    • May be controlled by CLI or GUI
    • New type of SDS device added
    • New “Read Flash Cache” present in the backend view
    • Accelerates reads from HDD devices by using PCIe flash cards and SSDs for caching
  • ESRS
    • Two-way remote connection enables remote monitoring and remote diagnosis/repair
    • ScaleIO remote support works with ESRS v3 only
    • ScaleIO is registered as a single product using the license SWID
  • Instant Maintenance Mode
    • Supports SDS node reboot
  • Non-disruptive Upgrade Orchestration
    • NDU from Major to Major and Minor to Minor
    • Rolling
    • Done by the Installation Manager (IM)
    • Upgrade Order
      • IM/Plugin, LIA, MDM, SDS, SDC
    • New Functionality available after upgrading MDM and SDS
    • SDC upgrade
      • May be accessing 2 or more clusters
      • Forwards and backwards compatible
      • Requires a host reboot
      • May be postponed to planned maintenance
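
Back on the in-flight checksum: the notes say 16 bits per 512-byte block but do not name the algorithm, so here is a Python sketch using CRC-16-CCITT from the standard library purely as a stand-in to show the calculate-on-send, validate-on-receive pattern.

```python
# Sketch of a 16-bit-per-512B in-flight checksum (the notes don't name the
# exact algorithm, so CRC-16-CCITT from the standard library stands in here).
import binascii

BLOCK = 512

def checksum_blocks(data: bytes):
    """Return one 16-bit checksum per 512-byte block, computed on send."""
    return [binascii.crc_hqx(data[i:i + BLOCK], 0)
            for i in range(0, len(data), BLOCK)]

def validate(data: bytes, sums):
    """Re-compute on receive; any in-transit change flips a checksum."""
    return checksum_blocks(data) == sums

payload = bytes(range(256)) * 4             # two 512-byte blocks
sums = checksum_blocks(payload)             # calculated when I/O enters the SDC
corrupted = payload[:100] + b"\x00" + payload[101:]
print(validate(payload, sums))              # True  - clean transfer
print(validate(corrupted, sums))            # False - change caught in flight
```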

ScaleIO 2.0 Expanded Agility

  • Enhanced GUI Storage Management
    • New “Frontend” tab
      • Volume operations: Add, Remove, Rename, Increase Size, Enable and Disable RAM Cache
      • Snapshots
      • SDC
    • Accessible based upon defined roles in ScaleIO or AD/LDAP