Rubrik 4.0 – Alta

Rubrik 4.0 has been released, and I want to give an overview of the new features. Rubrik continues to develop and is meeting my expectations as a great backup technology!

Hyper-V Support

The following options are available for Hyper-V 2016 with native API (WMI) support:

  • Auto Protect
  • Failover Cluster Support
  • Agentless backups
  • Incremental Forever
  • Live Mount
  • Instant Recovery
  • Search

The following options are available for Hyper-V versions below 2016 with connector-based support:

  • Incremental Forever
  • Search

Nutanix AHV Support

  • Automated protection and restore workflow
  • Securely replicate or archive to other sites
  • Rubrik Core Capabilities – global search, erasure coding, reporting
  • Scale as you need
  • Pick your hypervisor: Acropolis, ESXi or Hyper-V

Oracle Support

  • The Rubrik cluster is now a NAS target for Oracle RMAN with an agentless approach.
  • RMAN manages backup and restore for the database and redo logs, with support for Incremental Merge, an advanced RMAN feature.
  • Multi-channel support, with ingest to flash for fast backups.
  • Recovery and DR through RMAN

Cloud Instantiation

  • Enables customers to power on a snapshot of a VM in the cloud
  • Instance type recommendation based on the VM config file (see the sketch after this list)
  • 2-click deployment and end-to-end orchestration
  • UI Integration to launch, power off or de-provision an instance
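
To make the instance type recommendation idea concrete, here is a minimal sketch in Python. The instance names, vCPU/memory thresholds, and mapping logic are my own illustrative assumptions, not Rubrik's actual recommendation algorithm.

    # Minimal sketch of instance-type recommendation from a VM's configuration.
    # The instance names and sizes below are illustrative assumptions only.

    def recommend_instance_type(vcpus, memory_gb):
        """Pick the smallest instance type that covers the VM's vCPU and memory."""
        # (name, vCPUs, memory in GiB), ordered smallest to largest
        candidates = [
            ("m4.large",    2,  8),
            ("m4.xlarge",   4, 16),
            ("m4.2xlarge",  8, 32),
            ("m4.4xlarge", 16, 64),
        ]
        for name, cpu, mem in candidates:
            if vcpus <= cpu and memory_gb <= mem:
                return name
        return candidates[-1][0]  # fall back to the largest size

    # Example: a 4 vCPU / 12 GB VM maps to m4.xlarge
    print(recommend_instance_type(4, 12))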

SQL Server Live Mount

  • Power on read/write clones instantaneously
  • Provision a clone to any desired Point in Time
  • Mount same database across multiple hosts
  • REST APIs allow you to automate workflows (see the sketch after this list)
  • Self-service using RBAC
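
Since the REST APIs are what make this automatable, here is a rough sketch of kicking off a Live Mount with Python's requests library. The cluster address, credentials, endpoint path, IDs, and payload fields are placeholders and assumptions for illustration; check the API documentation on your own cluster before relying on any of them.

    # Hypothetical sketch of requesting a SQL Server Live Mount via REST.
    # The host, credentials, endpoint path, IDs, and payload fields below are
    # illustrative assumptions, not a documented API contract.
    import requests

    RUBRIK = "https://rubrik.example.com"   # placeholder cluster address
    AUTH = ("admin", "changeme")            # placeholder credentials

    payload = {
        "recoveryPoint": {"date": "2017-07-01T12:00:00Z"},   # desired point in time
        "targetInstanceId": "MssqlInstance:::example-id",    # instance to mount on
        "mountedDatabaseName": "SalesDB_LiveMount",          # name of the clone
    }

    resp = requests.post(
        RUBRIK + "/api/v1/mssql/db/MssqlDatabase:::example-id/mount",
        json=payload,
        auth=AUTH,
        verify=False,   # lab only: self-signed certificate
    )
    resp.raise_for_status()
    print(resp.json())  # an async request object you can poll for completion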

Archive to Tape

  • Uses QStar to expose a tape library as an NFS/SMB share
  • Each tape vendor has its own proprietary interface
  • QStar presents a common interface irrespective of tape vendor
  • Supports the industry-standard LTFS format

Other Feature Additions

  • NFS Archive Encryption
  • Custom TLS Certificate – Web UI

ScaleIO Fundamentals – ScaleIO Data Server (SDS) and ScaleIO Data Client (SDC)

The next part of ScaleIO that I want to cover is the ScaleIO Data Server (SDS) and the ScaleIO Data Client (SDC). These components provide the storage and access to the storage in ScaleIO. You can think of the SDS as the storage and the SDC as a host, but they can reside on the same server.

The ScaleIO Data Server (SDS) – A software daemon that enables a server in the ScaleIO cluster to contribute its local storage to ScaleIO storage. An instance of the SDS runs on every server that contributes some or all of its local storage space, which can be HDDs, SSDs, or PCIe flash cards, to the aggregated pool of storage within the ScaleIO virtual SAN. The SDS manages the capacity of a single server and performs the backend I/O. You can have SDS-only nodes, where the node just serves out storage, or a node that is both an SDS and an SDC.

The ScaleIO Data Client (SDC) – A lightweight block device driver that gives applications the ability to access ScaleIO shared block volumes. The SDC runs on the same server as the application. The SDC communicates with other nodes over a TCP/IP-based protocol and is fully routable. When installed on a server, it presents ScaleIO volumes as block devices to any application on that server. Note that the SDC is a block device driver that can run alongside other block device drivers. The only I/Os in the stack intercepted by the SDC are those directed at volumes that belong to ScaleIO; there is no interference with I/Os directed at traditional SAN LUNs that are not part of ScaleIO. Users may modify the default ScaleIO configuration parameters to allow two SDCs to access the same data. The SDC is the only ScaleIO component that applications see in the data path.
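
To make the "block device driver alongside other block device drivers" point concrete, here is a minimal sketch for a Linux host. The "scini" device prefix is an assumption based on the name of the SDC kernel module; verify the naming convention on your own installation.

    # Minimal sketch: separate ScaleIO volumes presented by the SDC from
    # ordinary disks / SAN LUNs on a Linux host.
    # Assumption: SDC-presented volumes use the "scini" device name prefix.
    import os

    def list_block_devices():
        """Split block devices into ScaleIO (SDC) volumes and everything else."""
        base = "/sys/block"
        if not os.path.isdir(base):
            return [], []  # not a Linux host, nothing to report
        scaleio, other = [], []
        for dev in sorted(os.listdir(base)):
            (scaleio if dev.startswith("scini") else other).append(dev)
        return scaleio, other

    if __name__ == "__main__":
        sdc_vols, other_devs = list_block_devices()
        print("ScaleIO volumes (via SDC):", sdc_vols or "none found")
        print("Other block devices:", other_devs)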

In the next post I will go into some more of the fundamental terms and start getting deeper into the technology.

ScaleIO Fundamentals – Metadata Manager (MDM)

Over the past month, I have been working with ScaleIO and wanted to share some knowledge. The SDDC is really making waves in the industry, and software-defined storage is a big part of that. I am going to start with the fundamentals of ScaleIO and work my way up. The Metadata Manager is a good starting point as it is the control component.

Metadata Manager (MDM) – The Metadata Manager (MDM) configures and monitors the ScaleIO system. It contains all the metadata required for system operation. The MDM is responsible for data migration, rebuilds, and all system-related functions. No user data passes through the MDMs. Three or more instances of the MDM run on different servers for high availability. The MDM can also be configured in Single Mode on a single server or in Cluster Mode for redundancy.

When you have an MDM cluster, you have the option of 3 members on 3 servers or 5 members on 5 servers. Below are the key MDM members in a cluster setup.

MDM – Server that has the MDM package installed.  MDMs have a unique ID and can have a unique name as well.

Master MDM – The MDM in the cluster that controls the SDS and SDC. The Master contains and updates the MDM repository, the database that controls the SDS configuration and how data is spread across the SDSs in the system.

Slave MDM – An MDM in the cluster that takes over for the Master if needed. A 3-node cluster has 1 Slave and a 5-node cluster has 2.

TieBreaker – An MDM whose only role is to help determine which MDM is the Master. A 3-node cluster has 1 TieBreaker and a 5-node cluster has 2. There is always an odd number of MDMs in a cluster to make sure there is a majority when electing the Master.

Standby MDM – An MDM specified as a standby for a cluster. When it is needed in the cluster, it can be promoted to a Manager MDM or a TieBreaker.

Manager MDM – An MDM that is a Master or a Slave in the cluster. A Manager can also be a Standby MDM.
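
As a small illustration of the cluster layouts above, the sketch below simply encodes the 3-node and 5-node member counts and the majority each needs when electing a Master; it is not ScaleIO code, just the arithmetic behind the odd member count.

    # Sketch of the MDM cluster layouts described above and the majority
    # (more than half of an odd member count) needed to elect a Master.
    CLUSTER_LAYOUTS = {
        3: {"Master": 1, "Slave": 1, "TieBreaker": 1},
        5: {"Master": 1, "Slave": 2, "TieBreaker": 2},
    }

    def majority(members):
        """Votes required for a majority among an odd number of MDMs."""
        return members // 2 + 1

    for size, roles in CLUSTER_LAYOUTS.items():
        print("%d-node cluster: %s, majority needed = %d" % (size, roles, majority(size)))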

The next blog will cover ScaleIO Data Client (SDC) and ScaleIO Data Server (SDS).