VNX Block 05.33.009.5.155 and File 8.1.9.155 New Features and Enhancements

Support for new solid state drives (SSD)

Starting with this release, 200GB and 400GB Fully Automated Storage Tiering for Virtual Pools (FAST VP) solid state drives (SSDs) can be configured for FAST Cache. In previous releases, this drive type was called ‘SAS Flash VP’ because it could be used only for FAST VP.

Beginning with this release, this drive type is called ‘SAS Flash 2’. This release also introduces a new class of SSD drives that you can configure as their own homogeneous all-Flash pools. These new ‘SAS Flash 3’ SSD drives cannot be used for FAST Cache or FAST VP, and they are not supported as system drives. Existing VNX2 storage systems configured with the original ‘SAS Flash’ drives will continue to be supported, and the original ‘SAS Flash’ drives will continue to be available for upgrades because you cannot mix drive types within FAST Cache.

Enhancements to I/O class limits in QoS Manager

Unisphere Quality of Service (QoS) Manager now allows you to control performance on a greater number of LUNs by increasing the number of I/O classes per policy. A LUN can be assigned to only one active I/O class, a category of I/O that has a specific performance objective. An I/O class is then assigned to a policy. Previously only 32 I/O classes could be assigned to a Limit or Queue Depth policy, which meant that this type of performance objective could only be set on 4096 LUNs (an I/O class can include a maximum of 128 LUNs). Now 64 I/O classes can be assigned to a Limit or Queue Depth policy, allowing this type of performance objective to be set on 8192 LUNs.

Note: Only one policy can run at a time.

In addition, you can now create a maximum of 128 I/O classes on a storage system, an increase from the previous maximum of 64 I/O classes.
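The capacity arithmetic behind these limits can be illustrated in a short sketch. The constant names below are hypothetical and are not part of any EMC API; the values come directly from the limits stated above.

```python
# Limits described in the release notes (names here are hypothetical).
MAX_LUNS_PER_IO_CLASS = 128
MAX_IO_CLASSES_PER_LIMIT_POLICY = 64   # was 32 before this release
MAX_IO_CLASSES_PER_SYSTEM = 128        # was 64 before this release

# Maximum LUNs that a single Limit or Queue Depth policy can now govern:
max_luns_per_policy = MAX_IO_CLASSES_PER_LIMIT_POLICY * MAX_LUNS_PER_IO_CLASS
print(max_luns_per_policy)  # 8192
```

Doubling the per-policy I/O class limit from 32 to 64 is what doubles the per-policy LUN ceiling from 4096 to 8192.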

iSCSI VLANs support

Starting with this release, you can set the maximum number of VLANs used by each iSCSI port in your system.

  • If the SP has fewer than 8 physical iSCSI ports, you can set a maximum of either 8 or 16 VLANs for each iSCSI port.
  • If the SP has more than 8 physical iSCSI ports, you can set a maximum of only 8 VLANs for each iSCSI port.
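The port-count rule above can be sketched as a small helper. This is a hypothetical function, not an EMC API; note that the bullets do not state the behavior for exactly 8 ports, so this sketch falls through to the conservative 8-VLAN cap in that case.

```python
def max_vlans_per_iscsi_port(physical_iscsi_ports: int,
                             prefer_16: bool = False) -> int:
    """Hypothetical helper mirroring the VLAN rule in the release notes.

    SPs with fewer than 8 physical iSCSI ports may be set to 8 or 16
    VLANs per port; SPs with more than 8 ports are capped at 8. The
    exactly-8 case is not specified, so we apply the 8-VLAN cap.
    """
    if physical_iscsi_ports < 8:
        return 16 if prefer_16 else 8
    return 8
```

For example, a 4-port SP configured for the higher limit would allow 16 VLANs per port, while a 12-port SP is always limited to 8.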

Disk firmware auto update

When you are logged in to a system, the latest version of USM automatically scans for disk firmware updates for the drives present in your array and notifies you if one is available. If an update is available, a popup message is displayed at the bottom of the System page. The most recent update of this wizard assigns a priority rating to your disk firmware updates and allows you to install multiple firmware updates sequentially. Clicking either the popup message or the icon displayed at the bottom of the page launches the USM Online Disk Firmware Upgrade (ODFU) wizard, where you can see which updates are available for your system and proceed with a non-disruptive disk firmware update. This feature is supported on all VNX systems, but is not supported on legacy CLARiiON, Celerra, or VNX Gateway systems.

Changes to Proactive Copy Resiliency

In previous releases, the criteria for faulting a drive once it enters proactive sparing were changed for select drives. With this latest release, Proactive Sparing is available on all drive types. Drives being proactively spared are no longer faulted because of media errors, which enables the RAID group to remain redundant for a longer period when drives report media errors during proactive sparing. This is called Resiliency mode. Drives that report a large number of media errors may exhibit slow response times, which can propagate to the host. In environments where hosts cannot ride through these periods of slowness, the original proactive sparing criteria, which fault drives that continue to report media errors, may be preferable. To make this change, contact EMC Support.

VDM MetroSync

Note: If VDM MetroSync Manager is running, you must shut down the VDM MetroSync Manager Monitor service before upgrading VNX for File OE software. You should also shut down VDM MetroSync Manager while any of the sites involved in the replication session is in the process of upgrading its VNX for File OE software.

The VDM MetroSync feature provides a disaster recovery solution for VNX2 that leverages MirrorView/S replication to create a zero-data-loss replication solution at a Virtual Data Mover (VDM) granularity. It allows for replication of a VDM along with all of its contents, including file systems, checkpoints, checkpoint schedules, CIFS servers, exports, interfaces, etc. It can be configured in an active/passive configuration, where VDMs are active on only one site, or an active/active configuration, where VDMs are active on both sites. You can move or fail over VDMs from one system to another as needed. This feature works for a true disaster recovery (DR) where the source is unavailable. For DR support in the event of an unplanned failover, an option is provided to recover and clean up the source system to make it ready as a future destination.

VDM MetroSync Manager is optional software that can be installed on a Windows server and works with VDM MetroSync. It provides a graphical user interface to display VDM MetroSync session information and run operations to move, fail over, or restore VDMs. It can also continuously monitor sessions and automatically initiate failover when issues are detected.

When synchronous replication is enabled between two systems, it is also possible to add asynchronous replication to a third system by using ReplicatorV2. This allows the third system to be located further away and enables it to be used as a backup and recovery solution. When VDMs are moved or failed over between the VDM MetroSync systems, the RepV2 sessions to the third system are preserved. Because the RepV2 checkpoints are replicated along with the VDM, a common base checkpoint is available, which removes the requirement for a full synchronization. The RepV2 sessions can be incrementally updated and restarted on the new system where the VDM is active.

Enforce FSID for VDM Migration

VDM File Migration provides ways to migrate an individual file system, or a VDM with its file systems, to another system. This release introduces an -enforce fsid option that preserves the FSID after migration. This option is available through either the VNX for File nas_migrate CLI command or the VNX File Migration Tool.

New Storage Processor statistic

Starting with this release, you can now view statistics that show the elapsed time since a Storage Processor (SPA or SPB) on the VNX for Block storage system was last rebooted. The time shown is in days, hours, and minutes.

Note: This information updates only when the SP is polled. If the SP goes down, the value becomes “N/A”. To view the SP Uptime statistic, open the SP Properties – General tab.
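A days/hours/minutes display like the SP Uptime statistic can be derived from a raw elapsed-time counter with simple integer division. The helper below is a hypothetical illustration, not an EMC API, and assumes the elapsed time is available in minutes.

```python
def format_uptime(total_minutes: int) -> str:
    """Render elapsed time as days, hours, and minutes, the way the
    SP Uptime statistic is displayed (hypothetical helper)."""
    days, rem = divmod(total_minutes, 24 * 60)   # whole days, leftover minutes
    hours, minutes = divmod(rem, 60)             # whole hours, leftover minutes
    return f"{days} days, {hours} hours, {minutes} minutes"

print(format_uptime(3000))  # 2 days, 2 hours, 0 minutes
```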

Shared VHDX Disk Support (Hyper-V)

EMC now supports the Microsoft Shared VHDX model available with the Windows Server 2012 R2 release and SMB 3.02.

64K Logical Page Lock

When iSCSI is used and two reads overlap the same 64K logical page reference, the system does not send a TCP ACK to the host: it waits for a 200ms timeout on the first read and must then time out again before acknowledging the second read, causing a severe performance bottleneck. The mitigation is to parallelize iSCSI reads in cache so that reads no longer serialize on the same 64K page lock.
