DataCore & AWS DR Strategies

Dec 01, 2016 AT 10:37 AM

Many are looking to migrate workloads or set up a DR strategy in the cloud, and it soon becomes apparent that doing so can be complex and expensive. My goal in this post is to illustrate a couple of inexpensive and easy ways to move data to the cloud using DataCore software. The focus of today's post will be Amazon AWS and the integration options available; however, this is certainly not a limiting factor, as other cloud offerings can also be utilized with DataCore software. 

There are two ways to migrate, replicate, move or back up data between an on-premises DataCore installation and an Amazon service. The first is the AWS Storage Gateway appliance. This local appliance securely transfers your data to AWS over SSL and stores it in Amazon S3 and/or Amazon Glacier. You can use this service to back up and archive your storage, but the gateway can also be used to migrate workloads to the cloud. 

The Amazon gateway can also be used with DataCore's tiering functionality, as mentioned recently in Jeff Slapp's post: https://www.linkedin.com/pulse/datacore-storage-tiering-amazon-s3-jeffrey-slapp

For example, the AWS Storage Gateway can take a snapshot of your on-premises data volumes exposed to DataCore so that they can be transparently copied into Amazon S3 for backup. You can then create local volumes or Amazon EBS volumes from these snapshots to run the workloads on AWS EC2 instances. Notice in the diagram that replication first moves the data to Amazon S3, at which point an AWS snapshot can be taken; the snapshot can then be attached as an EBS volume to an EC2 instance. Note that this method doesn't require a separate DataCore node to reside on AWS. 

For more information on the AWS Storage Gateway product, check out this overview; it has a new interface exposing migration, bursting and tiering use cases. https://aws.amazon.com/storagegateway

arch_aws_storage_gateway

The second way to migrate data to AWS is using DataCore's own replication functionality. Depending on your use case constraints, this can be done synchronously or asynchronously between an on-premises DataCore node and a DataCore node running on an AWS EC2 Windows instance. 

dcsync

This means one would install DataCore software on an EC2 Windows instance and connect it to an on-premises DataCore node using either a VPN or the AWS Direct Connect service. This allows you to mirror your data synchronously or set up an asynchronous policy for transparent data migration to AWS. Once the data has been migrated to AWS, it becomes possible to take a DataCore snapshot for posterity or for further migrations to another AWS region or availability zone. As an optional step, you could also take an AWS snapshot as in the first example; however, one would need to make sure that all data has been persisted on the EBS volume so that there is data consistency. 
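Before choosing synchronous mirroring or an asynchronous policy, it helps to estimate how long the initial seed of data will take over the chosen link. The sketch below is illustrative arithmetic only; the link speeds and the 70% usable-throughput figure are assumptions for the example, not DataCore or AWS specifications:

```python
def seed_time_hours(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to push `data_tb` terabytes through a `link_mbps` link,
    assuming only `efficiency` of the raw bandwidth is usable."""
    bits_to_move = data_tb * 1e12 * 8              # terabytes -> bits
    usable_bits_per_s = link_mbps * 1e6 * efficiency
    return bits_to_move / usable_bits_per_s / 3600

# 10 TB of virtual disks over a 100 Mbps VPN vs. a 1 Gbps Direct Connect link:
print(round(seed_time_hours(10, 100), 1))    # ~317.5 hours
print(round(seed_time_hours(10, 1000), 1))   # ~31.7 hours
```

Roughly two weeks versus a day and a half for the same data set, which is why the choice of VPN versus Direct Connect matters long before you tune the replication policy itself.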

Another great post that goes into detail regarding DataCore migration strategies can be found here. 
https://www.linkedin.com/pulse/migration-datacore-sansymphony-jeffrey-slapp?trk=prof-post


As you can see, there are two great ways to migrate your on-premises workloads to AWS, all without the heavy expense of a consultant or expensive specialized transcoding software to move data blocks. The next post in this AWS series will look at performance options when using DataCore software on Amazon AWS. 


Can I Have A Witness?

Nov 08, 2016 AT 09:19 AM

There are many storage systems on the market today that require a separate protected witness construct for highly available and fault-tolerant data access. The quorum mandates that at least one server instance has ownership of and active access to the underlying data subsystem. This arrangement is typical in cluster architectures. It's important to recognize that a witness node in a clustered solution is vital for data awareness and data availability. 

As an example, here is a rudimentary design that illustrates how a witness makes a decision during a failure event in which Nodes 3 and 4 are isolated from the rest of the cluster. This arbitrated vote by the witness maintains order, allowing a quorum to be met and eliminating split-brain scenarios.

witnessdesign

However, there is more than one way to meet the demands of data availability and fault tolerance. DataCore is one example of a data-aware system that provides cluster-like availability but without the high costs and complexity of a witness architecture. DataCore is a true "active-active" grid architecture, not a cluster architecture. Each of the nodes within the grid presents mirrored disks as "active-active" storage devices. This means that the backend storage is not only presented through one DataCore node; it can be addressed via both DataCore nodes simultaneously (read and write, R/W). 

MultiDCNode
Unlike a cluster solution, this grid approach won't be affected by a split-brain scenario, because every mirrored DataCore virtual disk is kept synchronously in sync. Each DataCore node functions similarly to a witness node in that it decides where active I/O needs to be acknowledged from during a failure event. This also means one doesn't have to figure out where to place a witness node for optimal design and failure handling. 

Another way to think about this architecture is from the witness/quorum perspective. The responsibility of the witness is to ensure there are at least two votes within a clustered system, so that a majority decision can be reached on which site/node will be active during a failover scenario. For this comparison, one can think of DataCore's grid architecture as having multiple intelligent witness nodes, all actively participating to ensure high availability and data redundancy. With an active-active architecture in which all server nodes act as intelligent witnesses for each other, one no longer needs to design a separate site/node just for witness protection. 
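The witness/quorum arbitration described above boils down to a strict-majority vote. Here is a minimal sketch of that clustered model (an illustration of the architecture being contrasted, not DataCore code; the node counts follow the earlier diagram):

```python
def has_quorum(reachable_votes: int, total_votes: int) -> bool:
    """A partition may remain active only with a strict majority of votes."""
    return reachable_votes > total_votes // 2

# 4 data nodes plus 1 witness = 5 votes. Nodes 3 and 4 are cut off,
# while nodes 1 and 2 can still reach the witness:
print(has_quorum(3, 5))  # True  -> nodes 1+2 (with the witness) stay active
print(has_quorum(2, 5))  # False -> nodes 3+4 must stand down: no split brain
```

Note the degenerate case the witness exists to solve: with only the four data nodes (4 votes), a clean 2/2 split leaves neither side with a majority, and the whole cluster stalls.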

All DataCore nodes can provide individual, autonomous services, and a virtual disk can still be accessed via a single DataCore node. The peer DataCore node then constitutes a hot standby for the active node and only becomes active should it be required during a failure event. This means one achieves a modern, cluster-like architecture without the limitations or complexity of dedicated witness nodes spread across geographic sites. There is no need to design a solution using DataCore software around a witness architecture: all DataCore nodes function similarly to a witness construct while providing a better architecture for data availability and redundancy. 

So before asking "Can I have a witness?", the better question is: why do you need one? If the objective is to keep your applications running with continuous availability, then why add the extra cost and complexity required to manage and support extra witness nodes? DataCore software will provide continuous availability, all with fewer nodes to manage, while minimizing costs.  

 

Microsoft Ignite 2016 Webinar Review

Oct 10, 2016 AT 11:37 AM

If you couldn't make it to the Microsoft Ignite 2016 show this year in Atlanta, then this is the webinar for you.

Microsoft announced a number of new products, discussed their roadmap and shared updates on their ecosystem.

Todd Mace, DataCore's Tech Evangelist, and Sushant Rao, Senior Director of Product Marketing, discuss in a conversational style their take on the top trends from this year's Ignite conference.

If you missed the VMworld 2016 webinar review, you can listen here...


I'm happy to report that DataCore is now offering FREE license keys for its Parallel I/O-powered DataCore™ Hyper-converged Virtual SAN software in support of the Microsoft Ignite 2016 event and Microsoft Server 2016 announcements. This is available to Microsoft experts recognized in one of the following programs: MVP, MCP, MCSE, MCT, MCSA, MCA, MCM, MTA, and MCSD. 

http://info.datacore.com/free-NFR-Microsoft-Professionals

mvp


DataCore Storage Tiering to Amazon S3

Sep 20, 2016 AT 09:34 AM

aws&dc

Jeff Slapp, our Director of Systems Engineering (North America), recently published an article that outlines one way DataCore software can be used in the cloud world. In this example he shows how Amazon S3 can serve as just another tier of storage, integrated into DataCore's auto-tiering technology. He outlines the simple process of setting up the AWS Storage Gateway with DataCore's iSCSI connection. It's worth a read. 
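Conceptually, auto-tiering of the kind described works by demoting cold blocks to the cheapest tier, which here is S3 behind the gateway. A simplified, hypothetical sketch of the placement decision (the tier names and access thresholds are invented for illustration and are not DataCore's actual heuristics):

```python
def choose_tier(accesses_per_day: int) -> str:
    """Toy heat-based placement: hot blocks stay on flash, warm blocks on
    disk, and cold blocks are demoted to the S3 gateway tier."""
    if accesses_per_day >= 100:
        return "flash"
    if accesses_per_day >= 10:
        return "disk"
    return "s3-gateway"

print(choose_tier(500))  # flash
print(choose_tier(42))   # disk
print(choose_tier(0))    # s3-gateway
```

The appeal of the approach in Jeff's article is that, because the gateway is presented over iSCSI, the S3 tier looks like any other back-end disk to the tiering engine.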

https://www.linkedin.com/pulse/datacore-storage-tiering-amazon-s3-jeffrey-slapp


DataCore Virtual SAN - vExpert NFR

Sep 06, 2016 AT 01:13 PM

I'm happy to report that DataCore is now offering FREE license keys for its Parallel I/O-powered DataCore™ Hyper-converged Virtual SAN software to VMware vExperts. If you are also a VMware Certified Design Expert (VCDX), VMware Certified Advanced Professional (VCAP), VMware Certified Professional (VCP), or VMware Certified Implementation Expert (VCIX), you are also able to request a free license from DataCore. 

http://info.datacore.com/free-NFR-VMware-Professionals

VMW-LOGO-vEXPERT-2016-k1

The not-for-resale (NFR) license keys are for non-production uses such as home labs, course development, training, feature testing and demonstration purposes. They are intended to support virtualization consultants, instructors and architects involved in efforts aimed at managing and fully leveraging storage assets.


VMworld 2016 vExpert Special Gift

Aug 22, 2016 AT 10:14 AM

VMworld is almost here and it's shaping up to be a great event! This year DataCore would like to congratulate all VMware vExperts, and we would like to give each of you a FREE limited-edition gift! To receive it, come by VMworld booth #2406. Keep in mind we have limited quantities, so come by early and pick yours up. 

In addition, we would like to feature vExperts in a special video promotion that we are running. When you come by the booth for your free gift, you will have the opportunity to say a few words about yourself to the VMware community in an upcoming special feature. More to come… 

There are a lot of other exciting things happening at VMworld, so make sure to stop by booth #2406. http://info.datacore.com/vmworld2016

 

veboxer


DATACORE HYBRID-CONVERGENCE

Aug 09, 2016 AT 11:13 AM

Our very own Director of Systems Engineering, Jeff Slapp, recently wrote a very thought-provoking article on hyper-convergence. This new architecture for workload deployment, management and consolidation has been given its share of marketing spin, so it's important to step back and really understand why customers need, or want, to gravitate to a hyper-convergence model in the first place. Jeff does a good job explaining the drawbacks and assumptions made in the market and how best to think about hyper-convergence.

DataCore's approach to a hyper-converged model is what I call a decoupled software-defined architecture. This gives a customer freedom and flexibility without the compromises and drawbacks of a traditional hyper-converged architecture. Enjoy! 

https://www.linkedin.com/pulse/hyper-converged-noun-verb-jeffrey-slapp

 


Parallel I/O Deep Dive Podcast

Jul 27, 2016 AT 09:12 AM

In this podcast, Douglas Brown from DABCC interviews Ziya Aral, Chairman and Co-founder of DataCore Software. Ziya and Douglas discuss DataCore's new Parallel I/O technology and how we ended up where we are today in regard to CPU architectures. This podcast gives a technical glimpse into what is possible and what the future holds for parallel I/O processing. Enjoy! 

 

Servers are the new Storage podcast

Jul 25, 2016 AT 05:21 PM

In this podcast, George Teixeira (CEO of DataCore) talks with Enrico Signoretti of Juku.it about Parallel I/O, a performance feature that is part of the SANsymphony products. George talks about how Parallel I/O takes advantage of next-generation multi-core CPUs, lab benchmarks, and real-world performance improvements. 


Universal VVols

Jul 13, 2016 AT 12:10 PM

Introduction: DataCore provides a universal control plane where virtual machine storage policies can be instrumented and managed across a heterogeneous storage infrastructure. This Universal VVol support improves operational efficiency through a common management platform where data services and performance demands are decoupled from the deficiencies of any underlying device.

Universal VVols_Mirror

A growing number of storage vendors today provide VVol support for vSphere. As time progresses, a customer may face the issue of having two or more VASA providers, whether from successive generations of the same storage or from best-of-breed storage from multiple vendors occupying the datacenter. This is in addition to those vendors that have no plans to support VVols in their current architecture. All these misalignments can lead to provider and storage silos for customers. 

DataCore saw these challenges and developed the first and only VMware-certified, software-based Universal VVol implementation. DataCore's implementation of VVols creates a storage services platform that unifies data storage resources, whether they are SAN, converged or cloud. This provides one set of universal storage services across all storage devices regardless of type, whether internal or external storage. Diverse storage platforms, regardless of manufacturer or brand, are now able to communicate seamlessly, thereby reducing complexity and improving operational efficiency. 

DataCore delivers a software-based VVol VASA provider, so that vSphere HA and/or multiple VASA provider installations can provide full redundancy and availability across a fully heterogeneous environment. 

Some of the benefits when using DataCore’s Universal VVols:

  • Only one VASA provider is needed for all disparate arrays.
  • No firmware upgrades or special licensing needed to support storage. 
  • Local storage & external arrays fully supported with VVols. 
  • All FC & iSCSI based storage can now be VVol capable. 
  • Virtual machine migrations from VMFS to VVol, or vice versa, fully supported with no application downtime.
  • Full availability and redundancy for VASA providers. 
  • Up to 16,000 VVols supported per Protocol Endpoint. 

DataCore software becomes a universal adapter providing universal data services, where virtual machine storage can be provisioned against a defined policy using VMware's SPBM (Storage Policy Based Management).

When using SPBM, a set of data services can be chosen from the vSphere web client. The data services that DataCore currently supports are aligned with the vSphere certification for the current VVol framework.  

  • Multi-writer support
  • Deduplication support
  • Synchronous Mirroring support
  • Snapshot support
  • Caching Support
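Under SPBM, placement essentially reduces to matching the services a VM's policy requests against the services a storage container exposes. The sketch below illustrates that matching logic; the container names and service sets are hypothetical examples (drawn from the service list above), not DataCore's configuration schema:

```python
def compatible_containers(requested: set, containers: dict) -> list:
    """Return names of containers whose advertised data services
    cover everything the VM storage policy asks for."""
    return sorted(name for name, offered in containers.items()
                  if requested <= offered)   # subset test

containers = {
    "gold":   {"synchronous-mirroring", "snapshot", "caching", "deduplication"},
    "silver": {"snapshot", "caching"},
}
print(compatible_containers({"snapshot", "synchronous-mirroring"}, containers))
# ['gold']
print(compatible_containers({"snapshot"}, containers))
# ['gold', 'silver']
```

A policy asking for a service no container advertises simply matches nothing, which is exactly the "non-compliant" state SPBM surfaces in the vSphere client.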

As you can see, DataCore has set out to innovate around a VVol-enabled storage infrastructure. Below you will find a chart showing how we have expanded vSphere VVols by using a software-centric approach.

VVol_Comparison


Get to know DataCore PSP5

Jul 06, 2016 AT 12:12 PM

Did you know that DataCore PSP5 is now GA? 

Did you know that you can expand the RAM cache per node from 1 TB to 8 TB in DataCore PSP5?

Did you know that DataCore PSP5 now supports 16Gbps & 32Gbps Fibre Channel HBAs from QLogic? 

Did you know that DataCore PSP5 now supports Advanced Format 4K disks in addition to traditional 512-byte-per-sector disks?

Did you know that DataCore PSP5 now provides performance monitoring for the most active volumes (transactions, throughput, latency)?

performancespotlight

Did you know that DataCore PSP5 now supports QoS at the virtual disk group layer?

VDQOS

Did you know that DataCore PSP5 continuously estimates when storage space will be depleted based on rate of consumption?

poolmonitor
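The depletion estimate described above is, at its simplest, a linear projection of recent consumption against remaining free space. A rough sketch of the idea (illustrative only, not DataCore's actual algorithm, and the pool figures are made up):

```python
def days_until_depleted(free_gb: float, consumed_gb_per_day: float) -> float:
    """Linear projection: free space remaining divided by the recent
    rate of consumption."""
    if consumed_gb_per_day <= 0:
        return float("inf")   # pool usage is flat or shrinking
    return free_gb / consumed_gb_per_day

print(days_until_depleted(1200, 40))  # 30.0 -> roughly a month of headroom
print(days_until_depleted(1200, 0))   # inf  -> no depletion in sight
```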

Did you know in DataCore PSP5 you can now define classes of service when using VMware VVols and Microsoft VMM? 

templates

Did you know that DataCore PSP5 now automates the deployment on vSphere? 

vspheredeploy

Did you know that DataCore PSP5 supports Microsoft Windows Server 2016 hosts? 

Did you know that DataCore PSP5 now supports access controls at the virtual disk layer?

VDRBAC

Did you know that DataCore PSP5 now includes performance counters for VVols? 

Did you know that DataCore PSP5 provides up to 50% faster performance on multi-core servers thanks to Parallel I/O optimizations? 

Did you know that you can download a 30-day trial of DataCore PSP5? 

Did you know that you can participate in a live interactive demo of DataCore PSP5? 

Did you know that you can download a free NFR of DataCore PSP5 Virtual SAN software? 

World Record Performance Standards

Jul 05, 2016 AT 10:06 AM

When the Hennessey Venom GT broke the world record as the fastest production car in 2014, it raised the engineering bar in the automobile industry. This world record wasn't just about how fast a car could go; it was about breaking the barriers of what was possible. John F. Kennedy famously said, "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard…" As you know, going to the moon was not just about being first, or about doing what some deemed a waste of time and money; it was about the forward progress of knowledge for mankind.

HGT

The same can be said for record-setting endeavors in many different industries. One fitting example is the SPC-1 benchmark by the Storage Performance Council. As widely recognized, the SPC-1 benchmark is a vendor-neutral endeavor for the storage industry. The Storage Performance Council says it "… fosters the free and open exchange of ideas and information, and to ensure fair and vigorous competition between vendors as a means of improving the products and services available to the general public."

Joining the Storage Performance Council (SPC) means that every submitted benchmark is not only audited and verified but also peer reviewed. The governing authority makes sure no participating vendor is gaming the process in its favor, and so will only certify the results once a full audit has been conducted. Only at this point are the results and reports publicly shared for comment and review.

Documented, reproducible results and a peer-review process are the hallmark of scientific advancement. The SPC methodology essentially takes the scientific process and applies it to benefit the storage industry. Part of the peer-review process is also to stimulate open debate and learning to advance the state of the art; a good example of this 'to and fro' is the recent debate on caching and its impact on SPC results.

DataCore Did What….?

Some may say that DataCore's latest SPC-1 performance results are too good to be true, or that the process DataCore followed for the benchmark was not in line with previous vendors'.

As noted above, the SPC has been around for a number of years with many vendors participating (a partial list includes a who's who of the storage industry: EMC/Dell, HP, Hitachi and NetApp, to name a few). It goes without saying that every vendor with an audited result had the opportunity to do better than it exhibited. If a vendor hasn't participated in the SPC-1 benchmark, there is nothing stopping them from participating and achieving the same results, or maybe even better. To suggest we did something that others weren't capable of is a flattering statement, given all the prior SPC-1 results that came from prominent industry leaders.

A vendor showing its own performance numbers without the oversight of a vendor-neutral body like the SPC doesn't reflect a fair and balanced exercise. It's not really about how much memory was used, or even the type of hardware used in the benchmarking process; it's really about your innovation to do better than everyone else under the same rules of conduct for a standard process of measurement. So, if an audited and peer-reviewed process isn't acceptable, then the question remains: how do we all enable open sharing and trust for the advancement of storage innovation for our customers? In this spirit I encourage everyone to join the SPC to further develop and innovate, in order to hopefully achieve even better performance. This benefits not only those participating now but all those that have participated in the past, with measurements that can be shared instead of just talked about.

SPC Mission Statement: “The Storage Performance Council (SPC) is a non-profit corporation founded to define, standardize, and promote storage subsystem benchmarks as well as to disseminate objective, verifiable performance data to the computer industry and its customers.”

 


Based on the questions raised, it seems some have missed a major aspect that contributed to DataCore's world record storage performance. Contrary to what some may think, it wasn't just the cache in memory that made the biggest difference in the result. The principal innovation that provided the differentiation is DataCore's new parallel I/O architecture. I think our Chairman and Technologist, Ziya Aral, says it well in the article below. 

From the Register Article by Chris Mellor:
 The SPC-1 benchmark is cobblers, thunders Oracle veep

DataCore Announcement: 

DataCore Parallel Server Rockets Past All Competitors, Setting the New World Record for Storage Performance

Measured Results are Faster than the Previous Top Two Leaders Combined, yet Costs Only a Fraction of Their Price in Head-to-head Comparisons Validated by the Storage Performance Council; See Chart Below:

Top 3 Capture

Comments from the original article:

The DataCore SPC-1-topping benchmark has attracted attention, with some saying that it is artificial (read cache-centric) and unrealistic as the benchmark is not applicable to today's workloads.

Oracle SVP Chuck Hollis told The Register: "The way [DataCore] can get such amazing IOPS on a SPC-1 is that they're using an enormous amount of server cache."
...In his view: "The trick is to size the capacity of the benchmark so everything fits in memory. The SPC-1 rules allow this, as long as the data is recoverable after a power outage. Unfortunately, the SPC-1 hasn't been updated in a long, long time. So, all congrats to DataCore (or whoever) who is able to figure out how to fit an appropriately sized SPC-1 workload into cache."

But, in his opinion, "we're not really talking about a storage benchmark any more, we're really talking about a memory benchmark. Whether that is relevant or not I'll leave to others to debate."

DataCore's response ... Sour grapes

Ziya Aral, DataCore's chairman, has a different view, which we present at length as we reckon it is important to understand his, as well as DataCore's, point of view.
"Mr. Hollis' comments are odd coming from a company which has spent so much effort on in-memory databases. Unfortunately, they fall into the category of 'sour grapes'."
“The SPC-1 does not specify the size of the database which may be run and this makes the discussion around 'enormous cache', etc. moot,” continued Aral. “The benchmark has always been able to fit inside the cache of the storage server at any given point, simply by making the database small enough. Several all-cache systems have been benchmarked over the years, going back over a decade and reaching almost to the present day.”

"Conversely, 'large caches' have been an attribute of most recent SPC-1 submissions. I think Huawei used 4TB of DRAM cache and Hitachi used 2TB. TB caches have become typical as DRAM densities have evolved. In some cases, this has been supplemented by 'fast flash', also serving in a caching role."

Aral continued:
In none of the examples above were vendors able to produce results similar to DataCore's, either in absolute or relative terms. If Mr. Hollis were right, it should be possible for any number of vendors to duplicate DataCore's results. More, it should not have waited for DataCore to implement such an obvious strategy given the competitive significance of SPC-1. We welcome such an attempt by other vendors.

“So too with 'tuning tricks,'” he went on. “One advantage of the SPC-1 is that it has been run so long by so many vendors and with so much intensity that very few such "tricks" remain undiscovered. There is no secret to DataCore's results and no reason to try guess how they came about. DRAM is very important but it is not the magnitude of the memory array so much as the bandwidth to it."

Symmetric multi-processing

Aral also says SMP is a crucial aspect of DataCore's technology concerning memory array bandwidth, explaining this at length:

As multi-core CPUs have evolved through several iterations, their architecture has been simplified to yield a NUMA per socket, a private DRAM array per NUMA and inter-NUMA links fast enough to approach uniform access shared memory for many applications. At the same time, bandwidth to the DRAMs has grown dramatically, from the current four channels to DRAM, to six in the next iteration.

The above has made Symmetrical Multi-Processing, or SMP, practical again. SMP was always the most general and, in most ways, the most efficient of the various parallel processing techniques to be employed. It was ultimately defeated nearly 20 years ago by the application of Moore's Law – it became impossible to iterate SMP generations as quickly as uniprocessors were advancing.

DataCore is the first recent practitioner of the Science/Art to put SMP to work... in our case with Parallel I/O. In DataCore's world record SPC-1 run, we use two small systems but no less than 72 cores organized as 144 usable logical CPUs. The DRAM serves as a large speed matching buffer and shared memory pool, most important because it brings a large number of those CPUs to ground. The numbers are impressive but I assure Mr. Hollis that there is a long way to go.

DataCore likes SPC-1. It generates a reasonable workload and simulates a virtual machine environment so common today. But, Mr. Hollis would be mistaken in believing that the DataCore approach is confined to this segment. The next big focus of our work will be on analytics, which is properly on the other end of this workload spectrum. We expect to yield a similar result in an entirely dissimilar environment.
The irony in Mr. Hollis' comments is that Oracle was an early pioneer and practitioner of SMP programming and made important contributions in that area.

...
DRAM usage
DataCore's Eric Wendel, Director for Technical Ecosystem Development, added this fascinating fact: "We actually only used 1.25TB (per server node) for the DRAM (2.5TB total for both nodes) to get 5.1 million IOPS, while Huawei used 4.0TB [in total] to get 3 million IOPS."

Although 1.536TB of memory was fitted to each server, only 1.25TB was actually configured for DataCore's Parallel Server (see the full disclosure report), which means DataCore used 2.5TB of DRAM in total for 5.1 million IOPS compared to Huawei's 4TB for 3 million IOPS.


The World Record Debate

Jun 22, 2016 AT 04:00 PM

As you might have already heard, we recently set a new world record in storage performance. This has created a lot of questions and conversation, as one might expect. Yesterday Chris Mellor published a comment post on some of the questions raised. Our chairman Ziya Aral did a great job of articulating and responding; check out the conversation and feel free to join in.

"The DRAM serves as a large speed matching buffer and shared memory pool, most important because it brings a large number of those CPUs to ground."

conversation


I recently had the privilege of recording a podcast with Douglas Brown at DABCC. We discuss DataCore software and how it works with VMware’s Virtual Volumes. Enjoy! 


PSP5 What's New

May 25, 2016 AT 07:39 AM

Today, DataCore is officially announcing the next release of SANsymphony and Hyper-Converged Virtual SAN. This new release encompasses some exciting enhancements and new features for customers. 

There are a number of enhancements to the product, so today I will only highlight a couple of them. A deeper examination will follow in the near future.

spc-1
  • With this release, a continued focus on performance was a given, so it's not surprising that DataCore engineering has done it again and raised the performance bar even higher. With a simple upgrade, current customers can now see up to 50% greater performance for their applications. This is in addition to the already record-breaking performance of the previous release.
  • DataCore is using advances in Parallel I/O technology to significantly improve performance. In addition, PSP5 now raises its read and write RAM cache limit from 1TB to 8TB per node, the largest known RAM cache in the industry today. This means even larger data working sets can be referenced in RAM than before, further reducing the latency of reaching flash devices or spinning disks in the back-end system.   

  • 32 Gbps Fibre Channel – You now have the option to use 32 Gbps QLogic Fibre Channel Host Bus Adapters (HBAs). They may be configured for front-end connections to hosts, mirror connections between nodes, and back-end connections to storage devices.
  • Microsoft Windows Server 2016 Host Support – This release of DataCore software will support hosts running Windows Server 2016 when the new Microsoft operating system debuts this fall.
  • Role Based Access Control (RBAC) Expanded – Role-based access has been expanded to virtual disk objects. For example, it's now possible with this release to have a single administrative owner, or multiple owners, assigned to a virtual disk.
  • VVol Support Enhancements: Adding to the already great VVol certified capabilities, customers can now use the new VVol performance counters when monitoring VVol objects. In order to provide multi-tenancy capabilities when managing VVols, PSP5 has added access controls to VVol objects. For example, the configuration of virtual disks associated with VVols can only be modified by the administrative account assigned to the VASA provider.
  • System Center 2012 Virtual Machine Manager (VMM) – Hyper-V administrators can now self-provision, monitor and offload storage tasks to DataCore nodes when configuring virtual machines with Virtual Machine Manager. 
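To see why the larger RAM cache in this release matters, consider the usual effective-latency arithmetic for a cache in front of slower media. The hit rate and latency figures below are illustrative assumptions, not measured DataCore numbers:

```python
def effective_latency_us(hit_rate: float, ram_us: float = 5.0,
                         backend_us: float = 500.0) -> float:
    """Average read service time when `hit_rate` of requests land in RAM
    and the rest fall through to flash or spinning disk."""
    return hit_rate * ram_us + (1.0 - hit_rate) * backend_us

# A bigger cache that captures more of the working set pays off non-linearly:
print(round(effective_latency_us(0.80), 1))  # 104.0 microseconds
print(round(effective_latency_us(0.98), 1))  # 14.9 microseconds
```

Nudging the hit rate from 80% to 98%, which is what fitting a larger working set into an 8TB cache is meant to do, cuts the average latency by roughly 7x in this toy model.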

This is just a brief overview, but as you can see there are some great new enhancements in this release. Remember, you can always download a 30-day trial, and for current customers it's a simple upgrade process. 

 