Data Center Dan

New! NetApp EF560 and E5600 Performance Arrays!

1/27/2015

2 Comments

 
Update: Corrected a typographical mistake, "EF5600" to "E5600" (the URL will remain the same).

Awesome news today: NetApp is releasing its next-generation E-series hybrid and all-flash arrays with industry-leading price:performance! 
NetApp EF560 All-Flash Array

EF560 All-Flash Array

With a Storage Performance Council (SPC-1) rating of $0.54/IOP at <1ms, the EF560 now offers a monstrous 650,000 sustained IOPS at ultra-low latency. That's a 30% increase in performance over its predecessor. Awesome! A lot of the facts remain the same:
  • Up to 120 SSDs, 192TB raw (using 1.6TB SSDs)
  • Full-Disk Encryption with 800GB SED SSDs
  • 2U, 24-disk shelf and shelf expansion
Besides the nitrous-oxide-like boost NetApp has given the EF560, there are a few front-end host connectivity changes as well. The new options are:
  • 8 x 16Gb FC
  • 8 x 12Gb SAS
  • 8 x 10GbE iSCSI (optical only)
  • 4 x 56Gb FDR InfiniBand
Did I mention that the EF-series has an industry-leading six nines of availability? Oh yeah, that's six nines, not five: 99.9999% uptime, with appropriate configuration and service plans, of course. Oh, by the way, NetApp has shipped over 100PB of flash disk, too (though not just on E-series).

E5600 Hybrid Array

In addition to the new EF560, its smaller-but-quite-powerful-in-its-own-right brother, the E5600, is also being announced. 
NetApp E5600 Hybrid Flash Array (60-Disk Enclosure Shown)
The E5600 has long been a choice for those who need simple, reliable, powerful storage arrays—the three building blocks of the E-series. Is it simple? Absolutely: straightforward, not a lot of bells and whistles. Is it reliable? It does what it does, and it does it over and over and over and over again. Is it powerful? You bet! Up to 12GBps sustained IO to disk (that's gigabytes).

Like its all-flash brother, it supports the same front-end connectivity options. In addition, all E-series and EF-series arrays come licensed with all features (bonus!) and have integration with the following third-party products:

Application Plug-ins
  • Oracle: SANtricity Plug-in for Oracle Enterprise Manager 
  • Microsoft: SANtricity Management Pack for SCOM, SANtricity Plug-in for SQL Server® (SSMS) 
  • VMware: SANtricity Plug-in for vCenter®; VASA Provider, Storage Replication Adapter 
  • Splunk: SANtricity Performance App for Splunk Enterprise
Open Management
  • SANtricity OpenStack Cinder 
  • SANtricity Web Services Proxy (REST and SYMbol Web)
  • SANtricity PowerShell Toolkit
2 Comments

NetApp DataONTAP 8.3 | ADP Root Disk-Slice Deep-Dive

12/8/2014

17 Comments

 
It's fair to say that NetApp's Clustered DataONTAP 8.3 is one of the biggest software releases in NetApp's lengthy history. So I want to spend some more time on one of its key features: Advanced Drive Partitioning, or ADP. 

ADP is the ability to virtually "slice" (or, in Cisco's terms, "abstract") the physical disk blocks in order to make them malleable in ways that are otherwise impossible. There are two major implementations of this abstraction: root disk slices and FlashPool disk slices. Here, I just want to focus on root disk slices.

Root Disk Slices

With root disk slices, each partition is treated as a distinct virtual disk and can have its own RAID level, parity disks, and spare partitions. (Thanks to NetApp SEs and TMEs for providing some of these diagrams.)
NetApp cDOT 8.3 | Root Disk Slicing
Now, root disk slices are available only on certain configurations:
  1. Entry-level FAS platforms (2500, 2200 series)
  2. All-Flash FAS (AFF) platforms (8000, 6000, 3000 series)

While that might seem limiting (notice there is no support for hybrid platforms outside of entry-level systems), it is really based on the 90% use-case methodology. Ninety percent of the time customers who purchase 3000 series, 6000 series, or 8000 series FAS controllers are purchasing multiple shelves and have the room for dedicated root aggregates. And 90% of the time customers who purchase entry-level or AFF systems don't.  

Valid Config | 12-Disk Platforms

FAS2500, FAS2200 | Active/Passive
To maximize capacity, ADP can be configured in an active/passive configuration that yields up to 72% usable space efficiency for data storage—nice! 
FAS2500, FAS2200 | Active/Active
To maximize performance, ADP can be configured so that each controller manages its own data aggregate, resulting in up to 50% usable space efficiency for data storage. 

Valid Config | 24-Disk Platforms

FAS2500, FAS2200 | Active/Active
ADP here is the typical config, and the resulting savings yield up to 83% storage efficiency! Awesome! 

Valid Config | All Platforms

The preceding are just a few examples of valid configurations. There are actually a good number of different configurations that can be custom ordered; here's a full list of supported configurations for ADP root disk slices:

Platforms
  • 2200, 2500 (internal drives): Root Data Slice (HDD), Root Data Slice (AFF)
  • 3200, 6200, 8000 (≥ 48 drives): Root Data Slice (AFF)
Disk Shelf Configurations
  • All-Spinning (HDD): 12 x HDD, 24 x HDD
  • All-Flash (SSD): 8 x SSD, 12 x SSD, 18 x SSD, 24 x SSD, 36 x SSD, 48 x SSD
  • Hybrid (HDD + SSD): 8 x HDD & 4 x SSD, 20 x HDD & 4 x SSD, 18 x HDD & 6 x SSD, 12 x HDD & 12 x SSD

About Root Partition Sizing

The root partition size is fixed and automatically set per controller. The actual total allocated to the root partitions varies between 430.9GiB (462.7GB) and 431.5GiB (463.3GB). This ~0.13% fluctuation is due to the differing numbers of 4K blocks available with different spindle counts. Of course, the OCD in me wishes it were always just 450GB for a nice round number!

How that root space is then parceled out is based on the number of disk drives carrying root partitions, accounting for the appropriate RAID level and hot spares as well. Here are the common disk configurations and their corresponding root-slice sizes:
  • FAS2520 (12 Internal HDD): 144GiB per root slice
  • FAS2552/4 (24 Internal HDD): 54GiB per root slice
  • FAS2552/4 (20 Internal HDD): 62GiB per root slice
  • FAS8000 AFF (24 SSD): 54GiB per root slice
  • FAS8000 AFF (36 SSD): 31GiB per root slice
  • FAS8000 AFF (48 SSD): 22GiB per root slice
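If you're wondering where those per-slice numbers come from, here's my own back-of-the-napkin math (not an official NetApp formula, so treat the parity and spare assumptions as mine): take the roughly 431GiB of root space each controller needs and divide it by the number of root partitions that actually hold data for that controller, i.e., excluding RAID-DP parity partitions and the spare(s). For the FAS2520, each controller owns 6 of the 12 root partitions; 6 minus 2 parity minus 1 spare leaves 3 data partitions, and 431GiB / 3 ≈ 144GiB. For a 48-SSD AFF HA pair, each controller owns 24 root partitions; with 2 parity and 2 spares that leaves 20 data partitions, and 431GiB / 20 ≈ 22GiB. The same arithmetic lands within a GiB of every entry above.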
Let's look at three real-life examples just to solidify how ADP root disk slices are configured. 
Example 1: FAS2520 (12 x 3TB 7.2k)
Let's say I just purchased a FAS2520 for my backups using SnapVault. The usable space on a 3TB drive, in this case, is 2.18TB (but don't get me started on the TiB vs TB discussion). Here is how the root data partitioning would look in this case:
  • Root Slice: 144GiB per disk
  • Data Slice: 2335GiB per disk
  • Usable Size: 18.47TiB (active/passive config)
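In case you want to check the math on that 18.47TiB (with the caveat that the parity, spare, and reserve assumptions here are mine): in the active/passive layout one controller owns all 12 data partitions, so 12 minus 2 RAID-DP parity minus 1 spare leaves 9 data partitions; 9 x 2,335GiB = 21,015GiB, and after the usual ~10% WAFL reserve that's roughly 18,914GiB, or right at 18.47TiB.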
Example 2: FAS8040 All-Flash (36 x 800GB SSD) 
Let's say I am just about to roll out a new Horizon 6 VDI deployment for 4,000 users on a FAS8040. The usable space after formatting on an 800GB SSD drive, in this case, is 702.5GB (654.3GiB). Again, here's how the partitioning would look in this case:
  • Root Slice: 31GiB per disk
  • Data Slice: 745GiB per disk
  • Usable Size: 18.83TiB (active/active config)
Example 3: FAS2552 (20 x 900GB, 4 x 400GB SSD)
Let's say I am implementing a new co-located disaster recovery (DR) site at Peak10. I run a 3250 in production and I am just replicating a few critical applications—a FAS2500 series is totally sufficient. The usable disk size is 837GiB for a 900GB 10k drive. Here's how the partitioning would look in this case:
  • Root Slice: 62GiB per disk
  • Data Slice: 775GiB per disk
  • Usable Size: 9.54TiB (active/active config)
17 Comments

NetApp Insight | cDOT 8.3 Features #NTAPInsight

11/14/2014

10 Comments

 
Insight is NetApp's annual conference—but you probably already know that. And the buzz going around Insight was that the next version of Clustered DataONTAP (cDOT) would be downloadable November 13 . . . and guess what? It was! So in honor of the cDOT 8.3 release yesterday, here's a post on some of the new awesomeness you can expect. 
  1. Clustered DataONTAP 8.3 Features
  2. Clustered DataONTAP 8.3 Manageability

Clustered DataONTAP 8.3 Feature Enhancements

There are loads of new features in the cDOT 8.3 release. I will dive into a couple of them briefly now, and then go into more detail later. 

Advanced Drive Partitioning

Advanced Drive Partitioning, or ADP, is perhaps one of the most significant features of Clustered DataONTAP 8.3 (Flash performance might be up there as well). Why? Because it will dramatically affect the usable space for entry-level and extreme-performance systems, an area that NetApp (like EMC) has needed to improve, to be honest.

If you have ever had a FAS2040, for example, you know the pain: 12 disks internally, say 1TB each. DataONTAP required at least two of them for the root aggregate (if you did RAID 4 instead of RAID-DP). You could turn off the hot-spare requirement, but then you most likely went to RAID-DP. Regardless, you effectively lost 1.8TB of usable space per controller for an OS that required 240GB or so. Not anymore.
NetApp Clustered DataONTAP 8.3 | Root Disk-Slice Efficiency
How exactly does that happen? Well, I will dive into it more later, but a new feature called Disk Slicing allows "virtual disks" to be individually addressed as logical units, configured in their own RAID group, and utilized independently of the other physical blocks on the disk.
NetApp Clustered DataONTAP 8.3 | Factory-Default Root Disk-Slicing
But what if you have more than a single disk shelf? How does that work? I'm glad you asked. The answer is two-fold: for All-Flash FAS (AFF), you can span the root slices across up to 48 disks; and as you'd expect, the more disks you span, the less you use of each disk (the root slice total is about 430GB per controller, in case you are wondering).
Clustered DataONTAP 8.3 | Root Disk-Slice for AFF
With AFF systems, the minimum number of disks for root slices is 4 per controller (8 per pair) and the maximum is 24 per controller, or 48 for an HA pair. 

But what if you have an entry level system that's not all-flash? Does it work the same way? It's very similar, except that the root slice only lives on the internal disks—but there is no reason the internal disks can't be combined with external disks. They are virtual disks, and cDOT 8.3 is designed to specifically accommodate this configuration.
Clustered DataONTAP 8.3 | Root Disk-Slicing on Entry-Level FAS
That's what I like to call #NetAppInnovation. And I will go into disk slicing a bit more in another post, but for now, this should get you pretty excited! 

Oh, and one more thing: disk slicing works for FlashPools (hybrid flash arrays), too. 
Clustered DataONTAP 8.3 | Flash Pool Disk-Slicing and Allocation Units
Yes, that was a thinly-veiled attempt to reference Steve Jobs. Whether or not that succeeded, SSD partitioning in ADP for cDOT 8.3 is awesome. Now you can have custom-sized flash pools by logically dividing the SSD physical blocks into logical partitions called allocation units. One or more of these units can be assigned to an aggregate, up to 4 aggregates in total. Pretty sweet.
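For the CLI-inclined, carving SSDs into a shared pool of allocation units and handing one of those units to an aggregate's Flash Pool looks something like this (the pool, aggregate, and disk names are made up, and I'm going from memory on the parameters, so verify against the 8.3 man pages before using):

  cluster1::> storage pool create -storage-pool sp_ssd1 -disk-list 1.0.22,1.0.23,1.0.24,1.0.25
  cluster1::> storage aggregate add-disks -aggregate aggr_data01 -storage-pool sp_ssd1 -allocation-units 1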

All-Flash Performance

Flash gets some great enhancements from cDOT 8.3 right out of the box. And they are some big ones. And unlike some other vendors, you don't necessarily have to buy a new box to get them. Yes, some equipment is too underpowered (old) to run 8.3, but if you can install 8.3 on the box, you get all of these benefits. That's Software-Defined Storage. 

What NetApp has been working on is how to extend the range of predictable latency in All-Flash FAS (AFF) arrays so that there is more "available" horsepower before the performance hit kicks in (as is the case with any all-flash array). That hit is called the "knee" of the curve—the point where latency takes that exponential rise into the (sometimes) unacceptable category.
Clustered DataONTAP 8.3 | All-Flash FAS Performance
Now, these are identical systems: both FAS8080EX, both with the same 15 x 400GB SSDs, both with the same set of resident data and the same switches, load generators, etc. The only variable is ONTAP. And by the way, that is much better than the leading competitors:
NetApp All-Flash FAS | Comparison
This means that out of the gate customers with AFFs who in-place upgrade from 8.2 to 8.3 will automatically get a built-in performance increase. I like that value—talk about ROI! But how is NetApp achieving this?

First, inline zero-write detection. Put simply, everything written to disk is a 1 or a 0, right? Binary language. But why write zeros? So, I like to think of inline zero-write detection as the binary equivalent of changing the storage command from "write a zero" to "don't write a 1 to this bit". This saves a lot of write cycles, which has a cumulatively large effect on flash performance: fewer writes overall, therefore fewer overwrites, therefore less garbage collection, therefore less latency for overwrites, and so forth. You get the picture.

There are other enhancements as well, of course, but that is enough for now.

Hybrid Flash Performance

Now, we all know that contrary to some gigantic marketing turbines, all-flash arrays are 1) not necessary for every workload, 2) not best for some workloads, and 3) not cost-effective for many workloads. Like "the cloud," that is, cloud computing, hybrid is the future, at least the foreseeable future. There may come a day when all-flash is the standard, but that day is a bit distant still, in my opinion. Flash technology still has a lot of maturing to do and is still changing far too rapidly (SLC, MLC, eMLC, cMLC, tMLC, NVMe and lions and bears oh my!) for mass adoption at this point. 

At Insight, NetApp stated that 70% of all disk shelves going out the door this year so far have been hybrid (usually in the form of 4xSSD + 20xHDD). So what about those customers (of which we ourselves are one)? Are we just left in the dust because we haven't gone all-flash? By no means!
NetApp Clustered DataONTAP 8.3 Hybrid Flash Capacities
First, we get the ability to increase four-fold the amount of flash cache (whether that's FlashCache or FlashPool) across all controllers—entry level and extreme performance. Second, we get the benefits of 8.3 inline zero-write detection.

Oh, and did I mention that inline-zero write detection works on spinning disks, too? Boom.

Clustered DataONTAP 8.3 Manageability Enhancements

There are, likewise, a ton of enhancements to the manageability of the new 8.3 code...here's a quick look at just a few of them.

On-Controller System Manager

Yes, that's right: FilerView is back. Well, I shouldn't say that, because it simply doesn't do justice to the awesomeness of the on-box HTML5 System Manager. No more client to download, no more Java conflicts. Gimmie that upgrade now! 

The new on-box HTML5 System Manger in #NetApp CDoT 8.3 is insanely fast. I'm already in love. Lots of new functionality too!

— Adam J. Bergh (@ajbergh) November 13, 2014

DataMotion for Volumes & LUNs

If you have ever managed a NetApp system, you are likely familiar with DataMotion—it's one of NetApp's most popular features. Or perhaps you know it by its better-known command line syntax: vol move :). However you know it, you probably love it. Basically, the NetApp takes a SnapMirror (no license required) of the volume and non-disruptively moves it to a new location, allowing for seamless capacity and performance upgrades when the underlying aggregate has been depleted of either or both resources.

Clustered DataONTAP 8.3 extends this feature . . . wait for it . . . to LUNs! Now you don't have to move an entire volume, you can move a single LUN. What's even better, it is even more powerful:
What makes this new engine powerful is that it provides instantaneous cutover. Immediately after a request is made to move a LUN, that LUN becomes available on the destination node. Writes go to the destination node, while reads are pulled across the cluster interconnect from the source. This means load on the source node is immediately reduced because it is not processing writes. — Clustered DataONTAP 8.3
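If you prefer the command line, the two operations look very similar on the cluster shell (the vserver, volume, LUN, and aggregate names below are made up, so check the 8.3 documentation for the exact options):

  cluster1::> volume move start -vserver vs1 -volume vol_sql01 -destination-aggregate aggr_sas02
  cluster1::> lun move start -vserver vs1 -source-path /vol/vol_sql01/lun1 -destination-path /vol/vol_sql02/lun1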

3-Step Automatic NDU

How long have you spent waiting for a storage processor software update? In my lifetime, I would guess my total is in days or weeks, not hours. No more.
Clustered Data ONTAP 8.3 supports automated, nondisruptive software upgrades. Three commands are all that is needed to bring the Data ONTAP package (obtained from support.netapp.com) into the cluster, do validation to make sure the cluster is prepared for the upgrade, and then perform the upgrade. All downloads, takeovers, and givebacks are performed as part of the automated process.
— Clustered DataONTAP 8.3
Three commands. 3. Previously, the amount was 35. Yep. NetApp.
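For the curious, the three commands look something like this (I'm going from memory, so treat it as a sketch and verify against the 8.3 upgrade guide; the web server URL is just an example of wherever you staged the downloaded package):

  cluster1::> cluster image package get -url http://webserver/downloads/83_q_image.tgz
  cluster1::> cluster image validate -version 8.3
  cluster1::> cluster image update -version 8.3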

VVol Integration

Of course, 8.3 will support VMware Virtual Volumes when they are available in the upcoming version of vSphere. You can read more about that in my VVols post. 
10 Comments

#VMWORLD FOLLOW-UP | 5. VMware VVols & NetApp Integration

10/21/2014

0 Comments

 
Continuing with my series recap of VMworld 2014, here is the fifth installment.  
  1. VMworld: The Numbers
  2. VMware EVO: RAIL
  3. VMware NSX
  4. NetApp All-Flash FAS for VMware Horizon 6
  5. VMware VVols
  6. VMware CloudVolumes
  7. Veeam Backup and Recovery 8.0 for NetApp
  8. Zerto Disaster Recovery
  9. New Companies of Interest

VMware VVols

VMware has been talking about Virtual Volumes or VVols for the past two or three years. Some have complained that it is taking too long. Sure, and software-defined networking (SDN) was developed in a year, too. Like SDN, SDS (software-defined storage) completely changes everything we storage admins have learned about best practices, limitations, segregation of workloads, and opens up storage to a whole new world of design and deployment. 

Why Do We Need VVols?

So there are a couple of preliminary things to discuss first. The most important of these is, "What problem or problems does a VVol solve?" In other words, is VMware just trying to update a product, or are they trying to innovate a solution?

If you have never worked in a large environment, then you have likely never run into the virtualization limitations around storage: 256 LUNs per ESXi host (which, consequently, means per cluster), a limit of 1024 paths to those LUNs, and so on. In that case, perhaps you don't see the need. But imagine you had a 64-host cluster, which is now possible, but you are limited to 256 LUNs—that's an average of 4 LUNs per server. Ouch. Yes, those LUNs can be huge, but then you run into other problems! 

In addition to that, for a long time we in the storage industry have practiced workload segregation—different IO profiles have different workload characteristics. An Exchange database has different storage characteristics and thus requires a different storage profile than a file server. That's not changing. In fact, it is somewhat getting worse: workloads are becoming even more unpredictable and changing rapidly.
Traditional Storage Provisioning | VMware.com
VVols Storage Provisioning | VMware.com
So VVols was introduced as a solution to both workload agility and quality. In addition to the need to spend less and achieve more through consolidation, VVols will, I think, change the way storage is designed on mainstream arrays like NetApp and others. Finally, it will enable the rapid provisioning of storage and reduce the time-to-market for new applications—or whatever the equivalent is for your business.

VVols Explained

As I said, SDS and VVols turn some of our traditional storage categories on their heads. So, a bit of terminology before we begin.

VAAI | VMware APIs for Array Integration. Also called hardware or storage array offload, VAAI is a collection of APIs that, in simple terms, relieve ESXi of some storage overhead by offloading the work to arrays identified as capable of performing it.
The APIs define a set of “storage primitives” that enable the ESXi host to offload certain storage operations to the array, which reduces resource overhead on the ESXi hosts and can significantly improve performance for storage-intensive operations such as storage cloning, zeroing, and so on. The goal of VAAI is to help storage vendors provide hardware assistance to speed up VMware I/O operations that are more efficiently accomplished in the storage hardware. — VMware TR 10337
VAAI Offload | VMware.com
VASA | VMware APIs for Storage Awareness. VASA is a complement to VAAI. While an array may advertise its fundamental storage characteristics directly to the ESXi host kernel (i.e., is it an SSD, does it have VAAI capabilities, and so forth), VASA providers enable full integration into the storage environment, allowing for the correlation of LUN and ESXi events; for example, monitoring and reporting of usage, trending, and troubleshooting analysis. Most importantly for VVols, VASA providers enable policy-based management within ESXi by integrating into vCenter and communicating with the backend array.
Protocol Endpoint. A protocol endpoint is exactly what it sounds like: the place where the storage protocol terminates and storage operations are handed off to the IO demultiplexer. It is the IO demultiplexer that enables thousands and thousands of VVols to sit behind the protocol endpoint, providing scale that was previously a pipe dream; it works with the protocol endpoint to direct each IO to the appropriate VVol. Protocol endpoints are analogous to the block and file protocols that we use today, and they traverse those same protocols.

Storage Container. VVols live in storage containers. Storage containers may contain a single VVol, or thousands of them. Instead of creating multiple LUNs or mounts to present to ESXi, the storage admin will create only as many storage containers as are needed for 1) space and 2) storage characteristics. Each container has a set of capabilities, and those capabilities are logically presented to ESXi for correlation with a storage profile and for compliant placement and monitoring of a VVol.

Storage Profile. A storage profile is a group of storage policies that work together to define a VVol's characteristics and allowable features. Should it be deduplicated? How much bandwidth can it consume? What kind of other features should it have? Should it be replicated to the Disaster Recovery site?
VMware VVols Solution Overview Diagram | VMware.com
Like all things software-defined, VVols separates the control plane from the data plane, making the two independent. Cisco calls this process abstraction for UCS, but whatever you call it, it enables software to define, configure, and monitor storage entities for compliance based on storage policies and storage profiles.

So, I don't have to worry about workload segregation, necessarily. I might still want it, but the idea of making sure my application gets a guaranteed number of IOPS or maintains <10ms latency is very appealing. While VVols is complicated, in the end what it does is simplify management by enabling granular, per-vDisk control (each vDisk is a single VVol) that was never before possible unless you wanted to create a dedicated LUN or mount for a single VMDK—and that's a LOT of administrative overhead.

VVol Problems?

So VVols can solve some existing problems, but doesn't it also create new ones? Indeed it does.

I can imagine that every storage admin reading this right now is thinking, "Whoa—VM admins are going to be provisioning and defining their own storage characteristics? That's dangerous." And it could be. What if a junior admin decided he wanted to create a new VM and gave it guaranteed bandwidth of 1000MB/s, thinking it was 1000Mbps? That's a factor-of-eight mistake. Yeah.

Well, there is a mitigation process for that. When a storage admin creates a Storage Container, he or she will define the storage capabilities, which in turn are presented to the ESXi host. The storage admin maintains control of these policies. Duncan Epping puts it this way:
Profiles are a set of QoS parameters (performance and data services) which apply to a VM Volume, or even a Capacity Pool. The storage administrator can create these profiles and assign these to the Capacity Pools which will allow the tenant of this pool to assign these policies to his VM Volumes.
In theory, both the VM admin and the storage admin will be happier in a VVol world: the VM admin will have granular visibility and control over his VMs through storage profiles, and the storage admin will maintain the overall system setup, monitoring, backup, and recovery just as before. In practice, however, we will have to see. :)

NetApp Integration with VMware VVols

NetApp is leading the way when it comes to array integration, said Pat Gelsinger, CEO of VMware, at a recent NetApp VIP event during VMworld 2014. In fact, NetApp was the only storage vendor to provide a working, hands-on lab of VVols this year. And if you haven't taken it already, you can go to the VMware HOL, take the NetApp Virtual Storage Console lab, and see for yourself what the setup of a VVol datastore looks like! 
NetApp VVol Datastore Wizard
0 Comments

#VMworld Follow-Up | 2. VMware #EVORAIL

9/11/2014

0 Comments

 
Continuing with my series recap of VMworld 2014, here is the second installment.  
  1. VMworld: The Numbers
  2. VMware EVO: RAIL
  3. VMware NSX
  4. NetApp All-Flash FAS for VMware Horizon 6
  5. VMware VVols
  6. VMware CloudVolumes
  7. Veeam Backup and Recovery 8.0 for NetApp
  8. Zerto Disaster Recovery
  9. New Companies of Interest

What is VMware EVO: RAIL?

VMware announced EVO: RAIL, their first foray into the hyper-converged computing market. For the uninitiated, here's a brief introduction to hyper-convergence. 

Hyper-converged appliances combine traditionally separate components into a single hardware platform: compute (processors and memory), networking (network interfaces), storage (typically SSD-cached and HDD-backed), and virtualization (hypervisor). And these appliances take a strictly scale-out approach; if you need more resources, add another node. 
Now, before we go too far, allow me to clarify one thing: you can't order EVO: RAIL from VMware. VMware was very clear that they are a software company and will always remain so—they are a partner-based business and will always remain as such. So, if you want to order EVO: RAIL, at least in the US, you will buy directly from either SuperMicro or Dell, since those are the two North American partners with the hardware form factor that EVO: RAIL requires. 

EVO: RAIL Overview

Let's first take a look at the hardware, since that is one area that is different from most traditional computing environments—for sure. VMware has adopted a sort of "mini-chassis" as its EVO: RAIL platform: a 2U, 4-node appliance.
SuperMicro 2U TwinPro (Front)
SuperMicro 2U TwinPro (Rear)
What you will notice right away is that this is a very compact package, and that is explicitly part of the strategy. VMware wants a highly-available form factor out of the box. You can slap down a single 2U appliance and actually have full high availability out of the box; there is no single point of failure, since every single thing is redundant (depending upon how the hardware vendor splits up the midplane for the disk drives, this may or may not be true).

And here are the specs, per node (4 nodes/2U appliance):
  • Two Intel E5-2620v2 six-core CPUs (24 logical cores per node)
  • 192GB of memory
  • One SLC SATADOM or SAS HDD for the ESXi™ boot device
  • Three SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN™ datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One Virtual SAN-certified pass-through disk controller
  • Two 10GbE NIC ports (configured for either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for remote (out-of-band) management
Now, you might have picked up from the third item from the bottom ("Virtual SAN-certified") that this is, in fact, running VSAN. Of course it is. Each EVO: RAIL appliance comes with vCenter, Enterprise Plus licensing, VSAN, and vCenter Log Insight. What you might have also picked up on is that there is no customizability, at least not at this point. But we will get to that later. 

And with all of this comes a new "deployment manager"—the EVO: RAIL interface that really simplifies installation and configuration.
I watched a session at VMworld (not online yet) where they introduced the interface and walked through it, and it was very easy to use. You can also monitor and manage all your servers and interfaces through that particular interface if you like, or you can choose a standard vCenter Web Client approach. Likewise, adding a node is straightforward; all you need is the master password for the current node and it will perform all the configuration for you in about 10 minutes. Nice!

Likewise, updating EVO: RAIL is equally impressive, with just an update file and an automated process. The update will be applied sequentially and VMs automatically migrated so that the administrator can just start the process and watch it perform non-disruptively.

EVO: RAIL Performance

So what can you expect to be able to run on EVO: RAIL? Great question. VMware used a generic Virtual Machine profile to do some basic load testing and provide some guidelines for server workloads:
  • @ 2 vCPU, 4GB vMEM, 60GB vDisk = 100 VMs / appliance
Of course, there is a disclaimer:
Actual capacity varies by VM size and workload. There are no restrictions on application type. EVO: RAIL supports any application that a customer would run on vSphere. — VMware EVO: RAIL Introduction
Likewise, there is a profile for VMware Horizon 6 virtual desktops, as EVO: RAIL could be a great solution for those needing a dedicated VDI cluster:
  • 2vCPU, 2GB vMEM, 32GB vDisk linked clones = 250 VDs / appliance
And you can scale out (currently) to a max of four appliances, which is a total of 16 nodes for each individual EVO: RAIL environment.
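Quick back-of-the-envelope on those profiles (my math, not VMware's): each appliance has 48 cores (96 logical threads) and 768GB of RAM, so 100 server VMs at 2 vCPU / 4GB each comes out to 200 vCPUs on 96 threads (roughly 2:1 overcommit) and 400GB of vRAM against 768GB physical. The 250-desktop profile works out to 500 vCPUs (about 5:1) and 500GB of vRAM against that same 768GB.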

EVO: RAIL vs Nutanix vs Simplivity

Now, if you have been around the hyper-converged market, you will know immediately that this is nothing new. In fact, it kind of seems like a rip-off of some of VMware's partners, particularly Nutanix and Simplivity. I have a number of contacts who work for these companies, and I spoke with them about EVO: RAIL and how they felt about it:
[Dan] How do you feel about the announcement today of VMware EVO: RAIL?
[Partner] We are happy for VMware. Actually, their announcement validates what we have been doing for the last four years. 
[Dan] Are you worried that EVO: RAIL will take your [potential] customers?
[Partner] Not really. We have some of the best people on the planet, both with VMware and with hardware and software. We have a four-year head start. We have hundreds of happy customers, and that is only growing.
It is noteworthy that both of these partners offer different form factors for their devices, whereas VMware only offers a single platform. VMware also only offers a four-node appliance, whereas Nutanix and Simplivity both offer 2U full rack servers as nodes, which allow for PCIe expansion, such as might be needed for NVIDIA GRID™ K2 graphics cards for VDI. (Note: Simplivity only offers their Omnicube in a rack-server-based form factor; they don't have an appliance version at this time.)
VMware
  • EVO: Appliance: 48C / 768GB, 13TB usable capacity (4 x 400GB SSD, 12 x 1.2TB SAS HDD)
Nutanix
  • NX1000: 24–48C / 128–512GB, 3–24TB effective capacity (4 x 200/400GB SSD, 8/12 x 1TB NL-SAS HDD)
  • NX3000: 64–80C / 1–2TB, 12–24TB effective capacity (8 x 400/800GB SSD, 16 x 1TB NL-SAS HDD)
  • NX6000: 24–40C / 256GB–1TB, 35–70TB effective capacity (2/4 x 400/800GB SSD, 8/10 x 4TB NL-SAS HDD)
  • NX7000: 40C / 128–256GB, 5–10TB effective capacity (2 x 400GB SSD, 6 x 1TB NL-SAS HDD)
  • NX8000: 40–48C / 128–384GB, 18–36TB effective capacity (4 x 400/800/1600GB SSD, 20 x 1TB NL-SAS HDD)
Simplivity
  • CN2000: 8C / 128–256GB, 5–10TB effective* capacity (4 x 100GB SSD, 8 x 1TB NL-SAS HDD)
  • CN3000: 12–24C / 128–512GB, 20–40TB effective capacity (4 x 400/800GB SSD, 8 x 3TB NL-SAS HDD)
  • CN5000: 24C / 384–512GB, 15–30TB effective capacity (6 x 400/800GB SSD, 18 x 1.2TB SAS HDD)
*Effective capacity means after deduplication and compression. The actual RAW capacity is not specified, and depending upon the data set residing on these disks, you may see more or less actual usable capacity.

It is also worth mentioning that all hyper-converged solutions require 10Gbps Ethernet switches for operation and data transfer. So if you don't have them already, plan on adding that into your budget.

EVO: RAIL Use Cases

So who is VMware targeting with EVO: RAIL, and who will actually buy it? Well, those are different questions, of course! According to VMware,
EVO: RAIL is optimized for the new VMware user as well as for experienced administrators. Minimal IT experience is required to deploy, configure, and manage EVO: RAIL, allowing it to be used where there is limited or no IT staff on-site. As EVO: RAIL utilizes VMware’s core products, administrators can apply existing VMware knowledge, best practices, and processes. — VMware EVO: RAIL Introduction
 In my own opinion, EVO: RAIL is a good fit for the SMB space, where a single admin wears virtually every hat (pun intended) and has little time and money to deal with issues. The simplicity is an easy trade-off for flexibility, since flexibility creates complexity, of which the lone admin does not need any more than he already has.

I also think it is a good fit for application developers in test and dev environments where IT wants to give them "full control" of their own hardware and just leave it alone and not worry about it. 
0 Comments

    Author

    Husband. Father. Lifelong Learner & Teacher. #NetAppATeam. #vExpert.
    All posts are my own.
