Fighting FUD with FACTS – Cisco HyperFlex

FUD #1: On Cisco HyperFlex, VMs are pinned to one side
This statement is 100% inaccurate. VMs are never pinned to one side in HyperFlex. (We do have a feature called dynamic vNICs that lets us extend a virtual interface all the way from the fabric interconnects and pin it to a VM, but this is neither the default nor enabled in Cisco HyperFlex.) In Cisco HyperFlex, VMs have two active data paths available, one on Side A and one on Side B, and we rely on the ESXi vSwitch/vDS for VM connectivity. The customer has complete control over which load-balancing algorithm the vSwitch (or vDS) uses. In addition, we provide an enhancement our competitors can't: hardware-based load balancing. If you choose to enable it, the VIC adapter can dynamically flip its primary path when one side fails.

What we do provide in terms of intelligent pinning (again, an enhancement our competitors can't match) is how we manage internal storage traffic and vMotion traffic. We can pin all vMotion vNICs to one side (with hardware failover to the other side), allowing for sub-microsecond latency. Our competitors have to use the downstream network, which adds latency and unpredictability.

This advantage is even more pronounced for a hyperconverged storage solution. The key function any hyperconverged solution must provide is data redundancy between nodes. How does the competition do it? They all rely on external networking to rebalance data. With Cisco HX we remove that dependency and provide the lowest latency and highest performance for storage traffic, because the data never leaves the system for a downstream network that may be congested; it stays inside the fabric interconnects at Layer 2. (I have an example of a local healthcare customer that bought a competitor's product; the vendor failed to mention the network requirements, and the customer ended up spending over $500,000 on networking equipment after purchasing their hyperconverged system.)
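For the vSphere side of this, here is a minimal pyVmomi sketch of setting a vMotion port group on a standard vSwitch to an active/standby uplink order, the same "prefer one fabric, fail over to the other" idea described above. The vCenter name, credentials, port-group name, and vmnic names are placeholders, not HyperFlex defaults or installer output.

# Minimal pyVmomi sketch: set a vMotion port group on a standard vSwitch to an
# active/standby uplink order (fabric-A uplink active, fabric-B uplink standby).
# vCenter name, credentials, port-group and vmnic names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="CHANGE_ME",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        netsys = host.configManager.networkSystem
        for pg in netsys.networkInfo.portgroup:
            if pg.spec.name != "vmotion":          # placeholder port-group name
                continue
            spec = pg.spec
            policy = spec.policy or vim.host.NetworkPolicy()
            teaming = policy.nicTeaming or vim.host.NetworkPolicy.NicTeamingPolicy()
            # keep vMotion on one uplink, with the other uplink as standby
            teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=["vmnic0"],              # uplink toward fabric A (assumed)
                standbyNic=["vmnic1"])             # uplink toward fabric B (assumed)
            policy.nicTeaming = teaming
            spec.policy = policy
            netsys.UpdatePortGroup(pg.spec.name, spec)
    view.DestroyView()
finally:
    Disconnect(si)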

FUD #2: Cisco HyperFlex only supports a max number of nodes (26?)
Honestly, this is the first time I've heard this one, and I don't even understand how they came up with that number. We support 64 nodes in a single cluster, period: for example, 32 converged nodes and 32 compute-only nodes. That is another advantage, because I can bring blade servers and other rack-mount servers with no disks into the cluster, with no performance penalty for not having the data on the same node as the VM. Our competitors call this weakness, the lack of that capability, "data locality." Data locality is an inferior way of offering data convergence. Why would you want all of a VM's data on the same node as the VM? Wouldn't you rather use all the disks on all the nodes in the cluster simultaneously for read/write access? They can't, because their architecture relies on downstream network devices, and their design creates data traffic and node hot spots. Cisco designed a solution that delivers higher performance with fewer nodes (by the way, the people who wrote the VMFS file system are the ones who wrote the HXDP file system from the ground up as a log-structured file system, which is necessary for true shared storage across a set of nodes). And did I mention that you can have up to twice as many compute-only nodes as converged nodes, reducing your TCO by a third, especially the licensing cost?
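To make the contrast with data locality concrete, here is a toy Python sketch of hash-based block placement: a VM's blocks are striped across every converged node rather than kept on the node the VM runs on. The node names, the replication factor of 3, and the placement function itself are illustrative assumptions, not the actual HXDP algorithm.

# Toy illustration (not the actual HXDP placement algorithm): stripe a VM's
# blocks across all converged nodes instead of keeping them "local" to the
# node the VM runs on.
import hashlib

CONVERGED_NODES = [f"hx-node-{i}" for i in range(1, 9)]   # assumed 8 converged nodes
REPLICATION_FACTOR = 3                                    # assumed replication factor

def place_block(vm_id: str, block_index: int) -> list[str]:
    """Return the nodes holding the replicas of one block of a VM's disk."""
    digest = hashlib.sha256(f"{vm_id}:{block_index}".encode()).digest()
    start = int.from_bytes(digest[:4], "big") % len(CONVERGED_NODES)
    # pick REPLICATION_FACTOR distinct nodes, wrapping around the node list
    return [CONVERGED_NODES[(start + r) % len(CONVERGED_NODES)]
            for r in range(REPLICATION_FACTOR)]

# A compute-only node running "vm-42" reads and writes its blocks against many
# nodes at once, so there is no single "home" node to become a hot spot.
for block in range(4):
    print(block, place_block("vm-42", block))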

FUD #3: VM storage access between clusters is not possible
With the next major release of the HXDP (HyperFlex Data Platform) software, we will be able to expose access to external devices via iSCSI. The general idea of a hyperconverged system is to keep everything within the boundary of the cluster, but we can of course build active/active clusters today with synchronous (and asynchronous) replication between sites. If you are building two separate clusters within a single site, you probably want to keep them functionally separate, such as a DMZ cluster and a production cluster. From day one we have had a feature that lets you tap into external block storage devices (Fibre Channel/iSCSI) from within the cluster. In other words, you can use HyperFlex either to migrate your workloads off a NetApp/EMC array or to design a hybrid solution where VMs have access to both the native HXDP file store and external block/file devices. Can our competition do this? I don't think so, not as a long-term, viable, supported solution with equal access to external and internal storage. By the way, did you know that we have a solution called CWOM that will merge clusters and automatically balance resource utilization based on performance metrics?
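As a rough sketch of what "tapping into external block storage from within the cluster" looks like on the ESXi side, here is a minimal pyVmomi example that points each host's software iSCSI adapter at an external array portal and rescans. The vCenter name, credentials, and portal address are placeholders, and it assumes the software iSCSI adapter is already enabled on the hosts.

# Minimal pyVmomi sketch: add an external iSCSI send target to each host's
# software iSCSI adapter and rescan. vCenter name, credentials, and the array
# portal address are placeholders; software iSCSI is assumed to be enabled.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="CHANGE_ME",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        storage = host.configManager.storageSystem
        for hba in storage.storageDeviceInfo.hostBusAdapter:
            if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
                target = vim.host.InternetScsiHba.SendTarget(
                    address="10.0.0.50",    # assumed external array portal
                    port=3260)
                storage.AddInternetScsiSendTargets(
                    iScsiHbaDevice=hba.device, targets=[target])
        storage.RescanAllHba()   # discover the new LUNs
        storage.RescanVmfs()     # pick up any VMFS datastores on them
    view.DestroyView()
finally:
    Disconnect(si)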

FUD #4: Cisco HyperFlex can't do tiering of data
This is again something being claimed as a feature, but all it does is add unnecessary complexity in an attempt to give customers a perceived cost benefit. Customers are adopting hyperconvergence to get away from the traditional SAN vendors that did tiering. Why would you want spinning rust as a tier in your high-performance hyperconverged storage system when you can get fast flash/SSD disks at comparable cost? In my opinion, the point of a simple system is to keep things consistent from both a hardware and a software perspective, which reduces your risk of failures and of technical break/fix calls to your vendor.
