Administrator's Guide to VMware Virtual SAN: Introduction to VSAN

Date: Aug 11, 2014

This chapter introduces you to the world of the software-defined datacenter, but with a focus on the storage aspect. The chapter covers the basic premise of the software-defined datacenter and then delves deeper to cover the concept of software-defined storage and associated solutions such as the server storage-area network (Server SAN).

Software-Defined Datacenter

VMware introduced its vision for the software-defined datacenter (SDDC) at VMworld, its annual conference, in 2012. The SDDC is VMware’s architecture for the public and private cloud in which all pillars of the datacenter—compute, storage, and networking (and their associated services)—are virtualized. Virtualizing datacenter components makes the IT team more flexible: lowering operational complexity and cost while increasing availability and agility ultimately shortens the time to market for new services.

To achieve all of that, virtualization of components by itself is not sufficient. The platform used must be capable of being installed and configured in a fully automated fashion. More importantly, the platform should enable you to manage and monitor your infrastructure in a smart and less operationally intense manner. That is what the SDDC is all about! Raghu Raghuram (VMware senior vice president) captured it in a single sentence: The essence of the software-defined datacenter is “abstract, pool, and automate.”

Abstraction, pooling, and automation are all achieved by introducing an additional layer on top of the physical resources, usually referred to as a virtualization layer. Everyone reading this book is probably familiar with the leading product for compute virtualization, VMware vSphere. Fewer people are likely familiar with network virtualization, sometimes referred to as software-defined networking (SDN). VMware offers a solution named NSX, based on technology from its acquisition of Nicira; NSX does for networking what vSphere does for compute. These layers do not just virtualize the physical resources; they also allow you to pool them and provide an application programming interface (API) through which you can automate all operational aspects.

Automation is not just about scripting, however. A significant part of the automation of virtual machine (VM) provisioning (and its associated resources) is achieved through policy-based management. Predefined policies allow you to provision VMs in a quick, easy, consistent, and repeatable manner. The resource characteristics specified on a resource pool or a vApp container exemplify a compute policy. These characteristics enable you to quantify resource policies for compute in terms of reservation, limit, and priority. Network policies can range from security to quality of service (QoS). Unfortunately, storage has thus far been limited to the characteristics provided by the physical storage device, which in many cases did not meet the expectations and requirements of many of our customers.

This book examines the storage component of VMware’s SDDC. More specifically, it covers how a new product called Virtual SAN (VSAN), released with VMware vSphere 5.5 Update 1, fits into this vision. You will learn how it has been implemented and integrated within the current platform, how you can leverage its capabilities, and some of the lower-level implementation details. Before going further, though, it helps to have a general understanding of where VSAN fits into the bigger software-defined storage picture.

Software-Defined Storage

Software-defined storage is a term that has been used and abused by many vendors. Because software-defined storage is currently defined in so many different ways, consider the following quote from VMware:

A software-defined storage product is a solution that abstracts the hardware and allows you to easily pool all resources and provide them to the consumer using a user-friendly user interface (UI) or API. A software-defined storage solution allows you to both scale up and scale out, without increasing the operational effort.

Many hold that software-defined storage is about moving functionality from traditional storage devices to the host. This trend started with virtualized versions of storage devices, such as HP’s StoreVirtual VSA, and evolved into solutions built to run on many different hardware platforms; Nexenta is one example. These solutions were the start of a new era.

Hyper-Convergence/Server SAN Solutions

In today’s world, the hyper-converged/server SAN solutions come in two flavors:

A hyper-converged solution is an appliance type of solution where a single box provides a platform for VMs. This box typically contains multiple commodity x86 servers on which a hypervisor is installed. Local storage is aggregated into a large shared pool by leveraging a virtual storage appliance or a kernel-based storage stack. Typical examples of hyper-converged appliances that are out there today include Nutanix, Scale Computing, SimpliVity, and Pivot3. Figure 1-1 shows what these appliances usually look like: a 2U form factor with four hosts.

Figure 1-1 Commonly used hardware by hyper-converged storage vendors

You might ask, “If these are generic x86 servers with hypervisors installed and a virtual storage appliance, what are the benefits over a traditional storage system?”

One key benefit is that these solutions are sold as a single stock keeping unit (SKU), and typically a single point of contact for support is provided, which can make support discussions much easier. However, a hurdle for many companies is the fact that these solutions are tied to specific hardware and configurations. The hardware used by hyper-converged vendors is often not the same as that of the preferred hardware supplier you may already have, which can lead to operational challenges when it comes to updating/patching or even cabling and racking. In addition, there is a trust issue: some people swear by server Vendor X and would never want to touch any other brand, whereas others won’t come close to server Vendor X. This is where the software-based storage solutions come into play.

Software-only storage solutions come in two flavors. The most common today is the virtual storage appliance (VSA). VSA solutions are deployed as a VM on top of a hypervisor installed on physical hardware, and they allow you to pool the underlying physical resources into a shared storage device. Examples of VSAs include VMware vSphere Storage Appliance, Maxta, HP’s StoreVirtual VSA, and EMC ScaleIO. The big advantage of software-only solutions is that you can usually leverage existing hardware as long as it is on the hardware compatibility list (HCL). In the majority of cases, the HCL is similar to that of the hypervisor itself, except for key components like disk controllers and flash devices.

VSAN is also a software-only solution, but it differs significantly from the VSAs just listed: VSAN sits in a different layer and is not a VSA-based solution.

Introducing Virtual SAN

VMware’s plan for software-defined storage is to focus on a set of VMware initiatives related to local storage, shared storage, and storage/data services. In essence, VMware wants to make vSphere a platform for storage services.

Historically, storage was configured and deployed at the start of a project and was not changed during its life cycle. If there was a need to change some characteristics or features of the logical unit number (LUN) or volume being leveraged by VMs, in many cases the original LUN or volume was deleted and a new one with the required characteristics was created. This was a very intrusive, risky, and time-consuming operation because of the need to migrate workloads between LUNs or volumes, which could take weeks to coordinate.

With software-defined storage, VM storage requirements can be dynamically instantiated. There is no need to repurpose LUNs or volumes. VM workloads and requirements may change over time, and the underlying storage can be adapted to the workload at any time. VSAN aims to provide storage services and service level agreement automation through a software layer on the hosts that integrates with, abstracts, and pools the underlying hardware.

A key factor for software-defined storage is storage policy–based management (SPBM), which is also a key feature of the vSphere 5.5 release. SPBM can be thought of as the next generation of VMware’s storage profiles feature, introduced with vSphere 5.0. Where the initial focus of storage profiles was ensuring that VMs were provisioned to the correct storage device, in vSphere 5.5 SPBM is a critical component of how VMware is implementing software-defined storage.

Using SPBM and vSphere APIs, the underlying storage technology surfaces an abstracted pool of storage space with various capabilities to vSphere administrators for VM provisioning. The capabilities can relate to performance, availability, or storage services such as thin provisioning, compression, replication, and more. A vSphere administrator can then create a VM storage policy (or profile) using a subset of the capabilities that are required by the application running in the VM. At deployment time, the vSphere administrator selects a VM storage policy; SPBM pushes the policy down to the storage layer, and only datastores that can satisfy the requirements placed in the VM storage policy are made available for selection. This means that the VM is always instantiated on the appropriate underlying storage based on the requirements in its VM storage policy.

Should the VM’s workload or I/O pattern change over time, it is simply a matter of applying a new VM storage policy, with requirements and characteristics that reflect the new workload, to that specific VM or even an individual virtual disk. The new policy is then seamlessly applied without any manual intervention from the administrator, in contrast to many legacy storage systems, where a manual migration of VMs or virtual disks to a different datastore would be required. VSAN has been developed to seamlessly integrate with vSphere and the SPBM functionality it offers.
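To make this concrete, here is a minimal PowerCLI sketch of what defining and inspecting such a policy can look like. This assumes PowerCLI 5.5 R2 or later with the SPBM cmdlets available; the policy name, capability values, and vCenter server name are illustrative only:

```powershell
# Connect to vCenter first, e.g.: Connect-VIServer -Server vcenter.lab.local
# (the server name is a placeholder)

# Look up two of the capabilities the VSAN provider surfaces through SPBM
$ftt    = Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate"
$stripe = Get-SpbmCapability -Name "VSAN.stripeWidth"

# Build a rule set: tolerate one host failure, stripe each replica across two disks
$ruleSet = New-SpbmRuleSet -AllOfRules @(
    (New-SpbmRule -Capability $ftt    -Value 1),
    (New-SpbmRule -Capability $stripe -Value 2)
)

# Create the VM storage policy that SPBM pushes down to the storage layer
$policy = New-SpbmStoragePolicy -Name "Gold-FTT1" `
    -Description "Tolerate 1 host failure, stripe width 2" `
    -AnyOfRuleSets $ruleSet

# List only the datastores that can satisfy the policy's requirements
Get-SpbmCompatibleStorage -StoragePolicy $policy
```

The last cmdlet illustrates the selection behavior described above: only datastores that understand and can satisfy the policy are returned.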

What Is Virtual SAN?

VSAN is a new storage solution from VMware, released as a beta in 2013 and made generally available to the public in March 2014. VSAN is fully integrated with vSphere. It is an object-based storage system and a platform for VM storage policies that aims to simplify VM storage placement decisions for vSphere administrators. It fully supports and is integrated with core vSphere features such as vSphere High Availability (HA), vSphere Distributed Resource Scheduler (DRS), and vMotion, as illustrated in Figure 1-2.

Figure 1-2 Simple overview of a VSAN cluster

VSAN’s goal is to provide both resiliency and scale-out storage functionality. It can also be thought of in the context of QoS in so far as VM storage policies can be created that define the level of performance and availability required on a per-VM, or even virtual disk, basis.

VSAN is a software-based distributed storage solution that is built directly into the hypervisor. Although not a virtual appliance like many of the other solutions out there, VSAN can best be thought of as a kernel-based solution that is included with the hypervisor. Technically, however, this is not completely accurate: components critical for performance and responsiveness, such as the data path and clustering, are in the kernel, while other components that collectively can be considered part of the “control plane” are implemented as native user-space agents. Nevertheless, with VSAN there is no need to install anything other than the software you are already familiar with: VMware vSphere.

VSAN is about simplicity, and when we say simplicity, we do mean simplicity. Want to try out VSAN? It is truly as simple as creating a VMkernel network interface card (NIC) for VSAN traffic and enabling VSAN at the cluster level, as shown in Figure 1-3. Of course, there are certain recommendations and requirements to optimize your experience, as described in further detail in Chapter 2, “VSAN Prerequisites and Requirements for Deployment.”

Figure 1-3 Two-click enablement
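For administrators who prefer scripting to clicking, the same two steps can be expressed in PowerCLI. The following is a sketch, assuming PowerCLI 5.5 R2 or later; the cluster name and VMkernel adapter name are placeholders for your own environment:

```powershell
# Step 1: tag a VMkernel adapter for VSAN traffic on every host in the cluster
foreach ($esx in Get-Cluster "VSAN-Cluster" | Get-VMHost) {
    Get-VMHostNetworkAdapter -VMHost $esx -Name "vmk1" |
        Set-VMHostNetworkAdapter -VsanTrafficEnabled $true -Confirm:$false
}

# Step 2: enable VSAN at the cluster level; Automatic mode lets VSAN claim
# all eligible empty local disks on the member hosts
Set-Cluster -Cluster "VSAN-Cluster" -VsanEnabled $true `
    -VsanDiskClaimMode Automatic -Confirm:$false
```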

Now that you know it is easy to use and simple to configure, what are the benefits of a solution like VSAN? What are the key selling points?

That sounds compelling, doesn’t it? Of course, there is a time and place for everything; Virtual SAN 1.0 is aimed at specific use cases.

Now that you know what VSAN is, it’s time to see what it looks like from an administrator’s point of view.

What Does VSAN Look Like to an Administrator?

When VSAN is enabled, a single shared datastore is presented to all hosts that are part of the VSAN-enabled cluster. This is the strength of VSAN; it is presented as a datastore. Just like any other storage solution out there, this datastore can be used as a destination for VMs and all associated components, such as virtual disks, swap files, and VM configuration files. When you deploy a new VM, you will see the familiar interface and a list of available datastores, including your VSAN-based datastore, as shown in Figure 1-4.

Figure 1-4 Just a normal datastore
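Scripted provisioning works the same way. The following one-liner is a hedged PowerCLI example; the VM name, host name, and sizes are hypothetical, while “vsanDatastore” is the default name given to the VSAN datastore:

```powershell
# Deploy a VM onto the VSAN datastore exactly as you would onto any datastore
New-VM -Name "web01" -VMHost (Get-VMHost "esxi-01.lab.local") `
    -Datastore (Get-Datastore "vsanDatastore") -DiskGB 40 -MemoryGB 4
```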

This VSAN datastore is formed out of host-local storage resources. Typically, all hosts within a VSAN-enabled cluster contribute performance (flash) and capacity (magnetic disks) to this shared datastore, which means that when your cluster grows, your datastore grows with it. VSAN is what is called a scale-out storage system (you add capacity by adding hosts to the cluster), but it also allows scaling up (adding resources to a host).

Each host that wants to contribute storage capacity to the VSAN cluster will require at least one flash device and one magnetic disk. At a minimum, VSAN requires three hosts in your cluster to contribute storage; other hosts in your cluster could leverage these storage resources without contributing storage resources to the cluster itself. Figure 1-5 shows a cluster that has four hosts, of which three (esxi-01, esxi-02, and esxi-03) contribute storage and a fourth does not contribute but only consumes storage resources. Although it is technically possible to have a nonuniform cluster and have a host not contributing storage, we do highly recommend creating a uniform cluster and having all hosts contributing storage for overall better utilization, performance, and availability.

Figure 1-5 Nonuniform VSAN cluster example
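If you choose manual disk claiming rather than the automatic mode shown earlier, disk groups can also be created by hand. The following sketch assumes PowerCLI 5.5 R2 or later with the VSAN cmdlets; the host name and device canonical names are placeholders (you can list real candidates with Get-ScsiLun):

```powershell
# Build a disk group on one host: one flash device for cache/buffer
# plus one or more magnetic disks for capacity
$esx = Get-VMHost "esxi-01.lab.local"
New-VsanDiskGroup -VMHost $esx `
    -SsdCanonicalName "naa.5000000000000001" `
    -DataDiskCanonicalName "naa.5000000000000002", "naa.5000000000000003"

# Verify what the host now contributes to the VSAN datastore
Get-VsanDiskGroup -VMHost $esx
```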

Today’s boundary for VSAN, in terms of both size and connectivity, is a vSphere cluster. This means that at most 32 hosts can be connected to a VSAN datastore. Each host can run a supported maximum of 100 VMs, allowing for a combined total of 3,200 VMs within a 32-host VSAN cluster, of which 2,048 VMs can be protected by vSphere HA.

As you can imagine, with just regular magnetic disks it would be difficult to provide a good user experience when it comes to performance. To provide an optimal user experience, VSAN relies on flash. Flash resources are used for read caching and write buffering: every write I/O goes to flash first and is eventually destaged to magnetic disks, while whether a read is served from flash depends on whether the requested data is already cached, although in a perfect world all read I/O would come from flash. Chapter 5, “Architectural Details,” describes the caching and buffering mechanisms in much greater detail.

To ensure VMs can be deployed with certain characteristics, VSAN enables you to set policies on a per-virtual disk or a per-VM basis. These policies help you meet the defined service level objectives (SLOs) for your workload. These can be performance-related characteristics such as read caching or disk striping, but can also be availability-related characteristics that ensure strategic replica placement of your VM’s disks (and other important files).

If you have worked with VM storage policies in the past, you might now wonder whether all VMs stored on the same VSAN datastore will need to have the same VM storage policy assigned. The answer is no. VSAN allows you to have different policies for VMs provisioned to the same datastore and even different policies for disks from the same VM.
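As a quick illustration, the following PowerCLI sketch assigns one policy to a VM and a different policy to one of its virtual disks. It assumes a PowerCLI version that includes the SPBM entity-configuration cmdlets, and the policy, VM, and disk names are hypothetical:

```powershell
$gold   = Get-SpbmStoragePolicy -Name "Gold-FTT1"
$silver = Get-SpbmStoragePolicy -Name "Silver-FTT1"  # assumed to exist already

$vm = Get-VM -Name "web01"

# Apply one policy to the VM as a whole...
$vm | Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $gold

# ...and a different policy to just one of its virtual disks
Get-HardDisk -VM $vm -Name "Hard disk 2" |
    Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $silver
```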

As stated earlier, by leveraging policies, the level of resiliency can be configured at a per-virtual disk level of granularity. How many hosts and disks a mirror copy resides on depends on the selected policy. Because VSAN uses mirror copies defined by policy to provide resiliency, it does not require a local RAID set. In other words, hosts contributing storage capacity to VSAN should simply provide a set of disks to VSAN.

Whether you have defined a policy to tolerate a single host failure or, for instance, a policy that will tolerate up to three hosts failing, VSAN will ensure that enough replicas of your objects are created. The following example illustrates how this is an important aspect of VSAN and one of the major differentiators between VSAN and most other virtual storage solutions out there.

EXAMPLE: We have configured a policy that can tolerate one failure and created a new virtual disk. This means that VSAN will create two identical storage objects and a witness. The witness is a component tied to the VM that allows VSAN to determine who should win ownership in the case of a failure. If you are familiar with clustering technologies, think of the witness as a quorum object that arbitrates ownership in the event of a failure. Figure 1-6 may help clarify these sometimes-difficult-to-understand concepts. It illustrates, at a high level, a VM with a virtual disk that can tolerate one failure: the failure of a host, NIC, disk, or flash device, for instance.

Figure 1-6 VSAN failures to tolerate

In Figure 1-6, the VM’s compute resides on the first host (esxi-01), and its virtual disks reside on the other hosts (esxi-02 and esxi-03) in the cluster. In this scenario, the VSAN network is used for storage I/O, allowing the VM to freely move around the cluster without the need for storage components to be migrated along with the compute. This does, however, result in the first requirement for implementing VSAN: VSAN requires at minimum one dedicated 1Gbps NIC port, although VMware recommends 10GbE for the VSAN network.
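As a quick sanity check for this requirement, you can confirm that every host in the cluster has a VMkernel adapter tagged for VSAN traffic. A small PowerCLI sketch (the cluster name is a placeholder):

```powershell
# List the VSAN-enabled VMkernel adapters across the cluster
Get-Cluster "VSAN-Cluster" | Get-VMHost |
    Get-VMHostNetworkAdapter -VMKernel |
    Where-Object { $_.VsanTrafficEnabled } |
    Select-Object VMHost, Name, IP, PortGroupName
```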

Yes, this might still sound complex, but in all fairness, VSAN masks away all the complexity, as you will learn as you progress through the various chapters in this book.

Summary

To conclude, vSphere Virtual SAN (VSAN) is a brand-new, hypervisor-based distributed storage platform that enables convergence of compute and storage resources. It enables you to define VM-level granular SLOs through policy-based management. It allows you to control availability and performance in a way never seen before, simply and efficiently.

This chapter just scratched the surface. Now it’s time to take it to the next level. Chapter 2 describes the requirements for installing and configuring VSAN.
