How Clearpath Uses vCloud Director 1.5 to Automate Our Lab Resources

In a lab environment, you have competing priorities. On one hand, individual users want endless amounts of compute, network, and storage resources. They also want consumption of those resources, whether that means delivering individual applications, operating systems, or the entire stack, to be quick and easy. On the other hand, you have a limited amount of compute, network, and storage resources available, and since it's your lab, you don't want to spend tons of time manually delivering solutions to your end users. You also want to have the ability to segregate these users from one another so their individual deployments don't step all over each other.

Enter VMware vCloud Director:
VMware vCloud Director

With vCloud Director, we can give our lab users access to the underlying VMware vSphere clusters in a way that is easy for them to use, equitable (or at least as equitable as we want to make it), and secure.

Physical Resources

When architecting a vCloud Director lab environment, you first need to look at the underlying hardware upon which you will be running VMware vSphere. After all, a solid foundation is the most important part of building anything.

Servers
We started with three (3) Cisco UCS B200 M2 blades with:

    • 2x Intel Xeon E5620 CPUs – 8 total cores at 2.4GHz each (16 logical CPUs with HyperThreading enabled)
    • 48GB RAM
    • 8x 10GbE NICs via Cisco UCS M81KR Virtual Interface Card (VIC)
    • No local storage (boot from SAN)
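
For a rough sense of what those three blades give the cluster in aggregate, here's a quick back-of-the-envelope tally in Python (a minimal sketch using only the numbers from the list above):

    # Back-of-the-envelope capacity for the three UCS B200 M2 blades listed above.
    BLADES = 3
    CORES_PER_BLADE = 8           # 2x quad-core Xeon E5620
    LOGICAL_CPUS_PER_BLADE = 16   # with HyperThreading enabled
    RAM_GB_PER_BLADE = 48

    print("Physical cores:", BLADES * CORES_PER_BLADE)          # 24
    print("Logical CPUs:  ", BLADES * LOGICAL_CPUS_PER_BLADE)   # 48
    print("Total RAM (GB):", BLADES * RAM_GB_PER_BLADE)         # 144

Keep in mind the hypervisor itself consumes a slice of that before any vApps run.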
Storage

We have a couple of EMC VNX arrays presenting LUNs for VM storage, for ISOs and VM templates, and for other EMC products such as RecoverPoint, Avamar, and Replication Manager.

Network

For our vCloud external network (I'll get into our vCloud network architecture later), we provisioned a single VLAN with a /24 subnet. Our UCS chassis has four (4) 1GbE uplinks to our core lab network.
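
If you're sizing your own external network, Python's standard ipaddress module makes the /24 math explicit (the subnet below is a placeholder, not our actual lab VLAN):

    # How much room a /24 external network really gives you.
    import ipaddress

    external_net = ipaddress.ip_network("192.0.2.0/24")   # placeholder subnet
    print(external_net.num_addresses)                      # 256 total addresses
    print(sum(1 for _ in external_net.hosts()))            # 254 usable host addresses

Remember to subtract the gateway and any statically assigned infrastructure addresses before deciding how large a static IP pool to hand over to vCloud Director.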

vSphere Configuration

Some basics on our vSphere configuration are as follows:

Cluster Configuration

    • DRS in Fully Automated mode
        • Requirement for vCloud Director
    • HA enabled
    • EVC disabled

VMware vSphere configuration
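
If you'd rather verify those cluster settings from a script than click through the vSphere Client, a minimal sketch using the pyvmomi Python bindings would look something like this (the vCenter hostname and credentials are placeholders):

    # Confirm DRS is Fully Automated and HA is enabled on each cluster.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab only; don't skip cert checks in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    clusters = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in clusters.view:
        drs = cluster.configuration.drsConfig
        ha = cluster.configuration.dasConfig
        print(cluster.name,
              "DRS enabled:", drs.enabled,
              "behavior:", drs.defaultVmBehavior,   # expect 'fullyAutomated' for vCloud Director
              "HA enabled:", ha.enabled)

    Disconnect(si)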

Storage Configuration

    • No Datastore Clusters configured
        • Storage DRS not supported with vCloud Director

Network Configuration

    • Two NICs on vSwitch0
        • Management vmkernel port
        • vCenter Network virtual machine port group (which ended up being an all-purpose port group for management VMs)
    • Two NICs on vSwitch1
        • vMotion vmkernel port

VMware vCloud Network Configuration

    • Four NICs on dvSwitch0
        • All virtual machine dvPortGroups

vCloud Director dvSwitch Configuration
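
To sanity-check that NIC layout across all three hosts, a similar pyvmomi sketch can print which vmnics back each standard vSwitch and the dvSwitch (again, the hostname and credentials are placeholders):

    # Print the physical NICs behind vSwitch0, vSwitch1, and dvSwitch0 on every host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        net = host.config.network
        for vsw in net.vswitch:                   # standard vSwitches (vSwitch0, vSwitch1)
            print(host.name, vsw.name, vsw.pnic)
        for proxy in net.proxySwitch:             # distributed switches (dvSwitch0)
            print(host.name, proxy.dvsName, proxy.pnic)

    Disconnect(si)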

vSphere Licenses
We have deployed vSphere 5 Enterprise Plus and vCenter Server 5 Standard licenses.

vCloud Director Architecture

Let's start off with a simple diagram.

vCloud Director Architecture

Visio skills notwithstanding, this is the basic layout of our vCloud Director architecture. The pieces are:

    • Three (3) VMware vSphere (ESXi) 5.0 update 1 build 623860 hosts
    • One (1) vCenter Server 5.0 update 1 build 623373 virtual machine
    • One (1) vShield Manager 5.0.1 build 638924 virtual appliance
    • One (1) vCloud Director server 1.5.1 build 622844 virtual machine
        • Installed on CentOS 5.6
    • One (1) MS SQL Server virtual machine
        • vCloud DB
        • vCenter DB
        • Other misc. DBs

These are the major building blocks for any vCloud Director deployment. Since our lab is rather small, we only need a single instance of vCenter Server + vShield Manager, a single vCloud Director server, and a single small VM for MS SQL.
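
Once the cell is up, a quick way to make sure everything is wired together is to hit the vCloud Director REST API directly. The sketch below lists the API versions the cell advertises and then logs in; the hostname, org, and credentials are placeholders, and it assumes the Python requests package:

    # Ask the cell which API versions it supports, then authenticate.
    import requests

    VCD = "https://vcloud.lab.local"   # placeholder cell address

    # Unauthenticated: supported API versions.
    print(requests.get(VCD + "/api/versions", verify=False).text)   # verify=False: lab self-signed cert

    # Authenticate: user@org via HTTP basic auth against /api/sessions (vCD 1.5).
    resp = requests.post(VCD + "/api/sessions",
                         auth=("administrator@System", "password"),
                         headers={"Accept": "application/*+xml;version=1.5"},
                         verify=False)
    token = resp.headers["x-vcloud-authorization"]   # send this header on later requests
    print("Logged in; session token received.")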

BEST PRACTICE ALERT: CentOS is not supported by VMware for vCloud Director servers. As of this writing, the following operating systems are supported, per the vCloud Director 1.5 Installation and Configuration Guide:

Supported vCloud Director Server Operating Systems

    • Red Hat Enterprise Linux 5 (64 bit), Update 4
    • Red Hat Enterprise Linux 5 (64 bit), Update 5
    • Red Hat Enterprise Linux 5 (64 bit), Update 6

BEST PRACTICE ALERT: The best practice is to set aside a separate management cluster of three (3) or more hosts to run all of the components supporting your vCloud Director environment, such as vCenter Servers, vShield Managers, and vCloud Director cells. Since we have limited resources (a single vSphere cluster), we were not able to follow this best practice.

Provider vDCs
Given the small implementation, it didn't make sense to have multiple Provider vDCs, so we have only one. Since I've already gone over the physical infrastructure above, I won't reiterate it here.

vCloud Organization Structure
As we thought about how we wanted to lay everything out, we had to decide how we wanted to map users to organizations. Since this lab is going to be used by engineers to demo new products, do testing, etc., we decided to give each user his or her own Organization. Now, this is not going to apply to everyone or even to most lab deployments. Our thinking when making this decision was primarily around the disparate network configurations each user might need, as well as making sure that each user making configuration changes to their own Organization Networks wouldn't impact those of other users. Your users probably aren't going to need to make these kinds of changes, so they'll likely fit better into more standard organization layouts.
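
Because each engineer maps to an Organization, listing the Organizations is effectively listing the lab's users. Here's a small sketch against the vCloud API, reusing the same placeholder host and credentials as above:

    # List every Organization visible to the logged-in user.
    import requests
    import xml.etree.ElementTree as ET

    VCD = "https://vcloud.lab.local"                      # placeholder
    HEADERS = {"Accept": "application/*+xml;version=1.5"}
    NS = {"vcloud": "http://www.vmware.com/vcloud/v1.5"}

    login = requests.post(VCD + "/api/sessions",
                          auth=("administrator@System", "password"),
                          headers=HEADERS, verify=False)
    HEADERS["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

    org_list = ET.fromstring(requests.get(VCD + "/api/org",
                                          headers=HEADERS, verify=False).content)
    for org in org_list.findall("vcloud:Org", NS):
        print(org.get("name"), org.get("href"))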

Organization vDCs
As mentioned in the vCloud Organization Structure section above, each Organization ties directly to each engineer using our lab resources. I'll go over mine in this section.

vCloud Virtual Data Centers

I have a single Organization vDC using the Pay-As-You-Go allocation model. The vDC was set up with unlimited access to cluster resources; however, none of those resources are guaranteed. At this point, users are left to police themselves on resource usage.

BEST PRACTICE ALERT: As you might guess, giving users or organizations free rein over lab resources is not the best idea. Since our lab users are few in number and technically savvy enough to know not to hog resources others will need, we decided to give users as much flexibility in resource usage as possible. This is ultimately a policy decision, and one that shouldn't be taken lightly.

vCloud Network Architecture
From the get-go, everything network-wise was designed with vCloud Director Network Isolated (vCD-NI) networks in mind. vCD-NI networks give me and our users the flexibility needed to do our demos and our testing while using as few physical network resources as possible in the process. Here's how it's set up:

Network Pools
As we referenced above, using vCD-NI for Organization Networks was the direction we chose, so we have a single vCD-NI Network Pool with 20 networks in the pool. One of the nice things about this is that if we run out of networks in the pool, we can increase the pool on the fly.
vCloud Network Pools

As shown below, 15% of those networks (3) have already been instantiated.

vCloud Network Pool Progress

External Networks
We have a single VLAN with a /24 subnet for our external network.

vCloud External Networks

That external network maps to a vSphere port group already defined.

vSphere Network Port Group

Organization Networks
For this section, I'll focus on my Organization Networks.

VMware Organization Networks

As you can see, three (3) Organization Networks have been instantiated for our Organization. There is a direct external network with a small allocation of IPs from our external network in case I run into anything that can't handle being on the routed network. The internal network is for any Organization-wide vApps.

vCloud Guests vSphere

The direct network is connected to the vCloud Guests vSphere port group with no network pool, because it provides direct access to the external network. Both the internal and routed networks are from the vCD-NI Network Pool, but only the routed network has access to the vCloud Guests vSphere port group, so the internal network, as you might imagine, has no external network access.
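
For completeness, here's a hedged sketch of how you could enumerate those Organization Networks through the vCloud API rather than the UI, by following the Organization entity's "down" links. The org name, host, and credentials are placeholders, and the media-type filter is based on my reading of the vCD 1.5 schema:

    # Enumerate the Organization Networks attached to one Organization.
    import requests
    import xml.etree.ElementTree as ET

    VCD = "https://vcloud.lab.local"                      # placeholder
    HEADERS = {"Accept": "application/*+xml;version=1.5"}
    NS = {"vcloud": "http://www.vmware.com/vcloud/v1.5"}

    login = requests.post(VCD + "/api/sessions",
                          auth=("engineer@MyOrg", "password"),   # placeholder user@org
                          headers=HEADERS, verify=False)
    HEADERS["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

    # Find the Organization by name, then fetch its entity document.
    org_list = ET.fromstring(requests.get(VCD + "/api/org",
                                          headers=HEADERS, verify=False).content)
    org_href = next(o.get("href") for o in org_list.findall("vcloud:Org", NS)
                    if o.get("name") == "MyOrg")                 # placeholder org name
    org = ET.fromstring(requests.get(org_href, headers=HEADERS, verify=False).content)

    # Organization Networks show up as "down" links with an orgNetwork media type.
    for link in org.findall("vcloud:Link", NS):
        if link.get("rel") == "down" and "orgNetwork" in (link.get("type") or ""):
            print(link.get("name"), link.get("href"))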

vApps

Delivering applications to users is really what this is all about, so I would be remiss to skip over the subject of vApps. In this section, I'm going to focus on one of our more highly used vApps, a vSphere demo environment.

vApp_vSphere_Demo
This is a simple vApp, with four virtual machines running inside. Below are the contents:

    • 2x ESXi virtual machines (with VT-x passthrough enabled – required to run 64-bit nested virtual machines)
    • A Windows Server 2008 R2 virtual machine running Active Directory Domain Services for test.local, as well as DNS
    • A Windows Server 2008 R2 virtual machine running vCenter Server and Microsoft iSCSI Target software for shared storage

VMware vSphere vApps

This vApp is deployed via Fast Provisioning, which means linked clones are made of the vApp in the Catalog, rather than full clones, saving on precious SAN space.

Networking
Since I want this vApp to have no access to the outside world (in fact, I want to be able to deploy multiple versions of the vApp with the same IP scheme on each one), it uses only vApp Networking. Here's how it's set up:


vApp Networking
The "vSphere De..." network shown above is actually the dvPortGroup shown below:

dvport group

Each time the vApp is deployed from the Catalog, a new dvPortGroup is created, thereby allowing the VMs in the vApp to run on different hosts, while keeping the traffic for that vApp segregated from all other VMs running in the vSphere infrastructure.
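
This is also where the automation pays off: deploying a fresh copy of the vApp is a single API call. Below is a rough sketch of the instantiateVAppTemplate action in the vCloud 1.5 API; the vDC and template hrefs, host, and credentials are all placeholders, and the XML body is the minimal form of InstantiateVAppTemplateParams as I understand the schema:

    # Deploy a new copy of a catalog vApp template into an Org vDC.
    import requests

    VCD = "https://vcloud.lab.local"                      # placeholder
    HEADERS = {"Accept": "application/*+xml;version=1.5"}

    login = requests.post(VCD + "/api/sessions",
                          auth=("engineer@MyOrg", "password"),
                          headers=HEADERS, verify=False)
    HEADERS["x-vcloud-authorization"] = login.headers["x-vcloud-authorization"]

    vdc_href = VCD + "/api/vdc/PLACEHOLDER-VDC-ID"                         # placeholder href
    template_href = VCD + "/api/vAppTemplate/vappTemplate-PLACEHOLDER-ID"  # placeholder href

    body = """<InstantiateVAppTemplateParams xmlns="http://www.vmware.com/vcloud/v1.5"
        name="vApp_vSphere_Demo-copy" deploy="true" powerOn="false">
      <Source href="{0}"/>
    </InstantiateVAppTemplateParams>""".format(template_href)

    resp = requests.post(vdc_href + "/action/instantiateVAppTemplate",
                         data=body,
                         headers=dict(HEADERS,
                                      **{"Content-Type": "application/vnd.vmware.vcloud."
                                         "instantiateVAppTemplateParams+xml"}),
                         verify=False)
    print(resp.status_code)   # 201 Created returns the new vApp entity with its Task

Each successful call gives you another copy of vApp_vSphere_Demo, complete with its own isolated dvPortGroup, without ever opening the vSphere Client.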

Conclusion

VMware vCloud Director is a great way to automate lab resources. It allows us to spin up new test and demo environments in a matter of minutes, rather than taking hours or days to rebuild each individual environment. It also allows us to be as flexible as we need to be with virtual networking to run multiple competing test and demo environments in the same space without provisioning tons of physical network resources. 

