In Part 1 of Exchange 2010 on vSphere 4, Best Practices, we looked into the sizing requirements for our Client Access, Hub Transport, and Mailbox server roles. For this entry, we'll take a look at our vSphere 4 environment and apply VMware and Microsoft-supported best practices to our ESX cluster. Keep in mind that these configurations have been tailored to apply to all use cases inside of a VMware environment, so whether your goal is to support 50 users on a single Exchange 2010 virtual machine or 50,000 users across several ESX clusters, the same measures should be applied to your VMs and host servers.
In our next entry, we'll take our sizing calculations and vSphere environment configurations and apply them to an example scenario. This way we can get a good understanding of how it all ties together. It's important to note that I won't be diving into Exchange 2010 clustering principles or vSphere architecture fundamentals. Those topics are too vast to cover and ultimately take away from the goal of this series. Our focus is on making sure our Exchange 2010 environment performs at top speed while maintaining supportability from both VMware and Microsoft.
Exchange 2010 Virtualization Rule #2: Dedicate resources, not servers
CPU & Memory
One of the common misconceptions around Exchange virtualization is that it's nearly impossible to achieve because you can't provide enough horsepower to the application without deploying it on physical servers. While that held true several years ago, hardware enhancements in CPU, memory capacity, and general throughput have laid that problem to rest. With six-core CPUs readily available and hundreds of gigabytes of memory capacity (per server) at our disposal, we don't necessarily need to focus on CPU affinity tactics to ensure our VMs perform well. But I'll add that you should always ensure your host has sufficient CPU and memory resources to power your Exchange VMs after properly sizing out your environment. To aid in our efforts, we'll apply some easy tactics for CPU and memory performance:
Resource Pools around our Exchange 2010 VMs to ensure reservations for CPU and Memory
- Especially in large environments, resource pools can be leveraged to ensure a minimum level of CPU and memory is always available regardless of where the VMs live inside of your vSphere cluster.
One of the most overlooked aspects of creating and deploying a VM is the type of network adapter and the network segment we place our Exchange 2010 VMs on. While this seems trivial to organizations deploying 10GbE networks, it's still a rather important step to ensure that our VMs perform at peak, especially when clustering our back-end Mailbox servers and utilizing a heartbeat path. For this task we leverage VMware's VMXNET3 adapters and dedicated VLANs or network switches for Exchange cluster networks. If you're considering iSCSI for storage presentation, then it's an ABSOLUTE MUST to give iSCSI traffic, at a minimum, a dedicated VLAN for storage presentation to our ESX hosts. In most of the Exchange 2010 environments I've consulted on, I've been able to leverage Fibre Channel presentation. But in a few environments where the customer fell into the SMB space and utilized iSCSI in their vSphere environment, I've deployed the Exchange 2010 environment with dedicated switches for iSCSI traffic. I always prefer that method, but if you're in a smaller environment and have verified your throughput requirements are within reason, then dedicated VLANs can also be employed.
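As a quick sketch of the dedicated-VLAN approach, the classic ESX service console commands below carve out a vSwitch and a VLAN-tagged port group for iSCSI traffic. The switch name, uplink NIC, port group name, and VLAN ID are all hypothetical; substitute your own.

```shell
# Hypothetical names/IDs for illustration; run from the ESX service console
esxcfg-vswitch -a vSwitch2                 # create a new vSwitch for storage traffic
esxcfg-vswitch -L vmnic3 vSwitch2          # attach a dedicated physical uplink
esxcfg-vswitch -A "iSCSI" vSwitch2         # add a port group for iSCSI
esxcfg-vswitch -v 200 -p "iSCSI" vSwitch2  # tag the port group with VLAN 200
```

In larger shops you'd do the same thing through vCenter (or with dedicated physical switches instead of VLAN tagging), but the end state is identical: iSCSI traffic isolated from VM and management networks.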
So that said, our biggest concern now becomes disk throughput and I/O. Enter VMware vSphere 4.x. The vSphere hypervisor is now capable of over 350,000 I/O operations per second on a single host when processing VM requests from a VMFS volume. Don't think it's possible? Check out the following performance whitepaper from VMware's camp:
Another aid in our quest for Exchange virtualization is in how Microsoft has optimized the ESE database architecture and reduced overall I/O requests. As to how they've achieved this, here's a short list of enhancements to the Exchange 2010 architecture:
- Native 64-bit architecture allows more of the database to load into memory
- Increased 32KB database page size allows more data per page compared to the legacy 8KB page size
- Attachment compression and decompression lets the CPU handle attachment store and retrieval without issuing large database queries for fetch operations
- The database index is updated only when it's actively presenting content, so older items accessed via an Outlook client can be flushed to free up background processing
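To put the page-size change in perspective, here's a back-of-the-envelope sketch (the 256KB item size is purely illustrative) of how many page reads it takes to pull a large item off disk under each page size:

```shell
# Illustrative only: page reads needed to fetch a 256KB item
ITEM_KB=256
OLD_PAGE_KB=8    # legacy ESE page size (Exchange 2007 and earlier)
NEW_PAGE_KB=32   # Exchange 2010 ESE page size
OLD_PAGES=$(( (ITEM_KB + OLD_PAGE_KB - 1) / OLD_PAGE_KB ))  # round up
NEW_PAGES=$(( (ITEM_KB + NEW_PAGE_KB - 1) / NEW_PAGE_KB ))
echo "8KB pages: $OLD_PAGES reads; 32KB pages: $NEW_PAGES reads"
```

Fewer, larger, more sequential I/Os is the general theme of the Exchange 2010 ESE changes, and it's a big part of why the per-mailbox IOPS requirement dropped so sharply from Exchange 2007.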
With throughput and I/O enhancements on our side, we now turn our focus to disk spindle count and type in order to properly size the I/O for the Exchange 2010 databases and logs. Back in entry 1 of the series, we found our read:write ratios and IOPS requirements per the calculator. As I don't want to turn this into a storage design discussion, let's assume we've designed and implemented a disk layout on our SAN that accommodates dedicated database and log LUNs. The key word here is DEDICATED. While we don't want to dedicate our compute resources, i.e. servers, we absolutely want to ensure we have dedicated disk resources. Following that mantra, here are the best practice principles for disk presentation and configuration of a virtual Exchange 2010 server:
Dedicated Disk Groups and LUNs for Database and Log drives for each Exchange 2010 Mailbox server VM
- NOTE: You can absolutely place OS disks on a shared vSphere Datastore, but I always recommend keeping your OS volumes on FC storage as opposed to SATA, with no more than 10 VMs per Datastore to ensure you don't run into contention issues.
PVSCSI adapters for our Exchange 2010 Mailbox Server VMs
- This is critical for our database and log drives, but can be considered optional for the OS disks. PVSCSI is an option exclusive to the vSphere 4.x platform and ensures maximum throughput and I/O performance.
Eager-Zeroed Thick Disk types for your Database and Log VMDK volumes
- This procedure ensures that the blocks associated with your Exchange 2010 Mailbox VM's disk files are zeroed out inside of the VMDK up front. Normal Thick (lazy-zeroed) disks only commit space up front and zero blocks out just prior to the first write operation. So once you've created your Mailbox VMs and allocated your database and log drives, be sure to convert the disks to Eager-Zeroed Thick using the following command: vmkfstools -k <disk>.vmdk
- While discussing disk format, it's important to note that Microsoft does NOT support thin-provisioned disks or snapshots taken of VMs running Exchange 2010. So use Thick format for all OS volumes and Eager-Zeroed Thick for database and log volumes for optimal performance and full support.
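Putting the disk guidance above together, here's what it might look like from the ESX host's command line. The datastore path, VMDK name, and 500GB size are hypothetical placeholders for your own environment:

```shell
# Hypothetical path/size: create a database VMDK as eager-zeroed thick from the start
vmkfstools -c 500G -d eagerzeroedthick /vmfs/volumes/EXCH_DB1/DB1.vmdk

# Or convert an existing lazy-zeroed thick disk in place (VM powered off)
vmkfstools -k /vmfs/volumes/EXCH_DB1/DB1.vmdk
```

Creating the disk as eagerzeroedthick up front avoids the conversion pass entirely, but either route leaves you with the fully pre-zeroed format that Microsoft support expects.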
Now that we've ensured our VMs will perform optimally, there are still a few things we need to do inside of our vSphere cluster to ensure success. Primarily this concerns ESX cluster options around HA and DRS. Microsoft's stance is that they don't support mixing hypervisor-level clustering solutions in your vSphere environment with the Exchange 2010 availability options. So while Client Access and Hub Transport servers are fair game inside of the rules, clustered DAG Mailbox servers are off limits in terms of HA and DRS. If you're curious as to whether or not it'll work, the answer is yes. I've tested both HA and DRS and verified Exchange 2010 DAG functionality with both enabled, but this is about ensuring we remain in compliance when calling Microsoft in the event we have a problem with our Exchange environment. So for Exchange 2010 Mailbox VMs in a DAG, we'll need to disable failover and vMotion for those VMs by leveraging our ESX cluster rules and per-VM settings.
And with that, that's a wrap on Part 2 of the series. Check out Part 3, where we'll walk through an example deployment of an Exchange 2010 environment inside of a vSphere 4 datacenter.