Part 1 of our Exchange 2010 on vSphere 4 Best Practices series discussed proper Exchange 2010 sizing and requirements around the Client Access, Hub Transport, and Mailbox Server roles. In part 2, we focused on our vSphere 4 environment and applied VMware's and Microsoft's support best practices to our ESX cluster and Virtual Machines. For part 3, we'll be adding everything up and walking through an example deployment following the guidelines set in the previous installments. Without further delay, let's dig in…
Exchange 2010 Virtualization – RULE #3 – Follow rules 1 and 2 :)
Sizing Up Exchange 2010
We begin with an example scenario based on a fictitious company aptly named "WhereVM." This happens to be one of my test domains, so it serves its purpose well here… Now, let's say that WhereVM has 1,000 employees who all work out of a central office, so there are no regional office considerations in the mix. That said, WhereVM has the following requirements for its new email environment:
1. Exchange 2010 must support clustering using Microsoft's Database Availability Group (DAG) model for service redundancy and high availability of Mailbox Server systems across two (2) servers.
2. Exchange 2010 must support OWA, RPC over HTTPS (or OutlookAnywhere), and ActiveSync mobile devices, but these services must be segregated on a separate server (or servers) other than the Mailbox Server systems.
3. Exchange 2010 must support two (2) classes of users: VIP users with a 5GB mailbox, and Regular users with a 1GB mailbox. There will be 100 VIP users and 900 Regular users, for a total of 1,000 employees at WhereVM.
4. Exchange 2010 must support traditional daily full backups, but sustain backup failures of up to three (3) days.
a. NOTE: This plays directly into how we size our Database Log volumes on our Mailbox Server systems, so it's always important to discuss backup strategies and tolerances when sizing Exchange.
5. Exchange 2010 must support an average of 100,000 message transactions per day across all user accounts, based on each user sending 20 and receiving 80 messages daily.
6. Company growth is expected to trend at a 5% average.
7. For RAID groups, we'll be placing the Databases on 10k RPM, 600GB drives and Logs on 15k RPM, 300GB drives. OS volumes will reside on shared 10k RPM, 600GB drives.
8. Each Mailbox node will be provisioned with 4 vCPUs.
9. And last but not least, Exchange 2010 must be deployed on VMware vSphere 4.1.
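Before opening the calculator, we can sanity-check the raw mailbox capacity from the requirements above. The quick sketch below uses the 20% overhead and 5% growth factors from our requirements; the real Mailbox Role Sizing Calculator applies additional factors (database whitespace, dumpster, content index) that this back-of-the-envelope math omits, so expect its numbers to come out higher.

```python
# Rough mailbox-capacity sanity check for WhereVM (illustrative only;
# the Mailbox Role Sizing Calculator adds factors we omit here).

VIP_USERS, VIP_QUOTA_GB = 100, 5
REG_USERS, REG_QUOTA_GB = 900, 1
OVERHEAD = 1.20   # 20% data overhead (assumption from the sizing discussion)
GROWTH   = 1.05   # 5% expected company growth (requirement #6)

raw_gb = VIP_USERS * VIP_QUOTA_GB + REG_USERS * REG_QUOTA_GB
sized_gb = raw_gb * OVERHEAD * GROWTH

# In a two-node DAG, each node hosts a copy of every database,
# so the per-node footprint equals the full sized total.
print(f"Raw mailbox data: {raw_gb} GB")
print(f"With overhead and growth: {sized_gb:.0f} GB per Mailbox node")
```

Note that the calculator's final figure lands higher than this raw math, which is exactly why we lean on it rather than napkin arithmetic for the authoritative numbers.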
Armed with our sizing information, we pull up the Exchange 2010 Mailbox Role Sizing Calculator and see what our storage and server requirements look like.
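As a gut check on the calculator's I/O output, we can also rough out database IOPS from the message profile. The 0.10 IOPS-per-mailbox figure below is an assumed, illustrative value for a 100-message/day profile; always confirm against the calculator and Microsoft's published Exchange 2010 I/O guidance before committing spindles.

```python
# Back-of-the-envelope database IOPS estimate for WhereVM.
# IOPS_PER_MAILBOX is an assumption for this message profile, not
# an authoritative Microsoft figure -- verify against the calculator.

USERS = 1000
MSGS_PER_USER_PER_DAY = 100   # 20 sent + 80 received (requirement #5)
IOPS_PER_MAILBOX = 0.10       # assumed for a 100-message/day profile
HEADROOM = 1.20               # cushion for peak periods (assumption)

db_iops = USERS * IOPS_PER_MAILBOX * HEADROOM
print(f"Estimated database IOPS to design for: {db_iops:.0f}")
```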
Microsoft CPU and Memory Requirements:
Microsoft Storage Space Requirements:
Microsoft Storage I/O Requirements:
Microsoft Storage Recommendations:
Adding It Up and Deploying Virtual Machines
Considering the results from the sizing calculator, for our Mailbox roles we'll create two (2) VMs with 4 vCPUs and 12 GB of RAM per system. We'll set up four (4) LUNs on our SAN and dedicate two (2) per Mailbox VM (1 for the Databases, and 1 for the Logs). The OS volumes can be placed on generic VM datastores. We'll look to use RAID 5, 5+1 disk groups for our Database volumes and RAID 1/0, 1+1 disk groups for our Log volumes. The total disk footprint from a capacity perspective is 1,973 GB for Database volumes (per server) and 94 GB for Log volumes (per server). Since the Mailbox sizing calculator factors in a 20% data overhead, we're safe creating our Database VMDK at the 1,973 GB mark and our Log VMDK at 94 GB. For the OS, VMware recommends 40 GB; you can safely slim it down to 30 GB during initial setup, but 40 GB works fine too. The thing to note here is that each Database and Log VMDK has its own dedicated LUN / VM Datastore. This is critical in ensuring that disk I/O is segregated and that I/O contention doesn't wreak havoc with your new Exchange 2010 deployment.
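It's worth seeing why requirement #4 (tolerating three days of backup failures) drives the Log volume size: transaction logs are only truncated after a successful full backup, so the Log volume must hold several days of log generation. The sketch below uses an assumed ~20 MB of logs per mailbox per day for our 100-message/day profile plus a 20% safety factor; these are illustrative inputs, not calculator output, though they land in the same neighborhood as the 94 GB figure above.

```python
# Log volume sizing driven by backup-failure tolerance (requirement #4).
# Logs only truncate after a successful full backup, so size for the
# tolerated failure window plus the current day.
# LOGS_MB_PER_USER_PER_DAY and SAFETY are illustrative assumptions.

USERS = 1000                     # each DAG node hosts a copy of every database
LOGS_MB_PER_USER_PER_DAY = 20    # assumed for a 100-message/day profile
BACKUP_FAILURE_DAYS = 3          # requirement #4
SAFETY = 1.20                    # overhead cushion (assumption)

log_gb = (USERS * LOGS_MB_PER_USER_PER_DAY
          * (BACKUP_FAILURE_DAYS + 1) * SAFETY) / 1024
print(f"Log volume per node: ~{log_gb:.0f} GB")
```

This is why the article stresses discussing backup strategy and tolerances up front: shrink the failure window to one day and the Log volume requirement drops by half.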
For the CAS and HT roles, we'll deploy a dedicated CAS server and a dedicated HT server. Both can safely be configured with 2 vCPUs, 4 GB of RAM, and 40 GB of storage space for the OS and app. This follows Microsoft's sizing from the previous articles as well as the requirements for support – remember: a minimum of 2 vCPUs and 2 GB of RAM per vCPU on any Exchange 2010 VM. For our CAS and HT servers, dedicated datastores aren't a requirement, as the disk I/O of these roles isn't enough of a factor to warrant segregated storage resources.
So we've figured out our CPU, Memory, and Storage requirements. The last thing on our list is the network settings. Network configuration is straightforward: one (1) vNIC per VM, with the exception of the Mailbox nodes. Each of those VMs will receive two (2) vNICs to support the production network as well as the cluster heartbeat interface. In all cases, we'll want to select VMware's VMXNET 3 adapter when creating the VMs. Additionally, I recommend that you create dedicated VLAN segments for the Exchange heartbeat traffic. Although it's not necessarily heavy traffic, it's important that it be isolated from any network issues that might cause the heartbeat to fail between the cluster nodes.
VMware VM Network Settings for Mailbox nodes:
While we're setting up the Exchange Mailbox node VMs, let's have a look at the disk controller and take note of the VMware Paravirtual (PVSCSI) adapter selection.
VMware Paravirtual SCSI Controller:
Keep in mind that the VMware Paravirtual adapter isn't necessary for all of the Exchange 2010 VMs, but it should be considered a requirement for the Mailbox nodes. That said, one of the gotchas that several of my clients have run into is setup of the PVSCSI adapter during the VM OS install. To tackle this, just make sure you have a virtual floppy drive configured on the VM and that it's attached to the pvscsi-22.214.171.124-signed-Windows2008.flp file located in the default vmimages folder under your Datastores. Load the driver during the OS install and the rest of the initial setup should go quite smoothly.
VMware Paravirtual / PVSCSI driver disk location:
So now that we've sized and deployed our VMs, let's go back to some settings inside our VMware vSphere cluster that can help ensure maximum availability. First, we'll create a DRS anti-affinity rule to keep our Mailbox nodes and File Share Witness on separate hosts. This ensures that the loss of any one ESX host doesn't cause the Exchange cluster to fail.
DRS rules for separating Exchange cluster nodes and the File Share Witness server:
Remember that Microsoft has a strict no-support policy for DRS with Exchange clusters, so the DRS rule is there for start-up placement only. That said, we'll now disable HA and set DRS to Manual for the Exchange 2010 cluster nodes, while keeping both active for our CAS and HT servers.
Disabling HA for Exchange cluster nodes:
Setting DRS actions to Manual for Exchange cluster nodes:
So we're now at the point where we can install Exchange 2010, deploy our Mailbox, Hub Transport, and Client Access servers and Database Availability Groups, and prepare for any legacy migration work that lies ahead. While it may still sound like a daunting task, we've taken several precautions along the way to ensure success in our new environment. This not only helps our end-to-end deployment go smoothly, but it also ensures that if we do happen to run into any snags along the way, we've got full support from both VMware and Microsoft. And vendor support should always be a key factor when planning virtualization of any Tier-1 application.
Given that you're now on your way to deploying your own Exchange 2010 solution, we'll wrap up this series and bring our topic to a close. Stay tuned for additional articles on Tier-1 application virtualization down the road, as well as other topics centered on virtual datacenter management. See you next time…