Yesterday, VMware Horizon 6 (formerly View, and Horizon View) was released for general availability! Yay! Now, if you are not familiar, Horizon 6 has a ton of really interesting features, and I am excited to get my hands on it.
Well, there are a lot of new features (over 150, in fact!). But we are only going to cover a few of them here: the ones I am personally most excited about, and the ones I think businesses will most likely see as meeting their requirements (a.k.a. the most-asked-for features).
There are three Horizon 6 editions: Standard, Advanced, and Enterprise. In my experience, though, customers will go for either Standard (View only: full VDI, but no vSOM, VSAN, etc.) or Enterprise (everything included, especially hosted apps and vSOM).
Very excited to have a truly unified workspace across all devices! Now you can pick up and move and disconnect and log back in—whatever device—and be right where you were! Awesome. And add some more applications with the application catalog.
Windows Server 2012 has a great feature for thin application publishing built right in: Remote Desktop Session Host (RDSH). Horizon 6's new Hosted Apps capability really takes advantage of it; check this video out and look at how quickly an app can be provisioned to users.
Using supported hardware such as NVIDIA GRID adapters, you can pass graphics acceleration through from the host directly to the guest! Virtualizing your high-end creative desktops is now much more feasible, since the performance is near-native.
So what good is all this awesomeness if there isn't any management to it? One of the most common complaints in a VDI setup is sluggishness with regard to end-user experience. Now, with Horizon 6, there is tighter integration into vSOM (formerly vCOPS or vCOps, depending on your capitalization scheme), enabling admins to troubleshoot faster and more accurately.
Well, my twenty-two part series has come to an end. Here is a summary of each best practice and a link to the full article. Certainly there is a lot more that could be said, but I will leave that for another time.
I hope my posts have been helpful!
Best Practices Summary: Upgrading Gotchas
Best Practices Summary: General Management & Monitoring
Best Practices Summary: Performance Optimization
Best Practices Summary: Troubleshooting
Best Practices Resources
VMware KB Articles
Other Best Practice Links
Continuing my ongoing recap of my recent vSphere 5.5 technical deep-dive, I now shift to Best Practices. This is installment four in this section of the series. To view all the posts in this series, click Geeknicks in the categories list.
Pick a Template Strategy, and Stick to It!
If you have vCenter, which most of us do, you will probably create templates. Multiple templates, I imagine. And if you want to expend the least amount of effort and gain the largest amount of efficiency, then you need to spend some time refining your templates.
A poor template replicates poor design and poor performance enterprise-wide—keep that in mind.
Typically, most VMware admins today (at least in my experience) prefer the adaptive approach to design. That is, they start small and plan to scale as needed within a given VM (as opposed to a predictive approach, where you try to right-size a VM as closely as possible before deployment). This is typically done by enabling CPU/Memory Hot-Plug in the virtual machine settings before converting/cloning it to a template. In this manner, you can create a template with 1 vCPU and 2GB of RAM, and adjust it on the fly as necessary. Note that according to Microsoft TechNet Article 732060, you might need to restart the VM when upgrading from a single (uni)processor configuration to a multiprocessor one; multi-to-multi changes, however, are typically seamless.
Be aware of VMware KB 2040375, in which enabling CPU Hot-Plug on a virtual machine automatically disables vNUMA for that same virtual machine, and
the virtual machine will be started without virtual NUMA and instead it will use Uniform Memory Access with interleaved memory access.
So you will either a) want a separate template for large VMs or b) configure VMs from scratch. It also means you will need a much more defined design based on requirements, since you can't adjust on the fly.
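The restart rule from the TechNet note above is easy to encode. Here is a minimal sketch (the function name and interface are my own, purely illustrative):

```python
def reboot_required(old_vcpus: int, new_vcpus: int) -> bool:
    """Per the TechNet note: going from a uniprocessor (1 vCPU)
    configuration to a multiprocessor one may require a guest restart,
    while multi-to-multi changes are typically seamless."""
    return old_vcpus == 1 and new_vcpus > 1

# Hot-adding to a uniprocessor VM may need a restart;
# growing an already-multiprocessor VM typically does not.
print(reboot_required(1, 2))  # True
print(reboot_required(2, 4))  # False
```

In other words, if your adaptive-approach templates start at 1 vCPU, budget for one restart window on the first scale-up.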
Additionally, you will want to utilize something like the VMware OS Optimization Tool—I call it the vOOT. Tools like this will look for services such as "WLAN Autoconfig" and other default startup items in server (and desktop) OSes and optimize (read: disable) them for you, so you can run as streamlined a configuration as possible!
Keep in mind you can run either an analysis only (on a bulk group of machines, or a single machine), or an analysis and optimization at the same time. Hit the above link for a more detailed article on how to use the vOOT.
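The analysis-only versus analyze-and-optimize distinction boils down to a simple pattern. Here is a rough sketch of it in Python; the service names are illustrative examples of common VDI optimization targets ("WLAN AutoConfig" comes from the post above), not the vOOT's actual rule list:

```python
# Illustrative set of default services an optimization template might flag.
DEFAULT_UNNEEDED = {"WLAN AutoConfig", "Windows Search", "Superfetch"}

def analyze(running_services):
    """Analysis only: report which running services could be disabled."""
    return sorted(set(running_services) & DEFAULT_UNNEEDED)

def optimize(running_services):
    """Analysis plus optimization: return the streamlined service set."""
    return sorted(set(running_services) - DEFAULT_UNNEEDED)

print(analyze(["WLAN AutoConfig", "DNS Client"]))   # ['WLAN AutoConfig']
print(optimize(["WLAN AutoConfig", "DNS Client"]))  # ['DNS Client']
```

Running analysis first against a bulk group of machines lets you review the findings before committing to the optimization pass.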
Continuing my ongoing recap of my recent vSphere 5.5 technical deep-dive, I now shift to Best Practices. This is installment three in this section of the series. To view all the posts in this series, click Geeknicks in the categories list.
Disconnect (Or Even Remove) Removable Devices
What's the big deal with devices? In today's world, everything is connected, right? Right. That's the problem...
Here are a few facts about removable devices in Windows (I haven't looked into the subject as much for Linux, but I suspect it to be similar):
Now for a server farm with only a few dozen VMs, this isn't a big issue; but at scale it is a different story. Consider the following scenario:
Additionally, many of us use ISO images for our servers. How many admins forget to disconnect the CD-ROM image when they are finished? More than that, how many forget to remove the datastore path from the CD-ROM drive's configuration entirely? Yes, as far as I know, a VM guest still configured with an ISO path will poll it (if anyone knows for sure and has a link to a VMware article, I would appreciate it). And don't forget, it is very easy to find out whether a VM has an ISO path configured.
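To make the "find the stragglers" idea concrete, here is a minimal sketch. The inventory structure below is hypothetical (you would pull the real data from vCenter, e.g. with PowerCLI's `Get-VM | Get-CDDrive`); the point is simply filtering for VMs whose virtual CD-ROM is still backed by an ISO path:

```python
# Hypothetical inventory export: each entry is a VM name plus the
# backing path of its virtual CD-ROM device (None if no device attached).
inventory = [
    {"vm": "web01", "cdrom_backing": "[datastore1] iso/win2012.iso"},
    {"vm": "db01",  "cdrom_backing": None},
    {"vm": "app01", "cdrom_backing": ""},
]

def vms_with_iso(inventory):
    """Return names of VMs whose CD-ROM is still backed by an ISO path."""
    return [entry["vm"] for entry in inventory
            if entry["cdrom_backing"]
            and entry["cdrom_backing"].lower().endswith(".iso")]

print(vms_with_iso(inventory))  # ['web01']
```

Run something like this on a schedule and you will catch the forgotten ISO mounts before they accumulate at scale.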
Best practices? Here are a couple for removable devices that you should consider. Remember, a Best Practice is just a guideline; ultimately, the business requirements and use cases should drive your implementation.
Continuing my ongoing recap of my recent vSphere 5.5 technical deep-dive, I now shift to Best Practices. This is installment two in the ongoing series. To view all the posts in this series, click Geeknicks in the categories list.
Best Practice | Management #2 | Baselining
Baselining is, like cable management, a skill often downplayed, overlooked, or otherwise ignored—but is absolutely essential to any virtual environment. Consider this scenario:
On Monday, you arrive to a slew of emails: "Dan, I think my blaa-blaa app is not working correctly. It is super-slow." You think about it, do a quick check, and I mean quick, and go get your coffee. Then, you return to your desk and your boss walks over, and says, "I've been hearing a lot of people talking about their problems with blaa-blaa app. What's going on?"
Now, if you haven't been practicing baselining, you won't be able to give anything other than a subjective answer: "Well, they think it is running slowly, but I don't think it is running slowly." There is no empirical data to back up your statement; it is your word against theirs.
Baselining is the answer to the problem of subjectivity in the computing experience. It provides you, as a systems administrator, the ability to 1) know what good performance looks like, 2) determine, by delta comparison, when things are not functioning properly and are performing below designed expectations, and 3) correct any erroneous perceptions, whether the app really is performing poorly or something else is going wrong on the user's end.
So how does baselining work, and what is the proper way to perform it, to achieve this desired outcome? I'm glad you asked!
In my experience, many people baseline a system before go-live, that is, without any users on it. That is important. However, a true production baseline must be performed under normal user workload and resource consumption; otherwise, how will you know when something is not operating as expected?
A few things to remember; call them the "gotchas" of performance baselining.
Too often our designs are based on static calculations: I know each VDI session will consume 500 MHz of processor and produce 0.25 Mbps of LAN traffic. Scale out. We create a modular "pod," if you will, to figure out what hardware we need. And this is a great practice, used by virtually everyone when it comes to sizing.
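For what it's worth, that pod math can be sketched in a few lines. The per-session figures come from the static calculation above; the host specs, uplink, and 20% headroom are assumptions of mine for illustration:

```python
CPU_MHZ_PER_SESSION = 500    # per-session CPU figure from above
LAN_MBPS_PER_SESSION = 0.25  # per-session LAN figure from above

def pod_capacity(host_cpu_mhz, uplink_mbps, headroom=0.8):
    """Sessions one pod can host, reserving 20% headroom by default.
    Capacity is whichever resource (CPU or LAN) runs out first."""
    by_cpu = int(host_cpu_mhz * headroom / CPU_MHZ_PER_SESSION)
    by_lan = int(uplink_mbps * headroom / LAN_MBPS_PER_SESSION)
    return min(by_cpu, by_lan)

# Example: a dual 10-core 2.6 GHz host (52,000 MHz) with a 1 Gbps uplink.
print(pod_capacity(52_000, 1_000))  # 83  (CPU is the limiting resource)
```

Great for purchasing, but remember the point that follows: this is a sizing guideline, not a baseline.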
The problem enters, most often, when VMware admins and other Sysadmins take what is meant to be a sizing guideline and turn it into a baseline—a function for which it was never intended. The sizing guideline is for purchasing and initial configuration; it is a place to start, whereas a baseline is really supposed to be representative of the end-state at go-live under normal operating conditions.
So how do you baseline? Well, you use the same tools; you just use them properly. You can use simulated user loads, but in my experience users are less predictable than we think. It is best to perform your final baseline under normal user workload conditions, ideally over several days. And you should be able to summarize it simply, for your users and your boss's boss, like this:
VDI baseline: On an average Monday, we have
Make sure that you have a baseline not just for servers in general, but for specific workloads, too. In the example above, I used VDI.
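The "delta comparison" mentioned earlier is really all a baseline buys you, and it is simple to express. Here is a minimal sketch, assuming the baseline is just an average of readings taken under normal workload and that a 20% deviation (my illustrative threshold, not a standard) is worth investigating:

```python
from statistics import mean

def baseline(samples):
    """Record the baseline as a simple average of observed readings
    (e.g., app response times in ms collected over several days)."""
    return mean(samples)

def deviates(current, base, tolerance=0.20):
    """Flag a reading more than `tolerance` (20% by default) off baseline."""
    return abs(current - base) / base > tolerance

monday_readings = [100, 110, 90]      # illustrative response times, ms
b = baseline(monday_readings)         # 100
print(deviates(130, b))               # True  -> empirical evidence of a problem
print(deviates(105, b))               # False -> "slowness" is perception, not data
```

Now when the boss asks about the blaa-blaa app, the answer is a number against a recorded baseline, not your word against theirs.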
Make sense? So avoid the trap of laziness and do the hard work of optimizing and recording a baseline; your future self will thank you greatly for it. And by the way, a tool like VMware vCenter Operations Manager or NetApp OnCommand Performance Manager can assist you greatly along the way.
Note: The video of the same GIF above is below, in case you would like to share that instead.