VMware or Hyper-V? Part 1: Which hypervisor will work best for your environment?

If you’ve gone through the comparison of Cloud vs Virtualization, and decided that Virtualization is the best fit for you – you’re still not done. The next step is deciding which hypervisor to use for your virtualization. If a significant portion of your environment is Windows, then your primary choices are VMware or Hyper-V.

Comparing VMware and Hyper-V is less a question of “which is better” than of “which would be better for you?” You can start by looking at a Hypervisor comparison chart, and while that will highlight the differences between the two, it will not rule one out unless you need to run an OS that only VMware or only Hyper-V supports.

Considerations when choosing between VMware and Hyper-V include:

  1. Which operating systems do your Virtual Machines (VMs) need to run?

    If you’re mostly Windows, with a few Linux installs, either will work for you. If you need to support a wide variety of operating systems on your VMs, VMware has much more breadth – see the Hypervisor comparison chart for more details.

  2. Hardware compatibility

    If you’ve already got a significant investment in server hardware, it makes sense to continue to use as much of that hardware as possible for Virtualization hosts. VMware’s Compatibility Guide provides a search that lets you enter your existing server model and see which versions of ESXi are supported on it. Got a Dell PowerEdge T110? You should be able to run ESXi 5.5 on it.

    Hyper-V is installed as a role on Windows 2008 or Windows 2012, and it has the same basic installation requirements as the OS; installing the Hyper-V role additionally requires a processor with hardware-assisted virtualization (Intel VT-x or AMD-V). Microsoft also maintains a Windows Server Catalog to identify servers that are compatible with Windows 2012 and Hyper-V.

  3. Ease of use

    Hands-on comparisons of VMware and Hyper-V are often biased by the loyalties and experience of the reviewer. Additionally, management tools and version capabilities change frequently – for example, one major complaint about the free ESXi 5.1 hypervisor was that it was limited to 32 GB of memory; as of 5.5, that limitation has been removed. So keep those factors in mind when you read through the following sample of hypervisor comparisons:

    Hyper-V 2012 versus VMWare vSphere 5
    Real Hyper-V vs. VMware comparison: What you actually get for free
    Setting up your Hacking Playground – VMWare vs HyperV
    Hyper-V R2eality: VMs not so hot after all…


    Additionally, take a look at the instructions for the same task on each platform. For example – implementing High Availability (HA) in VMware vs. configuring HA in Hyper-V.

Most reviewers find it easier to implement complex virtualization features with VMware, and these features tend to work better with VMware – provided you’ve paid for the VMware licenses and you’re running on supported hardware. Hyper-V wins points for a wider range of supported hardware, and the ability to configure advanced features without requiring license fees – but it may not be able to do everything VMware does and is more difficult to configure.

In our next post, we’ll take a look at how costs for VMware and Hyper-V compare – including the free editions of both, and what you get when you pay for the licenses.

Should You Get Rid of Your Server Room?

A 2013 Spiceworks survey of SMB IT professionals found that 60% of respondents were using the Cloud at the time, and projected that the number would reach 66% within the next 6 months. In addition, 72% of respondents were using server virtualization, with that number projected to increase to 80% over the same period.

Online surveys are not based on a strict random sampling methodology. If you’ve got a Cloud or Virtualization environment and receive a survey you are more likely to respond – that inherent bias is going to push the resulting values higher. This does not mean that Cloud and Virtualization implementations aren’t useful or worth considering. All it means is that marketers spin results. There are valid reasons to move some (or all) of your applications to Virtualization or the Cloud beyond whatever marketers are telling you, but you’re not necessarily flirting with disaster if you are not part of this movement. Local bare metal servers, storage arrays, and networking equipment will continue to be a valid, secure and reliable infrastructure model, and one that will still be in widespread use many years from now.

That being said, Virtualization or Cloud Computing can increase uptime, maximize resource usage and minimize the number of servers you have to maintain. As local servers age out and hardware is replaced in the future you will probably use one or both technologies as part of your hardware replacement strategy. There will be certain servers that won’t work with virtualization or the Cloud – for example, Microsoft clusters, I/O sensitive databases, and that one server in the back closet with your last remaining fax card – but some portion of your servers will work as virtual machines.

For those servers that can be virtualized, you will be faced with the question – which is better, virtualization or Cloud IaaS? And the answer to that is – it depends. Sometimes local Virtualization will be the best option, and sometimes Cloud computing will be the answer. The two technologies can both provide virtual machines, but have distinct use cases, advantages and drawbacks. In order to help you decide which technology better fits your needs, we’ve put together a new White Paper, Which is Better – Virtualization or Cloud IaaS?, that describes key factors to consider when planning a migration to either Virtualization or Cloud IaaS, and the differences inherent in the technologies.

So – should you get rid of your server room? Definitely “Not right now”, but probably “Yes, eventually we’ll get rid of some of our servers”. And to get to that “yes”, take your time in developing your strategy, and remember that a hybrid mix of local bare metal servers, local Virtualized servers, and remote Cloud servers may well be the model that best fits all your IT needs.

Is the Cloud Still a Secure Option?

One of the fundamental prerequisites for any IT Infrastructure is that it is secure, and doubts about security have plagued Cloud vendors for as long as they’ve existed. While there have been data breaches, Cloud computing has proved secure enough over the past few years for it to be considered a valid option for many organizations. However, the destruction of Code Spaces on June 17th at the hands of hackers who gained access to their Control Panel at AWS was a worst case scenario brought to life, and brought Cloud security questions back into focus.

The hack at Code Spaces wasn’t aimed at collecting confidential data, but rather at taking control of the client’s Cloud resources. The hackers gained control of Code Spaces’ management utilities, demanded ransom, and then deleted data when Code Spaces tried to regain control. We’ve known these details since shortly after the incident occurred – what we don’t know is exactly how the hackers were able to get control of Code Spaces’ resources. Was there a flaw in Amazon’s security, or in Code Spaces’ implementation, or was this simply inevitable due to the intrinsically public nature of the Cloud?

We’re not likely to get a detailed answer for how this happened until investigations have been completed. What we do know is that management utilities in a public Cloud are by definition publicly accessible, and that they must be locked down to be secure. Amazon specifically states that security is a shared responsibility, and they provide detailed guidance on security practices for their public Cloud. Amazon provides the equivalent of an isolated IT environment in a locked room, and provides access to administrative tools for controlling that environment – it is up to the user to lock down administrative access using the provided role assignment and multifactor authentication (MFA) tools.

Building an environment in the Cloud is relatively easy – that’s one of the selling points. For new Cloud subscribers with very basic admin needs, a complex security model may seem to be overkill – but as sites grow in scale, complexity, and number of admin users, the need for locking down administrative access becomes more acute. In a small, relatively basic security model, roles may be configured too broadly, and administrative roles may have far more permissions than needed for basic daily tasks. If the security model is not updated as the site grows, a set of compromised administrator’s credentials could cripple the Cloud infrastructure.

Using MFA should prevent a hacker from accessing control tools even if they obtain administrator credentials, because the hacker would only be able to obtain an MFA access code if they also had the device that generates it. But in a scenario where an administrator keeps password information on a device that is also used as an MFA device (e.g. a smartphone), all it would take is one stolen phone to provide access to Cloud management tools.
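To make that concrete, here is a minimal sketch of what “locking down administrative access” can look like in AWS, using the boto3 Python SDK to attach a narrowly scoped inline policy that only grants access when the caller authenticated with MFA. The user name and policy name are hypothetical, and a real deployment would more likely use IAM groups or managed policies – the point is the shape of the policy: specific actions instead of “*”, plus an MFA condition so that a leaked password alone is not enough.

```python
# Sketch: a narrowly scoped IAM policy that only applies when MFA was used.
# "ops-admin" and "RequireMFAForEC2Admin" are hypothetical names.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow routine EC2 administration tasks...
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
            ],
            "Resource": "*",
            # ...but only for sessions authenticated with MFA.
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

iam.put_user_policy(
    UserName="ops-admin",
    PolicyName="RequireMFAForEC2Admin",
    PolicyDocument=json.dumps(policy),
)
```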

But even with roles defined properly, MFA in use, and administrators’ laptops and smartphones properly secured, there is always the possibility that something could go wrong with your Cloud environment. Maybe not through hacking – maybe through administrator error, or natural disaster, or a hosting company closing its doors. That’s where backups come in, and that was the fundamental flaw in Code Spaces’ infrastructure. They had backups in multiple locations, but the multiple locations were all within AWS, and under the control of the compromised control panel. Backups to an outside location would have allowed them to rebuild. There would have been downtime, and some lost data, but they would still be in business.
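As an illustration of what an “outside location” backup might look like, here is a minimal sketch (using the boto3 Python SDK) that pulls copies of objects from an S3 bucket down to storage that is independent of the Cloud account – the bucket name and destination path are hypothetical, and a production job would also handle retention, encryption, and verification. Run something like this with read-only credentials from a machine outside the account, so that a compromised control panel can’t also reach the backups.

```python
# Sketch: copy S3 objects to storage outside the Cloud account, so that a
# compromised control panel cannot delete the only backups.
# "example-backup-bucket" and the destination path are hypothetical.
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"
DEST = "/mnt/offsite-backups"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):          # skip folder marker objects
            continue
        target = os.path.join(DEST, key)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        s3.download_file(BUCKET, key, target)   # local copy, outside AWS's control
        print(f"backed up {key}")
```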

Keep in mind that while the data was deleted, it was not compromised. The hackers were able to delete the S3 buckets, but they didn’t read them. Code Spaces and AWS did succeed in maintaining the confidentiality of the data, if not its continued existence. In those terms, the Cloud did successfully maintain security. However, in terms of maintaining data integrity, some as yet unidentified portion of the shared security architecture failed. The moral of this story will likely be that the Cloud can indeed be used securely, but that both Cloud users and vendors need to pay strict attention to security guidelines. In the case of Code Spaces, a detail was missed somewhere and they paid the price.

Can Ransomware Devastate Your Data in the Cloud?

Security concerns have always been an issue in Cloud adoption. Any time your servers and data are not physically under your control, you have to ask questions about how access to those servers is handled, and how the data on those servers is secured.

Data breach problems exist for applications that aren’t hosted in the Cloud as well, and Cloud-based applications didn’t seem to have any significant vulnerabilities beyond those of other web-based applications.

At least, that was until last week. On June 17th, Cloud based service provider Code Spaces had an intruder gain access to their Amazon control panel. On the Code Spaces home page, they provided the details of the attack, and outlined the repercussions for their company. Basically, an intruder gained access to Code Spaces’ Amazon EC2 control panel and demanded ransom in order to leave the site. When Code Spaces tried to lock the intruder out, the intruder began deleting customer data. By the time Code Spaces had removed the intruder, most of their data and backups had been partially or completely deleted.

It took only 12 hours from the time the DDoS attack began to the time it ended with Code Spaces regaining control. Given that DDoS attacks are not uncommon, it was certainly less than 12 hours between realizing they had an intruder and formulating a plan to deal with the attacker. Since this is a fairly new security scenario, it is unlikely that the company’s backup plans (retrieved from the Internet Archive) accounted for intentional malicious deletions, and they also trusted that redundant Cloud-based backups would be sufficient.

Code Spaces provided SVN and Git hosting, and Project Management to its customers, and stated that their priority was to get as much data back as possible. They went on to state:

Code Spaces will not be able to operate beyond this point, the cost of resolving this issue to date and the expected cost of refunding customers who have been left without the service they paid for will put Code Spaces in a irreversible position both financially and in terms of ongoing credibility.

As such at this point in time we have no alternative but to cease trading and concentrate on supporting our affected customers in exporting any remaining data they have left with us.

In the company’s Twitter feed, they say that they will publish a “full detailed report” soon. Until then, this incident brings up a lot of questions for Cloud users. How exactly did the intruder gain access to the control panel? Was there a security hole on Amazon’s part, or a user error on Code Spaces’ part? If someone does gain unauthorized access to your Cloud control panel, how can you lock them out before they cause any damage? Is there a safe way to keep all of your backups in the Cloud, or is an offsite backup still a necessity?

In the midst of the marketing hype surrounding Cloud-based computing, Code Spaces will serve as an example of a worst case scenario. Hopefully Cloud users will pay enough attention to the details of how Code Spaces was hacked to avoid similar problems, and look more closely at whether the Cloud is sufficient for their needs.

Monitoring the Cloud for End User Experience

Using the Cloud for all or part of your computing infrastructure doesn’t mean you can ignore monitoring. If you’re using Cloud based SaaS applications, or you have web applications hosted in the Cloud, you still need to verify that those resources are available and responsive. This doesn’t mean you have to do a deep dive into DevOps optimization – but you should verify the applications are performing for your users.

From the perspective of a Cloud user, it doesn’t matter how often a backend server needs to be migrated, or when a noisy neighbor slows an application down. The Cloud obscures the details of the problem, and the user just cares that your web page took longer to load than they were willing to wait – and, oh, look – another cute kitten video.

At a minimum, the areas you should monitor for Cloud performance from a user perspective are:

  • Verification of SLAs
    We’ve discussed Cloud SLAs before, and pointed out that the compensation many vendors offer is typically not enough to offset losses, is only available if you notice the outage, and excludes maintenance windows. If your application needs to be available 24/7, you should be checking that you can access it 24/7, and check with the vendor if it’s not available when it should be. And, of course, document your outages so you can cash in on the SLA if needed.

  • Application responsiveness
    SLAs only refer to uptime. If the application is available but too slow for users to wait for, it is effectively unusable. A responsiveness check should exercise whatever functionality your application provides. If your users can log in, enter data, search, update records, etc. – that is what you should be testing. You can create macros that automate this, and then archive the data for trend analysis (see the sketch after this list).

  • Optimizing resource reservations
    One of the draws of the Cloud is that you pay for what you use. However, if you reserve resources beforehand, you can pay a lower rate than you would for ad hoc resource requests. If you’re using IaaS to host your applications, keep an eye on the basic server monitoring metrics – CPU, disk, network and memory – and use your observations to fine-tune the baseline resources you request from the Cloud provider.

  • Pinpointing application problems
    Just because you can’t get to a Cloud application doesn’t mean it’s the vendor’s fault. The internet is between your users and the Cloud servers. If your DNS provider’s servers go down – or are attacked – users won’t be able to find your application. If your ISP has an outage, the application will be there, but users inside your organization won’t be able to get to it using your network. Or, the problem could just be that a switch or a router has died, or your network bandwidth usage is too high.

    Chart out the points of failure between your local network and your Cloud, and monitor them so that you can keep track of the cause of application failures. You can fix problems on your internal network; for external problems, keep track of when and where they occur, and of the vendor’s response.
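As a concrete (and deliberately simple) example of the kind of user-perspective check described above, the following Python sketch resolves the application’s DNS name, times an HTTP request against it, and appends the result to a CSV file you can use for trend analysis and SLA documentation. The URL is a placeholder – a real check would exercise actual application functions (log in, search, update) rather than a single page, and if you are tuning IaaS resource reservations you would pair it with server-side metrics (for example, via psutil) on the hosts you control.

```python
# Sketch: a user-perspective availability and responsiveness check.
# Separating DNS resolution from the HTTP request helps pinpoint whether a
# failure is a name-resolution problem or an application/network problem.
# The URL below is a placeholder for your Cloud-hosted application.
import csv
import os
import socket
import time
from datetime import datetime, timezone
from urllib.parse import urlparse

import requests

URL = "https://app.example.com/login"
LOG = "cloud_checks.csv"
FIELDS = ["timestamp", "url", "dns_ok", "status", "seconds"]

def check(url):
    host = urlparse(url).hostname
    result = {"timestamp": datetime.now(timezone.utc).isoformat(),
              "url": url, "dns_ok": False, "status": "", "seconds": ""}
    try:
        socket.getaddrinfo(host, 443)          # DNS failure points away from the app itself
        result["dns_ok"] = True
        start = time.monotonic()
        resp = requests.get(url, timeout=15)   # responsiveness, not just uptime
        result["seconds"] = f"{time.monotonic() - start:.2f}"
        result["status"] = str(resp.status_code)
    except Exception as exc:                   # record the failure for SLA documentation
        result["status"] = f"error: {exc}"
    return result

if __name__ == "__main__":
    new_file = not os.path.exists(LOG) or os.path.getsize(LOG) == 0
    with open(LOG, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(check(URL))
```

Scheduled every few minutes (cron, Task Scheduler, or similar), a check like this gives you the outage timestamps you need when it is time to claim an SLA credit.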

Getting Locked Out of the Cloud

I am in no conceivable way a designer, and admittedly don’t understand most of the features available in Adobe’s Creative Suite.  If asked to edit an image, I’d be more likely to load it up in GIMP than Photoshop, and would probably get it kicked back for a re-edit because someone with a better sense of aesthetics found the end result lacking.  With that disclaimer out of the way, I can empathize with those who rely on Creative Suite for a living.  Putting together a visually appealing product is exacting, time-consuming work even for the talented, and that work is made much easier with the right tools.  I have been assured that Creative Suite is usually the correct tool for whatever design project needs to be done.

Up until the release of Creative Suite 6, the software followed a traditional model:  pay for the software, install it, license it, and then use it locally.  The most recent release of Creative Suite has taken this model to the cloud with “Creative Cloud” – the software is still installed locally, but the licensing fee is paid on a monthly basis, with an internet connection required to validate the license at least once every 30 days.  An annual option is available as well, with a 99 day validation interval.  If the license is not validated in the allotted time, you’re locked out of your software.

In theory, this is an ideal model for Adobe.  Creative Suite is expensive, and monthly licensing fees can be more economical for users who only need the Suite for a short-term project.  Also, by controlling the licensing, Adobe can control access to the product rather than policing software installations to combat piracy.

For customers who rely on Creative Cloud, the Cloud model has not been as advantageous.  While having a Cloud-based repository for projects can help with portability and collaboration, the overall cost of the software has increased, and customers must have at least an intermittent internet connection to validate the software license.  Complaints over cost, required internet connectivity, and various bugs notwithstanding, the model mostly worked, and designers continued to use Creative Suite in its Creative Cloud form.

However, on May 14th the Cloud model came to a grinding halt when routine database maintenance caused Adobe logins to be unavailable for 24 hours.  During the outage, the Twitter account for Adobe Customer Care advised customers to take their computer offline, access the software, and then go back online.  After the outage was resolved, customers who were still having problems were advised to sign out, reboot, and then reconnect.

Social media being what it is, there were many unverifiable but plausible tweets during the outage complaining about missed deadlines and resulting financial losses.  Creative Cloud is, after all, professionally designed software used by professionals with deadlines and reputations to maintain.  Users tweeting to ask about compensation were told that none was available (as per the terms of service), but a later report by Reuters quoted Adobe as saying that compensation would be considered on a case by case basis – with no details about exactly what it might be, or how users might be asked to verify that they deserve it.

Cloud outages are not uncommon, and every user has a different definition of what they consider a “critical” application – the kind that requires very careful consideration before being migrated to a Cloud.  Based on tweets on the AdobeCare Twitter feed and posts in the Adobe forums, many Creative Cloud customers would have preferred locally installed software to avoid this exact problem.  Adobe users who have been bitten by this will be justifiably reluctant in the future to opt for any form of Cloud-based application.

Can You Still Get Patches for Windows XP?

Were you using XP in Feb. 2010?  If you were, you might remember a patch issued by Microsoft that caused a blue screen on many computers.   That incident should serve as a reminder that even patches that are explicitly intended for the exact OS version you’re running can still end up rendering your system inoperable.  It’s also something to keep in mind before installing patches that “should be” ok on your version of XP.

XP Home and Pro patches do exist – Microsoft is still creating them – but only for large organizations that have paid for extended support.  Outside of extreme circumstances like last month’s IE advisory from US-CERT, those patches are out of reach for most XP holdouts.

However, Microsoft is continuing to publicly distribute patches for embedded versions of XP up to 2019, and for Windows 2003 up to July 14, 2015.  Embedded XP is under the covers of gaming consoles, cash registers, ATMs, etc., and shares the same basic underlying code as the 32-bit Home and Pro XP editions, while 64-bit XP shares basic code with Windows 2003.  It was probably inevitable that someone would figure out a way to substitute patches that are almost the right version and available for patches that are the right version but not available.

Enter a German message board detailing registry edits and patch hacks that make embedded XP and 2003 patches available to Home and Pro XP users.  The hacks work by making Microsoft Update think that the patch is being installed on a supported system.  The 32-bit XP hack is done by altering a registry value to make Microsoft think that embedded POSReady 2009 is installed.  The 64-bit hack is more complex and involves downloading Windows 2003 patches manually and modifying them to work around the OS version check.
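For illustration only, here is a rough Python equivalent of the widely reported 32-bit change (the message board distributed it as a .reg file to be run with administrative rights); the key path shown is the one reported at the time, and – as both the original author and Microsoft warn – applying it leaves you with patches that were never tested on XP.

```python
# Illustration only: the widely reported registry change that makes a 32-bit XP
# system identify itself to Microsoft Update as POSReady 2009.
# Microsoft warns the resulting patches are untested on XP and may break the system.
# Requires administrative rights on the XP machine.
import winreg

key = winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\WPA\PosReady",        # key path as reported on the message board
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "Installed", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)
```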

The author of the original post includes the following disclaimer:

ATTENTION: Use it you [sic] own risk! These updates are not tested on a regular XP system and could damage your system

ZDNet reported on the hack a few days after it was posted, and included a statement they received from Microsoft:

We recently became aware of a hack that purportedly aims to provide security updates to Windows XP customers. The security updates that could be installed are intended for Windows Embedded and Windows Server 2003 customers and do not fully protect Windows XP customers. Windows XP customers also run a significant risk of functionality issues with their machines if they install these updates, as they are not tested against Windows XP. The best way for Windows XP customers to protect their systems is to upgrade to a more modern operating system, like Windows 7 or Windows 8.1.

This isn’t just a case of a pro forma “don’t do this, spend money on the new version” warning from Microsoft.  Microsoft doesn’t know what would happen if you apply the wrong version of a patch to your XP system, and they aren’t going to test that scenario because it’s not something they support.  It is possible that the patches might work perfectly fine using this hack.  Or you could hit a BSOD and end up needing to reinstall the OS.  They don’t know, and they aren’t going to help you rescue your box if the patches fail.

Ultimately, almost the right version of a patch can be worse than no patch at all.  XP can run without patches, but you will need to be more security conscious, keep everything backed up, and be ready to completely rebuild the system.  Or, better yet, try rebuilding the computer with Linux.

Users are the Weakest Link in Cloud Security

On Monday, Dropbox confirmed a security vulnerability with Dropbox files shared via hyperlink.   In the confirmation, they described the vulnerability as follows:

  1. The user uploads a file to Dropbox that contains a link to a 3rd party website.
  2. The user sends a link to the Dropbox file to a recipient, who uses the link to access the file.
  3. The recipient clicks on the link in the file to view the 3rd party website.
  4. The 3rd party website owner checks their access logs and sees the Dropbox link as the “referrer” to their site – and can then click on the Dropbox link and gain access to the file (a minimal sketch of this referrer leak follows below).
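To make the mechanism concrete, here is a minimal sketch of a hypothetical third-party site that simply logs the Referer header: when a recipient viewing the shared document clicks the embedded link, the browser typically sends the document’s URL – the private share link – along with the request.

```python
# Sketch: a hypothetical third-party site logging the Referer header.
# A visitor who clicked a link inside a shared document may arrive with the
# (otherwise private) share URL in this header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RefererLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        referer = self.headers.get("Referer", "(none)")
        print(f"Visitor arrived from: {referer}")   # this is where the share link leaks
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Thanks for visiting.\n")

if __name__ == "__main__":
    HTTPServer(("", 8080), RefererLogger).serve_forever()
```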

Dropbox reacted by disabling access to all previously existing file links, examining and re-enabling access for links that were not vulnerable, and patching the vulnerability in future links.  Users were told that they could re-create links, but the effect was that pre-existing links suddenly broke.  Website links, presentations, cloud based documents – all either needed to be recreated, or re-enabled if they were judged not vulnerable.  (Which of course raises the question: how do they judge whether or not the Dropbox content is vulnerable without examining it?)

Dropbox could have handled this better.  By disabling links across the board with little notice, users were left to clean up the mess rather than being given the opportunity to fix the problem for themselves, if they even considered it a problem.  It was yet another demonstration that Cloud services are not yet 100% foolproof or reliable, and are ultimately subject to the business needs of the vendor.

Dropbox updated the post the next day to confirm a second vulnerability in which the user inadvertently pastes the URL for the Dropbox file into a search engine:

  1. The user uploads a file to Dropbox and sends the link to a recipient.
  2. The recipient (inadvertently) pastes the link into a browser search engine rather than the browser URL field.
  3. The search engine makes a best guess for keywords in the URL and displays ads based on those keywords.
  4. The owners of the displayed ads check the search terms for which their ads were displayed, and see the Dropbox URL.

Dropbox’s reaction to this was:

This is well known and we don’t consider it a vulnerability. We urge everyone to be careful about providing shared links to third parties like search engines

Yes – I agree – while the first vulnerability was up to Dropbox to fix, this second vulnerability is the user’s fault.  Many users have become accustomed to using the search engine as an address bar without knowing or caring that anything they paste there is subject to becoming part of data analyzed by the search engine.  The convenience of having one place to type what you want (url or search term) becomes habit, and habit takes precedence over conscious thought.  That becomes a security vulnerability waiting to happen when you use the internet to access confidential data.

Should Dropbox make their links utterly secure and idiot-proof?  In the first case, that’s probably overkill for most Dropbox items, and in the second case it’s not possible, because there will always be ways to use things that were never intended or imagined.  Dropbox and similar Cloud services provide a convenient way to make your data accessible over multiple devices.  Cloud services can provide encryption and access control, and the vendors are responsible for making sure that those services are not vulnerable, and patching any Heartbleed type vulnerabilities that are detected.

What is done with security after it has been established is up to the user.  Users control who they give access to, and allowing access to confidential data by creating and distributing an unsecured URL to that data is a security failure on the part of the user, not the vendor.

Critical IE Vulnerability Patch Includes Support For End-Of-Life XP

On April 26, FireEye reported a vulnerability in Internet Explorer versions 6 through 11 that was actively being exploited in IE versions 9 through 11. On April 28th the US Computer Emergency Readiness Team (US-CERT) recommended either using suggestions from Microsoft to mitigate the risk or switching to a different browser until Microsoft had patches available for the issue.  Given the publicity over this problem, and the ease with which users can switch browsers, this was obviously a high priority for Microsoft.

Typically, Microsoft will issue patches on the second Tuesday of every month but will make an exception for critical issues.  In Security Advisory 2963983, Microsoft announced that an out of band patch would be released at 10AM PDT, May 1st.  The patch Security Update for Internet Explorer (2965111) includes updates for all affected browser versions on Microsoft OS versions ranging from XP to Windows 2012. If you use patch management software rather than letting Windows automatically update, it will show up as Critical Security Update (KB2964358), with a prerequisite of KB2929437 for IE 11.

The patch suite has a different patch for each supported combination of Windows OS and IE version.  Since some customers have purchased extended XP support, Microsoft had to create an XP version of the patch but only needed to distribute it to customers on extended support.  Given the effort spent in conveying the dangers of running XP without support, it was surprising that they would provide a patch to all XP systems for the first big security bug to hit after the 4/8/14 cutoff date.  The rationale, explained in a Microsoft blog post, was:

We made this exception based on the proximity to the end of support for Windows XP.

This rationale makes sense to me in the context that the bug was in existence and being exploited at the time XP support ended, even if FireEye reports that IE 9, 10 and 11 are being targeted rather than the IE 6, 7 or 8 that are supported on XP.  The potential for it to be exploited is there, so an argument can be made for patching a pre-existing, high profile vulnerability.

An argument can also be made that customers running XP are still potential Windows 7, 8, or beyond customers, and alienating them would be counterproductive at a time when Apple is lowering prices, Google is running the backend Cloud for inexpensive Chromebooks, and free Linux distributions can be installed on XP hardware.  Users running unsupported XP have a temporary reprieve, but this was a highly publicized vulnerability that should have them thinking about the next patch or update they will need that will not be in such close proximity to the end of XP support.

Linux and the Death of XP

A few years ago it became obvious that my old faithful Windows XP Thinkpad, with a whopping 1.5 GB RAM, was not up to running all the software I needed it to run.  It was, of course, a great excuse to buy a  Windows 7 Thinkpad T520 before they “updated” the keyboard.  However, it also had the advantage of leaving me with a working, if sluggish, spare laptop.   Since then I’ve used the spare laptop to test out multiple Linux distributions, and it has performed far better on limited memory with Linux than it did with XP.

On 4/8/14, XP reached the end of Microsoft support, and anyone using XP will have to figure out what to do with their XP computers.  If the hardware has enough resources, there is the option of upgrading to a newer version of Windows – if it doesn’t, it could be set up as a VDI thin client, or you could take the security risk of running an unsupported system.  Or, like my old laptop, you could install Linux on it.

There are significant advantages to implementing Linux desktops:

  • No licensing costs
  • Malware and virus free (not completely, but much more so than Windows)
  • Open source software for pretty much everything you need
  • Runs on computers with low resources
  • Capable of displaying a Windows-like desktop (an advantage if I want anyone else in my family to use the computer)

That being said, I will admit that there are issues with Linux in a Windows business environment. One major issue is that Linux does not connect to all Windows applications seamlessly.  There may be Linux substitute applications (OpenOffice/LibreOffice), and packages that can be configured to work with Microsoft software (Evolution email client connecting to Exchange), but  there will always be some native Windows applications to which Linux will have problems connecting: Terminal Server Gateway RDP connections are problematic, and GoToMeeting lists Linux as a “may work” platform, but they won’t provide customer support if it doesn’t work.  Linux does have its own versions of RDP and web meetings, but the idea is to fit Linux desktops into the existing Windows environment, not the other way around.  Fitting Linux into a Windows world can take a significant amount of research along with trial and error.

Still, with some exceptions, application interoperability for Linux is better now than it was a few years ago.  There are several factors that are making business applications more Linux friendly:

  • Vendors are seeing demand to access software from Macs, tablets and mobile devices, so developers are testing that their software works on more than just Windows.
  • The Microsoft Open Specifications Promise allows open source software to use the same document standards as Office – so LibreOffice, or OpenOffice, or whichever version you choose, can read from and write to the same format as MS Office.
  • Software as a Service (SaaS) can take software that was previously installed on Windows and make it available to anyone with a browser.  It doesn’t matter if you have a Mac version or a Linux version – if you’ve got a Cloud SaaS version, you’re set as long as they’re running an acceptable browser.

A second major issue hindering Linux installations is technical support.  Existing support staff will probably need Linux training – while Linux itself is not difficult, getting it to play nicely with Windows can be.  The “free” aspect of Linux is appealing, but it may take paid support to get the initial configuration set up correctly.  There are commercial Linux vendors (Red Hat, SUSE, and Ubuntu) who provide server support, but desktop support is more limited.  SUSE Linux claims to have “the most interoperable Linux desktop available today,” listing key features of Office, Exchange, and Silverlight support, and does provide paid desktop support, which may be an attractive alternative to consultants when you’re just starting with Linux.

My opinion is admittedly biased – I would very much like to see more Linux desktops in use.  There is only so much havoc that can be wrought on Linux by users as long as you don’t let them near the root password.  It would have been nice to see the commercial Linux vendors capitalize on the publicity surrounding Windows XP’s death, and try to present Linux as a viable alternative.  But, given the technical difficulties, perhaps the Linux vendors are busy enough supporting Linux servers without the added problems of getting them to work with Windows.