Should Automatic Updates Be Enabled On Your Computer?

In the best of all possible worlds, every software update would work perfectly and there would be no question about whether you should enable automatic updates. However, updates can and do cause significant problems, ranging from annoying errors to blue screen crashes, which raises the question of whether automatic updates should be used at all.

When complaining about patch problems, Microsoft is an easy and obvious target. They issue patches on a known schedule and have an installed base diverse enough that it’s impossible for them to test every patch against every software permutation. Software incompatibilities are inevitable and problems are widely publicized, but Microsoft eventually sorts them out and withdraws or reissues the patches. For Microsoft patches, it is usually best to wait a few days and check for reports of problems in the field before installing.

Patching third party software is just as critical as patching your operating system, and those patches can have problems as well. As reported by The Register, an update to Symantec’s Norton Internet Security (NIS) delivered via Live Update just before Labor Day weekend caused browsers to crash, mainly on systems running XP. Since Microsoft no longer provides patches for XP, third party security products (such as NIS) may be the primary line of defense for XP users, and the interim advice quoted on Symantec forums of upgrading to Windows 7 or turning off browser protection was not particularly helpful. Eventually a fix was disseminated through Live Update, and the problem was attributed to older hardware rather than XP itself.

One of the significant differences between Microsoft OS patches and third party software patches is that a problem with an OS patch has a greater chance of causing a system crash, while a third party patch is more likely to cause an issue in the program being patched. Security software updates such as virus definitions may need to be disseminated quickly, and even with the possibility of software problems you’re better off with the updated definitions.

This prompts the question: what should be updated automatically? By default, most third party software is configured to update automatically. Should you go through each of your programs and reconfigure them to install only patches that you have approved?

The answer is that it depends:

  • Do you have alternate software with the same functionality?
    If you have Chrome, IE, Firefox and Opera, and a bad patch takes out one of them, the others are still available to search for and download fixes.

  • Is your operating system configured in some way that may not have been tested when the patch was created?
    Are you running relatively old hardware? Do you have an English language OS with Asian and/or Cyrillic fonts installed? Have you tweaked the settings in your antivirus program? Make sure you’ve backed your system up before applying any patches and that you know how to restore it. As we discussed in a previous post, not every software permutation can be tested and incompatibilities can cause blue screens.

  • Is your OS officially supported for the software?
    Programs written for XP may work on newer Windows versions, but OS updates could break dependencies in legacy software. Keep track of OS updates and check the functionality of legacy software to determine if a patch needs to be rolled back to keep the software working.

  • Is the software frequently patched? Are critical vulnerabilities being patched?
    Adobe Reader and Adobe Flash Player are both patched frequently, often for critical vulnerabilities. The Java Runtime Environment (JRE) is another frequent target on your computer. Adobe products and the JRE should be configured for automatic updates, as should virus definitions.

Keep in mind that the chance of a patch causing a problem is relatively small, and even a fully patched system will never be completely safe. There will always be a gap between when newly discovered vulnerabilities start to be exploited and when patches become available for them. The only way to address that gap is by training your users on what they should not be doing on the internet.

512k Day Isn’t the Cloud’s Only Network Problem

August 12th was 512k day – the day the global IPv4 BGP routing table reached 512k routes. This matters because some routers have a default limit of 512k IPv4 routes, and if they haven’t been reconfigured to raise that limit they can crash or fail to load new routes. There are fixes for this problem and not all routers are affected, but even with advance notice there were still outages and slowdowns when the number of routes passed 512k.

The thing to note about this outage isn’t just that known problems were not addressed; it’s that even if your network were configured perfectly, and a remote resource vendor were delivering its promised 99.999% uptime, there could still be a service outage from your perspective if there was a problem in the network between you and your provider. This is especially important when considering moving your resources to the Cloud. While a Cloud based EMR system may provide tremendous benefits at a cost effective rate, a doctor can’t do their job if there is a problem with the hospital’s internet connection and they can’t get test results or medical history.

The 512k limit wasn’t the first network disruption and it will not be the last. As soon as a clever way is developed to manage the internet, an even more clever way will be found to hack it. Basic infrastructure equipment that has been working for years without any issues can become a victim of specifications that are now obsolete. In short, there is no way to guarantee uptime for anything accessed over the internet. Accepting the possibility of downtime is the price you pay for the economies of Cloud computing.

If you can accept the possibility of downtime and Cloud computing is a fit for your company, one main criterion in selecting a Cloud provider should be its availability as seen from your site. Test to verify that you can consistently access the provider with minimal latency during your trial period, and archive the data to create your own availability SLA reports for each prospective vendor. Collect data on network bandwidth for each provider as well, and extrapolate the data from your test environment up to the estimated usage of a full deployment.
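As a concrete starting point, a probe as simple as the one below can collect the raw data for those availability reports. This is a minimal sketch only – the endpoint URL, probe interval, and log file name are hypothetical placeholders for whatever your trial environment exposes – and the aggregation into SLA-style uptime and latency summaries would be done separately against the CSV it produces.

```python
import csv
import time
from datetime import datetime
from urllib.request import urlopen

ENDPOINT = "https://provider.example.com/healthcheck"   # hypothetical trial URL
INTERVAL_SECONDS = 300                                   # probe every 5 minutes
LOGFILE = "provider_availability.csv"

while True:
    started = time.time()
    timestamp = datetime.now().isoformat()
    try:
        with urlopen(ENDPOINT, timeout=30) as response:
            status = response.status
        latency_ms = round((time.time() - started) * 1000, 1)
    except OSError:
        status, latency_ms = "DOWN", None                # treat any error as an outage
    with open(LOGFILE, "a", newline="") as log:
        csv.writer(log).writerow([timestamp, status, latency_ms])
    time.sleep(INTERVAL_SECONDS)
```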

To look at a provider’s performance over a longer term, services such as downdetector.com aggregate user reports of downtime and can provide an archive of previous issues. Downtime from the perspective of previous clients can also provide insight into how a Cloud vendor handles support issues after you’ve implemented their service. Detailed post-mortems of issues are welcome but not the norm. However, support through Twitter and updates via Facebook are common and provide a history of previous issues.

Should network outages mean you have to rule out Cloud computing? The possibility of outages is definitely a factor to consider, but how significant those outages are to you will depend on your network, the Cloud provider, and how sensitive you are to losing services.

Windows Patch Problems

The Windows August Update released on 8/12/14 included 4 updates for Windows 7, 8 and 8.1 that were linked to blue screens. Since the release, all 4 patches have been pulled by Microsoft, but if you have Automatic Updates configured on your computer and the patches were applied, Microsoft has provided manual instructions for removing them (see the Mitigations section). Please note that the removal steps are performed in safe mode – if your computer won’t boot into safe mode, you may need to resort to whatever recovery utilities came with your PC.

If you have Automatic Updates configured to download patches and ask before installing, check the list of recommended patches and make sure the following patches are not selected for installation (a quick way to check whether they are already installed is sketched after the list):

  • 2982791  MS14-045: Description of the security update for kernel-mode drivers: August 12, 2014
  • 2970228  Update to support the new currency symbol for the Russian ruble in Windows
  • 2975719  August 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2
  • 2975331  August 2014 update rollup for Windows RT, Windows 8, and Windows Server 2012
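If you want to confirm whether any of these patches are already on a machine before deciding what to do, a short script can check the installed hotfix list. This is a rough sketch, not Microsoft’s removal procedure: it assumes Python is available on the Windows system and simply shells out to the built-in wmic utility; actual removal should follow the steps in Microsoft’s bulletin.

```python
# Check whether any of the recalled August 2014 patches are installed.
# Sketch only - run on the Windows machine being checked.
import subprocess

RECALLED_KBS = ["KB2982791", "KB2970228", "KB2975719", "KB2975331"]

output = subprocess.check_output(["wmic", "qfe", "get", "HotFixID"], text=True)
installed = {line.strip() for line in output.splitlines() if line.strip()}

for kb in RECALLED_KBS:
    if kb in installed:
        # Removal would then follow Microsoft's instructions, e.g. running
        # "wusa /uninstall /kb:2982791" from an elevated prompt in safe mode.
        print(f"{kb} is installed - review Microsoft's removal instructions")
    else:
        print(f"{kb} not found")
```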

This isn’t the first time patches have been released and then pulled back, or needed to be patched themselves. Previous incidents are by no means rare, and they illustrate that patches intended to make a system perform better and run more securely can have unintended consequences. The problem is not that the patches haven’t been tested before release, but rather that there is no way to test every possible system permutation. For example, an April 2013 issue was caused by third party Brazilian banking security software, and the most recent patch problem happened when “OpenType Font files are installed in non-standard font directories that are recorded in the registry with fully qualified filenames”.

Does the chance of a crash mean you should disable updates? Of course not – that would leave your computer vulnerable to security problems. It does mean that you should disable automatic installation and require that updates be approved before they are installed. In addition, check for reports of issues with updates before installing them, and only apply patches intended for your system.

VMware or Hyper-V? Part 3: Virtualization Licensing Costs

In the last post we looked at the cost for licensing Microsoft operating systems on virtual machines – and noted that the licensing costs were the same regardless of the hypervisor used. The cost difference for hypervisors is primarily based on licensing advanced features – so to determine your licensing costs, you need to determine which advanced features are required in your environment.

If you look at the features available in the free hypervisor versions, Hyper-V provides more functionality than ESXi. For example, Live Migration and Failover Clustering are available with Hyper-V, while the corresponding VMware features – vMotion and VMware HA – are not available until you purchase a VMware “Essentials Plus” license. It is possible that all the virtualization features you require are available in the free version of Hyper-V or VMware. However, VMware and Hyper-V implement features differently, and you may find paying for vMotion to be a better fit than free Live Migration, or that free Failover Clustering works just as well for you as paying for VMware HA. Fortunately, both VMware’s vSphere and Hyper-V’s System Center Virtual Machine Manager (SCVMM) provide trial versions so you can test feature implementation for yourself and decide if free is good enough, or if a licensed feature is worth paying for.

Once you have your trial environments in place, it can still be difficult to compare “advanced” features not only because similar features have different names, but also because functions don’t overlap completely, and it is not possible to do an apples-to-apples comparison. The agnostic hypervisor comparison application at Virtualization Matrix does a very good job of normalizing virtualization features for comparison without biasing the feature descriptions – and it also provides a useful outline of which features are provided by each licensing model.

Hyper-V Licensing

Hyper-V advanced features are available through the Virtual Machine Manager (VMM) component in Microsoft’s System Center (SC). SC uses a Management License model that charges based on the number of processors and managed operating system environments (OSEs). Much like the Windows 2012 Standard and Datacenter Editions described in the previous post, System Center has Standard and Datacenter Management License Editions:

  • Hyper-V without System Center – Cost: Free; Processors: no restrictions; OSE Management Licenses: none required
  • System Center 2012 R2 Standard Edition – Cost: $1323; Processors: 2 physical processors; OSE Management Licenses: 2
  • System Center 2012 R2 Datacenter Edition – Cost: $3607; Processors: 2 physical processors; OSE Management Licenses: unlimited

In addition to the base licensing above, how you manage the combination of processor and OSE counts under System Center Standard or Datacenter Editions can have a significant influence on your licensing costs. For example, the following comparison shows the cost for System Center to manage 12 OSEs on either 2 or 6 processors, using both Datacenter and Standard licenses:

  • 12 managed OSEs on 2 physical processors: 1 Datacenter license ($3607) vs. 6 Standard licenses ($7938)
  • 12 managed OSEs on 6 physical processors: 3 Datacenter licenses ($10821) vs. 6 Standard licenses ($7938)

Datacenter licenses are more cost effective in environments where you are running more than 2 OSEs per physical processor, while Standard licenses become more cost effective at 2 or fewer OSEs per physical processor. As a result, Standard licenses can be less expensive if your VMs are assigned multiple virtual processors (fewer OSEs per physical processor), while Datacenter licenses will be less expensive for hosts densely packed with VMs provisioned with fewer processors.
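To make that breakeven easier to see, the comparison above can be reproduced with a few lines of arithmetic. This is a sketch only, using the list prices quoted earlier (your agreement and reseller pricing will differ) and the simple rule that each license covers 2 physical processors, with Standard additionally limited to 2 managed OSEs per license.

```python
import math

STANDARD_PRICE = 1323      # covers 2 processors and 2 managed OSEs per license
DATACENTER_PRICE = 3607    # covers 2 processors, unlimited managed OSEs

def standard_licenses(processors, oses):
    """Standard licenses stack: enough to cover every processor AND every OSE."""
    return max(math.ceil(processors / 2), math.ceil(oses / 2))

def datacenter_licenses(processors):
    """Datacenter only needs to cover the physical processors."""
    return math.ceil(processors / 2)

for procs in (2, 6):
    oses = 12
    std = standard_licenses(procs, oses)
    dc = datacenter_licenses(procs)
    print(f"{oses} OSEs on {procs} processors: "
          f"Datacenter {dc} x ${DATACENTER_PRICE} = ${dc * DATACENTER_PRICE}, "
          f"Standard {std} x ${STANDARD_PRICE} = ${std * STANDARD_PRICE}")
```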

vSphere Licensing

“vSphere” is the name for the overall VMware management environment, including the ESXi hypervisor, the vCenter management server, and any other VMware virtualization components (e.g. vSAN storage or VMware NSX virtual networking). vSphere licensing is available as Essentials or Essentials Plus Kits for small business environments, or Standard, Enterprise and Enterprise Plus Editions for larger environments. The Essentials Plus Kit and Standard Edition provide the bulk of the functionality needed for small business environments (vMotion, High Availability, vSphere Replication, etc.), while higher end performance tuning features like Flash Read Cache are only available at the Enterprise and Enterprise Plus level.

vSphere Essentials Kits have pricing targeted at smaller organizations. Essentials Kits include a vCenter license and can be applied to up to 3 servers with 2 processors each. The vSphere Standard, Enterprise and Enterprise Plus Edition licenses are per processor, and each processor license requires at least 1 year of initial support and at least 1 vCenter license:

  • Essentials Kit – Cost: $495 (3 servers / 2 processors each); Basic Support: $65; Production Support: $299/incident
  • Essentials Plus Kit – Cost: $4495 (3 servers / 2 processors each); Basic Support: $944; Production Support: $1124
  • Standard (requires vCenter) – Cost: $995 per processor; Basic Support: $273; Production Support: $323
  • Enterprise (requires vCenter) – Cost: $2875 per processor; Basic Support: $604; Production Support: $719
  • Enterprise Plus (requires vCenter) – Cost: $3495 per processor; Basic Support: $734; Production Support: $874
  • vCenter Foundation – Cost: $1495; Basic Support: $545; Production Support: $645
  • vCenter Standard – Cost: $4995; Basic Support: $1049; Production Support: $1249

Comparing Costs

If Hyper-V without SCVMM provides all the virtualization features you need, and works in your environment, it’s hard to argue with a price of “free”. However, if Hyper-V doesn’t work well when you test it in your environment, or if you need features that aren’t available in the free Hyper-V install, then vSphere licensing costs may well be comparable to System Center Virtual Machine Manager costs.
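As a rough illustration of that comparison, consider a hypothetical environment of 2 hosts with 2 processors each that needs the advanced feature set. The numbers below are the list prices from the tables above; real quotes will differ, and Windows VM OSE licensing is excluded since it is the same under either hypervisor.

```python
# Rough first-year license comparison for a hypothetical 2-host environment.
VSPHERE_ESSENTIALS_PLUS_KIT = 4495   # covers up to 3 hosts / 2 processors each
VSPHERE_BASIC_SUPPORT = 944          # required first year of support for the kit
SC_DATACENTER_PER_2_PROCS = 3607     # System Center 2012 R2 Datacenter license

hosts, procs_per_host = 2, 2

vmware_total = VSPHERE_ESSENTIALS_PLUS_KIT + VSPHERE_BASIC_SUPPORT
hyperv_total = hosts * (procs_per_host // 2) * SC_DATACENTER_PER_2_PROCS

print(f"vSphere Essentials Plus Kit + basic support: ${vmware_total}")
print(f"System Center 2012 R2 Datacenter (SCVMM):    ${hyperv_total}")
```

Which side comes out ahead depends entirely on the host count, processor count, and which features you actually need – the point is that neither option is automatically the cheaper one once advanced features are involved.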


Previous posts in our VMware or Hyper-V series:

Part 1: Which hypervisor will work best for your environment?
Part 2: VM Operating System Licenses

VMware or Hyper-V? Part 2: VM Operating System Licenses

In our last post we looked at some of the environmental factors that play into a choice between VMware and Hyper-V: required OS support, hardware compatibility, and ease of use. This post and the next will look at what is often the deciding factor in selecting a hypervisor: Cost.

Software costs can be broken down into the cost for advanced hypervisor features and the cost for licensing virtual machine operating systems. The basic ESXi and Hyper-V hypervisors are free, but you will pay for advanced hypervisor functions – and that will be the topic of the next post. In this post, we’ll take a look at the cost of licensing Windows operating systems running on virtual machines.

As of Windows 2012, Microsoft has changed its licensing model for Windows virtual machine operating systems (OSEs) in the Standard and Datacenter Editions to a processor-based model. The previous model was per-server licensing – your agreement included a specified number of licenses or multiple activation keys, and you could install as many operating systems as you had licenses. You can still apply per-server licenses for Windows 2012 Essentials Edition to virtual machine OSEs, but Windows Server 2012 Standard and Datacenter Editions now also include licenses specifically for virtual machine OSEs. The three Windows Server 2012 Editions which focus on virtualization are:

  • Hyper-V Server 2012 – Cost: Free; Hyper-V installation: console install (command line only, no GUI); Processors: no restrictions; VM OSE licenses: none
  • Windows Server 2012 Standard Edition – Cost: $882; Hyper-V installation: installed as a server role; Processors: 2; VM OSE licenses: 2
  • Windows Server 2012 Datacenter Edition – Cost: $4809 to $6155 (references vary); Hyper-V installation: installed as a server role; Processors: 2; VM OSE licenses: unlimited

Please note – prices will vary depending on your licensing agreement and reseller.

Hyper-V Server 2012 is the Windows equivalent of VMware’s ESXi: it is free, and can be used for basic VM management functions. It is a console installation, with just a command prompt as an interface, so management is done either remotely or through the command line. There are no licenses for virtual machine OSEs provided with Hyper-V Server 2012, so an OSE license will need to be provided for every Windows virtual machine run on it.

Windows Server 2012 Standard and Datacenter Editions provide two advantages over the minimal Hyper-V Server 2012. First, with Hyper-V installed on Standard or Datacenter, you have access to Hyper-V management tools directly on the same server running Hyper-V. Second, both Editions include licenses for virtual machine OSEs: Standard Edition provides licensing for 2 VM OSEs, and Datacenter Edition provides licensing for an unlimited number of VM OSEs.

However, the virtual machine OSE licenses provided by Standard and Datacenter Editions are not restricted to using Hyper-V: they can be used with any hypervisor. Standard and Datacenter licenses are assigned to hardware – specifically, they are assigned to processors. It doesn’t matter if the hypervisor running on the processors is Windows 2012’s Hyper-V, VMware’s ESXi, or Red Hat’s RHEV – the virtual machines managed by the hypervisor can use the licenses provided by Standard or Datacenter Edition.

There are, of course, additional rules as to how Standard and Datacenter Edition licenses can be applied. Microsoft provides a pricing information and FAQ PDF that outlines the basics of Standard and Datacenter Edition licenses (a small worked example follows the list):

  • The Standard/Datacenter Edition licenses are processor based and each will apply to 2 processors. If your server has more than 2 processors, you will need to apply enough additional Edition licenses to cover all the processors
  • There is no limit on the number of cores per processor
  • You cannot mix Standard and Datacenter licenses
  • If a virtualization host has multiple Standard Edition licenses, you get 2x VM OSE licenses for each Standard Edition license
  • If a virtual machine moves from one virtualization host to another, the virtual machine OSE license does not move with it – since the OSE licenses for the virtual machines are tied to the processor, the new virtualization server must also have a license for the OSE for that virtual machine. This is not an issue if both virtualization hosts have the unlimited Datacenter Edition licensing for VM OSEs.
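A small worked example of the license math above, using the prices from the table in this post (actual pricing depends on your agreement): Standard covers 2 processors and 2 VM OSEs per license and can be stacked, while Datacenter covers 2 processors with unlimited VM OSEs. This is a sketch for illustration only.

```python
import math

STANDARD_PRICE, DATACENTER_PRICE = 882, 4809   # list prices from the table above

def standard_needed(processors, vm_oses):
    # Enough Standard licenses to cover every processor AND every VM OSE.
    return max(math.ceil(processors / 2), math.ceil(vm_oses / 2))

def datacenter_needed(processors):
    # Datacenter only needs to cover the physical processors.
    return math.ceil(processors / 2)

# Example: a 2-processor host running 8 Windows virtual machines.
procs, vms = 2, 8
std, dc = standard_needed(procs, vms), datacenter_needed(procs)
print(f"Standard:   {std} licenses = ${std * STANDARD_PRICE}")    # 4 x $882  = $3528
print(f"Datacenter: {dc} license  = ${dc * DATACENTER_PRICE}")    # 1 x $4809 = $4809
```

In this example Standard is still cheaper; add a few more Windows VMs to the same host and the unlimited Datacenter license becomes the better buy.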

Licensing is a spectacularly confusing topic, and you need to carefully read the fine print of your current licensing agreement to determine exactly which and how many Windows operating systems you’re allowed to run as virtual machines. However, the costs will be the same regardless of the hypervisor in use. When comparing overall software costs between Hyper-V and VMware, licensing for advanced virtualization features will be the primary factor – and we will look at that in our next post.


Additional posts in our VMware or Hyper-V series:

Part 1: Which hypervisor will work best for your environment?
Part 3: Virtualization Licensing Costs

VMware or Hyper-V? Part 1: Which hypervisor will work best for your environment?

If you’ve gone through the comparison of Cloud vs Virtualization, and decided that Virtualization is the best fit for you – you’re still not done. The next step is deciding which hypervisor to use for your virtualization. If a significant portion of your environment is Windows, then your primary choices are VMware or Hyper-V.

Comparing VMware and Hyper-V is less of a “which is better” question than a “which would be better for you?” question. You can start by looking at a Hypervisor comparison chart, and while that might narrow down the differences between the two, it will not rule one out unless you need to run an OS specific to VMware or Hyper-V.

Considerations when choosing between VMware and Hyper-V include:

    1. Which operating systems do your Virtual Machines (VMs) need to run?

      If you’re mostly Windows, with a few Linux installs, either will work for you. If you need to support a wide variety of operating systems on your VMs, VMware has much more breadth – see the Hypervisor comparison chart for more details.

    2. Hardware compatibility

      If you’ve already got a significant investment in server hardware, it makes sense to continue to use as much of that hardware as possible for Virtualization hosts. VMware’s Compatibility Guide provides a search that will let you enter your existing server, and tell you which versions of VMware are supported. Got a Dell PowerEdge T110? You should be able to run ESXi 5.5 on it.

      Hyper-V is installed on Windows 2008 or Windows 2012, and it has the same basic installation requirements as the OS. Installing the Hyper-V role does require a processor with hardware virtualization support. Microsoft also maintains a Windows Server Catalog to identify servers that are compatible with Windows 2012 and Hyper-V.

  3. Ease of use

    Hands-on comparisons of VMware and Hyper-V are often biased due to the loyalties and experiences of the reviewer. Additionally, updates to management tools and version capabilities change frequently – for example, one major complaint about the free ESXi 5.1 hypervisor was that it was limited to 32 GB of memory; as of 5.5, that limitation has been removed. So, keep those factors in mind when you read through the following sample of hypervisor comparisons:

    Hyper-V 2012 versus VMWare vSphere 5
    Real Hyper-V vs. VMware comparison: What you actually get for free
    Setting up your Hacking Playground – VMWare vs HyperV
    Hyper-V R2eality: VMs not so hot after all…
    Additionally, take a look at the instructions for the same task on each of the platforms. For example – implementing High Availability (HA) in VMware vs. configuring HA in Hyper-V.

Most reviewers find it easier to implement complex virtualization features with VMware, and these features tend to work better with VMware – provided you’ve paid for the VMware licenses and you’re running on supported hardware. Hyper-V wins points for a wider range of supported hardware, and the ability to configure advanced features without requiring license fees – but it may not be able to do everything VMware does and is more difficult to configure.

In our next post, we’ll take a look at how costs for VMware and Hyper-V compare – including the free editions of both, and what you get when you pay for the licenses.


Additional posts in our VMware or Hyper-V series:

Part 2: VM Operating System Licenses
Part 3: Virtualization Licensing Costs

Should You Get Rid of Your Server Room?

A Spiceworks survey of SMB IT professionals in 2013 found 60% of the respondents currently utilizing the Cloud, and projected that the number would reach 66% within the next 6 months. In addition, 72% of the respondents were using server virtualization, with that number projected to increase to 80% in the next 6 months.

Online surveys are not based on a strict random sampling methodology. If you’ve got a Cloud or Virtualization environment and receive a survey, you are more likely to respond – and that inherent bias is going to push the resulting values higher. This does not mean that Cloud and Virtualization implementations aren’t useful or worth considering; all it means is that marketers spin results. There are valid reasons to move some (or all) of your applications to Virtualization or the Cloud beyond whatever marketers are telling you, but you’re not necessarily flirting with disaster if you are not part of this movement. Local bare metal servers, storage arrays, and networking equipment will continue to be a valid, secure and reliable infrastructure model, and one that will still be in widespread use many years from now.

That being said, Virtualization or Cloud Computing can increase uptime, maximize resource usage and minimize the number of servers you have to maintain. As local servers age out and hardware is replaced in the future you will probably use one or both technologies as part of your hardware replacement strategy. There will be certain servers that won’t work with virtualization or the Cloud – for example, Microsoft clusters, I/O sensitive databases, and that one server in the back closet with your last remaining fax card – but some portion of your servers will work as virtual machines.

For those servers that can be virtualized, you will be faced with the question – which is better, virtualization or Cloud IaaS? And the answer to that is – it depends. Sometimes local Virtualization will be the best option, and sometimes Cloud computing will be the answer. The two technologies can both provide virtual machines, but have distinct use cases, advantages and drawbacks. In order to help you decide which technology better fits your needs, we’ve put together a new White Paper, Which is Better – Virtualization or Cloud IaaS?, that describes key factors to consider when planning a migration to either Virtualization or Cloud IaaS, and the differences inherent in the technologies.

So – should you get rid of your server room? Definitely “not right now”, but probably “yes, eventually we’ll get rid of some of our servers”. And to get to that “yes”, take your time in developing your strategy, and remember that a hybrid mix of local bare metal servers, local virtualized servers, and remote Cloud servers may well be the model that best fits all your IT needs.

Is the Cloud Still a Secure Option?

One of the fundamental prerequisites for any IT Infrastructure is that it is secure, and doubts about security have plagued Cloud vendors for as long as they’ve existed. While there have been data breaches, Cloud computing has proved secure enough over the past few years for it to be considered a valid option for many organizations. However, the destruction of Code Spaces on June 17th at the hands of hackers who gained access to their Control Panel at AWS was a worst case scenario brought to life, and brought Cloud security questions back into focus.

The hack at Code Spaces wasn’t aimed at collecting confidential data, but rather at taking control of the client’s Cloud resources. The hackers gained control of Code Spaces’ management utilities, demanded ransom, and then deleted data when Code Spaces tried to regain control. We’ve known these details since shortly after the incident occurred – what we don’t know is exactly how the hackers were able to get control of Code Spaces’ resources. Was there a flaw in Amazon’s security, or in Code Spaces’ implementation, or was this simply inevitable due to the intrinsically public nature of the Cloud?

We’re not likely to get a detailed answer for how this happened until investigations have been completed. What we do know is that management utilities in a public Cloud are by definition publicly accessible, and that they must be locked down to be secure. Amazon specifically states that security is a shared responsibility, and they provide detailed guidance on security practices for their public Cloud. Amazon provides the equivalent of an isolated IT environment in a locked room, and provides access to administrative tools for controlling that environment – it is up to the user to lock down administrative access using the provided role assignment and multifactor authentication (MFA) tools.

Building an environment in the Cloud is relatively easy – that’s one of the selling points. For new Cloud subscribers with very basic admin needs, a complex security model may seem to be overkill – but as sites grow in scale, complexity, and number of admin users, the need for locking down administrative access becomes more acute. In a small, relatively basic security model, roles may be configured too broadly, and administrative roles may have far more permissions than needed for basic daily tasks. If the security model is not updated as the site grows, a set of compromised administrator’s credentials could cripple the Cloud infrastructure.

Using MFA should prevent a hacker from accessing control tools even if they gain access to administrator credentials, since the hacker should only be able to obtain an MFA access code if they have the device that generates it. But in a scenario where an administrator keeps password information on a device that is also used as an MFA device (e.g. a smartphone), all it would take is one stolen phone to provide access to Cloud management tools.
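One common mitigation for exactly this scenario is an IAM policy that denies API actions unless the caller authenticated with MFA, so stolen passwords alone are not enough. The sketch below is an illustration only, not Amazon’s prescribed configuration: it uses the boto3 SDK, a hypothetical “cloud-admins” group, and a deliberately broad deny statement that you would tailor to your own roles.

```python
import json
import boto3

# Deny every action when the request was not authenticated with MFA.
# BoolIfExists also catches requests where the MFA context key is absent.
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllWhenNoMFA",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

iam = boto3.client("iam")
policy = iam.create_policy(
    PolicyName="deny-without-mfa",
    PolicyDocument=json.dumps(deny_without_mfa),
)
iam.attach_group_policy(
    GroupName="cloud-admins",                 # hypothetical administrators group
    PolicyArn=policy["Policy"]["Arn"],
)
```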

But even with roles defined properly, MFA in use, and administrators’ laptops and smartphones properly secured, there is always the possibility that something could go wrong with your Cloud environment. Maybe not through hacking – maybe through administrator error, or natural disaster, or a hosting company closing its doors. That’s where backups come in, and that was the fundamental flaw in Code Spaces’ infrastructure. They had backups in multiple locations, but the multiple locations were all within AWS, and under the control of the compromised control panel. Backups to an outside location would have allowed them to rebuild. There would have been downtime, and some lost data, but they would still be in business.

Keep in mind that while the data was deleted, it was not compromised. The hackers were able to delete the S3 buckets, but they didn’t read them. Code Spaces and AWS did succeed in maintaining the confidentiality of the data, if not its continued existence. In those terms, the Cloud did successfully maintain security. However, in terms of maintaining data integrity, some as yet unidentified portion of the shared security architecture failed. The moral of this story will likely end up being that the Cloud can indeed be used securely, but that both Cloud users and vendors need to pay strict attention to security guidelines. In the case of Code Spaces, a detail was missed somewhere and they paid the price.

Can Ransomware Devastate your Data in the Cloud?

Security concerns have always been an issue in Cloud adoption. Any time your servers and data are not physically under your control, you have to ask questions about how access to those servers is handled, and how the data on those servers is secured.

Data breaches are a problem for applications that aren’t hosted in the Cloud as well, and Cloud based applications didn’t seem to have any significant vulnerabilities beyond those of other web based applications.

At least, that was the case until last week. On June 17th, Cloud based service provider Code Spaces had an intruder gain access to their Amazon control panel. On the Code Spaces home page, they provided the details of the attack and outlined the repercussions for their company. Basically, an intruder gained access to Code Spaces’ Amazon EC2 control panel and demanded ransom in order to leave the site. When Code Spaces tried to lock the intruder out, the intruder began deleting customer data. By the time Code Spaces had removed the intruder, most of their data and backups had been partially or completely deleted.

It took only 12 hours from the time the DDoS attack began to the time it ended with Code Spaces regaining control. Given that DDoS attacks are not uncommon, it was certainly less than 12 hours from the point they realized they had an intruder to the point they formulated a plan to deal with them. Since this is a fairly new security scenario, it is unlikely that the company’s backup plans (retrieved from the Internet Archive) included dealing with intentional malicious deletions, and they also trusted that redundant Cloud based backups would be sufficient.

Code Spaces provided SVN and Git hosting and project management to its customers, and said that their priority was to get as much data back as possible. They went on to state:

Code Spaces will not be able to operate beyond this point, the cost of resolving this issue to date and the expected cost of refunding customers who have been left without the service they paid for will put Code Spaces in a irreversible position both financially and in terms of ongoing credibility.

As such at this point in time we have no alternative but to cease trading and concentrate on supporting our affected customers in exporting any remaining data they have left with us.

In the company’s Twitter feed, they say that they will publish a “full detailed report” soon. Until then, this incident brings up a lot of questions for Cloud users. How exactly did the intruder gain access to the control panel? Was there a security hole on Amazon’s part, or a user error on Code Spaces’ part? If someone does gain unauthorized access to your Cloud control panel, how can you lock them out before they cause any damage? Is there a safe way to keep all of your backups in the Cloud, or is an offsite backup still a necessity?

In the midst of the marketing hype surrounding Cloud based computing, Code Spaces will serve as an example of a worst case scenario. Hopefully Cloud users will pay enough attention to the details of how Code Spaces was hacked to avoid similar problems, and look more closely at whether the Cloud is sufficient for their needs.

Monitoring the Cloud for End User Experience

Using the Cloud for all or part of your computing infrastructure doesn’t mean you can ignore monitoring. If you’re using Cloud based SaaS applications, or you have web applications hosted in the Cloud, you still need to verify that those resources are available and responsive. This doesn’t mean you have to do a deep dive into DevOps optimization – but you should verify the applications are performing for your users.

From the perspective of a Cloud user, how often a backend server needs to be migrated, or when a noisy neighbor slows an application down doesn’t matter. The Cloud obscures the details of the problem, and the user just cares that your web page took longer to load than they were willing to wait – and, oh, look – another cute kitten video.

At a minimum, the areas you should monitor for Cloud performance from a user perspective are:

  • Verification of SLA agreements
    We’ve discussed Cloud SLAs before, and pointed out that the compensation many vendors offer is typically not enough to offset losses, is only available if you notice the outage, and excludes maintenance windows. If your application needs to be available 24/7, you should be checking that you can access it 24/7, and checking with the vendor if it’s not available when it should be. And, of course, document your outages so you can cash in on the SLA agreement if needed.

  • Application responsiveness
    SLA agreements only refer to uptime. If the application is available but too slow for users to wait for, then it is effectively unusable. A responsiveness check should exercise whatever functionality your application provides: if your users log in, enter data, search, and update records, that is what you should be testing. You can create macros that automate this, and then archive the data for trend analysis (see the sketch at the end of this list).

  • Optimizing resource reservations
    One of the draws of the Cloud is that you pay for what you use. However, if you reserve resources beforehand, you can pay a lower rate than you would for ad hoc resource requests. If you’re using IaaS to host your applications, keep an eye on the basic server monitoring metrics – CPU, disk, network and memory – and use your observations to fine tune the baseline resources you request from the Cloud provider.

  • Pinpointing application problems
    Just because you can’t get to a Cloud application doesn’t mean it’s the vendor’s fault. The internet is between your users and the Cloud servers. If your DNS provider’s servers go down – or are attacked – users won’t be able to find your application. If your ISP has an outage, the application will be there, but users inside your organization won’t be able to get to it using your network. Or, the problem could just be that a switch or a router has died, or your network bandwidth usage is too high.

    Chart out the points of failure between your local network and your Cloud provider, and monitor them so that you can pinpoint the cause of application failures. You can fix problems on your internal network, but for external problems, keep track of when and where they occur and of the vendor’s response.
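To make the “application responsiveness” item above concrete, here is a minimal synthetic-transaction check: log in, run a search, and record how long each step took. It is a sketch only – the URLs, form fields, and credentials are placeholders for whatever functionality your own application exposes – and it uses the third-party requests library.

```python
import csv
import time
from datetime import datetime

import requests

BASE = "https://app.example.com"     # hypothetical Cloud-hosted application
LOGFILE = "responsiveness.csv"       # archive for trend analysis

def timed(label, call, *args, **kwargs):
    """Run one step of the transaction and return its label, status and latency."""
    start = time.time()
    response = call(*args, **kwargs)
    return label, response.status_code, round((time.time() - start) * 1000, 1)

session = requests.Session()
results = [
    timed("login", session.post, f"{BASE}/login",
          data={"user": "monitor", "password": "secret"}, timeout=30),
    timed("search", session.get, f"{BASE}/search",
          params={"q": "test record"}, timeout=30),
]

with open(LOGFILE, "a", newline="") as log:
    writer = csv.writer(log)
    for label, status, ms in results:
        writer.writerow([datetime.now().isoformat(), label, status, ms])
```

Run it on a schedule (cron or Task Scheduler) and the CSV becomes your own record of whether the application is actually usable from your site, independent of the vendor’s uptime numbers.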