Posted on August 30, 2017 by Mark Johnson
I talk often with IT leaders who have heard about the ability to turn cloud servers off when not needed and believe that will make a huge difference for their costs. After all, if a workload only runs 9-5, M-F, for 50 weeks a year, that is an automatic savings of roughly 77%: the workload runs 2,000 of the 8,760 hours in a full year. However, is 40 hours a week a good assumption for most enterprise applications? Let's take a look.
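The back-of-the-envelope savings figure works out like this (a quick sketch of the arithmetic, nothing more):

```python
# "9-5, M-F, 50 weeks a year" as a fraction of a full year.
business_hours = 8 * 5 * 50          # 2,000 hours of actual use
year_hours = 24 * 365                # 8,760 hours in a full year

savings = 1 - business_hours / year_hours
print(f"{savings:.0%}")              # 77%
```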
Nationwide organizations have at least three time zones to cover, and major production applications probably need to be available for the person who comes in an hour early and the one who stays an hour late. That's already 13 hours a day, and what about batch processes and backups? If those run for at least a couple of hours every night, then we're talking 15 hours a day during the week.
Weekends can be saved, though, right? Unfortunately, not usually. Most IT orgs do patch testing and installation, new module installs, and upgrades over the weekends to avoid disruptions when most people are at work. The weekdays alone come to 15 hours × 5 days × 50 weeks = 3,750 hours. Even if we only add a 12-hour day every other weekend, that's another 312 hours a year, bringing the total to 4,062 hours, surely a conservative estimate for production enterprise workloads. Global organizations will have even more active hours.
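Tallying the estimates above in a short script (all the hour figures are the assumptions from this post, not measurements):

```python
# Estimate annual "on" hours for a typical production enterprise workload.
weekday_hours = 15                 # 13 business hours + ~2 hours of batch/backup
weeks_per_year = 50
weekday_total = weekday_hours * 5 * weeks_per_year   # 3,750 hours

weekend_day_hours = 12             # maintenance window every other weekend
weekend_total = weekend_day_hours * 26               # 312 hours

annual_on_hours = weekday_total + weekend_total
print(annual_on_hours)                               # 4062
print(f"{annual_on_hours / 8760:.0%} of the year")   # 46% of the year
```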
Still, with our systems running approximately 50% of the hours in the year, we could save roughly half of our infrastructure costs by using on-demand instances and shutting them down during the off hours. Not bad! When our systems are on, though, we pay the full "rack rate." Bummer. Every cloud service provider I know of offers substantial discounts, but only for guaranteed use. As one example, Amazon Web Services calls these "reserved instances" and offers discounts of up to 75%. Even in this example, where we tried to maximize the time the system could stay off, committing to the resources saves significantly more than turning hardware off does, even though we pay for the committed capacity whether or not it is in use. Looking at dozens of scenarios, I find this to be the case more often than not for major systems.
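A quick check of that comparison, with the hourly rate normalized to 1.0 and the 75% discount taken from the AWS reserved-instance example above (actual discounts vary by term and instance type):

```python
# Compare two strategies against a normalized on-demand hourly rate of 1.0.
HOURS_PER_YEAR = 8760
on_hours = 4062                    # active hours from the estimate above
hourly_rate = 1.0

# Strategy 1: on-demand pricing, instance shut down during off hours.
on_demand_cost = on_hours * hourly_rate

# Strategy 2: reserved capacity at a 75% discount,
# paid for all 8,760 hours whether the system is used or not.
reserved_cost = HOURS_PER_YEAR * hourly_rate * (1 - 0.75)

print(on_demand_cost)              # 4062.0
print(reserved_cost)               # 2190.0 -- the committed option wins
```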
Now, there are some workloads (development, test, training) that you can stop often enough to make on-demand pricing work, and this is why we usually recommend hybrid cloud environments for a complete software lifecycle: dev and test in a public cloud with pay-as-you-go pricing, and production in a private cloud optimized for cost and performance. Gartner has stated that "IaaS for steady-state workloads…may be more expensive, than an internal private cloud." IDC research shows both public and private cloud spending growing strongly over the next five years. Using technologies that seamlessly manage and integrate public and private deployment models is critical, and that is the subject of a future blog post!
Mark Johnson, Vice President Enterprise Cloud Strategy, Mythics Inc.