When you’re up against deadlines to turn in a software project, you probably are focused on ensuring that you meet the functionality requirements set out in the design specification. If you have enough time, you might consider trying to maximize performance. You might also try to document your code thoroughly so that anyone taking over the project doesn’t need to run Windows for Telepaths to work out what your subroutines actually do. But there is probably one area that you don’t consider: the cost of your code.

You mean what it costs to write the code, right? No.

Er, how about what it costs to compile? You’re getting warmer...

What it costs to support? No, colder again.

OK, you win. What costs do you mean?

I mean what it costs to run your code. In the bad old days, when clouds were just white fluffy things in the sky and all applications ran on real hardware in a server room somewhere or on users’ PCs, cost simply wasn’t a factor. Sure, you might generate more costs if your application needed beefier hardware to run, but that came out of the cable-pluggers’ capital budget, and we all know that computer hardware needs changing every other year, so the bean-counters didn’t twig. A survey by Avanade showed that 50% of IT departments don’t even budget for the cost of electricity to run their IT systems. For more information, see the Avanade news release at http://www.avanade.com/_uploaded/pdf/pressrelease/globalenergyfindingsrelease541750.pdf.

But cloud computing brings one big change that will permanently alter how you think about writing code, because cloud computing providers don’t make that mistake.

Cloud computing offers many advantages for developers, particularly the fact that you can test your applications in an environment that exactly mirrors the live environment. You simply upload your code into the cloud, connect to the service endpoints and you’re up and running. Until you get the bill, that is.

The big difference is that with cloud computing, you’re renting computing power in a data center somewhere. As far as you’re concerned, it could be on Saturn, except that the latency figures might be a bit excessive. If you’ve accidentally opened one of those magazines your network administrator takes with him to the bathroom, you might know that these data centers contain racks and racks of servers, all with lots of twinkling lights. If you’ve ever been to a data center, you’ll know that it can be very hot near the server fans, much colder around the cooling vents, and noisy everywhere. All this activity results from removing the heat that the servers produce. But that heat doesn’t get there all by itself – the servers create it from the electricity they use. What’s more, it takes even more electricity to remove that heat.

When data centers started appearing, organizations would charge customers by the U, the standard unit of rack space: 1U costs this much, 2U costs twice as much, and so on. That approach was fine in a world where each 4U server had maybe four processors. But customers (and data centers) found that there were ways of getting a lot more processing power out of those 4U enclosures, and so blade computing was born. Unfortunately, blade computing also meant that instead of drawing an amp or so of current, each 4U unit was drawing 10 amps or more.

If electrical power were cheap and likely to remain so forever, the power issue wouldn’t be a concern. But power costs do rise and are likely to keep rising (at least until a working fusion generator makes its appearance). In fact, electrical power is the largest variable cost for a typical data center: the more servers it runs, the bigger the power bill, and the more compute-intensive the applications it runs, the greater the cost. If you build a new data center, 85% of the capital equipment costs result from providing power and cooling. Data centers would be quite cheap to build if you didn’t need all that power and cooling infrastructure.

As more data center providers have become aware of this link between electrical power usage and their profitability, they have changed their charging models. Instead of a fixed fee per month based on size, increasing numbers of service providers are charging their customers for the power that their applications use. Under this model, what you pay is roughly proportional to the number of users you service. More users mean higher costs, but this time there’s no hiding those costs from the bean counters.

If you deploy applications into the cloud, it is highly likely that your service provider will be charging you based on the energy that you use. Although you don’t see electricity itemized in kWh, you are billed for CPU, RAM, storage, and network resources, all of which consume electricity. A more powerful processor with more memory costs more, not just because of the cost of the components, but because it consumes more electricity. In many ways, this is an excellent business model: you don’t have to buy the hardware, maintain it, depreciate it, and finally replace it. You simply pay for the resources you use. And this is the point at which you need to ask yourself: how much does my code cost? When power usage directly affects the cost of running your applications, a power-efficient program is more likely to be a profitable one.

With Windows Azure, users are charged for the CPU, RAM, storage, and network capacity they use. The chargeback model is as follows:

  • Compute = $0.12 per hour
  • Storage = $0.15 per GB stored per month
  • Storage transactions = $0.01 per 10,000 transactions
  • Data transfers = $0.10 in / $0.15 out per GB ($0.30 in / $0.45 out per GB in Asia)
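
To get a feel for how those line items add up, here’s a quick back-of-the-envelope calculation in Python. The instance count, storage volume, transaction count, and traffic figures are invented purely for illustration; substitute your own numbers (and check the current price list, since rates change).

# Rough monthly cost estimate for a hypothetical Windows Azure deployment,
# using the rates listed above. All usage figures below are invented.

HOURS_PER_MONTH = 730                  # average hours in a month

COMPUTE_RATE = 0.12                    # $ per instance-hour
STORAGE_RATE = 0.15                    # $ per GB stored per month
TRANSACTION_RATE = 0.01 / 10_000       # $ per storage transaction
DATA_IN_RATE = 0.10                    # $ per GB transferred in
DATA_OUT_RATE = 0.15                   # $ per GB transferred out

def monthly_cost(instances, gb_stored, transactions, gb_in, gb_out):
    """Return an estimated monthly bill in dollars."""
    compute = instances * HOURS_PER_MONTH * COMPUTE_RATE
    storage = gb_stored * STORAGE_RATE
    txn_cost = transactions * TRANSACTION_RATE
    bandwidth = gb_in * DATA_IN_RATE + gb_out * DATA_OUT_RATE
    return compute + storage + txn_cost + bandwidth

# Example: 2 instances, 50 GB stored, 5 million storage transactions,
# 100 GB in and 300 GB out per month -> roughly $242.70.
print(f"${monthly_cost(2, 50, 5_000_000, 100, 300):,.2f}")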

When you’ve optimized applications in the past, you’ve most likely been optimizing for performance. Because users never complain when an application is too fast, right? But with a cloud-based model, where your organization is charged for the power it consumes within the hosting data center, suddenly there is a new imperative: to write code that minimizes power consumption.

So how do you write code that minimizes power usage? Surely there are too many variables to consider? If I just write efficient code, won’t that do? And how will I know whether I’m writing power-efficient code?

It is possible that future versions of Visual Studio will include options for checking your code for power usage. Until that time, following these recommendations should help minimize the running costs of your applications within a cloud-based environment.

  1. Reduce or eliminate accesses to the hard disk. Use buffering or batch up I/O requests (see the batched-writer sketch after this list).
  2. Do not use timers and polling to check for process completion. Each time the application polls, it wakes up the processor. Use event triggering to signal completion instead (see the event-waiting sketch after this list).
  3. Make intelligent use of multiple threads to reduce computation times, but do not create more threads than the application can use effectively.
  4. With multiple threads, ensure the work is balanced so that no single thread hogs all the resources (see the thread-pool sketch after this list).
  5. Monitor carefully for memory leaks and free up unused memory (see the tracemalloc sketch after this list).
  6. Use additional tools to identify and profile power usage.
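
Here’s a minimal sketch of recommendation 1 in Python: a batched writer that buffers records in memory and touches the disk once per batch instead of once per record. The file name, batch size, and record format are arbitrary choices for illustration.

# Recommendation 1: buffer records in memory and flush them in batches,
# so the disk is touched once per batch rather than once per record.
# "events.log" and the batch size of 1,000 are arbitrary examples.

class BatchedWriter:
    def __init__(self, path, batch_size=1000):
        self.path = path
        self.batch_size = batch_size
        self.buffer = []

    def write(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        # One open/append/close per batch instead of one per record.
        with open(self.path, "a") as f:
            f.write("\n".join(self.buffer) + "\n")
        self.buffer.clear()

writer = BatchedWriter("events.log")
for i in range(10_000):
    writer.write(f"event {i}")
writer.flush()    # don't forget the final, partial batch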
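
Recommendation 2 in miniature, using the standard threading.Event: the waiting thread sleeps until the worker signals completion, instead of waking up over and over to poll. The five-second sleep is just a stand-in for real work.

# Recommendation 2: wait on an event instead of polling in a loop.

import threading
import time

done = threading.Event()

def worker():
    time.sleep(5)        # stand-in for a long-running task
    done.set()           # signal completion once, when it actually happens

threading.Thread(target=worker).start()

# Polling would look like this and wake the processor repeatedly:
#     while not done.is_set():
#         time.sleep(0.01)
# Waiting on the event lets the thread sleep until it is signalled:
done.wait()
print("work finished")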
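
A sketch of recommendations 3 and 4: size the worker pool to the number of cores and hand each thread an evenly sized chunk of work. (In Python, CPU-bound work would normally go to processes rather than threads because of the GIL, but the sizing and balancing idea is the same.) The workload here is invented.

# Recommendations 3 and 4: match the pool size to the hardware and hand out
# evenly sized chunks so no single thread ends up doing all the work.
# Summing integers is a placeholder for real per-chunk processing.

import os
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    return sum(chunk)                      # placeholder workload

data = list(range(1_000_000))
workers = os.cpu_count() or 4              # one worker per core
chunk_size = len(data) // workers or 1
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(process, chunks))   # balanced, bounded parallelism

print(sum(results))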
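
And a sketch of recommendation 5, using the standard-library tracemalloc module to watch memory growth. The request handler and its ever-growing cache are a deliberately leaky example of the kind of thing you’re looking for.

# Recommendation 5: watch for memory growth and release what you no longer
# need. The cache that grows on every request is an invented leak.

import tracemalloc

tracemalloc.start()

cache = {}

def handle_request(i):
    cache[i] = "x" * 10_000    # every request leaves ~10 KB behind

for i in range(1_000):
    handle_request(i)

current, peak = tracemalloc.get_traced_memory()
print(f"before cleanup: {current / 1024:.0f} KiB (peak {peak / 1024:.0f} KiB)")

cache.clear()                  # free memory that is no longer needed
current, _ = tracemalloc.get_traced_memory()
print(f"after cleanup:  {current / 1024:.0f} KiB")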

For more ideas on how to reduce the power your code consumes, check out the following resources and tools:

Energy-Efficient Software Checklist, at http://software.intel.com/en-us/articles/energy-efficient-software-checklist/.

Creating Energy-Efficient Software, at http://software.intel.com/en-us/articles/creating-energy-efficient-software-part-1/.

Intel PowerInformer, at http://software.intel.com/en-us/articles/intel-powerinformer/.

Application Energy Toolkit, at http://software.intel.com/en-us/articles/application-energy-toolkit/.