Reducing Server Power Consumption
Limiting Energy Waste
Servers in data centres waste a substantial amount of energy because they are deployed and configured for peak capacity, performance and reliability, usually at the expense of efficiency. Such waste unnecessarily increases capital and operational expenditure, and can exhaust finite resources (particularly power and space), creating a situation where the organisation might outgrow its data centre(s).
However, there are several steps IT managers can (and should) take to improve overall server efficiency—sometimes dramatically—without adversely impacting capacity, performance or reliability. Here are the four steps that afford the highest return on investment.
Consolidate and Virtualize Servers
Poor server utilisation is one of the biggest sources of waste in most data centres. Consolidating and/or virtualizing as many servers as possible can increase overall utilisation from around 10 percent (typical of dedicated servers) to between 20 percent and 30 percent. The significant reductions in both capital and operational expenditure have motivated most organisations to virtualize at least some of their servers, and those with aggressive efforts have discovered another major benefit: the ability to reclaim a considerable amount of both rack space and stranded power.
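The arithmetic behind that reclaimed capacity is straightforward. A minimal sketch, using the 10 percent and 30 percent utilisation figures from the text (the fleet size of 120 servers is an illustrative assumption):

```python
def hosts_after_consolidation(dedicated_servers: int,
                              dedicated_util_pct: int = 10,
                              target_util_pct: int = 30) -> int:
    """Servers needed if the same total work runs at the target utilisation.

    Utilisation is expressed in whole percentage points so the
    ceiling division stays exact (no floating-point rounding).
    """
    total_work = dedicated_servers * dedicated_util_pct
    return -(-total_work // target_util_pct)  # ceiling division

# 120 dedicated servers idling at 10% fit on 40 hosts driven at 30%,
# freeing 80 rack slots and their stranded power allocation.
print(hosts_after_consolidation(120))  # -> 40
```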
Continuously Match Server Capacity to the Actual Load
Even the best-virtualized and most recently refreshed server configurations waste power during periods of low application demand. Total server power consumption can be reduced by up to 50 percent by matching online capacity (measured in cluster size) to actual load in real time. Runbooks can automate the steps involved in resizing clusters and deactivating or reactivating servers, whether on a predetermined schedule or dynamically in response to changing loads.
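A runbook's sizing step can be sketched as a simple policy function. This is a hedged illustration, not a prescribed implementation; the headroom figure, the minimum-server floor and the request-rate numbers are all assumptions:

```python
import math

def target_cluster_size(current_load_rps: float,
                        per_server_capacity_rps: float,
                        min_servers: int = 2,
                        headroom: float = 0.25) -> int:
    """Servers needed to carry the current load with spare headroom,
    never shrinking below a redundancy floor."""
    needed = current_load_rps * (1.0 + headroom) / per_server_capacity_rps
    return max(min_servers, math.ceil(needed))

# Overnight lull: 900 req/s against 500 req/s per server -> 3 servers online.
print(target_cluster_size(900, 500))   # -> 3
# Daytime peak: 4,000 req/s -> scale back out to 10 servers.
print(target_cluster_size(4000, 500))  # -> 10
```

A scheduler or monitoring hook would call this periodically and power servers up or down to close the gap, which is exactly the schedule-driven or load-driven automation the runbook encodes.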
The savings here are not trivial. Both the U.S. Department of Energy and Gartner have observed that the cost to power a typical server over its useful life can now exceed the original capital expenditure. Gartner also notes that it can cost over $50,000 annually to power a single rack of servers. So reducing the power consumed while servers are “idle” or clusters are lightly utilised holds the potential to deliver significant savings while continuing to satisfy application performance objectives. Furthermore, dynamic management can increase application capacity well beyond the original cluster allocation, supporting even unforeseeable spikes in demand and thereby dramatically increasing the reliability of the application.
Determine Actual Power Consumption under Various Loads
Another obvious way to reduce power consumption is to utilise more energy-efficient equipment. Most IT departments are, therefore, starting to improve energy efficiency when adding capacity and/or during routine technology refresh cycles. To help IT managers make more fully-informed decisions, Underwriters Laboratories created a new performance standard (UL2640) based on the PAR4 Efficiency Rating. PAR4 provides an accurate method for determining both absolute and normalised (over time) energy efficiency for both new and existing equipment.
According to UL, “With the introduction of the new standard, IT professionals for the first time can make valid comparisons between servers, better calculate total cost of server ownership, and make better decisions about the life and management of their servers.” To calculate server performance under the UL2640 standard, a series of standardised tests is performed, including a Power-On Spike Test, a Boot Cycle Test and a Benchmark. The Benchmark determines the server’s power consumption under various loads and measures transactions per second per watt, a particularly meaningful metric for comparing legacy servers with newer ones, and new models with one another, when making purchasing decisions. It also allows data centre managers to use actual idle and peak power consumption when allocating space and power.
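To make the comparison concrete, here is a minimal sketch of how a transactions-per-second-per-watt figure supports a refresh decision. The benchmark numbers are invented for illustration; UL2640's actual test procedure is considerably more involved:

```python
def tps_per_watt(transactions_per_second: float, watts: float) -> float:
    """Normalised efficiency: useful work delivered per watt consumed."""
    return transactions_per_second / watts

# Hypothetical benchmark results under the same load profile.
legacy  = tps_per_watt(12_000, 400)  # older server: 30.0 tps/W
refresh = tps_per_watt(30_000, 500)  # candidate replacement: 60.0 tps/W

# The candidate draws more absolute power but delivers twice the
# work per watt, which is what matters for the refresh decision.
print(f"legacy {legacy:.1f} tps/W vs refresh {refresh:.1f} tps/W")
```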
Load-balance by “Following the Moon”
Although many organisations now operate redundant data centres to satisfy business continuity needs, very few currently take full advantage of this powerful configuration. Having multiple, strategically located data centres enables loads to be shifted to wherever power is currently the most stable and the least expensive. Because power is invariably most abundant and least expensive at night (when the outside air temperature is also at its lowest), such a “follow the moon” strategy can result in considerable savings. Integrating virtualized and load-balanced applications across multiple data centres allows data centre managers to shift and shed capacity on demand, maximising application availability while minimising power and operating costs. The same functionality can also be used during demand response requests to benefit from utility incentives supporting the stability of the power grid, ultimately increasing the reliability of the applications.
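The routing decision at the heart of “follow the moon” can be sketched in a few lines. The site names, tariffs and the grid-stability flag (which a demand-response event would clear) are hypothetical assumptions:

```python
def pick_site(sites: dict) -> str:
    """Choose the cheapest data centre whose grid is currently stable."""
    stable = {name: s for name, s in sites.items() if s["grid_stable"]}
    return min(stable, key=lambda name: stable[name]["price_per_kwh"])

# Snapshot of three hypothetical sites at a given moment.
sites = {
    "frankfurt": {"price_per_kwh": 0.32, "grid_stable": True},
    "oregon":    {"price_per_kwh": 0.11, "grid_stable": True},   # night-time tariff
    "singapore": {"price_per_kwh": 0.18, "grid_stable": False},  # demand-response event
}

# Load shifts to the night-side site with cheap, stable power.
print(pick_site(sites))  # -> oregon
```

In practice a global load balancer would re-evaluate this choice continuously and drain traffic toward the winning site, shedding capacity at the others.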