In the first article, it was determined that the solar facility, which consumed 7 acres of land to host enough photovoltaic arrays to generate 1 megawatt (MW), would provide only some 16% of the power required by the Data Centre! Now, conventional wisdom would dictate that PV panels be just one component of the system. A common approach is to add wind turbines to augment it. Where I live in Southern Alberta, when it isn't sunny it is often VERY windy, and vice versa!
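A quick back-of-the-envelope check of those solar numbers. The Data Centre's total demand isn't restated in this article, so the figure below is *inferred* from the 16% share, and the "equally sized wind farm" is an assumption for illustration, not a measured result:

```python
# Solar figures from the first article
pv_output_mw = 1.0   # 7 acres of PV arrays generate ~1 MW
pv_share = 0.16      # fraction of the Data Centre's demand that covers

# Implied total demand (inferred, not stated in the article)
total_demand_mw = pv_output_mw / pv_share
print(f"Implied Data Centre demand: {total_demand_mw:.2f} MW")  # 6.25 MW

# If wind turbines matched the PV output (an assumption), the
# renewable share would roughly double:
combined_share = 2 * pv_share
print(f"Combined PV + wind share: {combined_share:.0%}")  # 32%
```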
But there are other sources as well. Hydro generation is becoming common, with solutions ranging from simple hydroelectric dams, like Google's facility in Oregon, through to more "experimental" approaches that generate energy from the action of waves or tides. These last two really aren't suitable for an individual Data Centre, however.
So, presuming we could double the renewable output from 16% to 32% of the power required, looking at ways to conserve or re-use the consumed energy seems prudent. From a conservation perspective, the major processor manufacturers (Intel, AMD, etc.) are very focused on lowering the number of watts consumed per processor core. Fewer watts per core means lower power consumption overall, and since virtually all of that consumed power is ultimately converted to heat, it also means less heat that must be dealt with.
The other paradigm that must be examined is the computing model itself. Since the dawn of the computing age, we have seen a constant oscillation between centralized computing (the mainframe) and distributed computing (what we call "open systems"). The underlying principle is that a number of open systems could equal the computing power of a single mainframe while costing less to acquire. This can translate into more parallel computing, whereby many processors work on the same task at the same time and therefore complete it faster. Or it can address high availability, whereby a single processing core going offline doesn't stop the processing altogether.
During the last two years I worked for Red Hat, I travelled the world talking to customers about the costs of this paradigm. A client in the American Midwest (I can't name them under NDA, sorry) had performed a study with some startling results:
- 1 mainframe core capable of running Enterprise Linux could provide the same processing capability as ~250 distributed cores
- ~250 rack-mounted computers would occupy some 17 42U racks, taking up about 2,000 square feet once you allow room to get at the backs and sides
- 1 mainframe would occupy ~24 square feet
- That's the difference between dedicating an entire house to your computing needs and dedicating just the bathroom!
- Cooling the space required for the 17 racks of computing equipment would take 17 times the cooling required for the mainframe
- From a staffing point of view, the racks of computers would require 4-5 systems administrators to manage, while the mainframe would require only 1, although the client did mention that there would always be two staff on the systems, in case the first one got hit by a bus!
- Finally, the client tallied up the costs and estimated they would SAVE some $1.5M USD by staying on the mainframe platform rather than moving their workloads to distributed platforms.
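The client's figures above can be tallied in a few lines. Every number here comes straight from the bullet list; the floor-space ratio is the only derived value, and the dictionary names are mine, not the client's:

```python
# Figures from the client's study (as reported in the bullets above)
distributed = {"cores": 250, "racks": 17, "floor_sqft": 2000, "admins": 5}
mainframe   = {"cores": 1,   "racks": 1,  "floor_sqft": 24,   "admins": 1}

# Floor space: the "house vs bathroom" comparison, quantified
floor_ratio = distributed["floor_sqft"] / mainframe["floor_sqft"]
print(f"Floor-space ratio: ~{floor_ratio:.0f}x")  # ~83x more space

# Cooling scales with the racks, per the client's estimate
cooling_ratio = distributed["racks"] / mainframe["racks"]
print(f"Cooling ratio: {cooling_ratio:.0f}x")  # 17x

estimated_savings_usd = 1_500_000  # by staying on the mainframe
```

Even before the dollar figure, the 83x floor-space difference alone drives most of the cooling and real-estate cost.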
As you can see, before we even examine ways to reduce the power consumed and what to do with the heat by-product, we have to consider ways to AVOID creating the heat in the first place...
The three R's are: Reduce, Re-Use, Recycle. The fourth should be renew...
The opinions expressed are purely those of the author. Opinions are like noses - everyone has one, and they are entitled to it !