MercoPress
Google uncloaks once-secret server

Saturday, April 11th 2009 - 18:38 UTC
Urs Hoelzle, Google's vice president of operations

Google is tight-lipped about its computing operations, but on Wednesday the company revealed for the first time the hardware at the core of its Internet might, at a conference about the increasingly prominent issue of data center efficiency.

Most companies buy servers from the likes of Dell, Hewlett-Packard, IBM, or Sun Microsystems. But Google, which has hundreds of thousands of servers and considers running them part of its core expertise, designs and builds its own. Ben Jai, who designed many of Google's servers, unveiled a modern Google server before the hungry eyes of a technically sophisticated audience.

Google's big surprise: each server has its own 12-volt battery to supply power if there's a problem with the main source of electricity. The company also revealed for the first time that since 2005, its data centers have been composed of standard shipping containers--each with 1,160 servers and a power consumption that can reach 250 kilowatts.

It may sound geeky, but a number of attendees--the kind of folks who run data centers packed with thousands of servers for a living--were surprised not only by Google's built-in battery approach, but by the fact that the company has kept it secret for years. Jai said in an interview that Google has been using the design since 2005 and now is in its sixth or seventh generation of design. “It was our Manhattan Project,” Jai said of the design.

Google has an obsessive focus on energy efficiency and now is sharing more of its experience with the world. With the recession pressuring operations budgets, environmental concerns waxing, and energy prices and constraints increasing, the time is ripe for Google to do more efficiency evangelism, said Urs Hoelzle, Google's vice president of operations.

“There wasn't much benefit in trying to preach if people weren't interested in it,” said Hoelzle, but now attitudes have changed.

The company also focuses on data center issues such as power distribution, cooling, and ensuring hot and cool air don't intermingle, said Chris Malone, who is involved in data center design and efficiency measurement. Google's data centers have now reached efficiency levels that the Environmental Protection Agency hopes will be attainable in 2011 using advanced technology.

“We've achieved this now by application of best practices and some innovations--nothing really inaccessible to the rest of the market,” Malone said.

Why built-in batteries?

Why is the battery approach significant? Money.

Typical data centers rely on large, centralized machines called uninterruptible power supplies (UPS)--essentially giant batteries that kick in when the main supply fails, bridging the gap until the generators spin up. Building the power supply into the server is cheaper and means costs are matched directly to the number of servers, Jai said. "This is much cheaper than huge centralized UPS," he said. "Therefore no wasted capacity."

Efficiency is another financial factor. Large UPSs reach only 92 to 95 percent efficiency, meaning that 5 to 8 percent of the power passing through them is squandered, a large amount at Google's scale. The server-mounted batteries do better, Jai said: "We were able to measure our actual usage to greater than 99.9 percent efficiency."
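The difference those percentages make can be sketched with some back-of-the-envelope arithmetic, using the article's 250-kilowatt container figure (the efficiency midpoint and wire-level details are illustrative assumptions, not Google's data):

```python
def conversion_loss_kw(load_kw, efficiency):
    """Power drawn beyond the useful load, given a conversion efficiency.

    input power = load / efficiency, so the loss is the difference.
    """
    return load_kw / efficiency - load_kw

# One container's worth of servers (250 kW, per the article):
centralized_ups = conversion_loss_kw(250, 0.93)     # mid-range of 92-95%
per_server_battery = conversion_loss_kw(250, 0.999) # ">99.9%" per Jai

print(round(centralized_ups, 1))     # ~18.8 kW lost in conversion
print(round(per_server_battery, 1))  # ~0.3 kW lost in conversion
```

Roughly 18 kilowatts of savings per container, before even counting the cooling needed to remove that waste heat.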

The Google server was 3.5 inches thick--2U, or 2 rack units, in data center parlance. It had two processors, two hard drives, and eight memory slots mounted on a motherboard built by Gigabyte. Google uses x86 processors from both AMD and Intel, Jai said, and Google uses the battery design on its network equipment, too.

Efficiency is important not just because improving it cuts power consumption costs, but also because inefficiencies typically produce waste heat that requires yet more expense in cooling.

Costs add up

Google operates servers at a tremendous scale, and these costs add up quickly.

Jai has borne a lot of the burden himself. He was the only electrical engineer on the server design job from 2003 to 2005, he said. “I worked 14-hour days for two and a half years,” he said, before more employees were hired to share the work.

Google has patents on the built-in battery design, “but I think we'd be willing to license them to vendors,” Hoelzle said.

Another illustration of Google's obsession with efficiency comes through power supply design. Power supplies convert conventional AC (alternating current--what you get from a wall socket) electricity into DC (direct current--what you get from a battery) electricity, and typical power supplies provide computers with both 5-volt and 12-volt DC power. Google's designs supply only 12-volt power, with the necessary conversions taking place on the motherboard.

That adds $1 or $2 to the cost of the motherboard, but it's worth it not just because the power supply is cheaper, but because the power supply can be run closer to its peak capacity, which means it runs much more efficiently. Google even pays attention to the greater efficiency of transmitting power over copper wires at 12 volts compared to 5 volts.
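The copper-wire point follows from basic circuit physics: for the same delivered power, a lower voltage means a higher current, and resistive loss grows with the square of the current. A minimal sketch (the 300 W load and 0.01-ohm path are hypothetical numbers for illustration, not figures from the article):

```python
def wire_loss_w(power_w, volts, resistance_ohms):
    """Resistive loss in a wire delivering power_w at volts: I = P/V, loss = I^2 * R."""
    current = power_w / volts
    return current ** 2 * resistance_ohms

loss_5v = wire_loss_w(300, 5, 0.01)    # 36.0 W lost
loss_12v = wire_loss_w(300, 12, 0.01)  # 6.25 W lost

# The ratio is (12/5)^2, regardless of the assumed load or resistance:
print(loss_5v / loss_12v)  # 5.76
```

Whatever the actual wiring, moving power at 12 volts instead of 5 cuts resistive losses by a factor of (12/5)² ≈ 5.8.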

Google also revealed new performance results for data center energy efficiency measured by a standard called power usage effectiveness. PUE, developed by a consortium called the Green Grid, measures how much power goes directly to computing compared to ancillary services such as lighting and cooling. A perfect score of 1 means no power goes to the extra costs; 1.5 means that ancillary services consume half the power devoted to computing.
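The PUE definition above is just a ratio, which can be stated as a one-line formula (the sample kilowatt figures below are invented for illustration; only the resulting ratios come from the article):

```python
def pue(total_facility_kw, computing_kw):
    """Power usage effectiveness: total facility power / power used for computing."""
    return total_facility_kw / computing_kw

# A facility drawing 1,500 kW to power 1,000 kW of computing:
print(round(pue(1500, 1000), 2))  # 1.5 -> overhead equals half the computing load

# Matching Google's reported Q1 2009 fleet-wide figure:
print(round(pue(1190, 1000), 2))  # 1.19
```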

Google's PUE scores are enviably low, but the company is working to lower them further. In the third quarter of 2008, Google's PUE was 1.21, but it dropped to 1.20 for the fourth quarter and to 1.19 for the first quarter of 2009 through March 15, Malone said.

Older Google facilities generally have higher PUEs, he said; the best has a score of 1.12. When the weather gets warmer, Google notices that it's harder to keep servers cool.

Shipping containers

Most people buy computers one at a time, but Google thinks on a very different scale. Jimmy Clidaras revealed that the core of the company's data centers are composed of standard 1AAA shipping containers packed with 1,160 servers each, with many containers in each data center.

Modular data centers are not unique to Google; Sun Microsystems and Rackable Systems both sell them. But Google started using them in 2005.

Google's first experiments had some rough patches, though, Clidaras said--for example, when they found the first crane they used wasn't big enough to actually lift one. Overall, Google's choices have been driven by a broad analysis of cost that encompasses software, hardware, and facilities. “Early on, there was an emphasis on the dollar per (search) query,” Hoelzle said. “We were forced to focus. Revenue per query is very low.”

Mainstream servers with x86 processors were the only option, he added. “Ten years ago...it was clear the only way to make (search) work as a free product was to run on relatively cheap hardware. You can't run it on a mainframe. The margins just don't work out,” he said.

Operating at Google's scale has its challenges, but it also has its silver linings. For example, a given investment on research can be applied to a larger amount of infrastructure, yielding return faster, Hoelzle said.

By Stephen Shankland for CNET News
