[CSG Spring 2006] Key lessons from schools that have developed new data centers


Jim Pepin – USC is moving into an old garment factory in downtown LA. They’re building on the fourth (top) floor so that there’s no possibility of the data center being inundated by water from floors above. They’re dedicating about 5,000 sq ft to administrative computing (payroll and personnel). There will be two data prep rooms for building stuff out – one 800 sq ft, one 300 sq ft. This will all be on 24 inch raised floor. There will be about 4,500 sq ft for high performance computing, with seven 50-ton air conditioners and 1.2 megawatts of power and cooling.
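As a rough back-of-envelope check (my arithmetic and conversion factor, not from the talk), seven 50-ton units come to roughly 1.2 MW of heat rejection, which lines up with the stated power figure:

```python
# Back-of-envelope check of the USC HPC room figures (conversion factor assumed).
TON_TO_KW = 3.517                 # 1 ton of refrigeration ~= 3.517 kW of heat rejection

cooling_kw = 7 * 50 * TON_TO_KW   # seven 50-ton units -> ~1231 kW
power_kw = 1200                   # stated 1.2 MW
hpc_area_sqft = 4500

print(f"Cooling capacity: {cooling_kw:.0f} kW")                            # ~1231 kW
print(f"Power density:    {power_kw * 1000 / hpc_area_sqft:.0f} W/sq ft")  # ~267 W/sq ft
```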

Jim notes that people under 40 aren’t used to having water under the floor in a data center, unlike guys that grew up with mainframes, and that’s creating some debate in the data center design world.

Shel from Berkeley is talking about their new data center. It took them 7 years to justify the need, 2 years to get budget approval, 18 months to build the base building, 6 months to build the data center, and 1 weekend to move in. The facility is on the periphery of the campus, and researchers indicated that not being able to come touch their machines would be an issue – but in practice they never actually come. It would have been cheaper to build further away.

They built redundant power (2500 KVA).

They’re on the third floor of a building, which required a unique seismic design.

Ten percent of the cost was moving out of the old building. They planned for 75 hours of downtime; it ended up being 62 hours for about 400 servers.

The data center and move cost $11.7 million; the base building was $23 million, and electrical service was $4 million.

Shel got signing authority for any data center space on campus. That allows them to move more folks into the central data center. There will still be some departmental data centers, but fewer of them.

They went to 18 inch raised floors in most spaces rather than 24 inch. All the cabling is in ladder racks above, tied down.

Each group of four cabinets has 96 fiber and 96 copper connections. There’s conversation on the back channel about whether that’s enough network capacity in an era of high density blade servers. Walter notes that “for an IBM 14 blade chassis, we’d want to have something like 37 copper ports per chassis: We’d use switches for uplinks (8 ports) but for iSCSI SAN, we’d want each blade to have a direct connection to the switching backplane (28 ports) plus one management port.”
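A quick sketch of the arithmetic behind Walter’s comment, using only the per-chassis figures he quotes (everything else here is my assumption):

```python
# Copper port budget per 14-blade chassis, per Walter's figures.
uplink_ports = 8      # switch uplinks out of the chassis
iscsi_ports = 28      # direct blade connections to the switching backplane for iSCSI
mgmt_ports = 1        # chassis management
ports_per_chassis = uplink_ports + iscsi_ports + mgmt_ports   # 37

copper_per_four_cabinets = 96
# Number of such chassis the provisioned copper in a four-cabinet group could support:
print(copper_per_four_cabinets // ports_per_chassis)          # 2 -- tight for blade-dense racks
```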

Lessons – Don’t rely on your campus design team or architects. 65-80 watts per sq ft is plenty – if you have a separate ultra-high-density room. Design for expansion or modularity – you will need it. A full load bank for testing is a good thing. Standardize, standardize, standardize. Design for lights out. Use the move as an opportunity to plan change.

Michigan – They’re building a new 10,000 sq ft data center which will also house Internet2. A big chunk of the electrical service is outside the building. They’re building 24 fiber connections to each half-row of racks. Clusters will have switching built into the racks for their own distribution.

The east half of the room is “server class” space – low density, at 160 watts per sq ft – using Liebert XD equipment. The high-density part of the room will use Liebert XDO, planning for an average of 240 watts per sq ft across the entire space with the ability to go to 300 in small areas.

2 MW for equipment, 2 MW for cooling, lighting, etc.
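A rough consistency check (my arithmetic, not from the presentation) of how those density figures relate to the 2 MW equipment budget:

```python
# Rough load estimates for the Michigan room, applying each density to the full floor
# (a simplification -- the real layout mixes low- and high-density zones).
area_sqft = 10_000

for label, w_per_sqft in [("low density", 160), ("planned average", 240), ("small-area peak", 300)]:
    mw = w_per_sqft * area_sqft / 1e6
    print(f"{label:>15}: {mw:.1f} MW over the full floor")
# 160 W/sq ft over the full floor is 1.6 MW and 240 W/sq ft is 2.4 MW,
# so the 2 MW equipment budget falls between those two figures.
```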

There’s some discussion about whether people are providing UPS to research clusters or not.

Michigan is using flywheels to provide complete equipment power until the generators can come up – the flywheel can carry the load for about 25 seconds, and the generators are guaranteed to be up in 12-13 seconds. They decided not to go with batteries because of leaking and exploding concerns.
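A minimal sketch of the timing margin those numbers imply (the figures are from the talk; the check itself is mine):

```python
# Flywheel ride-through vs. worst-case generator start.
flywheel_ride_through_s = 25
generator_start_s = (12, 13)      # guaranteed start window

worst_case_margin_s = flywheel_ride_through_s - max(generator_start_s)
assert worst_case_margin_s > 0, "flywheel would run out before the generators come up"
print(f"Worst-case margin: {worst_case_margin_s} s of slack")   # 12 s
```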

There will be ~ 400 fibers on three paths between campus and the facility.

It took three years of discussion with strategic deans to get them to agree that they would stop building local data centers and move into this one. They’ll be subsidizing about half of the operating costs centrally. Electricity will be paid for by customers – priced to drive behavior. Shel notes that at Berkeley they charge $8 per rack unit per month, all inclusive. Shel is willing to provide the cost model if people are interested in seeing what’s included.
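For a sense of scale, here’s what that $8 per rack unit per month works out to for a full rack (the 42U rack size is my assumption, not from the talk):

```python
# Illustrative cost at Berkeley's quoted all-inclusive rate (rack size assumed).
rate_per_ru_month = 8     # USD per rack unit per month
rack_units = 42           # assumed full-height rack

monthly = rate_per_ru_month * rack_units
print(f"Full rack: ${monthly}/month, ${monthly * 12}/year")   # $336/month, $4032/year
```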

Harvard – They were faced with an eighteen-month eviction notice. They were looking for 4-5k sq ft; they ended up with 5k sq ft of computer floor, plus a staging area. The facility was designed for central administrative computing, not research computing. Now FAS is bringing in lots of research computing.

They built the room at 55 watts per sq ft. The back channel thinks that sounds awfully low. They have dual 450 kVA UPS units, and they’re adding a third.
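A quick check of how 55 watts per sq ft compares with the installed UPS capacity (the power factor here is my assumption):

```python
# Harvard room: design load vs. UPS capacity (power factor assumed, not from the talk).
area_sqft = 5000
design_w_per_sqft = 55
design_load_kw = area_sqft * design_w_per_sqft / 1000   # 275 kW

ups_kva = 450
assumed_power_factor = 0.9
single_ups_kw = ups_kva * assumed_power_factor          # ~405 kW per module

print(f"Design load at 55 W/sq ft: {design_load_kw:.0f} kW")
print(f"One 450 kVA UPS module:    {single_ups_kw:.0f} kW")   # a single module already covers it
```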
