Crafting data centres: the critical infrastructure behind the cloud

This article appeared in the September 2017 issue of CRN magazine.

If you think it’s easy to build a data centre, think again. The process takes years, and is full of tedious details that have to be done right or the ramifications can be huge.

“Typically, we expect a 24-month lead time from the beginning of the planning phase until the commissioning for a large facility,” says Andrew Kirker, general manager of data centres at Schneider Electric Pacific. “The build time alone is typically 12 to 15 months, unless some form of prefabrication is used.”

A fully specified data centre needs to have land, electrical capacity, communications links to major telcos, generators, fire detection and suppression, building security, UPS, air conditioning, mechanical plant, and then all the standard facilities for a building, such as places to sit, bathrooms, break rooms, etcetera. And that’s not even the full list!

The location of facilities also has a large bearing on what kind of data centre is possible. “The location of data centres used to be heavily CBD-centric,” says Kirker. Veterans of the industry will recall the restrictions imposed on new infrastructure some years ago due to a lack of power capacity at certain substations. “These days, we don’t visit and hug the boxes as much, so location is less critical,” Kirker says.

However, the ecosystem of components required for data centres does provide restrictions on what is possible. For example, network interconnection is a major requirement for data centres, hence the concentration of facilities close to major subsea cable landing points in Sydney.

“We had a client building in three locations, and in one they hadn’t secured land yet,” says Mark Deguara, data centre solutions director for Vertiv ANZ. “The thermal technology that they ended up choosing became a function of the land they could get.”

Modular facilities

Just as in other technology areas, such as the automotive industry, advances at the high end gradually make their way into more affordable products. Air bags, anti-lock brakes, CD players and Bluetooth were once premium features found only in a manufacturer's most exclusive models; all have since become ubiquitous.

For example, Vertiv has created an integrated, self-contained 19-inch rack unit called the SmartCabinet, which provides UPS-protected power, sealed-unit cooling, and centralised monitoring. It’s eerily similar to the idea of converged infrastructure like the Dell EMC VxBlock or NetApp FlexPod, but for the rack itself.

Canberra Data Centres chief executive Greg Boorer says CDC has taken a more modular approach to data centre design, architecture and construction. “Where it could take up to 24 months to build a new data centre from scratch, we’ve got it down to about 10 months. And we’re talking 18–20 megawatt data centres.”

By taking a modular approach to the internal construction of the data centre, CDC has been able to minimise the amount of capital required to pre-build the next stage of its data centres. It can also offer heavily isolated multi-tenancy inside the data centre footprint. This has proved vital for CDC to support classified security workloads for government, and to support Microsoft’s new Canberra-based Azure regions. 

“Even someone as large as Microsoft will have more than just dedicated power,” Boorer says. “They’ll have lots of instances of dedicated footprints so that even if something was to go wrong in one small part of their deployment, it wouldn’t impact power or cooling on any part of the rest of their deployment.”

Redundancy

Redundancy has been the watchword for Micron21. The Australian co-location provider grew out of a family printing business in Melbourne and in March became the first Australian company to achieve a Tier IV rating from the Uptime Institute.

Achieving 100 percent redundancy in its high-density data centre meant backup systems everywhere, all supplied by Vertiv. Micron21 relies on a free-cooling chilled water solution and has deployed a separate refrigerant solution, with the two systems designed to run independently. The chilled water system itself is designed with two independent loops to offer redundancy within the system. 

This is far from the only example of redundancy atop redundancy, managing director James Braunegg explains. “We have done it across power, cooling, network – I’m even trying to do it across my staff!” he laughs. “We have three switch rooms and each room is fully redundant in and of itself – two UPS, two or three switchboards, two air-con units using different forms of technology. Anything in that room can fail.”

Braunegg adds that Micron21 “over-engineered the power redundancy” over and above Tier IV certification to build a truly mission-critical environment.

“On the network side, we have done the same. Within our core network, we have two racks but half our networking equipment is on one rack and half is in another rack. In each rack we have two core switches, two routers, two firewalls, two DDoS appliances, two load balancers, two of everything – everything is redundant within that rack but the rack is still a single point of failure so we have another rack in another part of the data centre.”
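To see why "two of everything" pays off, consider the standard parallel-availability calculation for redundant components: the system only fails if every member of the pair is down at once. The sketch below uses hypothetical availability figures purely for illustration, not Micron21's actual numbers, and assumes failures are independent.

# Rough illustration of parallel redundancy, assuming independent failures.
# Availability figures are hypothetical, not Micron21's actual numbers.

def parallel_availability(*availabilities: float) -> float:
    """Availability of a set of redundant components: the system is up
    unless every member is down at the same time."""
    prob_all_down = 1.0
    for a in availabilities:
        prob_all_down *= (1.0 - a)
    return 1.0 - prob_all_down

single_switch = 0.999                                      # ~8.8 hours downtime per year
paired_switches = parallel_availability(single_switch, single_switch)

# Duplicating the whole rack removes the rack itself as a single point of failure.
one_rack = 0.995
two_racks = parallel_availability(one_rack, one_rack)

print(f"single switch:   {single_switch:.6f}")
print(f"paired switches: {paired_switches:.6f}")
print(f"one rack:        {one_rack:.6f}")
print(f"two racks:       {two_racks:.6f}")

With these assumed figures, pairing a 99.9 percent-available switch lifts the pair to 99.9999 percent, and duplicating the entire rack applies the same logic one level up.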

Micron21 has just invested in a massive haul of top-flight Cisco tin, supplied by Westcon-Comstor, to offer blazing-fast uplink capacity to each rack. Previously each rack had two 10-gigabit links; now each will have twelve 100-gigabit connections, for 1.2 terabits per second of uplink capacity per rack.
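A quick back-of-the-envelope check of those per-rack numbers:

# Per-rack uplink capacity before and after the upgrade described above.
old_uplink_gbps = 2 * 10       # two 10-gigabit links
new_uplink_gbps = 12 * 100     # twelve 100-gigabit connections

print(f"old: {old_uplink_gbps} Gbps")
print(f"new: {new_uplink_gbps} Gbps = {new_uplink_gbps / 1000} Tbps")
print(f"increase: {new_uplink_gbps // old_uplink_gbps}x")   # a 60-fold jump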

Energy efficiency

Data centre builders are embracing efficient technologies, and manufacturers are working hard to create efficient components for them to use. “Just in the last five years on the thermal side, there’s been a huge improvement in the efficiency of energy,” says Vertiv’s Mark Deguara.

“There is a greater focus on energy efficiency,” says Schneider Electric’s Kirker. The desire for greater efficiency appears to be more economic than ideological, but measures such as NABERS ratings and power usage effectiveness (PUE) figures are used by customers to compare options when they’re shopping for a provider.
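PUE is simply the total energy drawn by the facility divided by the energy delivered to the IT equipment, so a PUE of 1.0 would mean every watt goes to the gear itself. A minimal sketch, with hypothetical figures for illustration only:

# Power usage effectiveness: total facility power divided by IT equipment power.
# The kilowatt figures below are hypothetical, for illustration only.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# A facility drawing 1,500 kW to power 1,000 kW of IT load has a PUE of 1.5;
# cutting cooling and distribution overhead to 200 kW brings it down to 1.2.
print(pue(1500, 1000))  # 1.5
print(pue(1200, 1000))  # 1.2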

As competition increases and margins get squeezed, the incentives to become more efficient also increase – if only to protect the profits from selling data centre space. There is a social and marketing benefit in being able to demonstrate a greener approach, which helps to reinforce the purely economic argument in a kind of virtuous cycle.

Customers are also becoming more environmentally conscious and want a wider range of efficient options, so the market is moving toward greater efficiency and the economic and environmental benefits that flow from it.

Invisible infrastructure

With all of that critical infrastructure invisible (until it breaks), people don’t really understand how it works, and tend to assume it’s easy to do. Unfortunately, that isn’t the case.

“People think that it’s easy to build and run a co-location facility,” says Kirker, “that it’s just a big hotel for racks. Both the build and run are highly complex, and the stakes are very high for failure.”

“People just put in equipment like a UPS and think they’re secure,” says Vertiv’s Deguara. But increasing the compute workload will increase your power consumption, which places extra load on the UPS and generates more heat. That heat needs to be cooled, which also takes more power. “It’s actually an ecosystem, and all the pieces have to work together,” Deguara notes.
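The cascade Deguara describes can be sketched roughly in a few lines: virtually all of the power fed through the UPS to the IT load ends up as heat, and rejecting that heat costs additional power at the cooling plant. The coefficient of performance (COP), UPS efficiency and load figures below are assumptions for illustration, not values from the article.

# Rough sketch of the power/heat/cooling ecosystem: more compute load means more
# UPS load, more heat to reject, and more cooling power. COP, UPS efficiency and
# the load figures are assumptions for illustration only.

def facility_draw_kw(it_load_kw: float, cooling_cop: float = 3.0,
                     ups_efficiency: float = 0.95) -> float:
    """Estimate total facility draw implied by a given IT load."""
    ups_input_kw = it_load_kw / ups_efficiency      # UPS losses add to the draw
    heat_to_reject_kw = ups_input_kw                # essentially all of it becomes heat
    cooling_kw = heat_to_reject_kw / cooling_cop    # cooling plant power for that heat
    return ups_input_kw + cooling_kw

for it_kw in (500, 750, 1000):
    print(f"IT load {it_kw} kW -> facility draw ~{facility_draw_kw(it_kw):.0f} kW")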

Next time you need to find somewhere to store a rack of gear, or even when you pick a cloud region from a pull-down box, spare a thought for all the hard work that went into making sure the electrons keep flowing to the hardware that hosts your software.  
