WO2011051655A2 - A data centre - Google Patents

A data centre

Info

Publication number
WO2011051655A2
WO2011051655A2 (PCT/GB2010/001966)
Authority
WO
WIPO (PCT)
Prior art keywords
data centre
hall
centre hall
sections
section
Prior art date
Application number
PCT/GB2010/001966
Other languages
French (fr)
Other versions
WO2011051655A3 (en)
Inventor
Guy Ruddock
Neil Frederick Ashdown
Original Assignee
Colt Technology Services Group Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Colt Technology Services Group Limited filed Critical Colt Technology Services Group Limited
Publication of WO2011051655A2 publication Critical patent/WO2011051655A2/en
Publication of WO2011051655A3 publication Critical patent/WO2011051655A3/en

Classifications

    • E FIXED CONSTRUCTIONS
    • E04 BUILDING
    • E04H BUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H5/00 Buildings or groups of buildings for industrial or agricultural purposes
    • E04H5/02 Buildings or groups of buildings for industrial purposes, e.g. for power-plants or factories
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00 Constructional details common to different types of electric apparatus
    • H05K7/14 Mounting supporting structure in casing or on frame or rack
    • H05K7/1485 Servers; Data center rooms, e.g. 19-inch computer racks
    • H05K7/1497 Rooms for data centers; Shipping containers therefor
    • E FIXED CONSTRUCTIONS
    • E04 BUILDING
    • E04H BUILDINGS OR LIKE STRUCTURES FOR PARTICULAR PURPOSES; SWIMMING OR SPLASH BATHS OR POOLS; MASTS; FENCING; TENTS OR CANOPIES, IN GENERAL
    • E04H5/00 Buildings or groups of buildings for industrial or agricultural purposes
    • E04H2005/005 Buildings for data processing centers

Definitions

  • the present invention relates to a data centre, a data centre hall, a section for forming a data centre hall, a kit of parts and a method of building a data centre hall.
  • the present invention also relates to a data centre hall climate control system, and a method of controlling the climate of a data centre hall.
  • the present invention also relates to a data centre hall floor, and a method of installing a data centre hall floor.
  • the present invention also relates to a data centre electrical power supply, a method of supplying electrical power to a data centre hall and a method of making connections in a data centre.
  • Data centres are large, specially designed facilities, which include one or more halls housing computer systems, typically servers arranged in racks.
  • Data centres include other equipment required to operate and protect the computer systems, such as communications systems (for example, Ethernet systems), environmental controls (for example, air cooling and fire suppression), power supplies and security.
  • There is redundancy in the equipment in a data centre so that the data centre can continue to operate even if some of the equipment fails.
  • Data centres (sometimes called server farms) are used for many purposes including running core business applications for large businesses or providing backups for data away from the site of a business. Data centres are critical for the successful operation of many businesses and data stored on them is often confidential. Because of this, data centres usually have high levels of security and are very discreetly located.
  • the type of data centre described herein is intended to meet at least tier 3 of the TIA-942 standard (which describes the reliability requirements for data centres in a tiered structure), that is, so-called "concurrently maintainable" with at least 99.982% availability.
  • Known data centre halls are typically located in warehouse-type buildings and they are designed and constructed specifically to fit the building available.
  • Fitting out a building as a data centre hall takes a long time (typically 18 months for a 500m² data centre hall).
  • the wiring alone can take a highly specialised workforce several months to install.
  • a lot of space is required around the building during construction to house the facilities, such as materials, waste and office space, required to construct the data centre hall.
  • Optimising space utilisation within the data centre hall for every bespoke construction is difficult because buildings are not generally specifically designed to house a data centre hall and space can be wasted.
  • Optimising cooling arrangements for every bespoke construction is also time consuming and expensive because they have to be optimised for each installation. These factors add to building and running costs or force a sacrifice in performance.
  • Portable data centres have been installed in shipping containers by a number of organisations, such as the Sun Microsystems, Inc. arrangement described in international patent application No. WO-A-2008/033921, the Google, Inc. arrangement described in international patent application No. WO 2007/139560, and the Rackable Systems, Inc. system described in international patent application No. WO 2008/039773.
  • the portable data centre 10 of Figure 1 uses a standardised 20 feet (approximately 6.5m) long by 8 feet (approximately 2.5m) high by 8 feet (approximately 2.5m) wide container 12 more usually used to transport goods by ship or lorry.
  • the container includes two sidewalls 14 of corrugated steel on opposing sides that are joined on their upper edges to a steel roof 16 and on their bottom edges to a steel base 18.
  • the ends 22 and 24 of the container are provided with doors 26 (in Figure 1, shown open at one end 22, and closed and not visible at the other end 24).
  • the sidewalls, base, roof and doors of this sort of shipping container provide the structural strength.
  • the structural integrity of the container is provided by these components together. If one of these components is not in place, and intact, then the container will not be able to withstand heavy loads, such as that of the computer equipment inside it, when it is moved.
  • the interior of the shipping container is provided with a row of racks of computers 27 along each of the sidewalls 14. Typically, 8 to 10 racks are provided. Heat exchanger/cooling fan towers 28 are also provided along one sidewall. Both sidewalls of the container (only one side is shown in Figure 1) have a power terminal 34, a data connection port 36 and a chilled water connection port 38.
  • This arrangement provides a complete data centre in a single container that simply needs to have the power terminal 34 connected to a suitable power supply, the data connection port 36 connected to a communications link, and a chilled water supply to be connected to the chilled water connection port 38 for it to operate.
  • Cooling, networking and power facilities are provided from a centralised source, along with security arrangements, for the whole facility.
  • Data centres include climate control arrangements to maintain the computer equipment inside the data halls at acceptable temperature and humidity levels. It is necessary to control the temperature of computer equipment as it operates most efficiently at a particular temperature. It is necessary to control the humidity of the environment around the computer equipment so that the relative humidity is not less than a particular level, typically around 20%. This is because at low humidity static electricity builds up, which, when discharged, can damage solid state computer equipment.
  • Known data centres typically have a climate control arrangement with a Power Unit Efficiency of between 1.6 and 2.1.
  • A typical known climate control system 40 for a data centre hall is illustrated in Figure 2. It has three different ways of controlling the climate of the data centre hall depending on the outside temperature and humidity.
  • When the outside temperature is low and humidity is high, it uses fresh air cooling. That is to say, it uses air from outside the data centre hall to cool the interior of the data hall.
  • When the outside temperature is high and/or the outside relative humidity is too low, air in the data centre hall is recirculated and mechanical cooling is used in the form of a compressor to reduce the temperature of the air and/or to increase its humidity.
  • If the outside temperature is very low, then the data centre hall is cooled by a mixture of fresh air from outside and recirculated warmed air heated in the data centre hall.
  • In order to operate in these three regimes, the climate control system 40 is arranged as follows. It comprises an inlet duct 42 having an inlet port 44 on the outside of the data centre hall, for fresh air to enter the data centre hall from outside, and an outlet port 46 on the inside of the data centre hall.
  • the climate control system also includes an outlet duct 48 with an outlet port 50 on the outside of the data centre hall and an outlet port 52 on the inside of the data centre hall for exhausting air from inside the data hall to outside the data hall.
  • the inlet duct 42 and the outlet duct 48 are joined between their respective inlet and outlet ports 44, 50 by a return duct 54.
  • a damper or variable valve (an inlet damper 56) is located between the inlet port 44 of the inlet duct 42 and where the return duct 54 joins with the inlet duct to control air flow into the climate control system 40.
  • a damper or variable valve (an outlet damper 58) is also located between the outlet port 50 of the outlet duct 48 and where the return duct 54 joins with the outlet duct to control air leaving the climate control system.
  • Another damper or variable valve (a return damper 60) is located in the return duct 54 to control recirculation of the air in the climate control system.
  • a mechanical cooling arrangement 62 in the form of a chiller or air conditioner and a supply fan 64 are located in the inlet duct 42 between where the return duct 54 joins with the inlet duct 42 and the outlet port 46 of the inlet duct to suck through and cool and/or humidify air entering the data centre hall.
  • Various sensors are provided outside and inside the data centre hall (not shown) for measuring temperature and relative humidity in various locations.
  • the sensors are in communication connection with a controller and the controller is also in communication connection with the dampers, the chiller, and the supply fan in order to control them in the three regimes described above.
  • As measured by the sensors, when the outside temperature is low (between 17°C and 27°C) and relative humidity is more than 20%, the controller controls the climate control system 40 as follows to use fresh air cooling (direct free cooling).
  • the inlet 56 and outlet 58 dampers are open (thus, the inlet duct 42 and outlet duct 48 are open) and the return damper 60 is closed (thus, the return duct 54 is closed). This is shown by the dashed lines in Figure 2.
  • the supply fan 64 and the extractor fan 66 are on and the chiller 62 is off. As a result, cool air is moved from outside the data hall through the inlet duct 42 into the data hall where it cools the computer equipment and the air is itself heated.
  • the heated air is extracted from the data hall through the outlet port 52, and exhausted from the outlet port 50 of the outlet duct 48. In this way, cool fresh air is moved from outside the data centre hall into the data hall, and hot air is moved from inside the data centre hall to outside. No energy is used by the chiller.
  • When the outside temperature is high (above 27°C) and/or the relative humidity is less than 20%, the controller controls the climate control system 40 as follows to use a chiller to cool and humidify recirculating air.
  • the inlet 56 and outlet 58 dampers are closed and the return damper 60 is open (shown by the solid lines in Figure 2); the supply fan 64 is on; and the chiller 62 is on.
  • As the inlet and outlet dampers are closed and the return damper is open, air is recirculated from inside the data centre hall and no fresh air enters the data centre hall from outside. Hot air is sucked from the data centre hall through the outlet duct 48 and through the return duct 54 and into the inlet duct 42.
  • the air is then sucked through the chiller (which cools and/or humidifies the air) by the supply fan and the cool air blown back into the interior of the data centre hall. Considerable energy is used by the chiller.
  • When the outside temperature is very low (below 17°C) and the relative humidity is more than 20%, the controller controls the climate control system 40 as follows to use a mixture of fresh and recirculating heated air to cool the data centre hall.
  • the inlet 56, outlet 58 and return 60 dampers are partially open; the supply fan 64 is on; and the chiller 62 is off.
  • In this way, cold fresh air enters the data centre hall from outside through the inlet duct 42, where it is heated by hot air from the return duct 54 that has passed through the outlet duct 48 having originated from the interior of the data centre hall. No energy is used by the chiller.
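  • By way of illustration only, this three-regime selection might be sketched as follows. The thresholds are the example values quoted in this document; the class, function and field names are invented for the sketch and are not taken from the patent.

```python
# Illustrative sketch of the three prior-art control regimes described above,
# using the example thresholds quoted in this document (17-27 C outside with
# RH above 20% -> direct free cooling; above 27 C or RH below 20% -> chiller;
# below 17 C -> mixed fresh and recirculated air). All names are invented.
from dataclasses import dataclass

@dataclass
class ActuatorState:
    inlet_damper: str    # "open", "partial" or "closed"
    outlet_damper: str
    return_damper: str
    supply_fan_on: bool
    chiller_on: bool

def select_prior_art_regime(outside_temp_c: float, outside_rh_pct: float) -> ActuatorState:
    """Damper/fan/chiller settings for the prior-art system of Figure 2."""
    if outside_temp_c > 27.0 or outside_rh_pct < 20.0:
        # Recirculate through the chiller to cool and/or humidify.
        return ActuatorState("closed", "closed", "open", True, True)
    if outside_temp_c < 17.0:
        # Very cold outside: temper fresh air with recirculated warm air.
        return ActuatorState("partial", "partial", "partial", True, False)
    # 17-27 C and adequate humidity: direct free (fresh air) cooling.
    return ActuatorState("open", "open", "closed", True, False)
```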
  • the outlet port 46 of the inlet duct 42 enters into the data centre hall under a raised floor 70 of the data centre hall, as illustrated in Figure 3.
  • the raised floor typically comprises square tiles 72 of side 600mm. These tiles are held up by a leg or strut 74 located at each corner of each tile, placed on the actual floor of the data centre hall.
  • Racks for the computer equipment for the data centre hall are located on the raised floor tiles and air from the climate control system enters the interior of the data centre hall through vents in the raised floor (not shown).
  • Cabling for the data centre is also located under the raised floor in trays. The legs holding the raised floor obstruct the access for the cabling. It is difficult to access the under floor area, and in particular to install the cabling there.
  • Data centre sites typically have such large electrical power requirements that they have their own electricity substations to supply energy from power stations.
  • a typical data centre site has the same electrical power requirements as a small town.
  • Data centre sites are provided with generators, usually powered by a diesel engine with a maximum power output of typically around 1000kVA, to power the whole data centre site in the event of a power failure from the data centre's electricity substation. Diesel engines of these generators would typically power a ship.
  • Data centres also have so-called uninterruptible power supplies (UPSs), each in the form of a set of batteries, to provide electrical power for the whole data centre site immediately after power fails from the transformers, but before there is time for power to be delivered from the generators. This is because it typically takes 10 to 30 seconds after starting a generator before the generator can provide adequate power.
  • In known arrangements, all the batteries of the UPSs are, in normal use, recharged from the electricity substation and, if no power is supplied from the electricity substation, the batteries are recharged from the generator that is operating.
  • Each set of batteries typically stores enough energy to power the whole data centre site for 5 to 10 minutes, in case there are delays in starting the generators.
  • the type of very large diesel engine generator typically used is difficult to transport to data centre sites, particularly as these sites can be located in discreet remote areas where access may only be by narrow minor roads. Furthermore, they also take up a lot of space on the site. Typically, around 10% of the power delivered to a data centre is used in charging the batteries of the UPSs.
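  • For orientation, a back-of-envelope sizing of this prior-art site-wide UPS reserve might look as follows. The site load figure and the unity power factor are assumptions for illustration; only the roughly 1000kVA generator rating and the 5 to 10 minute reserve figure come from the text.

```python
# Back-of-envelope sizing of the prior-art site-wide UPS reserve described
# above. Assumptions: a site load of 2000 kVA (e.g. two 1000 kVA generators
# at full output), the upper "10 minutes" reserve figure, and unity power
# factor so that kVA is treated as kW.
site_load_kva = 2000.0
reserve_minutes = 10.0

battery_energy_kwh = site_load_kva * reserve_minutes / 60.0
print(f"{battery_energy_kwh:.0f} kWh")  # ~333 kWh of battery energy site-wide
```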
  • Figure 4 illustrates a typical known power supply arrangement 78 for a data centre.
  • To provide redundancy in the power supply arrangement 78 for the data centre site, the data centre has power provided from two sources or sides 80, 82. In Figure 4, this is illustrated as source or side A 80 and source or side B 82.
  • In normal use, the data centre site is powered from both side A and side B operating at part load, typically about half maximum load. In the event of power failure from either side A or side B, the other side operates at about maximum load to power the entire site alone.
  • the electrical power provision for the data centre comprises a pair of switchgear arrangements 84,86, one 84 providing the source A and the other 86 the source B.
  • Each switchgear arrangement 84,86 has a transformer 88,90 and a pair of generators 92,92',94,94' electrically connected to it.
  • a UPS 96,98 is also electrically connected to each switchgear and to the data centre hall 99.
  • In the event of a power failure, the UPSs 96,98 are used to power the data centre hall 99 while the generators start.
  • Each of the UPSs can power the entire data centre hall. While the generators are starting, both UPSs are typically run at half maximum load to power the whole data centre hall. In the event of failure of one of the UPSs, the other UPS is used at about maximum load to power the whole data centre hall.
  • the UPSs 96,98 are housed in buildings separate from the data centre hall 99 and have their own dedicated cooling arrangements.
  • a preferred embodiment of the invention is described in more detail below and takes the form of a data centre hall comprising a plurality of sections, each defining a space within, connected together to define a chamber within.
  • the chamber is suitable for housing racks each comprising one or more computers.
  • This arrangement is quick and simple to assemble on site.
  • Figure 1 (prior art) is a perspective view of a known data centre using a shipping container;
  • Figure 2 (prior art) is a schematic view of a known climate control arrangement of a known data centre;
  • Figure 3 (prior art) is an isometric view of a known floor arrangement for a data centre hall;
  • Figure 4 (prior art) is a schematic view of a known electrical power supply arrangement for a data centre;
  • Figure 5 is a plan view of a data centre hall made from the sections of Figures 6A, 6B and 6C embodying an aspect of the present invention;
  • Figures 6A, 6B and 6C are plan views of sections forming the data centre hall of Figure 5 embodying an aspect of the present invention;
  • Figure 7 is a plan view of another data centre hall made from the sections of Figures 6A, 6B and 6C embodying an aspect of the present invention;
  • Figure 8A is an isometric view of a framework for the section of Figure 6C;
  • Figure 8B is an isometric view of a section of a data centre hall embodying an aspect of the present invention;
  • Figure 9A is a plan view of part of a floor of a data centre hall embodying an aspect of the present invention;
  • Figure 9B is a view from one end of part of the floor of Figure 9A;
  • Figure 10 is a cross-sectional view from one side of part of a data centre hall embodying an aspect of the present invention;
  • Figure 11 is a cross-sectional view of the data centre hall of Figure 10 viewed in a direction perpendicular to that of Figure 10;
  • Figure 11A is a cross-sectional view of part of the data centre hall of Figure 11;
  • Figure 12 is a schematic view of a climate control arrangement embodying an aspect of the present invention;
  • Figure 13 is a schematic view of electrical power arrangements for a data centre embodying an aspect of the present invention;
  • Figure 14 is a schematic view from above of a section of a data centre hall embodying an aspect of the present invention;
  • Figure 15A (prior art) is a schematic plan view of a known data centre site; and
  • Figure 15B is a schematic plan view of a data centre site embodying an aspect of the present invention, to approximately the same scale as the data centre site of Figure 15A.
  • the data centre hall may be located outside or within a building such as a warehouse.
  • the data centre hall 100 of Figure 5 has a plurality of sections (in this example, three) 102,104,106 each defining a space or interior space within. They are connected together to define a single chamber, room or contiguous space 108 within.
  • the space defined by each section and by the chamber is suitable for housing racks 110 each comprising one or more computers.
  • the environment control provided by each unit of the chamber is adapted such that it controls the environment of the chamber as a whole. Environment control comprises, for example, one or more of the following: security, fire detection, fire suppression, air flow, humidity control, air temperature control, and climate control.
  • the sections are not themselves self-contained or individually fully functioning data centres or data centre halls. The sections only function as a data centre hall once they are joined together to form the single chamber.
  • the sections forming the data centre hall are constructed at a factory away from the site on which the data centre is to be built.
  • the sections are provided in a kit of parts for assembly on a site suitable for a data centre.
  • the build time on-site for the complete data centre is typically a little over two months for a 500m² data centre hall.
  • Example sections 102,104,106 for connecting together to form a data centre hall 100 are shown in Figures 6A, 6B, and 6C.
  • the sections each include a base 112 and a roof above (not shown in Figures 6A, 6B, and 6C).
  • the roofs and the bases are rectangular.
  • Each of the sections has sidewalls 114 (shown as solid lines in Figures 6A, 6B, and 6C), but they each include at least one open side 116 (no sidewall, shown as broken lines in Figures 6A, 6B, and 6C). They also all have side walls along both their transverse edges.
  • the section 102 in Figure 6A is a "left hand section". It includes a side wall 114 along one longitudinal side (the left hand side) and the other longitudinal side 116 is fully open.
  • the side wall 114 along the longitudinal side includes an opening 118 at each end for a door and a stairway for accessing the interior or chamber of the data centre hall.
  • the section 104 in Figure 6B is a "centre section" in which both longitudinal sides are fully open 116.
  • the section 106 in Figure 6C is a "right hand section". It includes a side wall 114 along one longitudinal side (the right hand side or other side to that of the "left hand section") and the other longitudinal side is fully open 116.
  • the side wall along the longitudinal side includes an opening at each end for a door 20 and a stairway for accessing the interior of the data centre hall.
  • each section includes racks 110 for housing computers.
  • the racks may be pre-installed at the factory or installed on site.
  • the sections 102,104,106 are typically about 16.8m long.
  • the centre sections 104 are typically about 3.2m wide.
  • the left hand and right hand sections 102,106 are typically about 3.3m wide (to include the thickness of the longitudinal side wall 114). This means that the sections can each be transported by lorry on normal roads.
  • The UPSs (discussed below), switchgear and batteries for the sections can also be delivered in standard 6m long containers.
  • Other dimensions of the sections 102,104,106 are possible. For example, each section could be between 12m and 26m long, between 2.2m and 5m wide, and between 2.5m and 5m high, preferably between 14m and 20m long, between 2.5m and 4.5m wide and between 3m and 4m high.
  • a bigger data centre hall 200 may be built by connecting together the open sides of a plurality of centre sections 104 (in this example, the open sides of ten centre sections are connected together) with one left hand 102 and one right hand section 106 to provide a 12-section data centre hall with a single chamber or contiguous space of 500m².
  • a typical 1000m² data centre hall may be made by providing two of these 500m² data centre halls together.
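  • As a rough check of these example dimensions, the gross footprint of a 12-section hall can be computed as sketched below. The section dimensions are those quoted above; the computation itself, and the reading of the 500m² figure as the nominal hall size (rather than the gross footprint of the sections), are assumptions made here for illustration.

```python
# Gross footprint of a 12-section hall from the example dimensions above:
# sections 16.8 m long; centre sections 3.2 m wide; left/right hand sections
# 3.3 m wide (including the longitudinal side wall). The "500 m2" quoted in
# the text is assumed to be the nominal hall size, which need not equal
# this gross figure.
SECTION_LENGTH_M = 16.8
CENTRE_WIDTH_M = 3.2
END_WIDTH_M = 3.3

def hall_gross_footprint_m2(n_centre_sections: int) -> float:
    """Footprint of n centre sections plus one left and one right hand section."""
    hall_width_m = n_centre_sections * CENTRE_WIDTH_M + 2 * END_WIDTH_M
    return SECTION_LENGTH_M * hall_width_m

print(f"{hall_gross_footprint_m2(10):.0f} m2")  # 12 sections -> ~648 m2 gross
```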
  • Figure 8A shows a frame or framework 300 forming the supporting structure of a section 106 (in this case, a right hand section, but the other sections are constructed in a similar way).
  • Using a framework for the supporting structure means that sections can be made with one or two open sides that still have enough structural strength to be transported complete with racks of computers inside.
  • the framework arrangement of the sections provides sufficient strength to the sections such that one section can be placed on top of the other (directly on top of the other) without requiring a separate mezzanine between the sections to take the weight of the upper section.
  • a typical 1000m² data centre hall may be made by stacking two 500m² data centre halls, each using 12 sections as described above, one on top of the other.
  • the frames supporting the structure of a section 106 comprise beams 302, such as I-beams with an I-shaped cross-section.
  • the base 304 and roof 306 each comprise a pair of beams 308 extending longitudinally with further beams 310, spaced apart in the longitudinal direction, extending transversely between them.
  • the base and roof are spaced apart vertically by beams 312 extending vertically forming columns at each corner, and also by beams 314 extending vertically and spaced apart along the longitudinal edges of the base and roof.
  • the beams are made from steel.
  • Rigid sheets are fixed to the beams of the base 304 and roof 306 to form a continuous surface across them. Rigid sheets are also fixed to the vertically extending beams 312, 314 to form side walls, in the places where they are not open. The sheets are not shown in Figure 8A. The edges of the sheets may be sealed to weatherproof the units when the units are to be located outdoors.
  • Beams 316 spaced vertically from the base 304 each extend in a transverse direction across each section and they are spaced apart in the longitudinal direction.
  • the beams 316 are each mechanically connected to support beams 315 that extend longitudinally along each side of each section.
  • the beams 316 support a floor 317, spaced from the base 304, that is illustrated in Figures 9A and 9B discussed further below.
  • the section may also include additional sidewalls 350 along the transverse ends of the section and an additional roof 352 as illustrated in Figure 8B. These provide additional weather protection to the data centre hall so that it can be located without requiring use of a building to put it in.
  • the section 104 shown in Figure 8B is a centre section in which both longitudinal sides are open. The same concept could be applied to "left hand" and "right hand" sections too. These sections would also have a sidewall of the type described below along one longitudinal, closed, external side.
  • the additional sidewalls 350 are rain screen walling or rain screen panels in the form of trapezoidal single skin rain screen panels. They are spaced from the adjacent sidewalls 114 of the data hall section 104. This provides ventilation.
  • the rain screen panels are fixed to beams, in particular, Z purlins attached to external gantry modules forming the side walls of the data hall section.
  • the additional roof, roof module or roof system 352 is also constructed from panels and, in particular, trapezoidal single skin rain screen panels.
  • the panels used for the roof are fixed to pre-formed modules that are fastened, for example, bolted to the top of the data hall sections 104. These provide a slope from the middle of the roof to the edges to allow rain to run off.
  • Each data hall section has its own roof module that is joined and sealed to adjacent data hall sections.
  • Figure 9A shows a partially assembled floor 317.
  • Tiles 319 with a square shape of side 600mm are placed on the beams 316 to form the floor.
  • Two opposing edges of each tile are placed on adjacent beams (as shown in Figure 9B).
  • Tiles are placed in this way, side-by-side, across all of the beams 316.
  • At one longitudinal edge a longer, rectangular tile is used as the width of the section is not exactly divisible by the side length of the square tiles. In this way, beams supported between sides of a data centre hall support the floor.
  • the floor 317 is located above the base 304, such that there is a space 318, and in particular a clear and open space, between the base and the floor.
  • the space is 0.8m high in this example data centre hall. Adjoining floors of all the sections connect together to form a continuous floor through the whole chamber.
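  • The need for the longer edge tile follows from simple arithmetic, sketched below with the dimensions quoted above (600mm tiles across a 3.2m wide centre section); the constant names are invented for the sketch.

```python
# Why one longer, rectangular tile is needed at a longitudinal edge: the
# section width is not an exact multiple of the 600 mm tile side.
TILE_SIDE_MM = 600
CENTRE_SECTION_WIDTH_MM = 3200  # example centre-section width from the text

full_tiles, remainder_mm = divmod(CENTRE_SECTION_WIDTH_MM, TILE_SIDE_MM)
print(full_tiles)     # 5 full square tiles across the width
print(remainder_mm)   # 200 mm left over, covered by the longer edge tile
```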
  • Racks 110 for servers and other computer equipment are fastened to the upper surface of the floor as illustrated in Figure 10.
  • Typically, 270 600mm-wide racks are provided in a 500m² hall.
  • 17 racks are provided in each row of racks in each section. This allows adequate air flow while also allowing adequate escape routes in the data centre hall.
  • between 12 and 17 racks may be provided in each row.
  • Parts of the cabling, in particular for communications, and cooling systems for the data centre hall are located in the space 318 between the base and the floor of the data centre hall as also illustrated in Figure 11.
  • the cabling is simply placed on the base, which advantageously causes the air flow in the space between the base and the floor to be turbulent.
  • the under-floor space is unobstructed. It does not have struts or legs for supporting the floor tile as in the prior art arrangements described above. Therefore, it is easy to install the cabling and, furthermore, cabling racks are not required.
  • the racks 110 for servers and other computer equipment extend longitudinally along each section 102,104,106 (see Figure 11) and the racks are spaced apart through the chamber (see Figure 10).
  • a single row of racks extends through each section.
  • aisles 322 are formed in the spaces between the racks.
  • the aisles are alternately so-called cold aisles 324, which contain air cooled by the data centre's cooling or climate control system for cooling the computer equipment, and hot aisles 326, which contain hot air that has been heated by the computer equipment.
  • Each cold aisle has a ceiling 328 above it, which is located between, and fixed and sealed to the top edges 330 of neighbouring racks (or between a rack and a sidewall of a unit if the cold aisle is at the end of the racks).
  • Doors or other closures such as downwardly hanging plastics strips or butcher's blinds are located at the end of each aisle.
  • Cold air enters the cold aisles from the space 318 under the floor 317 through vents or grates 332 in the floor of the cold aisles.
  • Hot air exits the hot aisles directly into the chamber.
  • Front surfaces of servers in the racks face the cold aisles and rear surfaces of servers face the hot aisles. In this way, cold air is blown through the servers to cool them and is ejected into the hot aisles.
  • the cooling and humidity control system or climate control system 333 for the data centre hall is provided in cabinets 334 connected to a transverse side wall of each section 104. They are integral with the data centre hall. By locating these outside the walls of the data centre hall, maintenance is simplified.
  • the climate control arrangement of Figure 12 additionally includes a wrap round coil 380 containing a heat transfer agent in the form of, for example, a solution that is at least part ethylene glycol.
  • This arrangement uses rejected heat from the data centre hall to provide the latent heat of evaporation. In other words, it allows the humidity of the air to be increased without requiring mechanical cooling through a chiller or an air conditioner resulting in a considerable energy saving over the prior art system illustrated in Figure 2. This is so-called indirect free cooling.
  • the wrap round coil arrangement 380 comprises a first coil or heat exchanger 382 and a second coil or heat exchanger 384 joined together in a circuit by tubing 386. Inside the tubing is the heat transfer agent in the form of ethylene glycol. A pump 388 is located in the circuit for pumping the heat transfer agent around the circuit.
  • the first coil is located in the inlet duct 42 between the entrance or inlet port 44 of the inlet duct and the inlet damper 56 and the first coil extends across the inlet duct.
  • the second coil 384 is located in the inlet duct between where the return duct 54 joins the inlet duct and the chiller 62.
  • the control regime of the controller (not shown) is the same as that of the prior art example of Figure 2 in many respects. However, there is one further or fourth operating regime, used where the outside temperature is low (between, for example, 17°C and 27°C) and the relative humidity in the data centre hall drops to a low level (typically less than 35%).
  • the heat transfer agent in the wrap round coil 380 is used to cool the data centre hall while maintaining humidity. To do this, in this regime, the pump 388 in the wrap round coil circuit is started (it is stopped in the other three cooling regimes), and air from the data centre hall is recirculated through the system.
  • the inlet 56 and outlet 58 dampers are closed and the return damper 60 is opened. Outside air contacts the first coil 382, which cools the heat transfer agent within the circuit of the wrap round coil.
  • the cooled heat transfer agent is pumped around the circuit by the pump and it cools the hotter air in contact with the second coil 384 and is itself heated.
  • the heated heat transfer agent is then pumped through the circuit and through the first coil again and cooled, and the cycle continues. In this way, the data centre hall is cooled and humidity is maintained. While energy is used by the pump in the wrap round coil, it is considerably less than the energy used by the chiller of the prior art arrangement of Figure 2 described above.
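  • A minimal sketch of the regime selection once this fourth, indirect free cooling regime is added is given below. The thresholds are the example values quoted in this document (17°C to 27°C outside, hall relative humidity below 35%, and the 20% outside humidity threshold of the prior-art regimes); all names are illustrative assumptions, not taken from the patent.

```python
def select_regime(outside_temp_c: float,
                  outside_rh_pct: float,
                  hall_rh_pct: float) -> dict:
    """Damper/pump/chiller settings for the four regimes of Figure 12 (sketch)."""
    if 17.0 <= outside_temp_c <= 27.0 and hall_rh_pct < 35.0:
        # Fourth regime (indirect free cooling): recirculate hall air, run the
        # wrap-round coil pump to reject heat outdoors, keep the chiller off.
        return {"inlet_damper": "closed", "outlet_damper": "closed",
                "return_damper": "open", "coil_pump_on": True, "chiller_on": False}
    if outside_temp_c > 27.0 or outside_rh_pct < 20.0:
        # Mechanical cooling: recirculate through the chiller.
        return {"inlet_damper": "closed", "outlet_damper": "closed",
                "return_damper": "open", "coil_pump_on": False, "chiller_on": True}
    if outside_temp_c < 17.0:
        # Very cold outside: mix fresh air with recirculated warm air.
        return {"inlet_damper": "partial", "outlet_damper": "partial",
                "return_damper": "partial", "coil_pump_on": False, "chiller_on": False}
    # Direct free cooling with fresh air only.
    return {"inlet_damper": "open", "outlet_damper": "open",
            "return_damper": "closed", "coil_pump_on": False, "chiller_on": False}
```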
  • Each cabinet 334 comprises a floor 336 spaced from a base 338 of the cabinet forming a duct 340.
  • a through hole 342 is provided from the duct 340 that aligns with a through hole 346 in the transverse side wall of the data centre hall, providing a fluid path into the space between the base 304 and the floor 317 of the data centre hall. This corresponds to outlet port 46 into the data centre hall of Figure 12.
  • Another through hole 344 is provided through an upper surface of the transverse side wall of the data centre hall.
  • a through hole 348 in the upper part of the side wall of the cabinet aligns with the through hole in the upper surface of the transverse side wall of the data centre hall. This corresponds to outlet port 52 of Figure 12 for air exiting the data centre hall.
  • the data centre is provided with electrical power from a high voltage electricity sub-station located on site.
  • back-up power is provided by a plurality of generators each powered by a diesel engine.
  • the data centre has a plurality of UPSs including batteries to provide electrical power for the whole data centre site immediately after power fails from the electricity substation, but before there is time for power to be delivered from the generators.
  • When the data centre is powered by the generators, it is controlled by a controller such that the batteries of the UPSs are not charged.
  • the electrical power provision for the data centre is illustrated in more detail in Figure 13.
  • the data centre has power provided from two sources or sides 402, 404. In Figure 13, this is illustrated as source or side A 402 and source or side B 404.
  • In normal use, the data centre site is powered from both side A and side B operating at part load, typically about half maximum load. In the event of power failure from either side A or side B, the other side operates at about maximum load to power the entire site alone.
  • the electrical power provision for the data centre comprises a pair of switchgear arrangements 406,408 one providing the source A and the other the source B.
  • Each switchgear arrangement 406, 408 has a transformer 410,412 electrically connected to it and a pair of generators 414, 414', 416, 416'.
  • Each of the switchgear arrangements is electrically connected to each section 104 of the data centre hall.
  • Each section 104 of the data centre hall is illustrated in Figure 14.
  • Each section comprises a pair of busbars 488, 490, one electrically connected to the A source switchgear and the other to the B source switchgear.
  • Computer equipment in the rack or racks 110 of each section is adapted to operate with either one or both of a pair of power supplies and is electrically connected to both busbars of the pair of busbars.
  • Each section of the data centre hall 104 is provided with a pair of UPSs 484,486 each comprising a set of batteries.
  • One of the pair of UPSs is electrically connected to one busbar of the pair of busbars and the other UPS is electrically connected to the other of the pair of busbars.
  • a plurality of electrical energy storage devices or UPSs are provided each having a busbar electrically connected or directly electrically connected to it.
  • the busbars are provided to some, but not all, of the computers of a data centre hall. Those computers receive electrical energy from one or more of the plurality of electrical energy storage devices.
  • the whole data centre site is normally powered from the two transformers 410,412 one on the A source or side 402 and one on the B source or side 404 and each transformer operates at part load, typically about half maximum load.
  • If one transformer fails, the other transformer of the other source or side operates at an increased load, typically about maximum load, to power the whole data centre site. In this way, redundancy in the transformers is provided.
  • In the event of a power failure from the electricity substation, a controller controls the generators 414, 414', 416, 416' (typically powered by diesel engines) to power the whole data centre site. In this situation, the generators all operate at part load, typically about half maximum load. In the event of failure of one generator 414, 414', 416, 416', the power provided by the other generator of the pair is increased, typically to about maximum load, to compensate for the loss of power from the failed generator, and the data centre is still powered by both A and B side generators. If both generators of a pair fail, then the other pair of generators operates at increased load to compensate, typically at about their maximum power output, to power the whole data centre site; thus generators of only one source or side, A or B, are used to power the data centre site.
  • Each UPS of the pair in each section 104 operates at part load, typically at about half maximum load, to power its section during a "mains" power failure while the generators are given time to generate adequate power. If one UPS of a pair fails, then the other UPS is run at an increased load (typically, at about maximum load) to compensate and power the whole section. In this way, each section has redundancy in its UPS provision.
  • each UPS has a maximum power rating of around 80kVA to power each 17 rack section.
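  • The per-section UPS redundancy described above can be made concrete with a small sketch. The 80kVA rating and the 17-rack section are the figures quoted above; the demand value and all names are assumptions for illustration.

```python
# Load allocation for the per-section UPS pair described above: both UPSs
# share the section load at about half maximum load, and if one fails the
# survivor carries the whole section alone.
UPS_RATING_KVA = 80.0
RACKS_PER_SECTION = 17

def ups_loads(section_demand_kva: float, a_ok: bool, b_ok: bool) -> tuple:
    """Return (load on UPS A, load on UPS B) in kVA for one section."""
    healthy = int(a_ok) + int(b_ok)
    if healthy == 0:
        raise RuntimeError("section has lost both UPS feeds")
    share = section_demand_kva / healthy
    if share > UPS_RATING_KVA:
        raise RuntimeError("demand exceeds the rating of the surviving UPS(s)")
    return (share if a_ok else 0.0, share if b_ok else 0.0)

print(ups_loads(70.0, True, True))   # (35.0, 35.0): both at about half load
print(ups_loads(70.0, True, False))  # (70.0, 0.0): survivor near maximum load
# At full failover roughly 80/17, i.e. about 4.7 kVA, is available per rack.
```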
  • the data centre site includes an electricity substation 80 or principal electrical energy supply to supply energy to the data centre site from power stations.
  • Two transformers of the electricity substation each have a typical maximum power of around 2000kVA.
  • the generators 480,482 each powered by a diesel engine have a typical maximum load of 850kVA.
  • the generators do not charge the batteries of the UPS system during a "mains" power or principal electrical energy supply failure; the UPS batteries or chargeable electrical energy storage devices are only charged from the "mains" or electricity sub-station.
  • an electrical storage device of the data centre is charged only by a principal electrical energy supply of the data centre and never by an electricity generator of the data centre.
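  • Expressed as control logic, this charging policy reduces to a one-line rule; a minimal sketch (function name and signature assumed for illustration) follows.

```python
def ups_charger_enabled(mains_available: bool) -> bool:
    """Charge the UPS batteries only from the principal ("mains") supply,
    never from the standby generators."""
    return mains_available
```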
  • the pair of UPSs 484,486 are each housed in a cabinet 450 located on a transverse side wall 114 of each section 104 (as illustrated in Figures 6A, 6B, 6C) opposite to the transverse side wall on which the cabinet 334 for the cooling system or room cooling unit (RCU) is located.
  • Electrical power is provided into the busbar system of the data centre through a cable 452 in the top of the cabinet into the roof 306 of the data centre hall.
  • the cabinets are arranged in a similar way to those of the cooling arrangements.
  • the UPSs or electrical energy storage devices are integral with the data centre hall. There is a fluid path from the climate control system to the interior of the data centre hall via the electrical energy storage device or UPS. In this way, the climate control system of the data centre hall controls the climate of both the data centre hall and the UPS.
  • Each UPS cabinet 450 comprises a floor 453 spaced from a base 454 of the cabinet forming a duct 456.
  • a fan 458 is located in the duct.
  • the UPSs are mounted on the floor, which comprises vents or grilles 460 allowing air flow from the duct into a space around the UPSs 484.
  • a through hole 464 is provided into the duct and another through hole 466 is provided through an upper surface of the cabinet side wall.
  • the through holes 464,466 in the cabinet align with through holes 468,470 in the transverse side wall of the data centre.
  • the through hole into the duct aligns with a through hole 468 into the space between the base 304 and the floor 317 of the data centre hall, and the through hole in an upper surface of the cabinet side wall aligns with a through hole 470 into the chamber of the data centre.
  • a flow of cool air provided from the climate control arrangement described above, in the space between the floor 317 and base 304 of the data centre hall, is sucked, by the fan 458 in the duct 456, through the aligned through holes 464,468 in the sidewalls 114 of the data centre hall and cabinet 450 and into the duct of the cabinet.
  • This cool air flows through the vents or grilles 460 in the floor 453 of the cabinet.
  • the cool air cools the UPSs 484,486 and becomes heated.
  • the heated air is then vented into the chamber of the data centre hall through the aligned through holes 466,470 in the upper portion of the side wall of the cabinet and the side wall of the data centre hall.
  • This cooling arrangement means that the UPSs 484,486 do not require their own climate control arrangement to cool them.
  • the electrical energy storage devices or UPSs 484, 486 of each section 104 are located at one end of the section.
  • the UPSs of neighbouring sections are located at opposite ends of their section.
  • the climate control system or room cooling unit 333 as described above of each section is located at the other end of the section to the UPS. In this way, along each side of the data centre hall, the ends of the sections alternate between UPSs and climate control systems.
  • the many electrical and mechanical connections 492 between the main electrical infrastructure components of the data centre, such as the connections between the electricity generators 414, 414', 416, 416' and the switchgear 406, 408, the connections between the switchgear and the UPSs 484, 486 or energy storage devices, and the connections between the switchgear and the data centre hall, are made using plug and socket arrangements.
  • the plug and socket connections may be, for example, Vam (registered trade mark) connectors or Powerlock connectors. This arrangement makes the data centre particularly easy and quick to construct.
  • Figures 15A and 15B illustrate the footprint 500 of all the facilities required for a typical data centre with a 2000m² data hall using a known system in an existing building (shown shaded in Figure 15A) and the footprint 502 using the arrangement described herein (Figure 15B).
  • the footprint is reduced by approximately 50%, which is a substantial reduction in the size of site required.

Abstract

A data centre hall (100) comprising a plurality of sections (102, 104, 106), each defining a space within, connected together to define a chamber (108) within. The chamber (108) is suitable for housing racks (110) each comprising one or more computers.

Description

A DATA CENTRE
The present invention relates to a data centre, a data centre hall, a section for forming a data centre hall, a kit of parts and a method of building a data centre hall. The present invention also relates to a data centre hall climate control system, and a method of controlling the climate of a data centre hall. The present invention also relates to a data centre hall floor, and a method of installing a data centre hall floor. The present invention also relates to a data centre electrical power supply, a method of supplying electrical power to a data centre hall and a method of making connections in a data centre.
BACKGROUND OF THE INVENTION
Data centres (data centers) are large, specially designed facilities, which include one or more halls housing computer systems, typically servers arranged in racks. Data centres include other equipment required to operate and protect the computer systems, such as communications systems, environmental controls (for example, air cooling and fire suppression), power supplies and security. There is redundancy in the equipment in a data centre so that the data centre can continue to operate even if some of the equipment fails.
Data centres (sometimes called server farms) are used for many purposes including running core business applications for large businesses or providing backups for data away from the site of a business. Data centres are critical for the successful operation of many businesses and data stored on them is often confidential. Because of this, data centres usually have high levels of security and are very discreetly located.
TIA-942 Data Center Standards Overview describes the reliability requirements for data centres in a tiered structure. The type of data centre described herein is intended to meet at least tier 3 of this standard, that is, so-called "concurrently maintainable" with at least 99.982% availability.
Known data centre halls are typically located in warehouse-type buildings and they are designed and constructed specifically to fit the building available. Fitting out a building as a data centre hall takes a long time (typically 18 months for a 500m² data centre hall). For example, the wiring alone can take a highly specialised workforce several months to install. Furthermore, a lot of space is required around the building during construction to house the facilities, such as materials, waste and office space, required to construct the data centre hall. Optimising space utilisation within the data centre hall for every bespoke construction is difficult because buildings are not generally specifically designed to house a data centre hall and space can be wasted. Optimising cooling arrangements for every bespoke construction is also time consuming and expensive because they have to be optimised for each installation. These factors add to building and running costs or force a sacrifice in performance.
Portable data centres have been installed in shipping containers by a number of organisations, such as the Sun Microsystems, Inc. arrangement described in international patent application No. WO-A-2008/033921, the Google, Inc. arrangement described in international patent application No. WO 2007/139560, and the Rackable Systems, Inc. system described in international patent application No. WO 2008/039773. The Sun Microsystems, Inc. shipping container arrangement of international patent application No. WO-A-2008/033921 is illustrated in Figure 1.
The portable data centre 10 of Figure 1 uses a standardised 20 feet (approximately 6.5m) long by 8 feet (approximately 2.5m) high by 8 feet (approximately 2.5m) wide container 12 more usually used to transport goods by ship or lorry. The container includes two sidewalls 14 of corrugated steel on opposing sides that are joined on their upper edges to a steel roof 16 and on their bottom edges to a steel base 18. The ends 22 and 24 of the container are provided with doors 26 (in Figure 1, shown open at one end 22, and closed and not visible at the other end 24). The sidewalls, base, roof and doors of this sort of shipping container provide the structural strength. The structural integrity of the container is provided by these components together. If one of these components is not in place, and intact, then the container will not be able to withstand heavy loads, such as that of the computer equipment inside it, when it is moved.
The interior of the shipping container is provided with a row of racks of computers 27 along each of the sidewalls 14. Typically, 8 to 10 racks are provided. Heat exchanger/cooling fan towers 28 are also provided along one sidewall. Both sidewalls of the container (only one side is shown in Figure 1) have a power terminal 34, a data connection port 36 and a chilled water connection port 38.
This arrangement provides a complete data centre in a single container that simply needs to have the power terminal 34 connected to a suitable power supply, the data connection port 36 connected to a communications link, and a chilled water supply to be connected to the chilled water connection port 38 for it to operate.
By way of example, it has been reported that Microsoft has installed up to 220 of these shipping container-type data centres on a single site near Chicago, USA. Cooling, networking and power facilities are provided from a centralised source, along with security arrangements, for the whole facility.
While this shipping container-type arrangement allows for a large amount of data centre floor space to be assembled quickly on a particular site, there are still considerable inefficiencies. As each data centre is a separate unit, they each have their own redundancy built in to deal with equipment failure. Furthermore, because of the space limitations of the container, it is awkward to fit, and later to access, the data centre equipment to repair or replace it; specialist lifting equipment is needed. Indeed, the shipping container data centres in Microsoft's facility mentioned above are reported as being designed not to be maintained, and are expected to gradually die off, with a gradual decline in capacity.
Various arrangements are known for joining together shipping container-type arrangements. In US patent application No. US2009/0229194, for example, an arrangement is described that uses ISO steel containers. More than two of these ISO steel container modules can be configured to form a single space within on a single storey. The construction is such that there are no intermediate room columns. These ISO steel containers are a steel shell cube made of a steel sheet structure (walls, base, and ceiling). The sidewalls are made of crimping steel sheets joined by a continuous weld. The modular containers each include an inner modular room forming an IT modular security room. Single containers, each forming a portable data centre, can be stacked vertically.
Another example is the system of US 7278273 in which a modular data centre is described that has separate computing modules interconnected by office modules. Each of these modules is an intermodal shipping container and is largely self contained.
Data centres include climate control arrangements to maintain the computer equipment inside the data halls at acceptable temperature and humidity levels. It is necessary to control the temperature of computer equipment as it operates most efficiently at a particular temperature. It is necessary to control the humidity of the environment around the computer equipment so that the relative humidity is not less than a particular level, typically around 20%. This is because at low humidity static electricity builds up, which, when discharged, can damage solid state computer equipment. Known data centres typically have a climate control arrangement with a Power Unit Efficiency of between 1.6 and 2.1.
A typical known climate control system 40 for a data centre hall is illustrated in
Figure 2. It has three different ways of controlling the climate of the data centre hall depending on the outside temperature and humidity. When the outside temperature is low and humidity is high, it uses fresh air cooling. That is to say, it uses air from outside the data centre hall to cool the interior of the data hall. When the outside temperature is high and/or the outside relative humidity is too low, air in the data centre hall is recirculated and mechanical cooling is used in the form of a compressor to reduce the temperature of the air and/or to increase its humidity. If the outside temperature is very low, then the data centre hall is cooled by a mixture of fresh air from outside and recirculated warmed air heated in the data centre hall.
In order to operate in these three regimes, the climate control system 40 is arranged as follows. It comprises an inlet duct 42 having an inlet port 44 on the outside of the data centre hall, for fresh air to enter the data centre hall from outside, and an outlet port 46 on the inside of the data centre hall. The climate control system also includes an outlet duct 48 with an outlet port 50 on the outside of the data centre hall and an outlet port 52 on the inside of the data centre hall for exhausting air from inside the data hall to outside the data hall. The inlet duct 42 and the outlet duct 48 are joined between their respective inlet and outlet ports 44, 50 by a return duct 54.
A damper or variable valve (an inlet damper 56) is located between the inlet port 44 of the inlet duct 42 and where the return duct 54 joins with the inlet duct to control air flow into the climate control system 40. A damper or variable valve (an outlet damper 58) is also located between the outlet port 50 of the outlet duct 48 and where the return duct 54 joins with the outlet duct to control air leaving the climate control system. Another damper or variable valve (a return damper 60) is located in the return duct 54 to control recirculation of the air in the climate control system. A mechanical cooling arrangement 62 in the form of a chiller or air conditioner and a supply fan 64 are located in the inlet duct 42 between where the return duct 54 joins with the inlet duct 42 and the outlet port 46 of the inlet duct to suck through and cool and/or humidify air entering the data centre hall.
Various sensors are provided outside and inside the data centre hall (not shown) for measuring temperature and relative humidity in various locations. The sensors are in communication connection with a controller and the controller is also in communication connection with the dampers, the chiller, and the supply fan in order to control them in the three regimes described above.
As measured by the sensors, when the outside temperature is low (between 17°C and 27°C) and relative humidity is more than 20%, the controller controls the climate control system 40 as follows to use fresh air cooling (direct free cooling). The inlet 56 and outlet 58 dampers are open (thus, the inlet duct 42 and outlet duct 48 are open) and the return damper 60 is closed (thus, the return duct 54 is closed). This is shown by the dashed lines in Figure 2. The supply fan 64 and the extractor fan 66 are on and the chiller 62 is off. As a result, cool air is moved from outside the data hall through the inlet duct 42 into the data hall where it cools the computer equipment and the air is itself heated. The heated air is extracted from the data hall through the outlet port 52, and exhausted from the outlet port 50 of the outlet duct 48. In this way, cool fresh air is moved from outside the data centre hall into the data hall, and hot air is moved from inside the data centre hall to outside. No energy is used by the chiller.
When the outside temperature is high (above 27°C) and/or the relative humidity is less than 20%, the controller controls the climate control system 40 as follows to use a chiller to cool and humidify recirculating air. The inlet 56 and outlet 58 dampers are closed and the return damper 60 is open (shown by the solid lines in Figure 2); the supply fan 64 is on; and the chiller 62 is on. As the inlet and outlet dampers are closed and the return damper is open, air is recirculated from inside the data centre hall and no fresh air enters the data centre hall from outside. Hot air is sucked from the data centre hall through the outlet duct 48 and through the return duct 54 and into the inlet duct 42. The air is then sucked through the chiller (which cools and/or humidifies the air) by the supply fan and the cool air blown back into the interior of the data centre hall. Considerable energy is used by the chiller.
When the outside temperature is very low (below 17°C) and the relative humidity is more than 20%, the controller controls the climate control system 40 as follows to use a mixture of fresh and recirculating heated air to cool the data centre hall. The inlet 56, outlet 58 and return 60 dampers are partially open; the supply fan 64 is on; and the chiller 62 is off. In this way, cold fresh air enters the data centre hall from outside through the inlet duct 42, where it is heated by hot air from the return duct 54 that has passed through the outlet duct 48 having originated from the interior of the data centre hall. No energy is used by the chiller.
It is desirable to reduce the power requirements of data centre hall cooling and humidity control systems for cost and environmental reasons. Typically, around 25% of the power consumed by a data centre is used by the climate control system. As described above, air conditioning can be used to cool and control the humidity in data centres. However, this type of mechanical cooling uses a lot of power. Fresh air cooling, which uses air from outside the data centre to cool its interior, has been adopted to reduce power use. However, known fresh air cooling arrangements used in data centres cannot control the humidity of the air without using mechanical cooling arrangements, such as an air conditioner. As a result, power-hungry air conditioning is used even at relatively low temperatures in order to provide adequate humidity levels. Arrangements described herein address this problem.
The outlet port 46 of the inlet duct 42 enters the data centre hall under a raised floor 70 of the data centre hall, as illustrated in Figure 3. In known data centre halls, the raised floor typically comprises square tiles 72 of side 600mm. These tiles are held up by a leg or strut 74 at each corner of each tile, standing on the structural floor of the data centre hall. Racks for the computer equipment of the data centre hall are located on the raised floor tiles, and air from the climate control system enters the interior of the data centre hall through vents in the raised floor (not shown). Cabling for the data centre is also located under the raised floor in trays. The legs holding up the raised floor obstruct access for the cabling. It is difficult to access the under-floor area, and in particular to install the cabling there.
Data centre sites typically have such large electrical power requirements that they have their own electricity substations to supply energy from power stations. A typical data centre site has the same electrical power requirements as a small town. Data centre sites are provided with generators, usually powered by a diesel engine with a maximum power output of typically around 1000kVA, to power the whole data centre site in the event of a power failure from the data centre's electricity substation. Diesel engines of these generators would typically power a ship. Data centres also have so-called uninterruptible power supplies (UPSs), each in the form of a set of batteries, to provide electrical power for the whole data centre site immediately after power from the transformers fails, but before there is time for power to be delivered from the generators. This is because it typically takes 10 to 30 seconds after starting a generator before the generator can provide adequate power. In known arrangements, all the batteries of the UPSs are, in normal use, recharged from the electricity substation and, if no power is supplied from the electricity substation, the batteries are recharged from the generator that is operating. Each set of batteries typically stores enough energy to power the whole data centre site for 5 to 10 minutes, in case there are delays in starting the generators. The type of very large diesel engine generator typically used is difficult to transport to data centre sites, particularly as these sites can be located in remote areas where access may be only by narrow minor roads. Furthermore, the generators also take up a lot of space on the site. Typically, around 10% of the power to a data centre is used in charging the batteries of the UPSs. Figure 4 illustrates a typical known power supply arrangement 78 for a data centre. To provide redundancy in the power supply arrangement 78 for the data centre site, the data centre has power provided from two sources or sides 80, 82. In Figure 4, this is illustrated as source or side A 80 and source or side B 82. In normal use, the data centre site is powered from both side A and side B operating at part load, typically about half maximum load. In the event of power failure from either side A or side B, the other side operates at about maximum load to power the entire site alone.
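To give a sense of the stored energy this bridging role implies, the short Python sketch below bounds the battery capacity needed to ride through the generator start-up; the 2MW site load is an assumption for illustration, not a figure from the text.

def ride_through_energy_kwh(site_load_kw: float, minutes: float) -> float:
    """Battery energy needed to carry the whole site while the generators
    start, ignoring conversion losses and battery de-rating."""
    return site_load_kw * minutes / 60.0

# For an assumed 2MW site and the quoted 5 to 10 minute reserve:
print(ride_through_energy_kwh(2000.0, 5.0))   # about 167 kWh
print(ride_through_energy_kwh(2000.0, 10.0))  # about 333 kWh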
As illustrated in Figure 4, the electrical power provision for the data centre comprises a pair of switchgear arrangements 84,86, one 84 providing the source A and the other 86 the source B. Each switchgear arrangement 84,86 has a transformer 88,90 and a pair of generators 92,92',94,94' electrically connected to it. A UPS 96,98 is also electrically connected to each switchgear arrangement and to the data centre hall 99.
In normal use, electrical power is provided to the data centre hall 99 from both of the transformers 88,90, operating at about half maximum load. At the same time, the UPSs 96,98 are charged. If one transformer 88,90 fails, then the other transformer 88,90 operates at around maximum load to power the whole data centre hall 99. If no power is available from either transformer 88,90 (because no power is available from the electricity grid, for example), then all four generators 92,92',94,94' are started, and they are all run at around half load to power the whole data centre hall 99. If one of the generators on one side (A or B) fails, then the other generator on the same side operates at about maximum load to compensate. While the generators are running, they also charge the UPSs.
As mentioned above, since it typically takes 10 to 30 seconds after starting a generator 92,92',94,94' before it can provide adequate power, UPSs 96,98 are used to power the data centre hall 99 in the meantime. In the typical data centre hall of Figure 4, each of the UPSs can power the entire data centre hall. While the generators are starting, typically both UPSs are run at half maximum load to power the whole data centre hall. In the event of failure of one of the UPSs, the other UPS is used at about maximum load to power the whole data centre hall.
The UPSs 96,98 are housed in buildings separate from the data centre hall 99 and have their own dedicated cooling arrangements.
The many electrical and mechanical connections between the main electrical infrastructure components of a data centre, such as the connections between generator and switchgear, between switchgear and UPS, and between switchgear and the data centre hall, are conventionally made using bolts. These are time consuming to fasten using a tool and require a moderate amount of skill to fasten satisfactorily.
Examples of the system described herein address the problems of the prior art.
SUMMARY OF THE INVENTION
The invention is defined in the independent claims below to which reference should now be made. Advantageous features are set forth in the dependent claims.
A preferred embodiment of the invention is described in more detail below and takes the form of a data centre hall comprising a plurality of sections, each defining a space within, connected together to define a chamber within. The chamber is suitable for housing racks each comprising one or more computers.
This arrangement is quick and simple to assemble on site.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be described in more detail, by way of example, with reference to the accompanying drawings, in which:
Figure 1 (prior art) is a perspective view of a known data centre using a shipping container;
Figure 2 (prior art) is a schematic view of a known climate control arrangement of a known data centre;
Figure 3 (prior art) is an isometric view of a known floor arrangement for a data centre hall;
Figure 4 (prior art) is a schematic of a known electrical power supply arrangement for a data centre;
Figure 5 is a plan view of a data centre hall made from the sections of Figures 6A, 6B and 6C embodying an aspect of the present invention;
Figures 6A, 6B and 6C are plan views of sections forming the data centre hall of Figure 5 embodying an aspect of the present invention;
Figure 7 is a plan view of another data centre hall made from the sections of Figures 6A, 6B and 6C embodying an aspect of the present invention;
Figure 8A is an isometric view of a framework for the section of Figure 6C;
Figure 8B is an isometric view of a section of a data centre hall embodying an aspect of the present invention;
Figure 9A is a plan view of part of a floor of a data centre hall embodying an aspect of the present invention;
Figure 9B is a view from one end of part of the floor of Figure 9A;
Figure 10 is a cross-sectional view from one side of part of a data centre hall embodying an aspect of the present invention;
Figure 11 is a cross-sectional view of the data centre hall of Figure 10 viewed in a direction perpendicular to that of Figure 10;
Figure 11A is a cross-sectional view of part of the data centre hall of Figure 11;
Figure 12 is a schematic view of a climate control arrangement embodying an aspect of the present invention;
Figure 13 is a schematic view of electrical power arrangements for a data centre embodying an aspect of the present invention;
Figure 14 is a schematic view from above of a section of a data centre hall embodying an aspect of the present invention;
Figure 15A (prior art) is a schematic plan view of a known data centre site; and
Figure 15B is a schematic plan view of a data centre site embodying an aspect of the present invention, to approximately the same scale as the data centre site of Figure 15A.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
An example data centre hall 100 and example sections 102,104,106 forming it will now be described with reference to Figures 5, 6A, 6B, and 6C. The data centre hall may be located outside or within a building such as a warehouse.
The data centre hall 100 of Figure 5 has a plurality of sections (in this example, three) 102,104,106 each defining a space or interior space within. They are connected together to define a single chamber, room or contiguous space 108 within. The space defined by each section, and by the chamber, is suitable for housing racks 110 each comprising one or more computers. The environment control provided by each section of the chamber is adapted such that it controls the environment of the chamber as a whole. Environment control comprises, for example, one or more of the following: security, fire detection, fire suppression, air flow, humidity control, air temperature control, and climate control. The sections are not themselves self-contained or individually fully functioning data centres or data centre halls. The sections only function as a data centre hall once they are joined together to form the single chamber. The sections forming the data centre hall are constructed at a factory away from the site on which the data centre is to be built. The sections are provided in a kit of parts for assembly on a site suitable for a data centre. The build time on-site for the complete data centre is typically a little over two months for a 500m2 data centre hall.
Example sections 102,104,106 for connecting together to form a data centre hall 100 are shown in Figures 6A, 6B, and 6C. The sections each include a base 112 and a roof above (not shown in Figures 6A, 6B, and 6C). The roofs and the bases are rectangular. Each of the sections has sidewalls 114 (shown as solid lines in Figures 6A, 6B, and 6C), but they each include at least one open side 116 (no sidewall, shown as broken lines in Figures 6A, 6B, and 6C). They also all have side walls along both their transverse edges.
The section 102 in Figure 6A is a "left hand section". It includes a side wall 114 along one longitudinal side (the left hand side) and the other longitudinal side 116 is fully open. The side wall 114 along the longitudinal side includes an opening 118 at each end for a door and a stairway for accessing the interior or chamber of the data centre hall.
The section 104 in Figure 6B is a "centre section" in which both longitudinal sides are fully open 116.
The section 106 in Figure 6C is a "right hand section". It includes a side wall 114 along one longitudinal side (the right hand side, the side opposite to that of the "left hand section") and the other longitudinal side is fully open 116. The side wall along the longitudinal side includes an opening at each end for a door 120 and a stairway for accessing the interior of the data centre hall.
In order to build the data centre hall 100 of Figure 5, the open side 116 of a right hand unit 106 is connected to one open side 116 of a centre section 104 and the open side 116 of a left hand section 102 is connected to the other open side 116 of the centre section 104 and thus a single chamber or contiguous space is formed within. The interior or chamber of each section includes racks 110 for housing computers. The racks may be pre-installed at the factory or installed on site.
The sections 102,104,106 are typically about 16.8m long. The centre sections 104 are typically about 3.2m wide. The left hand and right hand sections 102,106 are typically about 3.3m wide (to include the thickness of the longitudinal side wall 114). This means that the sections can each be transported by lorry on normal roads. The UPSs (discussed below), switchgear and batteries for the sections can also be delivered in standard 6m long containers. Other dimensions of the sections 102,104,106 are possible. For example, each section could be between 26m and 12m long, between 5m and 2.2m wide, and between 5m and 2.5m high, preferably between 20m and 14m long, between 4.5m and 2.5m wide and between 4m and 3m high.
As shown in Figure 7, a bigger data centre hall 200 may be built by connecting together the open sides of a plurality of centre sections 104 (in this example, the open sides of ten centre sections are connected together) with one left hand 102 and one right hand section 106 to provide a 12 section data centre hall with a single chamber or contiguous space of 500m2. A typical 1000m2 data centre hall may be made by providing two of these 500m2 data centre halls together.
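As a worked example of this modular sizing, the Python sketch below estimates how many sections a target chamber area implies, calibrated on the stated 12 section, 500m2 hall; the per-section figure is an inference from those numbers, not a stated value.

import math

# Calibration from the text: a 12 section hall provides a 500m2 chamber.
AREA_PER_SECTION_M2 = 500.0 / 12.0   # roughly 41.7m2 of chamber per section

def sections_for_area(target_m2: float) -> int:
    """Number of sections implied by a target chamber area (illustrative)."""
    return math.ceil(target_m2 / AREA_PER_SECTION_M2)

print(sections_for_area(500.0))    # 12
print(sections_for_area(1000.0))   # 24, e.g. two 12 section halls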
Figure 8A shows a frame or framework 300 forming the supporting structure of a section 106 (in this case, a right hand section, but the other sections are constructed in a similar way). Using a framework for the supporting structure means that sections can be made with one or two open sides that have enough structural strength to be transported complete with racks of computers inside. Furthermore, the framework arrangement of the sections provides sufficient strength to the sections such that one section can be placed directly on top of another, without requiring a separate mezzanine between the sections to take the weight of the upper section. In this way, a two storey or two level data centre can be easily built. So, for example, a typical 1000m2 data centre hall may be made by placing two 500m2 data centre halls, each using 12 sections as described above, one on top of the other.
The frame supporting the structure of a section 106 comprises beams 302, such as I-beams with an I-shaped cross-section. The base 304 and roof 306 each comprise a pair of beams 308 extending longitudinally, with further beams 310, spaced apart in the longitudinal direction, extending transversely between them. The base and roof are spaced apart vertically by beams 312 extending vertically to form columns at each corner, and also by beams 314 extending vertically and spaced apart along the longitudinal edges of the base and roof. The beams are made from steel.
Rigid sheets are fixed to the beams of the base 304 and roof 306 to form a continuous surface across them. Rigid sheets are also fixed to the vertically extending beams 312, 314 to form side walls, in the places where the sections are not open. The sheets are not shown in Figure 8A. The edges of the sheets may be sealed to weatherproof the units when the units are to be located outdoors. Beams 316, spaced vertically from the base 304, each extend in a transverse direction across each section and are spaced apart in the longitudinal direction. The beams 316 are each mechanically connected to support beams 315 that extend longitudinally along each side of each section. The beams 316 support a floor 317, spaced from the base 304, that is illustrated in Figures 9A and 9B discussed further below.
In addition to the sheets forming the roof, base and sidewalls of the section as described above, the section may also include additional sidewalls 350 along the transverse ends of the section and an additional roof 352, as illustrated in Figure 8B. These provide additional weather protection to the data centre hall so that it can be located outdoors without being housed in a building. The section 104 shown in Figure 8B is a centre section in which both longitudinal sides are open. The same concept could be applied to "left hand" and "right hand" sections too. These sections would also have a sidewall of the type described below along one longitudinal, closed, external side.
The additional sidewalls 350 are rain screen walling or rain screen panels in the form of trapezoidal single skin rain screen panels. They are spaced from the adjacent sidewalls 114 of the data hall section 104. This provides ventilation. The rain screen panels are fixed to beams, in particular Z purlins, attached to external gantry modules forming the side walls of the data hall section.
The additional roof, roof module or roof system 352 is also constructed from panels, in particular trapezoidal single skin rain screen panels. The panels used for the roof are fixed to pre-formed modules that are fastened, for example bolted, to the top of the data hall sections 104. These provide a slope from the middle of the roof to the edges to allow rain to run off. Each data hall section has its own roof module, which is joined and sealed to those of adjacent data hall sections.
The floor 317 touched on above will now be described in more detail. Figure 9A shows a partially assembled floor 317. Square tiles 319 of side 600mm are placed on the beams 316 to form the floor. Two opposing edges of each tile are placed on adjacent beams (as shown in Figure 9B). Tiles are placed in this way, side by side, across all of the beams 316. At one longitudinal edge a longer, rectangular tile is used, as the width of the section is not exactly divisible by the side length of the square tiles. In this way, beams supported between sides of a data centre hall support the floor.
Referring back to Figure 8A, the floor 317 is located above the base 304, such that there is a space 318, and in particular a clear and open space, between the base and the floor. The space is 0.8m high in this example data centre hall. Adjoining floors of all the sections connect together to form a continuous floor through the whole chamber.
Racks 110 for servers and other computer equipment are fastened to the upper surface of the floor, as illustrated in Figure 10. Typically, 270 racks of 600mm width are provided in a 500m2 hall. Preferably, 17 racks are provided in each row of racks in each section. This allows adequate air flow while also allowing adequate escape routes in the data centre hall. Alternatively, between 12 and 17 racks may be provided in each row.
Parts of the cabling, in particular for communications, and of the cooling systems for the data centre hall are located in the space 318 between the base and the floor of the data centre hall, as also illustrated in Figure 11. The cabling is simply placed on the base, which advantageously causes the air flow in the space between the base and the floor to be turbulent. The under-floor space is unobstructed. It does not have struts or legs for supporting the floor tiles as in the prior art arrangements described above. Therefore, it is easy to install the cabling and, furthermore, cabling racks are not required.
The racks 110 for servers and other computer equipment extend longitudinally along each section 102,104,106 (see Figure 11) and the racks are spaced apart through the chamber (see Figure 10). In this example, a single row of racks extends through each section. As illustrated in Figure 10, aisles 322 are formed in the spaces between the racks. The aisles are alternately so-called cold aisles 324, which contain air cooled by the data centre's cooling or climate control system for cooling the computer equipment, and hot aisles 326, which contain hot air that has been heated by the computer equipment. Each cold aisle has a ceiling 328 above it, which is located between, and fixed and sealed to, the top edges 330 of neighbouring racks (or between a rack and a sidewall of a unit if the cold aisle is at the end of the racks). Doors or other closures (not shown), such as downwardly hanging plastics strips or butcher's blinds, are located at the end of each aisle. Cold air enters the cold aisles from the space 318 under the floor 317 through vents or grates 332 in the floor of the cold aisles. Hot air exits the hot aisles directly into the chamber. Front surfaces of servers in the racks face the cold aisles and rear surfaces of servers face the hot aisles. In this way, cold air is blown through the servers to cool them and is ejected into the hot aisles.
As shown in Figure 11, the cooling and humidity control system or climate control system 333 for the data centre hall is provided in cabinets 334 connected to a transverse side wall of each section 104. They are integral with the data centre hall. Locating them outside the walls of the data centre hall simplifies maintenance.
The cooling and humidity control, climate control arrangement or room cooling unit 333 housed in the cabinet 334, shown in detail in Figure 11A, is illustrated in the schematic of Figure 12. It is similar to the prior art system illustrated in Figure 2 in many respects, and like features in the Figures have been given like reference numerals.
Compared to the prior art arrangement of Figure 2, the climate control arrangement of Figure 12 additionally includes a wrap round coil 380 containing a heat transfer agent in the form of, for example, a solution that is at least part ethylene glycol. This arrangement uses rejected heat from the data centre hall to provide the latent heat of evaporation. In other words, it allows the humidity of the air to be increased without requiring mechanical cooling by a chiller or an air conditioner, resulting in a considerable energy saving over the prior art system illustrated in Figure 2. This is so-called indirect free cooling.
The wrap round coil arrangement 380 comprises a first coil or heat exchanger 382 and a second coil or heat exchanger 384 joined together in a circuit by tubing 386. Inside the tubing is the heat transfer agent, in the form of ethylene glycol. A pump 388 is located in the circuit for pumping the heat transfer agent around the circuit. The first coil is located in the inlet duct 42, between the entrance or inlet port 44 of the inlet duct and the inlet damper 56, and extends across the inlet duct. The second coil 384 is located in the inlet duct between where the return duct 54 joins the inlet duct and the chiller 62.
The control regime of the controller (not shown) is the same as that of the prior art example of Figure 2 in many respects. However, there is a further, fourth operating regime for when the outside temperature is low (between, for example, 17°C and 27°C) and the relative humidity in the data centre hall drops to a low level (typically less than 35%). In this case, rather than switching on the chiller 62 to humidify the air, the heat transfer agent in the wrap round coil 380 is used to cool the data centre hall while maintaining humidity. To do this, in this regime, the pump 388 in the wrap round coil circuit is started (it is stopped in the other three cooling regimes), and air from the data centre hall is recirculated through the system. To achieve the recirculation, the inlet 56 and outlet 58 dampers are closed and the return damper 60 is opened. Outside air contacts the first coil 382, which cools the heat transfer agent within the circuit of the wrap round coil. The cooled heat transfer agent is pumped around the circuit by the pump; it cools the hotter air in contact with the second coil 384 and is itself heated. The heated heat transfer agent is then pumped through the circuit and through the first coil again and cooled, and the cycle continues. In this way, the data centre hall is cooled and humidity is maintained. While energy is used by the pump of the wrap round coil, it is considerably less than the energy used by the chiller in the prior art arrangement of Figure 2 described above.
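Slotting this fourth regime into the regime selection of the earlier sketch gives something like the following; as before, this is an illustrative Python sketch only, and the precedence of the new branch within the 17°C to 27°C band is an assumption.

from dataclasses import dataclass

@dataclass
class PlantState:
    inlet_damper: float    # damper 56: 0.0 = closed, 1.0 = fully open
    outlet_damper: float   # damper 58
    return_damper: float   # damper 60
    supply_fan: bool       # fan 64
    chiller: bool          # chiller 62
    coil_pump: bool        # pump 388 of the wrap round coil circuit

def select_regime(outside_temp_c: float, outside_rh_pct: float,
                  hall_rh_pct: float) -> PlantState:
    """Four regime selection including indirect free cooling."""
    if 17.0 <= outside_temp_c <= 27.0 and hall_rh_pct < 35.0:
        # Indirect free cooling: recirculate hall air and reject heat
        # outside through the wrap round coil; the chiller stays off.
        return PlantState(0.0, 0.0, 1.0, supply_fan=True,
                          chiller=False, coil_pump=True)
    if outside_temp_c > 27.0 or outside_rh_pct < 20.0:
        # Mechanical cooling and/or humidification of recirculated air.
        return PlantState(0.0, 0.0, 1.0, supply_fan=True,
                          chiller=True, coil_pump=False)
    if outside_temp_c < 17.0:
        # Blend cold fresh air with recirculated warm air.
        return PlantState(0.5, 0.5, 0.5, supply_fan=True,
                          chiller=False, coil_pump=False)
    # Direct fresh air cooling.
    return PlantState(1.0, 1.0, 0.0, supply_fan=True,
                      chiller=False, coil_pump=False)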
With the arrangement of Figure 12, it has been estimated that a data centre in the UK would require the chiller to be in operation for only around 400 to 500 hours per year, compared with around 8200 hours per year for the prior art arrangement of Figure 2, resulting in a considerable energy saving. This cooling and humidity control system has a Power Usage Effectiveness (PUE) of less than 1.4.
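On those figures, the reduction in chiller duty is straightforward arithmetic; the snippet below uses the midpoint of the quoted 400 to 500 hour range.

prior_art_hours = 8200   # annual chiller hours, Figure 2 arrangement
with_coil_hours = 450    # midpoint of the quoted 400 to 500 hours
saving = 1.0 - with_coil_hours / prior_art_hours
print(f"chiller running time cut by about {saving:.0%}")   # about 95%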
The features of the schematic of Figure 12 relate to the detailed drawing of the climate control system of Figure 11A as follows. Each cabinet 334 comprises a floor 336 spaced from a base 338 of the cabinet, forming a duct 340. In a sidewall 341 of the cabinet that abuts the transverse side wall 114 of the data centre hall, there is a through hole 342 provided from the duct 340 that aligns with a through hole 346 in the transverse side wall of the data centre hall, providing a fluid path into the space between the base 304 and the floor 317 of the data centre hall. This corresponds to the outlet port 46 into the data centre hall of Figure 12. Another through hole 344 is provided through an upper surface of the transverse side wall of the data centre hall. A through hole 348 in the upper part of the side wall of the cabinet aligns with the through hole in the upper surface of the transverse side wall of the data centre hall. This corresponds to the outlet port 52 of Figure 12 for air exiting the data centre hall.
Electrical power is distributed through the data centre hall by a bus bar system. In normal use, the data centre is provided with electrical power from a high voltage electricity sub-station located on site. In the event of failure of the sub-station, back-up power is provided by a plurality of generators each powered by a diesel engine. The data centre has a plurality of UPSs including batteries to provide electrical power for the whole data centre site immediately after power fails from the electricity substation, but before there is time for power to be delivered from the generators. Advantageously, when the data centre is powered by the generators, it is controlled by a controller such that the batteries of the UPSs are not charged.
This reduces the power requirements of the generators with no disadvantageous effects, as the power supply only needs to revert to the electricity substation ("mains power") when the data centre is notified that adequate power can be immediately provided from the substation.
The electrical power provision for the data centre is illustrated in more detail in Figure 13. To provide redundancy in the power supply arrangement 400 for the data centre site, the data centre has power provided from two sources or sides 402, 404. In Figure 13, this is illustrated as source or side A 402 and source or side B 404. In normal use, the data centre site is powered from both side A and side B operating at part load, typically about half maximum load. In the event of power failure from either side A or side B, the other side operates at about maximum load to power the entire site alone.
As illustrated in Figure 13, the electrical power provision for the data centre comprises a pair of switchgear arrangements 406,408, one providing the source A and the other the source B. Each switchgear arrangement 406,408 has a transformer 410,412 and a pair of generators 414,414' or 416,416' electrically connected to it. Each of the switchgear arrangements is electrically connected to each section 104 of the data centre hall.
Each section 104 of the data centre hall is illustrated in Figure 14. Each section comprises a pair of busbars 488, 490, one electrically connected to the A source switchgear and the other to the B source switchgear. Computer equipment in the rack or racks 110 of each section is adapted to operate with either one or both of a pair of power supplies and is electrically connected to both busbars of the pair of busbars. Each section 104 of the data centre hall is provided with a pair of UPSs 484,486, each comprising a set of batteries. One of the pair of UPSs is electrically connected to one busbar of the pair of busbars and the other UPS is electrically connected to the other of the pair of busbars. In other words, a plurality of electrical energy storage devices or UPSs are provided, each having a busbar electrically connected, or directly electrically connected, to it. The busbars are provided to some, that is a subset but not all, of the computers of a data centre hall, and said some computers receive electrical energy from one or more of the plurality of electrical energy storage devices.
In use, the whole data centre site is normally powered from the two transformers 410,412, one on the A source or side 402 and one on the B source or side 404, each transformer operating at part load, typically about half maximum load. In the event of failure of one of the transformers, the other transformer, of the other source or side, operates at an increased load, typically about maximum load, to power the whole data centre site. In this way, redundancy in the transformers is provided.
In the event of failure of both transformers 410, 412 (typically when the electricity supply from the electricity grid to both transformers fails), a controller (not shown) controls the generators 414, 414', 416, 416' (typically powered by diesel engines) to power the whole data centre site. In this situation, the generators all operate at part load, typically about half maximum load. In the event of failure of one generator 414, 414', 416, 416', the power provided by the other generator of the pair is increased, typically to about maximum load, to compensate for the loss of power, and the data centre is still powered by both A and B side generators. If both generators of a pair fail, then the other pair of generators operates at increased load to compensate, typically at about their maximum power output, to power the whole data centre site; thus generators of only one source or side, A or B, are used to power the data centre site.
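The failover order just described can be made concrete in a short sketch. The Python below is an illustration of the behaviour only (the generator naming and the load bookkeeping are assumptions); the site load is taken as twice one generator's rating, so that all four generators normally run at about half load.

def generator_loads(side_a_ok: tuple, side_b_ok: tuple) -> dict:
    """Load on each running generator as a fraction of its rated power.

    Within a side, a surviving generator compensates for its failed
    partner; if a whole side is lost, the other side carries the full
    site load (two generators at about maximum output)."""
    loads = {}
    for side, oks in (("A", side_a_ok), ("B", side_b_ok)):
        other_oks = side_b_ok if side == "A" else side_a_ok
        side_share = 1.0 if any(other_oks) else 2.0  # whole site if other side is down
        n_up = sum(oks)
        if n_up:
            for i, ok in enumerate(oks, start=1):
                if ok:
                    loads[f"{side}{i}"] = side_share / n_up
    return loads

# Normal operation: all four generators at about half load.
print(generator_loads((True, True), (True, True)))    # {'A1': 0.5, 'A2': 0.5, 'B1': 0.5, 'B2': 0.5}
# One A side generator fails: its partner runs at about full load.
print(generator_loads((True, False), (True, True)))   # {'A1': 1.0, 'B1': 0.5, 'B2': 0.5}
# Both A side generators fail: the B pair carries the whole site.
print(generator_loads((False, False), (True, True)))  # {'B1': 1.0, 'B2': 1.0}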
As mentioned above, after failure of the power supply from the transformers 410,412, it takes a short time (typically about 10 to 20 seconds) before the generators 414, 414', 416, 416' operate at an adequate power level. During this period, the data centre site is powered by the UPSs 484,486. The UPSs are controlled by a controller (not shown), such that each pair of UPSs in each section 104 operates at part load, typically at about half maximum load, to power its section during a "mains" power failure while the generators are given time to generate adequate power. If one UPS of a pair fails, then the other UPS is run at an increased load (typically, at about maximum load) to compensate and power the whole section. In this way, each section has redundancy in its UPS provision. In effect, the UPSs are moved to the other side of the busbar compared to known arrangements. While this arrangement reduces reliability compared with the UPS provision of the prior art, availability is, importantly, increased. In this example, each UPS has a maximum power rating of around 80kVA to power each 17 rack section. In this example, the data centre site includes an electricity substation 80 or principal electrical energy supply to supply energy to the data centre site from power stations. The two transformers of the electricity substation each have a typical maximum power of around 2000kVA. The generators 480,482, each powered by a diesel engine, have a typical maximum load of 850kVA. In contrast to the prior art systems, the generators do not charge the batteries of the UPS system during a "mains" power or principal electrical energy supply failure; the UPS batteries or chargeable electrical energy storage devices are only charged from the "mains" or electricity sub-station. In other words, an electrical storage device of the data centre is charged only by a principal electrical energy supply of the data centre and never by an electricity generator of the data centre.
This results in a lower power requirement of the generators (in this example, 850kVA) compared to the prior art generators (1000kVA, in this example). These lower power generators are smaller in size (they are typically of a size suitable to power a truck or lorry) and thus easier to transport to site.
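A similar sketch captures the per-section UPS behaviour, using the 80kVA rating quoted above; the per-rack budget at the end is an inference from those figures, not a stated value.

UPS_RATING_KVA = 80.0    # per UPS, per the example in the text
RACKS_PER_SECTION = 17

def ups_loads(section_load_kva: float, ups_a_ok: bool = True,
              ups_b_ok: bool = True) -> dict:
    """Load on each UPS of a section's pair during a mains failure:
    normally shared equally; a surviving UPS takes the whole section."""
    up = [name for name, ok in (("A", ups_a_ok), ("B", ups_b_ok)) if ok]
    if not up:
        raise RuntimeError("section has lost both of its UPSs")
    return {name: section_load_kva / len(up) for name in up}

print(ups_loads(UPS_RATING_KVA))                   # {'A': 40.0, 'B': 40.0}
print(ups_loads(UPS_RATING_KVA, ups_b_ok=False))   # {'A': 80.0}
# Implied budget if one UPS must carry a fully loaded 17 rack section:
print(UPS_RATING_KVA / RACKS_PER_SECTION)          # about 4.7 kVA per rack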
Referring back to Figure 11, in this example the pair of UPSs 484,486 (only one is visible in Figure 11) are each housed in a cabinet 450 located on a transverse side wall 114 of each section 104 (as illustrated in Figures 6A, 6B, 6C), opposite to the transverse side wall on which the cabinet 334 for the cooling system or room cooling unit (RCU) is located. Electrical power is provided into the busbar system of the data centre through a cable 452 in the top of the cabinet into the roof 306 of the data centre hall. The cabinets are arranged in a similar way to those of the cooling arrangements. The UPSs or electrical energy storage devices are integral with the data centre hall. There is a fluid path from the climate control system to the interior of the data centre hall via the electrical energy storage device or UPS. In this way, the climate control system of the data centre hall controls the climate of both the data centre hall and the UPS.
The fluid path is provided by the following arrangement. Each UPS cabinet 450 comprises a floor 453 spaced from a base 454 of the cabinet, forming a duct 456. A fan 458 is located in the duct. The UPSs are mounted on the floor, which comprises vents or grilles 460 allowing air flow from the duct into a space around the UPSs 484. In a sidewall 462 of the cabinet that abuts the transverse side wall 114 of the data centre hall, there is a through hole 464 provided into the duct, and another through hole 466 is provided through an upper surface of the sidewall. The through holes 464,466 in the cabinet align with through holes 468,470 in the transverse side wall of the data centre hall. The through hole into the duct aligns with a through hole 468 into the space between the base 304 and the floor 317 of the data centre hall, and the through hole in an upper surface of the cabinet side wall aligns with a through hole 470 into the chamber of the data centre.
In use, a flow of cool air, provided from the climate control arrangement described above, in the space between the floor 317 and base 304 of the data centre hall is sucked, by the fan 458 in the duct 456, through the aligned through holes 464,468 in the sidewalls 114 of the data centre hall and cabinet 450 and into the duct of the cabinet. This cool air flows through the vents or grilles 460 in the floor 453 of the cabinet. The cool air cools the UPSs 484,486 and becomes heated. The heated air is then vented into the chamber of the data centre hall through the aligned through holes 466,470 in the upper portion of the side wall of the cabinet and the side wall of the data centre hall.
This cooling arrangement means that the UPSs 484,486 do not require their own climate control arrangement to cool them.
As illustrated in Figure 13, the electrical energy storage devices or UPSs 484, 486 of each section 104 are located at one end of the section. The UPSs of neighbouring sections are located at opposite ends of their sections. The climate control system or room cooling unit 333 of each section, as described above, is located at the end of the section opposite to the UPSs. In this way, along each side of the data centre hall, the ends of the sections alternate between UPS and climate control system.
As illustrated in Figure 13, the many electrical and mechanical connections 492 between the main electrical infrastructure components of the data centre, such as the connections between electricity generator 414, 414', 416, 416' and switchgear 406, 408, between switchgear and UPS 484, 486 or energy storage device, and between switchgear and the data centre hall, are made using plug and socket arrangements.
There are no such electrical and mechanical connections that require tools (although, of course, some electrical connections could be made using bolts). The plug and socket connections may be, for example, Vam (registered trade mark) connectors or Powerlock connectors. This arrangement makes the data centre particularly easy and quick to construct.
Figures 15A and 15B illustrate the footprint 500 of all the facilities required for a typical data centre with a 2000m2 data hall using a known system in an existing building (shown shaded in Figure 15A) and the footprint 502 using the arrangement described herein (Figure 15B). The footprint is reduced by approximately 50%, which is a substantial reduction in the size of site required.
Examples of the present invention have been described. It will be appreciated that variations and modifications may be made to the examples described within the scope of the present invention. Various separate features have been described and these may be provided alone or features may be incorporated in different combinations of some or all of the features.

Claims

1. A data centre hall comprising:
a plurality of sections each defining a space within connected together to define a chamber within, the chamber being suitable for housing racks each comprising one or more computers, wherein the sections each comprise at least one open side and a frame that provides sufficient strength to the sections such that one section can be placed on top of another.
2. A data centre hall according to claim 1, wherein at least part of environment control of the chamber is adapted for the whole chamber.
3. A data centre hall according to claim 2, wherein environment control forms one or more of the following: security, fire detection, fire suppression, air flow, humidity control, air temperature control, climate control.
4. A data centre hall according to any preceding claim, wherein the frame of the sections is for supporting sidewalls.
5. A data centre hall according to any preceding claim, wherein the sections each comprise at least two sidewalls.
6. A data centre hall according to any preceding claim, wherein at least one of the at least one open side is partially open.
7. A data centre hall according to any preceding claim, wherein at least one of the at least one open side is fully open.
8. A data centre hall according to any preceding claim, wherein the sections are between 26m and 12m long.
9. A data centre hall according to any preceding claim, wherein the sections are between 20m and 14m long.
10. A data centre hall according to any preceding claim, wherein the sections are substantially 16.8m long.
11. A data centre hall according to any preceding claim, wherein the sections are between 5m and 2.2m wide.
12. A data centre hall according to any preceding claim, wherein the sections are between 4.5m and 2.5m wide.
13. A data centre hall according to any preceding claim, wherein the sections are between substantially 3.2m and 3.3m wide.
14. A data centre hall according to any preceding claim, wherein the sections are between 5m and 2.5m high.
15. A data centre hall according to any preceding claim, wherein the sections are between 4m and 3m high.
16. A data centre hall according to any preceding claim, wherein a base and a roof of each section are spaced apart by columns at each corner of each section.
17. A data centre hall according to any preceding claim, wherein a base and a roof of each section comprise beams extending vertically between them, the beams being spaced apart along edges of the base and roof.
18. A data centre hall according to any preceding claim, wherein another data centre hall according to any preceding claim is located on top of it.
19. A section for forming a data centre hall, the section defining a space within suitable for forming a chamber when connected with one or more other sections, the chamber being suitable for housing racks each comprising one or more computers, wherein the section comprises at least one open side and a frame that provides sufficient strength to the section such that another section can be placed on top of it.
20. A kit of parts, the parts comprising sections defining a space within suitable for forming a chamber when the sections are connected together, the chamber being suitable for housing racks each comprising one or more computers, wherein the sections each comprise at least one open side and a frame that provides sufficient strength to the sections such that one section can be placed on top of another.
21. A method of building a data centre hall comprising connecting together a plurality of sections each defining a space within to define a chamber within, the chamber being suitable for housing racks each comprising one or more computers, wherein the sections each comprise at least one open side and a frame that provides sufficient strength to the sections such that one section can be placed on top of another.
22. A data centre hall climate control system, the climate control system
comprising:
a first heat exchanger and a second heat exchanger, the first heat exchanger and the second heat exchanger being connected together to form a circuit through which a heat transfer agent flows, wherein
the first heat exchanger is in contact with air outside a data centre hall and the second heat exchanger is in contact with air inside the data centre hall; in use, the heat transfer agent flows around the circuit such that the air outside the data centre hall affects the climate of air inside the data centre hall.
23. A data centre hall climate control system according to claim 22, comprising no additional sources of cooling in the circuit.
24. A data centre hall climate control system according to claim 22 or 23, wherein the heat transfer agent comprises ethylene glycol.
25. A data centre hall climate control system according to any of claims 22 to 24, wherein the first heat exchanger comprises a coil.
26. A data centre hall climate control system according to any of claims 22 to 25, wherein the second heat exchanger comprises a coil.
27. A data centre climate control system according to any of claims 22 to 26, comprising a pump for moving the heat transfer agent around the circuit.
28. A data centre hall climate control system according to claim 27, further
comprising a controller to control the pump such that the pump pumps when the temperature outside the data centre hall is within a predetermined range.
29. A data centre hall climate control system according to claim 27 or 28, further comprising a controller to control the pump such that the pump pumps when the humidity inside the data centre hall is within a predetermined range.
30. A data centre hall climate control system according to any of claims 22 to 29, wherein the climate of air inside the data centre hall comprises the temperature of air inside the data centre hall.
31. A data centre hall climate control system according to any of claims 22 to 30, wherein the climate of air inside the data centre hall comprises the humidity of air inside the data centre hall.
32. A data centre hall comprising a data centre hall climate control system
according to any of claims 22 to 31.
33. A data centre hall comprising a plurality of data centre hall climate control systems according to any of claims 22 to 31.
34. A data centre hall according to claim 33, wherein the data centre hall
comprises a plurality of sections and one of the plurality of data centre hall climate control systems is provided in each section.
35. A data centre hall according to claim 34, wherein each section defines a
space within suitable for forming a chamber when connected with one or more other sections, the chamber being suitable for housing racks each comprising one or more computers.
36. A data centre hall according to claim 34 or 35, wherein a data centre hall climate control system is located on one end of each section.
37. A data centre hall according to claim 36, wherein neighbouring sections have a data centre hall climate control system located on opposite ends of each section.
38. A method of controlling the climate of a data centre hall, the method
comprising:
moving a heat transfer agent around a circuit, the circuit arranged to be in part in contact with air outside a data centre hall and at least in part in contact with air inside the data centre hall, such that the air outside the data centre hall affects the climate of air inside the data centre hall.
39. A data centre hall floor comprising beams, each beam being supported
between sides of a data centre hall, the beams supporting a floor surface.
40. A data centre hall floor according to claim 39, wherein the floor surface is spaced from a base of the data centre hall.
41. A data centre hall floor according to claim 40, wherein the space between the base and the floor surface is unobstructed.
42. A data centre hall floor according to claim 40 or 41, wherein the space between the base and the floor surface comprises cabling.
43. A data centre hall floor according to any of claims 40 to 42, wherein air from a climate control system of the data centre hall flows in the space between the base and the floor surface.
44. A data centre hall floor according to any of claims 39 to 43, wherein the floor surface comprises at least one tile.
45. A data centre hall floor according to any of claims 39 to 44, wherein the data centre hall comprises a plurality of sections.
46. A data centre hall floor according to any of claims 39 to 45, wherein the
beams extend transversely across at least one of the sections.
47. A data centre hall floor according to any of claims 39 to 46, wherein the
sections each define a space within suitable for forming a chamber when connected with one or more other sections, the chamber being suitable for housing racks each comprising one or more computers.
48. A data centre hall comprising a data centre hall floor according to any of claims 39 to 47.
49. A method of installing a data centre hall floor comprising placing a floor
surface on beams, each beam being supported between sides of a data centre hall.
50. A data centre electrical power supply comprising:
a plurality of electrical energy storage devices each having a busbar electrically connected to it and the busbars are provided to some computers of a data centre hall to provide electrical energy to said some computers, wherein said some computers receive electrical energy from one or more of the plurality of electrical energy storage devices.
51. A data centre electrical power supply according to claim 50, wherein the at least one electrical energy storage device comprises at least one battery.
52. A data centre electrical power supply according to claim 50 or 51, wherein a pair of busbars are provided to a row of racks of computers of the data centre hall.
53. A data centre hall comprising a data centre electrical power supply according to any of claims 50 to 52.
54. A data centre hall according to claim 53, wherein the data centre hall
comprises a plurality of sections.
55. A data centre hall according to claim 54, wherein the sections each define a space within suitable for forming a chamber when connected with one or more other sections, the chamber being suitable for housing racks each comprising one or more computers.
56. A data centre hall according to claim 55, wherein each section comprises a row of racks of computers.
57. A data centre hall according to any of claims 54 to 56, wherein each section comprises the electrical power supply of each busbar of the section.
58. A data centre hall according to claim 57, wherein the electrical power supply of each section is located at one end of the section.
59. A data centre hall according to claim 58, wherein the electrical power supplies of neighbouring sections are located at opposite ends of their section.
60. A data centre hall according to claim 58 or 59, wherein a climate control system is located at the other end of the section to the one of the plurality of electrical power supplies.
61. A data centre hall according to claim 60, wherein the climate control system is the data centre hall climate control system according to any of claims 22 to 31.
62. A method of supplying electrical power to a data centre, the method
comprising:
providing electrical energy to some computers of a data centre hall from one or more of a plurality of electrical energy storage devices each of the plurality of electrical energy storage devices being electrically connected to said some computers by a busbar.
63. A data centre electrical power supply arranged such that each section of a data centre hall is provided with a separate electrical power supply from at least one electrical energy storage device.
64. A data centre electrical power supply according to claim 63, wherein the at least one electrical energy storage device comprises at least one battery.
65. A data centre electrical power supply according to claim 63 or 64, wherein each section of the data centre hall is provided with at least two electrical energy storage devices.
66. A data centre comprising the data centre electrical power supply of any of claims 63 to 65.
67. A method of supplying electrical power to a data centre hall, the method comprising each section of a data centre hall being provided with a separate electrical power supply from at least one electrical energy storage device.
68. A data centre electrical power supply comprising a principal electrical energy supply, an electricity generator and a chargeable electrical energy storage device; wherein the power supply is arranged such that the chargeable electrical energy storage device is charged only by the principal electrical energy supply and never by the electricity generator.
69. A data centre comprising the data centre electrical power supply according to claim 68.
70. A method of supplying electrical power to a data centre, the method
comprising an electrical energy storage device of the data centre being charged only by a principal electrical energy supply of the data centre and never by an electricity generator of the data centre.
71. A data centre in which at least one of the following connections are made by a plug and socket connection:
electricity generator to switchgear connection;
switchgear to electrical energy storage device connection; and
switchgear to data centre hall connection.
72. A method of making connections in a data centre, the method comprising making at least one of the following connections using a plug and socket connection: electricity generator to switchgear connection;
switchgear to electrical energy storage device connection; and
switchgear to data centre hall connection.
73. A data centre hall comprising a climate control system and an electrical
energy storage device, wherein the climate control system controls the climate of the data centre hall and the climate of the electrical energy storage device.
74. A data centre hall according to claim 73, wherein the electrical energy storage device is integral with the data centre hall.
75. A data centre hall according to claim 73 or 74, comprising a fluid path from the climate control system to an interior of the data centre hall via the electrical energy storage device.
76. A data centre hall according to any of claims 73 to 75, wherein the climate control system is arranged according to the data centre climate control system of any of claims 22 to 31.
77. A climate control method comprising controlling climate of a data centre hall and climate of a data centre electrical energy storage device using the same climate control system.
78. A data centre hall comprising a plurality of sections each defining a space within and together defining a chamber, the chamber being suitable for housing racks each comprising one or more computers, wherein one of a plurality of data centre hall climate control systems is provided in each section.
79. A data centre hall according to claim 78, wherein each section includes a climate control system.
80. A climate control method comprising controlling a plurality of data centre hall climate control systems, wherein one of a plurality of data centre hall climate control systems is provided in each section of a data centre hall comprising a plurality of sections each defining a space within and together defining a chamber, the chamber being suitable for housing racks each comprising one or more computers.
81. A data centre hall comprising:
a plurality of sections each defining a space within connected together to define a chamber within, the chamber being suitable for housing racks on a floor surface of the sections, each rack comprising one or more computers; the floor surface of each section being spaced from a base of the section forming a space for cabling of the data centre hall and air flow from a climate control system of the data centre hall.
82. A section for forming a data centre hall, the section defining a space within suitable for forming a chamber when connected with one or more other sections, the chamber being suitable for housing racks on a floor surface; each rack comprising one or more computers, the floor surface being spaced from a base forming a space for cabling of a data centre hall and air flow from a climate control system of a data centre hall.
83. A kit of parts for building a data centre hall, the parts comprising sections defining a space within suitable for forming a chamber when the sections are connected together, the chamber being suitable for housing racks on a floor surface of the sections, each rack comprising one or more computers, the floor surface of each section being spaced from a base of the section forming a space for cabling of the data centre hall and air flow from a climate control system of the data centre hall.
84. A method of building a data centre hall comprising connecting together a plurality of sections each defining a space within to define a chamber within, the chamber being suitable for housing racks on a floor surface of the sections, each rack comprising one or more computers, the floor surface of each section being spaced from a base of the section forming a space for cabling of the data centre hall and air flow from a climate control system of the data centre hall.
85. A data centre hall comprising:
a plurality of sections each comprising at least two sidewalls and a roof defining a space within connected together to define a chamber within, the chamber being suitable for housing racks each comprising one or more computers; and
an additional sidewall adjacent each sidewall of each section and an additional roof adjacent the roof of each section.
86. A data centre hall according to claim 85, wherein each additional sidewall is spaced outwardly from its adjacent sidewall.
87. A section for forming a data centre hall, the section comprising at least two sidewalls and a roof defining a space within suitable for forming a chamber when connected with one or more other sections, the chamber being suitable for housing racks each comprising one or more computers; and an additional sidewall adjacent each sidewall of each section and an additional roof adjacent the roof of each section.
88. A kit of parts for building a data centre hall, the parts comprising sections each comprising at least two sidewalls and a roof defining a space within suitable for forming a chamber when the sections are connected together, the chamber being suitable for housing racks each comprising one or more computers; and
an additional sidewall adjacent each sidewall of each section and an additional roof adjacent the roof of each section.
89. A method of building a data centre hall comprising connecting together: a plurality of sections each comprising at least two sidewalls and a roof defining a space within to define a chamber within, the chamber being suitable for housing racks each comprising one or more computers; and an additional sidewall adjacent each sidewall of each section and an additional roof adjacent the roof of each section.
PCT/GB2010/001966 2009-10-29 2010-10-22 A data centre WO2011051655A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US25607009P 2009-10-29 2009-10-29
US61/256,070 2009-10-29
GB0919010.9 2009-10-29
GB0919010A GB0919010D0 (en) 2009-10-29 2009-10-29 A data centre

Publications (2)

Publication Number Publication Date
WO2011051655A2 true WO2011051655A2 (en) 2011-05-05
WO2011051655A3 WO2011051655A3 (en) 2011-09-15

Family ID=41434884

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2010/001966 WO2011051655A2 (en) 2009-10-29 2010-10-22 A data centre

Country Status (2)

Country Link
GB (2) GB0919010D0 (en)
WO (1) WO2011051655A2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013070104A1 (en) * 2011-11-07 2013-05-16 Andal Investments Limited Modular data center and its operation method
ES2523806B1 (en) * 2013-05-31 2015-09-29 Ast Modular, S.L. Modular Data Processing Center
EP2884025A1 (en) * 2013-12-10 2015-06-17 Continental Rail, S.A. An assembly comprising generator built into an standard container
DE102017115693A1 (en) * 2017-07-12 2019-01-17 CoinBau GmbH Temperable module for a data center, data center and process for temperature control of a data center
DE102017119199A1 (en) * 2017-08-22 2019-02-28 SLT Schanze Lufttechnik GmbH & Co. Vermögensverwaltungs-KG Apparatus and method for conditioning the air of a room

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7278273B1 (en) 2003-12-30 2007-10-09 Google Inc. Modular data center
WO2007139560A1 (en) 2006-06-01 2007-12-06 Google, Inc. Modular computing environments
WO2008033921A2 (en) 2006-09-13 2008-03-20 Sun Microsystems, Inc. Operation ready transportable data center in a shipping container
WO2008039773A2 (en) 2006-09-25 2008-04-03 Rackable Systems, Inc. Container-based data center
US20090229194A1 (en) 2008-03-11 2009-09-17 Advanced Shielding Technologies Europe S.I. Portable modular data center

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3925679A (en) * 1973-09-21 1975-12-09 Westinghouse Electric Corp Modular operating centers and methods of building same for use in electric power generating plants and other industrial and commercial plants, processes and systems
JP4483072B2 (en) * 2000-11-20 2010-06-16 株式会社竹中工務店 Data center
US20060082263A1 (en) * 2004-10-15 2006-04-20 American Power Conversion Corporation Mobile data center
EP1981374B1 (en) * 2006-02-10 2011-10-12 American Power Conversion Corporation Storage rack management system and method
US7511959B2 (en) * 2007-04-25 2009-03-31 Hewlett-Packard Development Company, L.P. Scalable computing apparatus
US8434804B2 (en) * 2008-12-04 2013-05-07 I O Data Centers, LLC System and method of providing computer resources
GB2467808B (en) * 2009-06-03 2011-01-12 Moduleco Ltd Data centre

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9930812B2 (en) 2010-05-26 2018-03-27 Bripco, Bvba Data centre cooling systems
US11363737B2 (en) 2011-08-05 2022-06-14 Bripco Bvba Data centre
US9347233B2 (en) 2011-08-05 2016-05-24 Bripco Bvba Data centre
WO2013021182A1 (en) 2011-08-05 2013-02-14 Bripco Bvba Data centre
US10575430B2 (en) 2011-08-05 2020-02-25 Bripco Bvba Data centre
US10123451B2 (en) 2011-08-05 2018-11-06 Bripco Bvba Data centre
EP2833237A1 (en) * 2012-03-29 2015-02-04 Fujitsu Limited Modular datacenter and method for controlling same
EP2833237A4 (en) * 2012-03-29 2015-04-01 Fujitsu Ltd Modular datacenter and method for controlling same
GB2526448B (en) * 2012-12-19 2020-02-12 Mipco S A R L Method of adding a data centre building module to a data centre building
WO2014096099A2 (en) * 2012-12-19 2014-06-26 Mipco S.A R.L Method of adding a data centre building module to a data centre building
WO2014096099A3 (en) * 2012-12-19 2014-10-16 Mipco S.A R.L Method of adding a data centre building module to a data centre building, a data centre building module and a carrier for transporting such a module
GB2526448A (en) * 2012-12-19 2015-11-25 Mipco S A R L Method of adding a data centre building module to a data centre building, a data centre building module and a carrier for transporting such a module
US9572290B2 (en) 2014-07-16 2017-02-14 Alibaba Group Holding Limited Modular data center
US9943005B2 (en) 2014-07-16 2018-04-10 Alibaba Group Holding Limited Modular data center
WO2016011186A1 (en) * 2014-07-16 2016-01-21 Alibaba Group Holding Limited Modular data center
EP3968743A1 (en) 2016-01-29 2022-03-16 Bripco Bvba Improvements in and relating to data centres
CN108951854A (en) * 2018-06-01 2018-12-07 中国移动通信集团设计院有限公司 The framed building system of micromodule
CN108951854B (en) * 2018-06-01 2020-06-05 中国移动通信集团设计院有限公司 Micro-modular frame building system

Also Published As

Publication number Publication date
GB201017932D0 (en) 2010-12-01
WO2011051655A3 (en) 2011-09-15
GB0919010D0 (en) 2009-12-16
GB2474944A (en) 2011-05-04

Similar Documents

Publication Publication Date Title
WO2011051655A2 (en) A data centre
US10575430B2 (en) Data centre
CA2871773C (en) Environmental system and modular power skid for a facility
US9945142B2 (en) Modular data center
AU2011237755B2 (en) Container based data center solutions
US9282684B2 (en) Cooling systems for electrical equipment
KR20140023296A (en) Space-saving high-density modular data pod systems and energy-efficient cooling systems
US11622468B1 (en) Modular data center
GB2579164A (en) Fire Barrier
CA2764213A1 (en) Modular data center

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 10777070

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 EP: PCT app. not ent. europ. phase

Ref document number: 10777070

Country of ref document: EP

Kind code of ref document: A2