US20120147552A1 - Data center - Google Patents

Data center

Info

Publication number
US20120147552A1
Authority
US
United States
Prior art keywords
data center
container
power
air
refrigerant
Prior art date
Legal status
Abandoned
Application number
US13/195,817
Inventor
David Driggers
Current Assignee
Cirrascale Corp
Original Assignee
Cirrascale Corp
Priority date
Filing date
Publication date
Application filed by Cirrascale Corp
Priority to US13/195,817
Assigned to VS ACQUISITION CO LLC. Assignment of assignors interest (see document for details). Assignors: BARANDIARAN, ALEIX
Assigned to CIRRASCALE CORPORATION. Assignment of assignors interest (see document for details). Assignors: VS ACQUISITION CO LLC
Assigned to VINDRAUGA CORPORATION, A CALIFORNIA CORPORATION. Security agreement. Assignors: CIRRASCALE CORPORATION, A CALIFORNIA CORPORATION
Publication of US20120147552A1
Priority to CN2012102718440A
Assigned to VINDRAUGA CORPORATION. Security interest. Assignors: CIRRASCALE CORPORATION

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05F SYSTEMS FOR REGULATING ELECTRIC OR MAGNETIC VARIABLES
    • G05F 1/00 Automatic systems in which deviations of an electric quantity from one or more predetermined values are detected at the output of the system and fed back to a device within the system to restore the detected quantity to its predetermined value or values, i.e. retroactive systems
    • G05F 1/66 Regulating electric power
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24 HEATING; RANGES; VENTILATING
    • F24F AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F 11/00 Control or safety arrangements
    • F24F 11/30 Control or safety arrangements for purposes related to the operation of the system, e.g. for safety or monitoring
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 15/00 Systems controlled by a computer
    • G05B 15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/18 Packaging or power distribution
    • G06F 1/183 Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/20 Cooling means
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B 35/00 Electric light sources using a combination of different types of light generation
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B 41/00 Circuit arrangements or apparatus for igniting or operating discharge lamps
    • H05B 41/14 Circuit arrangements
    • H05B 41/36 Controlling
    • H05B 41/38 Controlling the intensity of light
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B 47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B 47/10 Controlling the light source
    • H05B 47/175 Controlling the light source by remote control
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/14 Mounting supporting structure in casing or on frame or rack
    • H05K 7/1485 Servers; Data center rooms, e.g. 19-inch computer racks
    • H05K 7/1497 Rooms for data centers; Shipping containers therefor
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/20 Modifications to facilitate cooling, ventilating, or heating
    • H05K 7/20709 Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K 7/20718 Forced ventilation of a gaseous coolant
    • H05K 7/20736 Forced ventilation of a gaseous coolant within cabinets for removing heat from server blades
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/20 Modifications to facilitate cooling, ventilating, or heating
    • H05K 7/20709 Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K 7/20718 Forced ventilation of a gaseous coolant
    • H05K 7/20745 Forced ventilation of a gaseous coolant within rooms for removing heat from cabinets, e.g. by air conditioning device
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/20 Modifications to facilitate cooling, ventilating, or heating
    • H05K 7/20709 Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K 7/20763 Liquid cooling without phase change
    • H05K 7/2079 Liquid cooling without phase change within rooms for removing heat from cabinets
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24 HEATING; RANGES; VENTILATING
    • F24F AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F 2110/00 Control inputs relating to air properties
    • F24F 2110/10 Temperature
    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F24 HEATING; RANGES; VENTILATING
    • F24F AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
    • F24F 2110/00 Control inputs relating to air properties
    • F24F 2110/20 Humidity
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2200/00 Indexing scheme relating to G06F1/04 - G06F1/32
    • G06F 2200/20 Indexing scheme relating to G06F1/20
    • G06F 2200/201 Cooling arrangements using cooling fluid

Definitions

  • the present invention is directed generally to a data center and more particularly to a modular data center.
  • Planning and constructing a traditional data center requires substantial capital, planning, and time.
  • the challenges of planning a traditional data center include maximizing computing density (i.e., providing a maximum amount of computing capacity within a given physical space). Further, it may be difficult, if not impossible, to use the space available efficiently enough to provide adequate computing capacity.
  • a data center capable of integration with an already existing data center is also advantageous.
  • FIG. 1 is a perspective view of a data center housed inside a container.
  • FIG. 2 is an enlarged fragmentary perspective view of the container of FIG. 1 omitting its first longitudinal side portion, front portion, and personnel door to provide a view of its interior portion.
  • FIG. 3 is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 1 taken laterally through the container and omitting its first longitudinal side portion, and second longitudinal side portion.
  • FIG. 4 is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 1 omitting its electrical system and taken longitudinally through the container.
  • FIG. 5 is an enlarged fragmentary cross-sectional view of the data center of FIG. 1 omitting its electrical system and taken laterally through the container.
  • FIG. 6 is a front view of a carriage of the data center of FIG. 1 housing exemplary computing equipment.
  • FIG. 7A is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 1 omitting portions of its vertical cooling system and taken longitudinally through the container.
  • FIG. 7B is an electrical schematic of the electrical system of the data center of FIG. 1 .
  • FIG. 8A is an enlarged fragmentary cross-sectional perspective view of an embodiment of the data center of FIG. 1 including an uninterruptible power supply (“UPS”) omitting its vertical cooling system and taken longitudinally through the container.
  • UPS uninterruptible power supply
  • FIGS. 8B and 8C are an electrical schematic of the electrical system of the data center of FIG. 1 including a UPS.
  • FIG. 9 is a perspective view of the carriage of FIG. 5 omitting the exemplary computing equipment.
  • FIG. 10 is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 1 omitting its electrical system and taken longitudinally through the container.
  • FIG. 11 is an enlarged fragmentary cross-sectional view of an alternate embodiment of a data center including openings and louvers along its roof and floor portions, omitting its electrical system, and taken laterally through the container.
  • FIG. 12 is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 11 including alternate louvers along its roof and floor portions, omitting its electrical system and portions of its vertical cooling systems, and taken longitudinally through the container.
  • FIG. 13 is an enlarged fragmentary perspective view of an alternate embodiment of a data center including openings and louvers along its roof portion and side portions.
  • FIG. 14 is an enlarged fragmentary perspective view of the data center of FIG. 13 omitting louvers along its roof portion and including louver assemblies along its side portions.
  • FIG. 15 is an enlarged fragmentary cross-sectional view of the insulated wall of the data center of FIG. 1 showing the outer container wall, a middle insulating layer, and an inner protective layer.
  • FIG. 16 is a perspective view of the base frame of the modular data center, including corner support braces.
  • FIG. 17 is a perspective view of the base frame with bottom, side and top supports used to mount and support internal equipment.
  • FIG. 18 is a perspective view of a carriage assembly for receiving computing equipment.
  • FIG. 19 is a front view of a carriage assembly showing air moving devices and designated spaces for computing equipment.
  • FIG. 20 is a perspective view of a facilities module showing a heat exchanger, cooling water pipes, humidifier, dehumidifier, electrical panels and conduits, external connections, a controller, and sensors.
  • the internal components of the module are redundant on both sides; therefore, only one side is shown for clarity.
  • FIG. 21 is a fragmentary view of a modular wall showing an inner wall, an insulating layer, and an outer wall.
  • FIG. 22 is a fragmentary view of an end cap showing an outer wall, insulating layer, inner wall, frame, and personnel door.
  • FIG. 23 is a perspective view of an alternative embodiment of a computing module showing a personnel door replacing a carriage.
  • FIG. 24 is a fragmentary view of a computing equipment module showing the frame, two side walls, a bottom wall, and a top wall.
  • FIG. 25 is a fragmentary view of a facilities module showing the frame, two side walls, one end wall, one bottom wall, and one top wall.
  • FIG. 26 is a perspective view of an embodiment of the modular data center showing one facilities module, two computing equipment modules, one end cap with personnel door, and external support connections.
  • aspects of the present invention relate to a data center 10 housed inside a container 12 .
  • the container 12 may be a conventional shipping container of the type typically used to ship goods via a cargo ship, railcar, semi-tractor, and the like.
  • the container 12 is portable and may be delivered to a use site substantially ready for use with minimal setup required.
  • the data center 10 may be preconfigured with desired computer hardware, data storage capacity, and interface electronics.
  • the data center 10 may be configured according to customer requirements and/or specifications.
  • the data center 10 is completely self-contained in the container 12 and may be substantially ready for use immediately following delivery, thus reducing the need for on-site technical staff and, in particular embodiments, reducing the need to install and set up computing hardware, route data cables, route power cables, and the like.
  • the environment inside the container 12 may be climate controlled to provide a suitable environment for the operation of computing equipment and hardware.
  • the environment inside the container 12 may provide optimal power consumption (including adequate power for lighting), cooling, ventilation, and space utilization.
  • the data center 10 may be configured to provide an efficient self-contained computing solution suitable for applications in remote locations, temporary locations, and the like.
  • the container 12 has a first longitudinal side portion 14 opposite a second longitudinal side portion 16 .
  • the container 12 also includes a first end portion 18 extending transversely between the first and second longitudinal side portions 14 and 16 and a second end portion 20 extending transversely between the first and second side portions 14 and 16 .
  • each of the first and second longitudinal side portions 14 and 16 may be about 40 feet long and about 9.5 feet tall.
  • each of the first and second longitudinal side portions 14 and 16 may be about 20 feet long and about 9.5 feet tall.
  • the first and second end portions 18 and 20 may be about 8 feet wide and about 9.5 feet tall.
  • One of the first and second end portions 18 and 20 may include a personnel door 24 .
  • the container 12 also includes a top or roof portion 30 extending transversely between the first and second side portions 14 and 16 and longitudinally between the first and second end portions 18 and 20 .
  • the container 12 also includes a bottom or floor portion 32 extending transversely between the first and second side portions 14 and 16 and longitudinally between the first and second end portions 18 and 20 .
  • the container 12 may be mounted on pillars 33 , blocks, or the like to be elevated above the ground.
  • insulation may be applied to the inside of the container 12 , covering the longitudinal side portions 14 and 16 , the end portions 18 and 20 , the top or roof portion 30 and the bottom or floor portion 32 .
  • a steel panel (not shown) is then applied to cover the insulation, providing protection for the insulation.
  • the steel panel may be attached to the container 12 side portions 14 and 16 , end portions 18 and 20 , top or roof portion 30 , and bottom or floor portion 32 by way of, for example, spot welds numerous enough to provide adequate mechanical support for the steel panels and applied insulation.
  • the insulation may be pre-formed foam panels of polyisocyanurate.
  • the floor portion 32 includes a support frame 40 having a first longitudinally extending framing member 42 A spaced laterally from a second longitudinally extending framing member 42 B.
  • the first and second longitudinally extending framing members 42 A and 42 B extend along and support the first and second longitudinal side portions 14 and 16 (see FIG. 1 ), respectively.
  • the floor portion 32 also includes a plurality of laterally extending framing members 44 that extend transversely between the first and second longitudinally extending framing members 42 A and 42 B.
  • a plurality of laterally extending interstices or lower plenums 46 are defined between the laterally extending framing members 44 . If, as illustrated in the embodiment depicted in FIG. 3 , the laterally extending framing members 44 have a C-shaped cross-sectional shape with an open inside portion 47 , the lower plenums 46 may each include the open inside portions 47 of the C-shaped laterally extending framing members 44 .
  • Air may flow laterally within the floor portion 32 inside the lower plenums 46 , which include the open inside portion 47 of the C-shaped laterally extending framing members 44 .
  • the laterally extending framing members 44 may help guide or direct this lateral airflow.
  • Each of the laterally extending framing members 44 may be constructed from a single elongated member having a C-shaped cross-sectional shape. Alternatively, each of the laterally extending framing members 44 may include three laterally extending portions: a first portion 50 , a second portion 52 , and a third portion 54 .
  • the first portion 50 is adjacent the first longitudinal side portion 14
  • the second portion 52 is adjacent the second longitudinal side portion 16
  • the third portion 54 is located between the first and second portions 50 and 52 .
  • a first pair of spaced apart longitudinally extending support surfaces 56 A and 56 B are supported by the first portion 50 of the laterally extending framing members 44 .
  • a second pair of spaced apart longitudinally extending support surfaces 58 A and 58 B are supported by the second portion 52 of the laterally extending framing members 44 .
  • the third portion 54 of the laterally extending framing members 44 is flanked by the longitudinally extending support surfaces 56 B and 58 B.
  • FIG. 4 provides a longitudinal cross-section of the data center 10 .
  • the first end portion 18 and the personnel door 24 have been omitted to provide a better view of the components inside the container 12 .
  • the first longitudinal side portion 14 , the second longitudinal side portion 16 , the first end portion 18 (see FIG. 1 ), the second end portion 20 , the roof portion 30 , and the floor portion 32 define an enclosed hollow interior portion 60 accessible to a user (such as a technician) via the personnel door 24 (see FIG. 1 ).
  • a plurality of racks or carriages 70 are arranged along each of the first and second longitudinal side portions 14 and 16 .
  • the first pair of spaced apart longitudinally extending support surfaces 56 A and 56 B (see FIGS. 2 and 3 ) supported by the first portions 50 of the laterally extending framing members 44 support the plurality of carriages 70 (see FIG. 3 ) extending along the first longitudinal side portion 14 .
  • the second pair of spaced apart longitudinally extending support surfaces 58 A and 58 B supported by the second portions 52 of the laterally extending framing members 44 support the plurality of carriages 70 (see FIGS. 3 and 4 ) extending along the second longitudinal side portion 16 .
  • a central aisle portion 72 is defined between the carriages 70 and above the third portions 54 of the laterally extending framing members 44 .
  • the third portions 54 of the laterally extending framing members 44 support a walkway 74 .
  • the walkway 74 may include a perforated portion 76 and one or more raceways or wire management channels 78 A and 78 B extending longitudinally alongside the perforated portion 76 .
  • one or more raceways or wire management channels may extend along the roof portion 30 in the central aisle portion 72 .
  • the perforated portion 76 may be constructed using a gas permeable, porous, or perforated material.
  • the perforated portion 76 may be constructed using perforated tiles 80 that permit air to flow through the tiles, from above the tiles to below the tiles and into the lower plenums 46 .
  • the perforated tiles 80 may be any standard perforated computer room tiles known in the art.
  • suitable tiles include manufacturing part number 20-0357 sold by Tate Access Floors, Inc. of Jessup, Md.
  • Each of the wire management channels 78 A and 78 B has an open top portion 82 and one or more removable covers 84 affixed thereupon.
  • Each of the covers 84 is couplable to the open top portion 82 of each of the wire management channels 78 A and 78 B.
  • the covers 84 may couple to the open top portion 82 of the channels 78 A and 78 B via a friction connection, snap fit connection, and the like.
  • the carriages 70 may be coupled to the first pair of spaced apart longitudinally extending support surfaces 56 A and 56 B and the second pair of spaced apart longitudinally extending support surfaces 58 A and 58 B by isolators or isolating couplers 86 configured to absorb movement of the container 12 relative to the carriages 70 .
  • the isolating couplers 86 help prevent damage to any computing equipment mounted to the carriages 70 that may be caused by movement of the container 12 occurring when the container is moved to a use location, during a seismic event (e.g., an earthquake), and the like.
  • as illustrated, each of the carriages 70 may also be coupled to one of the first and second longitudinal side portions 14 and 16 by isolating couplers 86 to prevent the carriages from toppling over or bumping into the first and second longitudinal side portions 14 and 16 of the container 12 during transport, a seismic event, and the like.
  • five carriages 70 are arranged along each of the first and second longitudinal side portions 14 and 16 .
  • five carriages 70 may be arranged along each of the first and second longitudinal side portions 14 and 16 when the container 12 side portions 14 and 16 are each 40 feet long.
  • two carriages 70 may be arranged along each of the first and second longitudinal side portions 14 and 16 when the container 12 side portions 14 and 16 are each 20 feet long.
  • a first upper plenum 90 A is provided adjacent to the first longitudinal side portion 14 and the roof portion 30 and a second upper plenum 90 B is provided adjacent to the second longitudinal side portion 16 and the roof portion 30 .
  • Air disposed in the first upper plenum 90 A is cooled by a vertical cooling system 100 A (described in greater detail below).
  • Air disposed in the second upper plenum 90 B is cooled by a vertical cooling system 100 B substantially similar to the vertical cooling system 100 A. The cooled air flows downwardly from the first and second upper plenums 90 A and 90 B into the central aisle portion 72 of the interior portion 60 of the container 12 and toward the walkway 74 .
  • the central aisle portion 72 essentially serves as a duct to receive and combine the cooled air from both of the vertical cooling systems 100 A and 100 B.
  • the vertical cooling systems 100 A and 100 B flood the central aisle portion 72 of the interior portion 60 of the container 12 between the carriages 70 with cooled air.
  • the air in the central aisle portion 72 of the interior portion 60 of the container 12 may have a temperature of about 75 degrees F. to about 79 degrees F., and in some implementations about 77 degrees F.
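The aisle temperature band above lends itself to a simple control check. The sketch below is illustrative only: the constants restate the stated 75 to 79 degrees F band and the about-77-degrees-F setpoint, but the function names and the proportional adjustment rule are assumptions, not anything disclosed in the patent.

```python
# Constants restate the aisle temperature band from the text; the
# control logic itself is a hypothetical illustration.
AISLE_MIN_F = 75.0
AISLE_MAX_F = 79.0
AISLE_TARGET_F = 77.0

def aisle_in_band(temp_f: float) -> bool:
    """True if the central-aisle air is within the described band."""
    return AISLE_MIN_F <= temp_f <= AISLE_MAX_F

def cooling_adjustment(temp_f: float, gain: float = 0.1) -> float:
    """Signed fractional change in cooling output; positive means the
    cooling systems should remove more heat (assumed proportional rule)."""
    return gain * (temp_f - AISLE_TARGET_F)

print(aisle_in_band(77.0))                 # True
print(round(cooling_adjustment(80.0), 2))  # 0.3
```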
  • the combined cooled air passes through the perforated portion 76 of the walkway 74 and into the laterally extending lower plenums 46 .
  • the cooled air inside the lower plenums 46 flows laterally along the laterally extending framing members 44 toward both the first and second longitudinal side portions 14 and 16 .
  • the cooled air is drawn up into the carriages 70 , flows upwardly therethrough, and returns to the first and second upper plenums 90 A and 90 B above the carriages 70 whereat it is cooled again by the vertical cooling systems 100 A and 100 B, respectively.
  • the vertical cooling systems 100 A and 100 B are mechanically separate and operate independently of one another. If one of the vertical cooling systems 100 A and 100 B is not functioning, the other functional vertical cooling system continues to cool the air flowing into the central aisle portion 72 and hence into the lower plenums 46 for distribution to both the carriages 70 at the first longitudinal side portion 14 and the carriages at the second longitudinal side portion 16 , without regard to which vertical cooling system is not functioning. In this manner, the data center 10 may be cooled by either of the vertical cooling systems 100 A and 100 B alone. Both of the vertical cooling systems 100 A and 100 B may be coupled to a common power source or separate power sources. Further, the vertical cooling systems 100 A and 100 B may be coupled to a common cooled water supply or source 310 (see FIG. 10 ).
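Because both vertical cooling systems discharge into the shared central aisle, the failover behavior described above reduces to a simple rule: every functional unit cools, and all carriages remain served as long as at least one unit runs. A hypothetical Python sketch (the function and status names are illustrative, not from the patent):

```python
def active_cooling(status: dict) -> list:
    """Names of the functional vertical cooling systems.

    Both units feed the same central aisle portion, so any non-empty
    result means every carriage on both side portions is still cooled.
    """
    return sorted(name for name, ok in status.items() if ok)

# With system 100B down, 100A alone still serves the shared aisle.
print(active_cooling({"100A": True, "100B": False}))  # ['100A']
```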
  • FIG. 6 provides a front view of one of the carriages 70 storing computing equipment 102 .
  • the particular computing equipment 102 received inside the carriage 70 may include any computing devices (e.g., blade-type servers, backplanes therefor, and the like) as well as any other type of rack-mounted electronic equipment known in the art.
  • the structure of the carriages 70 is described in detail below.
  • an electrical system 110 supplies electric power to the computing equipment 102 (see FIG. 6 ) housed by the carriages 70 .
  • the computing equipment 102 has been omitted from FIGS. 7A and 7B .
  • One or more electric utility lines 112 A and 112 B supply power to the electrical system 110 .
  • each of the electric utility lines 112 A and 112 B may provide about 600 Amperes WYE of power to the electrical system 110 .
  • a WYE power system will allow for the implementation of standard voltages used in the computing equipment industry like, for example, 110 VAC and 208 VAC.
  • 208 VAC is supplied to a plurality of power receptacles 132 to allow for increased efficiency of the internal power supplies of the individual pieces of computing equipment thereby reducing overall power consumption of the data center. Additionally, 110 VAC is supplied to a plurality of power receptacles to support computing equipment that cannot accept 208 VAC power input.
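The two service voltages follow from the WYE (star) geometry: the line-to-line voltage is the square root of three times the line-to-neutral voltage, so a nominal 120 VAC line-to-neutral system yields the 208 VAC supply mentioned above. A quick check (the helper name is illustrative):

```python
import math

def line_to_line(v_line_to_neutral: float) -> float:
    """Line-to-line voltage of a balanced three-phase WYE system."""
    return math.sqrt(3) * v_line_to_neutral

# 120 VAC nominal line-to-neutral gives about 208 VAC line-to-line,
# matching the higher receptacle voltage described in the text.
print(round(line_to_line(120.0)))  # 208
```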
  • the electrical system 110 includes one or more power distribution panels 120 A and 120 B each having a plurality of circuit breakers 122 A-M, and 122 A-N, respectively, that protect the various powered components (including the vertical cooling systems 100 A and 100 B, the computing equipment 102 , and the like) within the container 12 from power surges, such as an excess in current draw due to low voltage, a power cable interconnect fault, or any other condition that causes an excess current draw.
  • the circuit breakers 122 A-M of the power distribution panel 120 A and the circuit breakers 122 A-N of the power distribution panel 120 B may have a fault rating of less than 22 KAIC (Thousand Ampere Interrupting Capacity).
  • the utility line 112 A is coupled to the electrical system 110 through a disconnect switch 124 A configured to selectively disconnect the flow of current from the utility line 112 A to the power distribution panels 120 A and 120 B.
  • the disconnect switch may be configured for 600 Amps AC.
  • the utility line 112 B may be coupled to a separate disconnect switch 124 B configured to selectively disconnect the flow of current from the utility line 112 B.
  • the power distribution panel 120 A provides power to the vertical cooling system 100 A and the power distribution panel 120 B provides power to the vertical cooling system 100 B.
  • Each of the power distribution panels 120 A and 120 B also provides power to the carriages 70 along both the first and second longitudinal side portions 14 and 16 of the container 12 .
  • the five carriages 70 extending along the first longitudinal side portion 14 of the container 12 have been labeled “CARR. #9,” “CARR. #7,” “CARR. #5,” “CARR. #3,” and “CARR. #1,” and the five carriages 70 extending along the second longitudinal side portion 16 of the container 12 have been labeled “CARR. #8,” “CARR. #6,” “CARR. #4,” “CARR. #2,” and “CARR. #0.”
  • a plurality of electrical conductors 130 are connected to the circuit breakers 122 A-M of the power distribution panel 120 A and the circuit breakers 122 A-N of the power distribution panel 120 B.
  • Each of the electrical conductors 130 coupled to the circuit breakers 122 C-G and 122 I-M of the power distribution panel 120 A extends along the first longitudinal side portion 14 behind the carriages 70 and each of the electrical conductors 130 coupled to the circuit breakers 122 C-G and 122 I-M of the power distribution panel 120 B extends along the second longitudinal side portion 16 behind the carriages 70 .
  • the electrical conductors 130 extending along the first and second longitudinal side portions 14 and 16 transport electricity to a plurality of power receptacles 132 , which may be mounted to the first and second longitudinal side portions 14 and 16 , or the carriages 70 .
  • electrical conductors 130 conducting electricity to selected power receptacles 132 have been omitted.
  • two or more power receptacles 132 may be included for each carriage 70 .
  • two power receptacles 132 have been illustrated in FIG. 7B for each carriage 70 .
  • the power receptacles 132 for the carriage “CARR. #8” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 C of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #6” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 D of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #4” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 E of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #2” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 F of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #0” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 G of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #9” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 I of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #7” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 J of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #5” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 K of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #3” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 L of the power distribution panels 120 A and 120 B.
  • the power receptacles 132 for the carriage “CARR. #1” are coupled one each (via a pair of electrical conductors 130 ) to the circuit breakers 122 M of the power distribution panels 120 A and 120 B.
  • the electrical system 110 may include a separate power supply 133 (e.g., a 480 VAC power supply) for each of the power receptacles 132 .
  • Each of the power supplies 133 may be coupled between one of the circuit breakers 122 C-G and 122 I-M of the power distribution panels 120 A and 120 B and the power receptacles 132 .
  • the power supplies 133 are coupled to a controller 134 (described below).
  • the controller 134 sends instructions to the power supplies 133 instructing them to provide power to one or more of their respective power receptacles 132 or discontinue sending power to one or more of their respective power receptacles 132 . In this manner, the controller 134 controls which of the power receptacles 132 are powered and which are not.
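The receptacle-switching behavior described above can be sketched in software. The sketch below is a minimal illustration, not the patent's implementation; the `PowerSupply` and `Controller` class names and the receptacle identifiers are hypothetical.

```python
class PowerSupply:
    """Models one of the power supplies 133 feeding a carriage's receptacles 132."""
    def __init__(self, receptacle_ids):
        # Receptacles start unpowered until the controller instructs otherwise.
        self.powered = {rid: False for rid in receptacle_ids}

    def set_power(self, receptacle_id, on):
        # Acts on an instruction from controller 134 to power a receptacle on or off.
        self.powered[receptacle_id] = on


class Controller:
    """Models controller 134, which decides which receptacles are powered."""
    def __init__(self, supplies):
        self.supplies = supplies  # maps a carriage label to its PowerSupply

    def power_carriage(self, carriage, on):
        # Instructs the supply serving a carriage to power all of its receptacles.
        supply = self.supplies[carriage]
        for rid in supply.powered:
            supply.set_power(rid, on)


supplies = {"CARR. #0": PowerSupply(["R0A", "R0B"])}
controller = Controller(supplies)
controller.power_carriage("CARR. #0", True)
```

In this sketch the controller only toggles whole carriages; per-receptacle control would call `set_power` directly.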
  • circuit breaker 122 A of the power distribution panel 120 A is coupled by an electrical conductor 130 to the vertical cooling systems 100 A and the circuit breaker 122 B of the power distribution panel 120 B is coupled by an electrical conductor 130 to the vertical cooling systems 100 B.
  • the circuit breaker 122 B of the power distribution panel 120 A may be coupled to the vertical cooling systems 100 B and the circuit breaker 122 N of the power distribution panel 120 B may be coupled to the vertical cooling systems 100 A.
  • the circuit breaker 122 H of the power distribution panel 120 B may be coupled by an electrical conductor 130 to an optional humidifier 123 . Additionally, the circuit breaker 122 B of power distribution panel 120 A may be coupled by an electrical conductor 130 to an optional dehumidifier 125 .
  • the optional humidifier 123 and dehumidifier 125 may include a humidity sensor (not shown) configured to generate a humidity signal indicating the humidity inside the container 12 .
  • the controller 134 may be coupled to the optional humidifier 123 and dehumidifier 125 and configured to receive the humidity signal and interpret it to determine the humidity inside the container 12 .
  • the controller 134 may send instructions to the humidifier 123 and dehumidifier 125 instructing them to increase or decrease the humidity inside the container 12 based on the humidity signal.
  • the humidifier 123 may increase its water vapor output to increase the humidity of the air inside the container 12 or the dehumidifier 125 may increase its dry air output to decrease the humidity of the air inside the container 12 .
  • the functions of the humidifier 123 and dehumidifier 125 may be combined into a single humidity control unit (not shown).
  • the controller 134 may be coupled to the humidity control unit.
  • the controller 134 may send instructions to the humidity control unit instructing it to increase or decrease humidity inside the container 12 based on the humidity signal.
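The humidity-control decision described above amounts to comparing the humidity signal against a target band. The function below is a minimal sketch; the 40-60% band is an illustrative assumption, as the patent does not specify setpoints.

```python
def humidity_action(humidity_pct, low=40.0, high=60.0):
    """Return the instruction controller 134 would send to the humidity
    control unit for a given humidity signal (band limits are illustrative)."""
    if humidity_pct < low:
        return "increase"   # humidifier 123 raises its water vapor output
    if humidity_pct > high:
        return "decrease"   # dehumidifier 125 raises its dry-air output
    return "hold"           # humidity is within the acceptable band
```

The dead band between `low` and `high` prevents the unit from toggling continuously around a single setpoint.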
  • the electrical system 110 may include one or more uninterruptible power supplies (“UPS”) 114 , continuous power supplies (“CPS”), backup batteries, and the like.
  • the UPS 114 provides power to the various powered components of the data center 10 , including the vertical cooling systems 100 A and 100 B, the computing equipment 102 , and the like when power to the utility line 112 B is interrupted.
  • the electrical system 110 includes a single UPS 114 configured to provide power to all of the carriages 70 and other electrical equipment (e.g., the cooling systems 100 A and 100 B) located inside of the data center 10 .
  • the UPS 114 may include one or more batteries 115 .
  • One or more carriages 70 may be omitted from the data center 10 to provide physical space inside the container 12 for the UPS 114 .
  • a single UPS 114 may fit within the same footprint or spatial envelope occupied by one of the carriages 70 .
  • a single UPS 114 may fit within the same footprint or spatial envelope occupied by a pair of laterally adjacent carriages 70 .
  • the UPS 114 may fit within the spatial envelope of a first one of the carriages 70 and the batteries 115 of the UPS 114 may occupy the same spatial envelope as a second one of the carriages 70 laterally adjacent to the first.
  • the data center 10 may be configured based on the user's desires with respect to computing equipment 102 and the number of carriages 70 required thereby versus reliability (i.e., the inclusion or exclusion of one or more optional UPS 114 ).
  • the UPS 114 may receive electricity from the utility line 112 B and/or the utility line 112 A.
  • the UPS 114 is coupled to the power distribution panels 120 A and 120 B through a disconnect switch 124 C.
  • a UPS bypass switch 124 D is provided.
  • the switches 124 A, 124 B, and 124 C are closed and the UPS bypass switch 124 D is open.
  • the UPS 114 may be bypassed by opening switches 124 A, 124 B, and 124 C and closing the UPS bypass switch 124 D.
  • the controller 134 may be coupled to the switches 124 A, 124 B, 124 C, and 124 D and configured to open them to cut off power to the power distribution panels 120 A and 120 B.
  • control lines couple the controller 134 to the switches 124 A, 124 C, and 124 D.
  • the control lines carry instructions from the controller instructing the switches 124 A, 124 C, and 124 D to open to cut all power to the power distribution panels 120 A and 120 B.
  • Another control line (not shown) may be used to connect the controller 134 to the disconnect switch 124 B.
  • the UPS 114 is configured to detect when power to the power distribution panels 120 A and 120 B has been interrupted and begin discharging power thereto to avoid or reduce the duration of any loss of power to the other components of the electrical system 110 .
  • power received from the utility line 112 B (through the disconnect switch 124 B) is routed by the UPS 114 through the disconnect switch 124 C to the power distribution panels 120 A and 120 B.
  • the UPS 114 may be configured to begin discharging electricity from the batteries 115 to the power distribution panels 120 A and 120 B or alternatively, to route power from the utility line 112 A to the power distribution panels 120 A and 120 B.
  • the UPS 114 includes a static switch 116 .
  • the static switch 116 may transfer the load (e.g., the computing equipment 102 ) to the utility line 112 A. If the utility line 112 A is also not providing power, the UPS 114 will discharge electricity from the batteries 115 to the power distribution panels 120 A and 120 B of the electrical system 110 . Alternatively, upon loss of power in the utility line 112 B, the UPS 114 may begin discharging electricity from the batteries 115 to the power distribution panels 120 A and 120 B of the electrical system 110 .
  • When the UPS 114 has discharged all of its stored energy, the static switch 116 will transfer the load (e.g., the computing equipment 102 ) to the utility line 112 A. Coupling the static switch 116 of the UPS 114 to the utility line 112 A provides greater fault tolerance than coupling the UPS 114 to the utility line 112 B alone.
  • Tables A and B below provide a pair of non-limiting examples showing from which power source (the utility line 112 A, the utility line 112 B, or the batteries 115 ) the static switch 116 may direct power to the power distribution panels 120 A and 120 B.
  • the term “YES” indicates the power source is providing power at the static switch 116 and the term “NO” indicates the power source is not providing power at the static switch 116 .
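The source-selection rule behind tables of this kind can be sketched as a simple priority function. The ordering below follows one of the two sequences described above (prefer utility line 112 B, fail over to 112 A, then to the batteries); it is an illustration, not the patent's circuit.

```python
def select_power_source(line_112b_ok, line_112a_ok, batteries_charged):
    """Sketch of static switch 116 source selection. Each argument is True
    ("YES") when that source is providing power at the static switch."""
    if line_112b_ok:
        return "utility 112B"    # normal operation
    if line_112a_ok:
        return "utility 112A"    # first failover
    if batteries_charged:
        return "batteries 115"   # last resort: discharge stored energy
    return "no power"
```

Enumerating the function over all input combinations reproduces a YES/NO truth table of the form shown in Tables A and B.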
  • the electrical system 110 also provides power to a lighting system 140 .
  • the lighting system 140 may include a plurality of light emitting diodes (“LEDs”) 142 installed inside the interior portion 60 of the container 12 on the roof portion 30 within the central aisle portion 72 above the walkway 74 and between the upper plenums 90 A and 90 B.
  • the LEDs 142 may provide power and/or space efficiency over other types of light emitting devices.
  • the lighting system 140 may include fluorescent lights (not shown) installed in the central aisle portion 72 above the walkway 74 .
  • the electrical system 110 may include a 2 KVA lighting transformer (not shown).
  • the lighting system 140 may include emergency lights (not shown) located over the personnel door 24 for emergency egress upon loss of power.
  • the controller 134 may be coupled to the lighting system 140 and configured to turn the LEDs 142 on and off.
  • the lighting system 140 may also include a motion sensing unit 153 installed inside the interior portion 60 of the container 12 .
  • the motion sensing unit may generate a motion signal indicating the presence of motion inside the container 12 .
  • the controller 134 may be coupled to the optional motion sensing unit 153 and configured to receive the motion signal and interpret it to determine the presence of motion inside the container 12 .
  • the controller 134 may send instructions to the lighting system 140 instructing it to turn the LEDs 142 on.
  • the controller 134 may send instructions to the lighting system 140 instructing it to turn the LEDs 142 off after a pre-determined time from the cessation of the presence of motion inside the container 12 .
  • the controller 134 may instruct the lighting system 140 to turn the LEDs 142 off after the presence of motion inside the container 12 has not been detected for 10 minutes.
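The motion-activated lighting rule above reduces to a timeout check. The sketch below assumes timestamps in seconds and uses the 10-minute figure from the example; both are illustrative.

```python
def lights_should_be_on(last_motion_time, now, timeout_s=600):
    """Lighting rule for controller 134: the LEDs 142 stay on until no motion
    has been detected for the timeout (10 minutes in the example).
    last_motion_time is None when no motion has ever been detected."""
    if last_motion_time is None:
        return False
    return (now - last_motion_time) < timeout_s
```

A real controller would call this periodically, updating `last_motion_time` whenever the motion sensing unit 153 raises its motion signal.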
  • the motion signal may also be communicated to an intrusion detection system 196 .
  • a 24 VDC system 180 may be implemented.
  • the 24 VDC system may provide power to various controllers associated with the data center.
  • the controller functions may be for power monitoring and management 190 such as voltage and current, water supply monitoring 192 such as pressure, temperature and flow rate, various system alarms such as fire detection 184 , fire suppression 186 such as DuPont's FM200 Fire Suppression System, flood detection 188 , as well as motion sensing 153 , lighting 140 , intrusion detection 196 , and personnel door 24 control.
  • the 24 VDC system may use a dedicated UPS 194 to allow for continued monitoring and management in the event that AC input power to the container 12 is lost or interrupted.
  • the UPS 194 will have enough capacity to provide power to the 24 VDC system 180 for a minimum of 1 hour. It is to be appreciated that multiple DC systems, each outputting a different DC voltage such as, for example, 12 VDC or 48 VDC, may be implemented to accomplish all management and control functions. It is also to be appreciated that each DC system may use a single dedicated UPS, a single UPS may be used to supply power to all DC systems, or multiple DC systems may be provided with power from one of a plurality of DC system UPSs.
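The one-hour minimum runtime for the dedicated UPS 194 implies a back-of-envelope capacity calculation. The sketch below is illustrative: the load figure and conversion efficiency are assumptions, not values from the patent.

```python
def min_battery_capacity_wh(load_w, runtime_h=1.0, efficiency=0.9):
    """Minimum stored energy (watt-hours) the UPS 194 needs to carry the
    24 VDC monitoring load for the stated runtime. The 90% conversion
    efficiency is an assumed figure."""
    return load_w * runtime_h / efficiency


def capacity_ah(capacity_wh, bus_volts=24.0):
    """Convert a watt-hour requirement to amp-hours at the 24 VDC bus."""
    return capacity_wh / bus_volts
```

For an assumed 216 W monitoring load, the UPS would need roughly 240 Wh, or about 10 Ah at 24 VDC, to meet the one-hour minimum.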
  • the container 12 may include a network connection 150 , such as a modem, router, and the like, coupled to an external network 152 , such as the Internet.
  • the network connection 150 may be connected to the external network 152 by any suitable connection known in the art, including a wireless connection, a segment of copper cable, a segment of fiber optic cable, and the like.
  • the container 12 may be coupled to an external network implemented in a neighboring building by one or more network cable connections (e.g., 48 CAT6 GigE network connections).
  • the container 12 may also include an internal or private network 154 , such as a local area network (“LAN”), used to route data within the data center 10 between the various pieces of computing equipment 102 .
  • the private network 154 may be implemented as an Ethernet network.
  • Network cabling may couple the computing equipment 102 in the carriages 70 to the various network components of the private network 154 .
  • the network cabling may include any suitable cables known in the art, including copper cables, fiber optic cables, and the like.
  • the network cabling may be coupled along the first and second longitudinal side portions 14 and 16 as appropriate to effect a connection with the computing equipment 102 residing in the carriages 70 . Further, the network cabling may reside inside the wire management channels 78 A and 78 B.
  • the computing equipment 102 in the carriages 70 may be coupled to the various components of the private network 154 via wireless connections.
  • the controller 134 is also coupled to the private network 154 .
  • the electrical system 110 may also be connected to the private network 154 .
  • each of the power sources 133 (coupled to the power receptacles 132 ) may be coupled to the private network 154 .
  • the controller 134 may send instructions to the power sources 133 over the private network 154 .
  • the lighting system 140 may be coupled to the private network 154 and the controller 134 may send instructions to the lighting system 140 over the private network 154 .
  • Other components such as the optional humidifier 123 , dehumidifier 125 , and the vertical cooling systems 100 A and 100 B may be coupled to the private network 154 for the purposes of communicating with the controller 134 and/or receiving instructions therefrom.
  • the network connection 150 may be coupled to the private network 154 for the purposes of providing communication between the private network 154 and the external network 152 .
  • Methods and devices for implementing the private network 154 , coupling the computing equipment 102 to the private network 154 , and coupling the private network 154 to the external network 152 are well-known in the art and will not be described in detail herein.
  • the controller 134 is coupled to and/or includes a memory 136 .
  • the memory 136 includes instructions executable by the controller 134 .
  • the controller 134 may also be optionally coupled to one or more temperature sensors 137 disposed inside the interior portion 60 of the container 12 each configured to send a temperature signal to the controller 134 .
  • the memory 136 may include instructions that when executed by the controller 134 instruct the controller to interpret the temperature signal received from each of the temperature sensors 137 to obtain a temperature measurement.
  • the memory 136 may also store the temperature measurement(s) obtained from the temperature signal(s), the temperature signal received from each of the temperature sensors 137 , and the like.
  • the controller 134 may control both the computing equipment 102 (see FIG. 6 ) and the environment inside the container 12 over the private network 154 .
  • one or more remote computing devices coupled to the external network 152 may communicate with the controller 134 .
  • the remote computing devices may receive temperature information from the controller 134 .
  • the remote computing devices may receive humidity information from the controller 134 that the controller received from the optional humidifier 123 and dehumidifier 125 .
  • the remote computing devices may send instructions to the controller 134 instructing it to send instructions to the optional humidifier 123 and dehumidifier 125 to increase or decrease the humidity inside the container 12 .
  • the remote computing devices may also instruct the controller 134 to send instructions powering up or powering down selected power sources 133 (coupled to selected power receptacles 132 ). Further, the remote computing devices may also instruct the controller 134 to turn on or off the LEDs 142 of the lighting system 140 .
  • the controller 134 may monitor environmental systems inside the container 12 .
  • the vertical cooling systems 100 A and 100 B may each include a cooling system processor or controller 380 (described below).
  • the controller 134 may be coupled to the cooling system controller 380 for the purposes of receiving information (e.g., alerts, warnings, system faults, and the like) therefrom.
  • the controller 134 may send the information it receives to the remote computing device(s).
  • the controller 134 may transmit an alert to the remote computing device(s) indicating a problem has occurred (e.g., the flow of cooled water has stopped, the temperature of the flow of refrigerant is too high to adequately cool the computing equipment 102 , and the like).
  • the controller 134 may send instructions to the cooling system controller 380 instructing it to operate or not operate based on the temperature inside the container 12 .
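The operate/do-not-operate decision above can be sketched as a hysteresis rule. The setpoint and band below are illustrative assumptions; the patent does not specify control thresholds.

```python
def cooling_command(temp_c, setpoint=25.0, hysteresis=2.0, running=False):
    """Instruction controller 134 might send to the cooling system
    controller 380 based on the temperature inside the container 12.
    Setpoint and hysteresis values are illustrative."""
    if temp_c > setpoint + hysteresis:
        return "operate"   # clearly too warm: run the cooling system
    if temp_c < setpoint - hysteresis:
        return "stop"      # clearly cool enough: stop the cooling system
    # Inside the dead band, keep the cooling system in its current state.
    return "operate" if running else "stop"
```

The dead band keeps the cooling system from cycling rapidly when the temperature hovers near the setpoint.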
  • the memory 136 may include instructions for monitoring the electrical system 110 and instructing the controller 134 to report information related to power availability and consumption to the remote computing device(s) (not shown) coupled to the external network 152 . Further, the controller 134 may receive instructions from the remote computing device(s), such as an instruction to power down the electrical system 110 (e.g., open switches 124 A, 124 B, 124 C, and 124 D), power selected power sources 133 (coupled to one or more power receptacles 132 ), turn off the power to selected power sources 133 (coupled to one or more power receptacles 132 ) and the like.
  • the controller 134 may monitor and/or control the computing equipment 102 (see FIG. 6 ).
  • the memory 136 may include instructions for monitoring the UPS 114 , individual pieces of computing equipment 102 (e.g., individual blade servers), and the like. Further, the controller 134 may receive instructions from the remote computing device(s), instructing the controller to turn individual pieces of computing equipment 102 on or off, provide data thereto, and the like.
  • the controller 134 may include a user interface 138 configured to display the temperature measurement(s) obtained from the temperature signal received from each of the temperature sensors 137 , and any data received from other systems inside the container 12 .
  • An exemplary embodiment of the carriage 70 is provided in FIGS. 5 , 6 , and 9 .
  • the carriage 70 is configured to store computing equipment 102 , which may include a plurality of computing devices (e.g., blade-type servers) as well as any other type of rack mounted electronic equipment known in the art.
  • the carriage 70 has a substantially open base portion 210 opposite a substantially open top portion 212 .
  • the carriage 70 also has a substantially open front portion 214 into which computing equipment 102 , fans, cabling, rack mountable equipment, accessories, and the like are received for storage and use therein. Opposite the open front portion 214 , the carriage 70 has a back portion 216 .
  • Cabling and wiring such as electrical wiring, communication cables, and the like, may enter the carriage 70 through the back portion 216 , which may be open and/or may include one or more apertures 215 configured to permit one or more cables or wires to pass therethrough.
  • the electrical conductors 130 and optional communication cabling may extend along the first and second longitudinal side portions 14 and 16 .
  • the power receptacles 132 are positioned adjacent to the back portions 216 of the carriages 70 along the first and second longitudinal side portions 14 and 16 .
  • Such power receptacles 132 and communication cabling may be coupled to the computing equipment 102 in the carriage 70 through its back portion 216 .
  • an amount of computing equipment 102 housed in the interior portion 60 of the container 12 is determined at least in part by the number of carriages 70 and the capacity of each to house computing equipment 102 .
  • the carriage 70 includes a frame 220 to which computing equipment 102 , fans, cabling, rack mountable equipment, accessories, and the like may be mounted or otherwise attached.
  • the frame 220 is configured to permit air to flow into the open base portion 210 , up through the carriage 70 , through and around the computing equipment 102 and other items therein, and out the open top portion 212 .
  • the frame 220 includes a plurality of spaced apart upright support members 222 A-H, defining one or more upright equipment receiving areas 224 A-C.
  • the embodiment depicted has three equipment receiving areas 224 A-C, defined by four upright support members 222 A-D arranged along the front portion 214 of the carriage 70 and four upright support members 222 E-H arranged along the back portion 216 of the carriage 70 .
  • Upright support member 222 C may be removable, as opposed to support members 222 A-B and 222 D-H which are fixed in place. The removal of upright support member 222 C and the associated front to back extending members 236 may allow for the installation of any configuration of computer equipment spanning equipment receiving areas 224 B and 224 C without any modification.
  • upright support member 222 C and the associated front to back extending members 236 may be removed to allow the installation of a custom designed server chassis oriented longitudinally along side portion 14 and 16 . Also, removing upright support member 222 C and the associated front to back extending members 236 may allow for the onsite installation of customer equipment without any modification of the carriage 70 .
  • carriages having a different number of upright equipment receiving areas may be constructed by applying ordinary skill in the art to the present teachings and such embodiments are within the scope of the present teachings.
  • the upright support members 222 A-H are coupled together at the open top portion 212 of the carriage 70 by a vented top plate 226 having apertures 228 A-F in communication with the equipment receiving areas 224 A-C through which heated air may exit the equipment receiving areas 224 A-C and be passed to the corresponding first or second upper plenum 90 A or 90 B positioned thereabove.
  • Apertures 228 A-B may be joined together to create one large aperture.
  • apertures 228 C-D and 228 E-F may be joined together. Joining the apertures together may be done to support some HVAC devices.
  • the upright support members 222 A-H are coupled together at the open base portion 210 along the front portion 214 of the carriage 70 by a front rail 230 and at the open base portion 210 along the back portion 216 of the carriage 70 by a back rail 232 .
  • the four upright support members 222 A-D aligned along the front portion 214 of the carriage 70 may be coupled to the four upright support members 222 E-H aligned along the back portion 216 of the carriage 70 by any desired number of front-to-back extending members 236 .
  • the members 236 may provide structural stability to the carriage 70 . Further, the members 236 may provide attachment points to which computing equipment 102 , fans, cabling, rack mountable equipment, accessories, and the like may be coupled. Further, the upright support members 222 E-H along the back portion 216 may be coupled together by any number of members 238 extending therebetween.
  • the members 238 may provide stability and/or attachment points to which computing equipment 102 , fans, cabling, rack mountable equipment, accessories, and the like may be coupled.
  • apertures 239 in the members 238 are configured to provide throughways for wiring, cabling, and the like.
  • the upright support members 222 A-D along the front portion 214 of the carriage 70 may include openings 240 A-F each configured to receive computing equipment, such as a rectifier, network switching device (e.g., routers), and the like.
  • two of the openings 240 E and 240 F each house a rectifier 242 and four of the openings 240 A-D each house a network switching device 244 .
  • the rectifier 242 may be configured to rectify from about 480 VAC to about 48 VDC.
  • the power receptacle 132 coupled to the power distribution panel 120 A may be coupled to one of the rectifiers 242 and the power receptacle 132 coupled to the other power distribution panel 120 B may be coupled to the other of the rectifiers 242 . In this manner, each of the rectifiers 242 receives power from a different power distribution panel 120 A or 120 B.
  • the upright support members 222 E-H along the back portion 216 of the carriage 70 may include one or more openings 241 substantially similar to the openings 240 A-F and aligned with one or more corresponding opening 240 A-F of the upright support members 222 A-D.
  • One or more open-ended conduits 250 A-F may extend between the upright support members 222 A-D along the front portion 214 and the upright support members 222 E-H along the back portion 216 .
  • Each of these conduits 250 A-F has an open front end portion 251 opposite an open back end portion 253 (see FIG. 3 ).
  • Each conduit 250 A-F may be configured to provide a throughway for cabling (not shown) from the front portion 214 of the carriage 70 to the back portion 216 of the carriage 70 .
  • the cabling may include Category 6 (“Cat-6”) cable for Ethernet connections.
  • one or more network connections 252 A-F, such as an Ethernet jack, may be located adjacent the front portion 214 of the carriage 70 and coupled to cables (not shown) extending through the conduits 250 A-F.
  • the equipment receiving areas 224 A-C may each be divided into four sections “S 1 -S 4 ” (for a total of 12 sections per carriage 70 ). Each section “S 1 -S 4 ” may use twenty-four Ethernet connections; however, this is not a requirement.
  • the equipment receiving areas 224 A-C may each be divided into five sections “S 1 -S 5 ” (for a total of 15 sections per carriage 70 ), where section S 5 (not shown) may be used to implement a multiport networking device.
  • the networking device may contain twenty four Ethernet ports or other suitable type of communication ports.
  • each blade slot may have two Ethernet ports.
  • each blade slot may include more than two Ethernet ports.
  • more than one Ethernet port may be located in a front portion of a blade server and more than one Ethernet port may be located in a back portion of a blade server.
  • the equipment receiving areas 224 A-C are not limited to use with blade servers having a particular number of Ethernet ports. Further, the equipment receiving areas 224 A-C are not limited to use with blade servers having Ethernet ports and may be used with blade servers having other types of communication ports.
  • a plurality of air moving assemblies 260 each having a plurality of air moving devices 264 (e.g., fans) oriented to blow air upwardly through the equipment receiving areas 224 A-C, are mounted therein between the upright support members 222 A-H of the carriage 70 .
  • Each of the air moving assemblies 260 includes a frame 262 configured to be mounted inside one of the equipment receiving areas 224 A-C.
  • the frame 262 houses the plurality of air moving devices 264 , each of which is oriented to flow air in substantially the same upward direction.
  • the carriage 70 includes nine air moving assemblies 260 . However, this is not a requirement.
  • the number of air moving assemblies mounted inside each of the equipment receiving areas 224 A-C may be determined based at least in part on the amount of air circulation required to cool the computing equipment received therein.
  • the air moving assemblies 260 each receive power from the power conductors 130 (see FIG. 7 ) carrying power to the carriages 70 and powering the computing equipment 102 housed therein.
  • Computing equipment, or the like, that is mounted in the region between upright support members 222 B and 222 F, or 222 C and 222 G may not receive adequate air flow due to the front to back extending members 236 blocking the path for air flow through the region.
  • one or more air moving assemblies 260 may be installed transversely between the upright support members 222 associated with the equipment to allow for the heated air produced by the equipment to be moved longitudinally into an upright equipment receiving area 224 A-C where it will mix with the air flow created by the vertical cooling system.
  • the upright equipment receiving areas 224 A-C may be customized to receive a predetermined collection of computing equipment (e.g., a predetermined number of blade servers).
  • the upright equipment receiving areas 224 A-C may be configured to receive blade servers 103 in an upright orientation.
  • the upright equipment receiving areas 224 A-C may be configured to receive blade servers in a horizontal orientation.
  • the upright equipment receiving areas 224 A-C may be configured to receive computing equipment in a longitudinal orientation. When computing equipment is to be installed longitudinally, it may be necessary to remove upright support member 222 C and the associated front to back extending members 236 to create the required spatial envelope for the computing equipment to occupy.
  • standard 19′′ rack mount computer gear may be mounted inside the upright equipment receiving areas 224 A-C.
  • the fans inside the rack mount computer gear will draw air into the upright equipment receiving areas 224 A-C from the central aisle portion 72 of the interior portion 60 of the container 12 .
  • This air will pass through the rack mount computer gear, be heated thereby, and exit from the rack mount computer gear adjacent to the back portion 216 of the carriage 70 .
  • the heated air may exit the rack mount computer gear inside the carriage 70 or between the back portion 216 of the carriage 70 and an adjacent one of the first and second longitudinal side portions 14 and 16 .
  • the air moving assemblies 260 will direct the heated air inside the carriage 70 upwardly toward the open top portion 212 of the carriage 70 .
  • the rack mount computer gear may be mounted inside the upright equipment receiving areas 224 A-C in any orientation.
  • the rack mount computer gear may be mounted inside the upright equipment receiving areas 224 A-C in a manner resembling blade servers.
  • an alternate embodiment of the carriage 70 may be used, in which the rack mount computer gear may be mounted to extend longitudinally inside the container 12 .
  • the rack mount computer gear may be mounted inside the equipment receiving areas 224 A-C using a slide-out rail system (not shown).
  • a slide-out rail system may allow for any manufacturer's computer hardware to be adapted for use in the data center 10 .
  • the slide-out rail system will allow for the computer gear to be pulled out from the equipment receiving areas 224 A-C to a distance of, for example, 6 inches past the front portion 214 of the carriages 70 . This will allow for unrestricted service access to all areas of that individual piece of computing equipment and associated external connections.
  • an articulated cable management tray system may be used to manage and control the movement of the various cables (e.g., data, power) associated with an individual piece of computing equipment when the piece of computing equipment is pulled out of and pushed into the equipment receiving areas 224 A-C.
  • One or more power strips may be attached to the slide-out rail system to provide electrical power to the computing equipment associated with the rail system.
  • the power strip input is connected to one of the plurality of power receptacles 132 .
  • the power strip may be supplied with 208 VAC single phase power.
  • At least one power strip is connected to a power receptacle 132 receiving power from power distribution panel 120 A, and at least one power strip is connected to a power receptacle 132 receiving power from power distribution panel 120 B. This allows for the computing equipment to be supplied with power from redundant sources.
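The redundancy scheme above lends itself to a simple check. The sketch below is illustrative only; the function name is an assumption, and the panel identifiers merely follow the reference numerals in the description (power distribution panels 120 A and 120 B):

```python
# Hypothetical sketch (not part of the disclosure): confirm that the power
# strips feeding a piece of computing equipment draw from both distribution
# panels (120A and 120B), so the loss of a single panel cannot remove all
# power from that equipment.

def is_redundantly_fed(strip_panels):
    """strip_panels: list of panel IDs feeding the equipment's power strips."""
    return {"120A", "120B"}.issubset(set(strip_panels))

print(is_redundantly_fed(["120A", "120B"]))  # True
print(is_redundantly_fed(["120A", "120A"]))  # False (single-panel feed)
```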
  • the isolating couplers 86 may be coupled to the upright support members 222 A-H along the base portion 210 of the carriage 70 .
  • the isolating couplers 86 may be mounted to the front rail 230 , the back rail 232 , and/or the front to back extending members 236 located along the base portion 210 of the carriage 70 .
  • the isolating couplers 86 may also couple one or more of the upright support members 222 F-G to one of the first and second longitudinal side portions 14 and 16 of the container 12 .
  • the vertical cooling system 100 A cools air flowing up through the carriages 70 arranged along the first longitudinal side portion 14 and the vertical cooling system 100 B cools air flowing up through the carriages 70 arranged along the second longitudinal side portion 16 .
  • the vertical cooling system 100 B is substantially identical to the vertical cooling system 100 A. Therefore, for illustrative purposes, only the vertical cooling system 100 B will be described in detail.
  • the vertical cooling system 100 B includes two fluid flows: a flow of refrigerant and a flow of chilled or cooled water.
  • the flow of refrigerant is cooled by transferring its heat to the flow of cooled water.
  • the vertical cooling system 100 B includes a water/refrigerant heat exchanger 300 configured to transfer heat from the flow of refrigerant to the flow of cooled water.
  • the water/refrigerant heat exchanger 300 may be implemented using any heat exchanger known in the art.
  • a suitable heat exchanger includes a Liebert XDP Water-Based Coolant Pumping Unit, which may be purchased from Directnet, Inc. doing business as 42U of Broomfield, Colo.
  • the flow of cooled water is received from an external supply or source 310 of cooled water as a continuous flow of cooled water.
  • the flow of cooled water received may have a temperature of about 45 degrees Fahrenheit to about 55 degrees Fahrenheit.
  • the flow of cooled water may reside in a closed loop 312 that returns the heated previously cooled water to the external source 310 of cooled water to be cooled again.
  • the closed loop 312 and the water/refrigerant heat exchanger 300 are spaced apart from the carriages 70 and the refrigerant is brought thereto.
  • the closed loop 312 flow of cooled water and the water/refrigerant heat exchanger 300 are segregated from the computing equipment 102 of the data center 10 .
  • the flow of cooled water is transported to the container 12 by a first water line 318 and is transported away from the container 12 by a second water line 320 .
  • the container 12 includes a T-shaped inlet valve 330 that directs a portion of the flow of cooled water received from the first water line 318 to each of the vertical cooling systems 100 A and 100 B (see FIG. 5 ).
  • the container 12 includes a T-shaped outlet valve 332 that directs the flow of return water received from both of the vertical cooling systems 100 A and 100 B (see FIG. 5 ) to the second water line 320 .
  • An inlet pipe 334 is coupled between one outlet port of the inlet valve 330 and the water/refrigerant heat exchanger 300 of the vertical cooling system 100 B.
  • the inlet pipe 334 carries a portion of the flow of cooled water to the water/refrigerant heat exchanger 300 .
  • a similar inlet pipe (not shown) is coupled between the other outlet port of the inlet valve 330 and the water/refrigerant heat exchanger 300 of the vertical cooling system 100 A.
  • An outlet pipe 336 is coupled between the water/refrigerant heat exchanger 300 of the vertical cooling system 100 B and one inlet port of the outlet valve 332 .
  • the outlet pipe 336 carries the flow of return water from the water/refrigerant heat exchanger 300 to the outlet valve 332 .
  • a similar outlet pipe (not shown) is coupled between the water/refrigerant heat exchanger 300 of the vertical cooling system 100 A and the other inlet port of the outlet valve 332 .
  • the flow of cooled water flowing within the inlet pipe 334 may cool the inlet pipe below the condensation temperature of moisture in the air within the interior portion 60 of the container 12 .
  • water may condense on the inlet pipe 334 and drip therefrom.
  • the flow of return water flowing within the outlet pipe 336 may cool the outlet pipe below the condensation temperature of moisture in the air within the interior portion 60 of the container 12 causing water to condense on the outlet pipe and drip therefrom.
  • a basin or drip pan 340 may be positioned below the inlet and outlet pipes 334 and 336 . Any condensed water dripping from the inlet and outlet pipes 334 and 336 may drip into the drip pan 340 .
  • the drip pan 340 includes an outlet or drain 342 through which condensed water exits the drip pan 340 .
  • the drain 342 may extend through the floor portion 32 of the container 12 and may be in open communication with the environment outside the container 12 .
  • external piping, hoses, and the like may be coupled to the drain for the purposes of directing the condensed water away from the container 12 .
  • the passive dehumidification system 350 includes the outlet pipe 336 .
  • the amount of dehumidification provided by the passive dehumidification system 350 may be determined at least in part by the surface area of the components (e.g., the inlet pipe 334 , the outlet pipe 336 , the water/refrigerant heat exchanger 300 , the inlet valve 330 , the outlet valve 332 , and the like) upon which water condenses.
  • the closed loop 352 includes a refrigerant supply manifold 354 which is thermally insulated and a refrigerant return manifold 356 which is thermally insulated.
  • the refrigerant supply manifold 354 carries cooled refrigerant to a plurality of supply conduits 360 which are thermally insulated, each coupled to one of a plurality of refrigerant/air heat exchangers 370 .
  • two heat exchangers 370 are provided for each carriage 70 . However, this is not a requirement.
  • a plurality of return conduits 372 which are thermally insulated, each coupled to one of the plurality of heat exchangers 370 , carry heated refrigerant from the plurality of heat exchangers 370 to the refrigerant return manifold 356 .
  • the thermal insulation that is applied to the supply manifold, return manifold, supply conduits, and return conduits will prevent any condensation from dripping onto the servers located below the manifolds and conduits. Because the embodiment illustrated includes two heat exchangers 370 for each carriage 70 , the plurality of supply conduits 360 and the plurality of return conduits 372 each include ten conduits.
  • the refrigerant return manifold 356 carries heated refrigerant received from the heat exchangers 370 back to the water/refrigerant heat exchanger 300 to be cooled again by the flow of cooled water therein.
  • the refrigerant supply manifold 354 , supply conduits 360 , the refrigerant return manifold 356 , and return conduits 372 may include one or more flow regulators or valves 358 configured to control or restrict the flow of the refrigerant therethrough.
  • the refrigerant supply manifold 354 includes one valve 358 before the first supply conduit 360 regulating the flow of refrigerant into the supply conduits 360 .
  • the supply conduits 360 each include one valve 358 regulating the flow of refrigerant to each of the heat exchangers 370 .
  • the vertical cooling system 100 B may include one or more temperature sensors 376 coupled to refrigerant supply manifold 354 , supply conduits 360 , the refrigerant return manifold 356 , and/or return conduits 372 . Each of the temperature sensors 376 may be used to monitor the temperature of the flow of refrigerant and generate a temperature signal.
  • the vertical cooling system 100 B may include the cooling system controller 380 , which may be located inside cooling unit 300 . The cooling system controller may be coupled to the inlet valve 330 and the temperature sensor(s) 376 .
  • the cooling system controller 380 is configured to increase or decrease a flow rate of the cooled water through the first water line 318 and the inlet valve 330 based upon the temperature signal(s) received from the temperature sensor(s) 376 for the purpose of decreasing or increasing the temperature of the flow of refrigerant within the closed loop 352 of the vertical cooling system 100 B. In this manner, the temperature of the flow of refrigerant within the closed loop 352 may be adjusted by modifying the flow rate of the cooled water used to cool the flow of refrigerant.
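The control loop performed by the cooling system controller 380 can be sketched as a simple proportional adjustment of the chilled-water flow rate. The setpoint, gain, and flow limits below are illustrative assumptions, not values from the disclosure:

```python
# Illustrative sketch only: raise the chilled-water flow rate when the
# refrigerant runs warm (to increase cooling) and lower it when the
# refrigerant runs cool. Setpoint, gain, and limits are assumed values.

def adjust_water_flow(flow_gpm, refrigerant_temp_f,
                      setpoint_f=60.0, gain=0.5,
                      min_flow=5.0, max_flow=50.0):
    """Return a new chilled-water flow rate (gallons per minute)."""
    error = refrigerant_temp_f - setpoint_f   # positive when too warm
    new_flow = flow_gpm + gain * error        # more water -> more cooling
    return max(min_flow, min(max_flow, new_flow))

flow = 20.0
flow = adjust_water_flow(flow, 64.0)  # refrigerant warm -> flow rises to 22.0
flow = adjust_water_flow(flow, 58.0)  # refrigerant cool -> flow falls to 21.0
```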
  • the refrigerant supply manifold 354 , supply conduits 360 , the refrigerant return manifold 356 , and return conduits 372 in which the refrigerant circulates have a temperature above the condensation temperature of the moisture in the air within the interior portion 60 of the container 12 .
  • water does not condense on the refrigerant supply manifold 354 , supply conduits 360 , the refrigerant return manifold 356 , and return conduits 372 .
  • the flow of refrigerant does not expose the computing equipment 102 to dripping water (from condensation).
  • each of the heat exchangers 370 has a coil assembly 373 .
  • the refrigerant flows from the supply conduits 360 into each of the heat exchangers 370 and circulates through its coil assembly 373 .
  • the air above the carriages 70 is warm, having been heated by the computing equipment 102 .
  • the heated air travels upward through the heat exchangers 370 and is cooled by the refrigerant.
  • each of the heat exchangers 370 is implemented as a radiator style evaporator with its coil assembly 373 arranged at an angle relative to the front portion 214 and the open top portion 212 of the carriages 70 .
  • the coil assembly 373 has one or more cooling surfaces (not shown) whereat heat is exchanged between the air external to the coil assembly 373 and the refrigerant flowing inside the coil assembly 373 .
  • the coil assembly 373 of the heat exchangers 370 may be angled to maximize an amount of cooling surface for the space available for positioning of the heat exchangers, thereby providing a maximum amount of cooling capacity.
  • an inside angle “A” defined between the front portion 214 of the carriages 70 and the coil assembly 373 may range from about 144 degrees to about 158 degrees.
  • an angle of about 144 degrees to about 158 degrees may be defined between the coil assembly 373 and the open top portions 212 of the carriages 70 .
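Under a simple geometric assumption (comparing the angled coil to a flat coil spanning the same horizontal opening), the benefit of angling the coil assembly 373 can be estimated: a coil tilted at an angle θ above horizontal that spans a horizontal depth d has length d / cos θ, so its cooling surface grows by a factor of 1 / cos θ. The sketch below applies this approximation to the disclosed angle range; it is illustrative, not part of the disclosure:

```python
import math

# Geometric sketch (a simplification): with an inside angle A between the
# vertical front portion 214 and the coil assembly 373, the coil sits
# (A - 90) degrees above horizontal, and its surface area relative to a
# flat coil over the same opening scales as 1 / cos(A - 90).

def coil_surface_gain(inside_angle_deg):
    """Surface-area gain factor for inside angle A, per the simplification."""
    tilt_from_horizontal = inside_angle_deg - 90.0
    return 1.0 / math.cos(math.radians(tilt_from_horizontal))

gain_low = coil_surface_gain(144.0)   # about 1.70x the surface of a flat coil
gain_high = coil_surface_gain(158.0)  # about 2.67x the surface of a flat coil
```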
  • the cooling capacity of the heat exchanger 370 may also depend at least in part on the amount of refrigerant flowing in its coil assembly 373 . As mentioned above, by adjusting the valves 358 , the amount of refrigerant flowing from each of the supply conduits 360 into each of the heat exchangers 370 may be adjusted. In this manner, the cooling capacity of the vertical cooling system 100 B may be customized for each carriage 70 , a portion of each carriage, and the like. Further, the cooling capacity may be determined at least in part based on the amount of heat expected to be produced by the computing equipment 102 mounted within each of the carriages, portions of the carriages, and the like.
  • the flow of refrigerant from the supply conduits 360 into the heat exchangers 370 may be customized for a particular distribution of computing equipment 102 (e.g., blade servers) within the container 12 .
  • the valves 358 in the refrigerant supply manifold 354 may be used to control the flow of refrigerant to all of the heat exchangers 370 of the vertical cooling system 100 B.
  • a valve (not shown) in the refrigerant return manifold 356 may be used to restrict the flow of refrigerant from all of the heat exchangers 370 of the vertical cooling system 100 B.
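The per-carriage customization described above amounts to apportioning the available refrigerant flow in proportion to the heat load each heat exchanger 370 is expected to see. A minimal sketch, with illustrative numbers that are not from the disclosure:

```python
# Hedged sketch: split a total refrigerant flow among heat exchangers 370 in
# proportion to the heat expected from the computing equipment 102 each one
# serves. Flow units and loads are illustrative assumptions.

def apportion_flow(total_flow, heat_loads_kw):
    """Return per-exchanger flows proportional to expected heat load."""
    total_load = sum(heat_loads_kw)
    return [total_flow * load / total_load for load in heat_loads_kw]

# Two exchangers serving one carriage; one half loaded twice as heavily.
print(apportion_flow(30.0, [10.0, 20.0]))  # [10.0, 20.0]
```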
  • a plurality of bent ducts or conduits 390 may be coupled between each of the heat exchangers 370 and at least a portion of the open top portion 212 of an adjacent carriage 70 to direct heated air rising from the carriage 70 into the heat exchanger 370 .
  • one bent conduit 390 is coupled between a single heat exchanger 370 and a portion (e.g., approximately half) of the open top portion 212 of an adjacent carriage 70 .
  • Each bent conduit 390 has a bent portion 392 and defines a bent travel path for the heated air expelled from the carriage 70 into the heat exchanger 370 .
  • the bent portions 392 help prevent the formation of a back pressure in the upper plenums 90 A and 90 B along the roof portion 30 that could push the heated air back into the open top portions 212 of the carriages 70 .
  • the bent conduit 390 includes an internal baffle 394 that bifurcates the bent conduit 390 along the bent travel path.
  • a sealing member 396 is positioned between the back portions 216 of the carriages 70 and the first and second longitudinal side portions 14 and 16 .
  • a sealing member 397 is positioned between the front portions 214 of the carriages 70 and the heat exchangers 370 .
  • the sealing members 396 and 397 help seal the upper plenums 90 A and 90 B from the remainder of the interior portion 60 of the container 12 .
  • the sealing members 396 and 397 may be constructed from any suitable material known in the art including foam.
  • the air cooled by the heat exchangers 370 is pushed therefrom by the air moving assemblies 260 and flows downwardly from the angled heat exchangers 370 toward the walkway 74 on the floor portion 32 of the container 12 .
  • the walkway 74 includes the perforated portion 76 that permits air to flow therethrough and into the lower plenums 46 .
  • because the laterally extending framing members 44 are implemented with a C-shaped cross-sectional shape, air may flow laterally inside the open inside portion 47 of the laterally extending framing members 44 .
  • the open inside portion 47 of the C-shaped laterally extending framing members 44 may be considered part of an adjacent lower plenum 46 .
  • the air may flow beneath the carriages 70 . Because the laterally extending framing members 44 extend from beneath the walkway 74 to beneath the carriages 70 arranged along both the first and second longitudinal side portions 14 and 16 , air is directed laterally by the laterally extending framing members 44 from beneath the walkway 74 toward and below the carriages 70 . Once beneath the carriages 70 , the air is drawn upward by the air moving assemblies 260 of the carriages and into the carriages 70 , and through and around the computing equipment 102 . As the air is heated by the computing equipment 102 , the heated air rises up through the carriage 70 and into the bent conduit 390 , which directs the heated air into the heat exchangers 370 associated with the carriage to be cooled again.
  • each of the carriages 70 includes air moving devices 264 (see FIG. 5 ).
  • An amount of power consumed by the air moving devices 264 to adequately cool the computing equipment 102 may be determined at least in part by how well air flows from the carriages 70 and into the heat exchangers 370 .
  • the shape of the bent conduits 390 in the upper plenums 90 A and 90 B may determine at least in part the amount of power consumed by the air moving devices 264 .
  • the bent conduits 390 may be configured to reduce or minimize the amount of power consumed by the air moving devices 264 .
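The relationship between conduit design and fan power follows from standard fan physics: the electrical power drawn by an air moving device is roughly airflow times pressure drop divided by fan efficiency, so reducing the pressure drop through the bent conduits 390 directly reduces the power consumed by the air moving devices 264 . The figures below are illustrative assumptions:

```python
# Back-of-the-envelope sketch (standard fan-power relation; numbers assumed):
# power [W] = airflow [m^3/s] * pressure drop [Pa] / fan efficiency.

def fan_power_watts(airflow_m3_s, pressure_drop_pa, efficiency=0.6):
    return airflow_m3_s * pressure_drop_pa / efficiency

# Halving the conduit pressure drop halves fan power at the same airflow.
print(round(fan_power_watts(2.0, 120.0)))  # 400
print(round(fan_power_watts(2.0, 60.0)))   # 200
```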
  • the container 12 may include openings through which air from the outside environment may flow into the container to cool the computing equipment 102 .
  • the container may also include openings through which air heated by the computing equipment 102 may exit the container into the outside environment.
  • some of the air cooling components of the vertical cooling systems 100 A and 100 B may be omitted from the data center 10 .
  • FIG. 11 provides a data center 400 for use in an environment having a temperature suitable for cooling the computing equipment 102 (see FIG. 6 ) mounted inside the carriages 70 .
  • the data center 400 includes a container 402 , substantially similar to the container 12 (see FIG. 5 ).
  • only aspects of the container 402 that differ from those of container 12 will be described in detail.
  • the container 402 includes a first plurality of upper openings 410 A, a second plurality of upper openings 410 B, a first plurality of lower openings 412 A, and a second plurality of lower openings 412 B.
  • the first plurality of upper openings 410 A and the first plurality of lower openings 412 A extend along the first longitudinal side portion 14 of the container 402 .
  • the second plurality of upper openings 410 B and the second plurality of lower openings 412 B extend along the second longitudinal side portion 16 of the container 402 .
  • the first and second plurality of upper openings 410 A and 410 B provide open communication between the upper plenums 90 A and 90 B, respectively, and the environment outside the container 402 .
  • the first and second plurality of lower openings 412 A and 412 B provide open communication between the lower plenums 46 and the environment outside the container 402 .
  • Cool air is drawn into the lower plenums 46 by the air moving assemblies 260 mounted inside the carriages 70 through the first and second plurality of lower openings 412 A and 412 B.
  • Air heated by the computing equipment 102 (see FIG. 6 ) is pushed from the upper plenums 90 A and 90 B by the air moving assemblies 260 through the first and second plurality of upper openings 410 A and 410 B, respectively.
  • the humidity of the air inside the container 402 is controlled by controlling the humidity of the air outside the container 402 .
  • the data center 400 includes louvers 420 .
  • a single louver 420 is received inside each of the first and second plurality of upper openings 410 A and 410 B and a single louver 420 is received inside each of the first and second plurality of lower openings 412 A and 412 B.
  • this is not a requirement.
  • the louvers 420 may cover the first and second plurality of upper openings 410 A and 410 B and the first and second plurality of lower openings 412 A and 412 B.
  • a first louver may cover a single one of the first plurality of upper openings 410 A and a second different louver may cover a single one of the second plurality of upper openings 410 B.
  • a third louver may cover a single one of the first plurality of lower openings 412 A and a fourth louver may cover a single one of the second plurality of lower openings 412 B.
  • a single louver may cover more than one of the first plurality of upper openings 410 A, more than one of the second plurality of upper openings 410 B, more than one of the first plurality of lower openings 412 A, or more than one of the second plurality of lower openings 412 B.
  • the louvers 420 may be selectively opened and closed to selectively transition the data center 400 between an open system state in which at least one of the louvers 420 is open and a closed system state in which all of the louvers 420 are closed. Based on the external environmental factors, the data center 400 may operate in the open system state to exploit “free air” cooling when appropriate and switch to the closed system state when necessary (e.g., the temperature of the air in the outside environment is too hot or too cold, the air in the outside environment is too humid, the air in the outside environment includes too many contaminants, and the like).
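The decision between the open and closed system states described above can be sketched as a threshold test on the outside environment. The temperature, humidity, and contaminant thresholds below are illustrative assumptions, not values from the disclosure:

```python
# Illustrative sketch: select the open system state ("free air" cooling)
# only when the outside air is within acceptable temperature, humidity, and
# contamination limits; otherwise select the closed system state.
# All thresholds are assumed values.

def select_system_state(temp_f, rel_humidity, contaminant_ppm,
                        temp_range=(40.0, 80.0),
                        max_humidity=0.80, max_ppm=50.0):
    lo, hi = temp_range
    if lo <= temp_f <= hi and rel_humidity <= max_humidity \
            and contaminant_ppm <= max_ppm:
        return "open"    # louvers 420 open: free-air cooling
    return "closed"      # louvers 420 closed: sealed operation

print(select_system_state(65.0, 0.50, 10.0))   # open
print(select_system_state(95.0, 0.50, 10.0))   # closed (too hot)
```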
  • the data center 400 may omit the source 310 of cooled water, the chilled water/refrigerant heat exchanger 300 , the refrigerant supply manifold 354 , the refrigerant return manifold 356 , the supply conduits 360 , the return conduits 372 , the refrigerant/air heat exchangers 370 , the bent conduits 390 , the T-shaped inlet valve 330 , the T-shaped outlet valve 332 , the first water line 318 , the second water line 320 , the inlet pipe 334 , and the outlet pipe 336 .
  • the data center 400 may remain in the open system state during operation and transition to a closed system state only when the computing equipment 102 (see FIG. 6 ) is powered down.
  • each of the louvers 420 is configured such that all of the louvers 420 are either open or closed at the same time.
  • each of the louvers 420 may include a plurality of blades 422 (illustrated in an open position) selectively openable and closable by a control switch (not shown). When the switch is placed in the closed position, all of the blades 422 of the louvers 420 are closed and when the switch is in the open position all of the blades 422 of the louvers 420 are open.
  • the data center 400 includes one or more covers, chimneys, or similar structures (not shown) configured to allow air to flow from the first and second plurality of upper openings 410 A and 410 B and, at the same time, prevent precipitation (rain, snow, etc.) from entering the container 402 through the first and second plurality of upper openings 410 A and 410 B.
  • Louvers 430 are configured to be coupled to the roof portion 30 of the container 402 adjacent the second plurality of upper openings 410 B and to extend outwardly away from the roof portion 30 of the container 402 .
  • the louvers 430 are further configured to be coupled to the roof portion 30 of the container 402 adjacent the first plurality of upper openings 410 A (see FIG. 11 ) and to extend outwardly away from the roof portion 30 of the container 402 .
  • the louvers 430 are also configured to be coupled to the floor portion 32 of the container 402 adjacent one or more of the second plurality of lower openings 412 B and to extend outwardly away from the floor portion 32 of the container 402 .
  • the louvers 430 are further configured to be coupled to the floor portion 32 of the container 402 adjacent one or more of the first plurality of lower openings 412 A (see FIG. 11 ) and to extend outwardly away from the floor portion 32 of the container 402 .
  • Each of the louvers 430 includes an assembly (not shown) configured to selectively open to provide air flow between the interior portion 60 of the container 402 and the outside environment and to selectively close to cut off air flow between the interior portion 60 of the container 402 and the outside environment.
  • the louvers 430 may be configured to be opened and closed at the same time using any method known in the art.
  • each of the louvers 430 may include a filter (not shown) configured to prevent contaminants and particulate matter (e.g., dust, insects, and the like) from entering the interior portion 60 of the container 402 .
  • FIGS. 13 and 14 provide a data center 450 for use in an environment having a temperature suitable for cooling the computing equipment 102 (see FIG. 6 ) mounted inside the carriages 70 .
  • the data center 450 includes a container 452 , substantially similar to the container 12 (see FIG. 1 ).
  • only aspects of the container 452 that differ from those of the container 12 will be described in detail.
  • the data center 450 includes the first and second plurality of upper openings 410 A and 410 B. However, the data center 450 omits the first and second plurality of lower openings 412 A and 412 B. Instead, the data center 450 includes a first plurality of side openings 456 A and a second plurality of side openings 456 B. The first plurality of side openings 456 A extends along the first longitudinal side portion 14 of the container 452 and the second plurality of side openings 456 B extends along the second longitudinal side portion 16 of the container 452 .
  • the first and second plurality of side openings 456 A and 456 B provide open communication between the environment outside the container 452 and the lower plenums 46 (see FIG. 11 ). Cool air is drawn into the lower plenums 46 by the air moving assemblies 260 (see FIG. 11 ) through the first and second plurality of side openings 456 A and 456 B. Air heated by the computing equipment 102 (see FIG. 6 ) is pushed from the upper plenums 90 A and 90 B (see FIG. 11 ) by the air moving assemblies 260 through the first and second plurality of upper openings 410 A and 410 B. In this embodiment, the humidity of the air inside the container 452 is controlled by controlling the humidity of the air outside the container 452 .
  • a louver 420 is received inside each of the first and second plurality of upper openings 410 A and 410 B and the first and second plurality of side openings 456 A and 456 B are covered by louvers 560 substantially similar to the louvers 420 .
  • the first and second plurality of upper openings 410 A and 410 B are illustrated without louvers and the first and second plurality of side openings 456 A and 456 B are covered by louver assemblies 562 that extend outwardly away from the container 452 .
  • the louver assemblies 562 include openings or slots 564 .
  • Each of the louver assemblies 562 includes an assembly (not shown) configured to selectively open to provide air flow between the interior portion 60 of the container 452 and the outside environment and to selectively close to cut off air flow between the interior portion 60 of the container 452 and the outside environment.
  • the louver assemblies 562 may be configured to be opened and closed at the same time using any method known in the art.
  • each of the louver assemblies 562 may include a filter (not shown) configured to prevent particulate matter (e.g., dust, insects, and the like) from entering the interior portion 60 of the container 452 .
  • any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components.
  • any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
  • aspects of the modular embodiment relate to a data center, comprising modules that perform specific functions associated with the operation of a data center, where the modules can be connected together to form a functional data center to satisfy specific use requirements.
  • Many of the functions and individual components used in the data center contained within a container are used in the modular embodiment and function in an identical or similar manner. Only the differences between the two embodiments will be addressed in the following description.
  • the modular data center will consist of at least one facilities module 650 , at least one computing equipment module 652 , and one end cap with personnel door 660 .
  • a complete modular data center of a preferred embodiment will function identically to a data center contained within a container.
  • the environment inside the data center will be climate controlled to provide a suitable environment for the operation of computing equipment and associated hardware.
  • the external support services may include at least one data connection 152 , at least one power connection 112 A, and at least one supply of cool water 310 .
  • the data center modules may be preconfigured with the desired computing equipment and support interfaces to minimize set up time, cost, and technical knowledge.
  • the modular data center may provide an efficient self-contained solution suitable for applications in standard office spaces and other work environments where the availability of space and support services to implement a standard data center may be limited or not available.
  • the facilities module 650 , computing equipment modules 652 , and end caps 660 and 661 are designed to be connected together to form a continuous barrier allowing the interior environment to be separate from the exterior environment. This will allow for the interior temperature, humidity, and air flow to be maintained at the optimum levels required for the efficient operation of the computing equipment.
  • the outward facing portions of the frame members 602 , 604 , 606 , 608 , and 610 provide an exterior mating surface 628 which is used to mate the modules together or mount modular walls 640 .
  • Each exterior mating surface 628 is smooth, straight, and uniform, which allows pairs of exterior mating surfaces 628 to come into full contact along the length of the mating surfaces 628 .
  • the exterior mating surfaces 628 of a facilities module 650 , a computing equipment module 652 , end cap 660 , or modular wall 640 when in full contact, will form a continuous barrier between the external environment and internal environment.
  • a gasket-like device, or other similar device known in the industry, may be inserted between the external mating surfaces 628 to facilitate the forming of a barrier between the interior and exterior environments.
  • a ‘C’ style clamp may be applied to a plurality of locations around the mating surfaces to hold the modules in place and maintain continuity between the mating surfaces 628 .
  • any method known in the industry may be used to hold the modules together in close proximity, thereby maintaining continuity between the sealing surfaces and preserving the environmental barrier.
  • standard nuts, bolts, and washers may be used in conjunction with matching pre-drilled holes through the mating surfaces to hold the modules and/or end cap together.
  • the modular walls 640 of the modular data center consist of three layers.
  • the layers consist of an inner wall 642 , an outer wall 646 , and an insulation layer 644 that is located between the inner wall 642 and the outer wall 646 .
  • the three layers are connected together, by any method known in the industry, to form the modular wall 640 .
  • the inner wall 642 has a mating surface 628 located around the outside perimeter.
  • the mating surface is of a width that allows for a modular wall 640 to fully engage the exterior mating surface 628 of the modules.
  • the width of the modular wall's mating surface 628 is 2 inches.
  • a modular wall 640 is connected to the exterior mating surface 628 of a module frame whereby a continuous barrier is formed between the internal and external environments.
  • the modular wall 640 is connected to the external mating surface 628 of a module by any method generally known in the industry.
  • the modular wall 640 may be connected to the exterior surface 628 by way of screws, washers, and threaded inserts that use predrilled holes through the exterior mating surface 628 of the basic frame 600 and in the modular wall 640 .
  • the facilities module and the computing equipment module each comprise a base frame 600 .
  • the frame consists of two lower longitudinally extending frame members 604 , two lower transversely extending frame members 602 , two upper longitudinally extending frame members 608 , and two upper transversely extending frame members 606 .
  • the frame also consists of four vertically extending frame members 610 .
  • the twelve frame members, when combined together, form the base frame 600 , which provides the necessary structural support required by the additional interior frame members, computing equipment, and other hardware.
  • Corner support braces 612 may be used to provide additional structural support for the base frame 600 .
  • Each intersection of frame members may contain up to three corner braces 612 .
  • the base frame 600 will have additional support members 618 , 622 and 624 added to it as necessary depending on the type of module to be built and the use requirements associated with the module.
  • the facilities module 650 consists of a base frame 600 . Connected to the base frame 600 is a first side modular wall 653 , opposite a second side modular wall 654 .
  • the module also contains an upper modular wall 655 and a lower modular wall 656 .
  • Also connected to the base frame 600 is an end modular wall 658 .
  • the side opposite the end modular wall 658 is open and contains the external mating surface 628 (not shown) to allow for connection to a computing equipment module 652 .
  • the facilities module may contain one or more of the following: water/refrigerant heat exchanger 300 , inlet T-shaped valve 330 , outlet T-shaped valve 332 , a basin or drip pan 340 , power distribution panel 1208 , disconnect switch, humidifier 123 , dehumidifier 125 , humidity control unit, controller unit 134 , power supplies, lighting system, internal private network, UPS, and DC control system.
  • the UPS may be located in the computing equipment module depending on the operational requirements of the modular data center.
  • the functions performed by the above mentioned components in a modular data center are similar, if not identical, to the functions performed by the components in a data center contained within a container.
  • An additional alternative to the above mentioned cooling system is a water/refrigerant heat exchanger located within each computing equipment module.
  • the cooled water will be supplied to the computing equipment module's 652 water/refrigerant heat exchanger via the facilities module 650 .
  • the refrigerant will circulate within a closed loop that includes a refrigerant/air heat exchanger. The complete refrigerant loop will be contained within each module for ease of modular data center 699 assembly and maintenance.
  • the computing equipment module consists of a base frame 600 . Connected to the base frame 600 is a first side modular wall 653 , opposite a second side modular wall 654 .
  • the module also contains an upper modular wall 655 and a lower modular wall 656 .
  • the ends of the module are open to allow for the module to be connected to another computing equipment module 652 , a facilities module 650 , or an end cap 660 or 661 .
  • the computing equipment module contains transversely extending C-shaped frame members 614 that are laterally spaced apart to form a series of lower air plenums 616 .
  • the lower air plenums 616 allow air to flow from the center aisle 615 down through the perforated floor 663 , transversely through the lower air plenums 616 , upward into and through the equipment receiving area 670 , into the upper air plenum 617 , and then back into the center aisle 615 .
  • Above the transversely extending C-shaped frame members 614 are located four longitudinally extending floor support members 618 , laterally spaced apart, which are supported by the transversely extending C-shaped frame members 614 .
  • the computing equipment receiving areas 670 will be mounted to, or supported by, the longitudinally extending frame members 618 , 622 and 624 . Also mounted above the transversely extending frame members 614 , adjacent to the perforated floor 663 and in front of the equipment receiving areas 670 , are cable conduits with covers 620 to allow for the efficient and manageable routing of various cables between modules.
  • the module may also contain four vertically extending support members (not shown), two on each side, which are laterally spaced apart and mounted adjacent to the first and second side portions, to provide additional support for the equipment receiving area or other module hardware. Additionally, the module may contain two transversely extending frame members (not shown), laterally spaced apart and mounted adjacent to the upper or roof portion of the module, which can be used to provide additional support for the vertical cooling system or other module hardware.
  • the equipment receiving area 670 may consist of an equipment receiving carriage 630 . As in the data center 10 housed within a container, the function of the equipment receiving area is to store computing equipment 102 or other associated hardware that supports data center functions, such as air moving assemblies 260 .
  • the design of the carriage is very similar to the carriage 70 described above.
  • the equipment receiving carriage 630 of the modular data center 699 consists of a front upright support 632 A, a rear upright support 632 B, front to back extending members 672 that are connected between the front and rear upright supports 632 A-B, front carriage vertical support 680 A, rear carriage vertical support 680 B, carriage front to rear extending members 682 , front carriage rail 678 A, and a rear carriage rail 678 B.
  • the front and rear carriage rails 678 A-B of the equipment receiving carriage 630 may be mounted to isolators 86 or directly to longitudinally extending floor support members 618 .
  • the back of the equipment receiving carriage 630 may be mounted to isolators 86 or directly to longitudinally extending side support members 622 A-B.
  • the front upright support 632 A may contain openings 636 A-C, which allow for the mounting of networking equipment 244 or other computing hardware to support the computing equipment 102 located in areas S 1 -S 4 .
  • the front to rear extending members 672 form transverse cable conduits 634 A-C that may be used to route and manage the various cables associated with connecting computing and networking equipment.
  • Each computing equipment module may contain a center aisle portion 615 which exists between the front edges of the equipment receiving areas 670 .
  • the center aisle portion 615 will be wide enough to allow for the computing equipment 102 , which is mounted within the equipment receiving carriage 630 , to be “racked” out to allow for inspection and maintenance.
  • an alternative embodiment of the computing equipment module contains an insulated personnel door 648 and is generally designated 662 .
  • the personnel door 648 will allow access to the center aisle portion 615 .
  • only one equipment receiving area 670 is located within the module space and the personnel door 648 is located opposite the equipment receiving area 670 .
  • This module embodiment may be used when the space where the modular data center 699 is to be located is not sufficient to allow for personnel access to the outside end areas.
  • This module may replace any regular computing equipment module 652 that is part of the modular data center.
  • the end cap with personnel door is similar in construction to the end portion of the facilities module and is generally designated 660 .
  • the end cap 660 consists of a lower transversely extending member 602 , an upper transversely extending member 606 , and two vertically extending frame members 610 . Additionally, the frame contains corner braces 612 at the intersection of the transversely extending frame members 602 and 606 , and the vertically extending frame members 610 .
  • the end cap 660 has an exterior mating surface 628 that is used to mate the end cap 660 to the external mating surface 628 of a facilities module 650 , thereby forming a continuous barrier between the inside and outside environments.
  • An end modular wall with personnel door 657 is connected to the exterior mating surface 628 on the outside of the end cap frame to create a continuous barrier between the inside and outside environments.
  • the end cap may not include a personnel door and is generally designated 661 .
  • the end modular wall with personnel door 657 is replaced by end modular wall 658 .
  • the modules may be manufactured such that they can be separated along the longitudinal centerline. This allows the individual modules to be separated prior to loading them onto an elevator or moving them through a space of restricted size, and then reassembled in the designated data center space.
  • the functionality of the modular data center 699 would not be limited by the split module design.

Abstract

A data center inside a shipping container having a lower plenum and an upper plenum in its interior. Heated air in the upper plenum exits therefrom into a plurality of heat exchangers adjacent thereto. Air cooled by the heat exchangers travels toward and enters the lower plenum. The data center includes a plurality of carriages each having an equipment receiving portion located between an open bottom portion in open communication with the lower plenum, and an open top portion in open communication with the upper plenum. Fans inside each of the carriages draw cooled air up from the lower plenum into the open bottom portion of the carriage, blow the cooled air up through the equipment receiving portion thereby cooling any computing equipment received therein, and vent the cooled air through the open top portion into the upper plenum.

Description

    RELATED APPLICATIONS
  • This application is a Continuation in Part of, and claims the benefit of priority to, United States Utility patent application Ser. No. 12/347,415 entitled “Data Center”, filed Dec. 31, 2008.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention is directed generally to a data center and more particularly to a modular data center.
  • 2. Description of the Related Art
  • Planning and constructing a traditional data center requires substantial capital, planning, and time. The challenges of planning a traditional data center include maximizing computing density (i.e., providing a maximum amount of computing capacity within a given physical space). Further, it may be difficult, if not impossible, to use the space available efficiently enough to provide adequate computing capacity.
  • Once a data center is constructed, it can be difficult to upgrade to keep up with current technologies. For example, it may be difficult, if not impossible, to expand an existing data center operating at full capacity because the expansion may require additional power and cooling resources, which simply are not available or would be costly to install.
  • Therefore, a need exists for a means of reducing the capital, planning, and/or time required to implement a data center. A further need exists for a data center that requires less capital, planning, and/or time than a traditional data center. A customizable data center configurable for a particular user's needs is also desirable. A data center capable of integration with an already existing data center is also advantageous. A further need also exists for a data center that requires less time and effort during set up and installation. The present application provides these and other advantages as will be apparent from the following detailed description and accompanying figures.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
  • FIG. 1 is a perspective view of a data center housed inside a container.
  • FIG. 2 is an enlarged fragmentary perspective view of the container of FIG. 1 omitting its first longitudinal side portion, front portion, and personnel door to provide a view of its interior portion.
  • FIG. 3 is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 1 taken laterally through the container and omitting its first longitudinal side portion, and second longitudinal side portion.
  • FIG. 4 is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 1 omitting its electrical system and taken longitudinally through the container.
  • FIG. 5 is an enlarged fragmentary cross-sectional view of the data center of FIG. 1 omitting its electrical system and taken laterally through the container.
  • FIG. 6 is a front view of a carriage of the data center of FIG. 1 housing exemplary computing equipment.
  • FIG. 7A is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 1 omitting portions of its vertical cooling system and taken longitudinally through the container.
  • FIG. 7B is an electrical schematic of the electrical system of the data center of FIG. 1.
  • FIG. 8A is an enlarged fragmentary cross-sectional perspective view of an embodiment of the data center of FIG. 1 including an uninterruptible power supply (“UPS”) omitting its vertical cooling system and taken longitudinally through the container.
  • FIGS. 8B and 8C are an electrical schematic of the electrical system of the data center of FIG. 1 including a UPS.
  • FIG. 9 is a perspective view of the carriage of FIG. 5 omitting the exemplary computing equipment.
  • FIG. 10 is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 1 omitting its electrical system and taken longitudinally through the container.
  • FIG. 11 is an enlarged fragmentary cross-sectional view of an alternate embodiment of a data center including openings and louvers along its roof and floor portions, omitting its electrical system, and taken laterally through the container.
  • FIG. 12 is an enlarged fragmentary cross-sectional perspective view of the data center of FIG. 11 including alternate louvers along its roof and floor portions, omitting its electrical system and portions of its vertical cooling systems, and taken longitudinally through the container.
  • FIG. 13 is an enlarged fragmentary perspective view of an alternate embodiment of a data center including openings and louvers along its roof portion and side portions.
  • FIG. 14 is an enlarged fragmentary perspective view of the data center of FIG. 13 omitting louvers along its roof portion and including louver assemblies along its side portions.
  • FIG. 15 is an enlarged fragmentary cross-sectional view of the insulated wall of the data center of FIG. 1 showing the outer container wall, a middle insulating layer, and an inner protective layer.
  • FIG. 16 is a perspective view of the base frame of the modular data center, including corner support braces.
  • FIG. 17 is a perspective view of the base frame with bottom, side and top supports used to mount and support internal equipment.
  • FIG. 18 is a perspective view of a carriage assembly for receiving computing equipment.
  • FIG. 19 is a front view of a carriage assembly showing air moving devices and designated spaces for computing equipment.
  • FIG. 20 is a perspective view of a facilities module showing a heat exchanger, cooling water pipes, humidifier, dehumidifier, electrical panels and conduits, external connections, a controller, and sensors. The internal components of the module are redundant on both sides; therefore, only one side is shown for clarity.
  • FIG. 21 is a fragmentary view of a modular wall showing an inner wall, an insulating layer, and an outer wall.
  • FIG. 22 is a fragmentary view of an end cap showing an outer wall, insulating layer, inner wall, frame, and personnel door.
  • FIG. 23 is a perspective view of an alternative embodiment of a computing module showing a personnel door replacing a carriage.
  • FIG. 24 is a fragmentary view of a computing equipment module showing the frame, two side walls, a bottom wall, and a top wall.
  • FIG. 25 is a fragmentary view of a facilities module showing the frame, two side walls, one end wall, one bottom wall, and one top wall.
  • FIG. 26 is a perspective view of an embodiment of the modular data center showing one facilities module, two computing equipment modules, one end cap with personnel door, and external support connections.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Referring to FIG. 1, aspects of the present invention relate to a data center 10 housed inside a container 12. The container 12 may be a conventional shipping container of the type typically used to ship goods via a cargo ship, railcar, semi-tractor, and the like. The container 12 is portable and may be delivered to a use site substantially ready for use with minimal set up required. As will be described in detail below, the data center 10 may be preconfigured with desired computer hardware, data storage capacity, and interface electronics. For example, the data center 10 may be configured according to customer requirements and/or specifications.
  • The data center 10 is completely self-contained in the container 12 and may be substantially ready for use immediately following delivery, thus reducing the need for on-site technical staff and, in particular embodiments, reducing the need to install and set up computing hardware, route data cables, route power cables, and the like.
  • As described in detail below, the environment inside the container 12 may be climate controlled to provide a suitable environment for the operation of computing equipment and hardware. For example, the environment inside the container 12 may provide optimal power consumption (including adequate power for lighting), cooling, ventilation, and space utilization. The data center 10 may be configured to provide an efficient self-contained computing solution suitable for applications in remote locations, temporary locations, and the like.
  • The container 12 has a first longitudinal side portion 14 opposite a second longitudinal side portion 16. The container 12 also includes a first end portion 18 extending transversely between the first and second longitudinal side portions 14 and 16 and a second end portion 20 extending transversely between the first and second side portions 14 and 16. By way of a non-limiting example, each of the first and second longitudinal side portions 14 and 16 may be about 40 feet long and about 9.5 feet tall. By way of an alternative non-limiting example, each of the first and second longitudinal side portions 14 and 16 may be about 20 feet long and about 9.5 feet tall. The first and second end portions 18 and 20 may be about 8 feet wide and about 9.5 feet tall. One of the first and second end portions 18 and 20 may include a personnel door 24. The container 12 also includes a top or roof portion 30 extending transversely between the first and second side portions 14 and 16 and longitudinally between the first and second end portions 18 and 20. The container 12 also includes a bottom or floor portion 32 extending transversely between the first and second side portions 14 and 16 and longitudinally between the first and second end portions 18 and 20. The container 12 may be mounted on pillars 33, blocks, or the like to be elevated above the ground.
  • To minimize or prevent condensation buildup on the inside of the container 12, and to minimize the required amount of cooled water from the cooled water supply or source 310, insulation may be applied to the inside of the container 12, covering the longitudinal side portions 14 and 16, the end portions 18 and 20, the top or roof portion 30, and the bottom or floor portion 32. A steel panel (not shown) is then applied to cover the insulation, providing protection for the insulation. The steel panel may be attached to the container 12 side portions 14 and 16, end portions 18 and 20, top or roof portion 30, and bottom or floor portion 32 by way of, for example, spot welds numerous enough to provide adequate mechanical support for the steel panels and applied insulation. By way of non-limiting example, the insulation may be pre-formed foam panels of polyisocyanurate.
  • As illustrated in FIG. 2 and appreciated by those of ordinary skill in the art, the floor portion 32 includes a support frame 40 having a first longitudinally extending framing member 42A spaced laterally from a second longitudinally extending framing member 42B. The first and second longitudinally extending framing members 42A and 42B extend along and support the first and second longitudinal side portions 14 and 16 (see FIG. 1), respectively.
  • The floor portion 32 also includes a plurality of laterally extending framing members 44 that extend transversely between the first and second longitudinally extending framing members 42A and 42B. A plurality of laterally extending interstices or lower plenums 46 are defined between the laterally extending framing members 44. If, as illustrated in the embodiment depicted in FIG. 3, the laterally extending framing members 44 have a C-shaped cross-sectional shape having an open inside portion 47, the lower plenums 46 may each include the open inside portions 47 of the C-shaped laterally extending framing members 44. Air may flow laterally within the floor portion 32 inside the lower plenums 46, which include the open inside portion 47 of the C-shaped laterally extending framing members 44. The laterally extending framing members 44 may help guide or direct this lateral airflow.
  • Each of the laterally extending framing members 44 may be constructed from a single elongated member having a C-shaped cross-sectional shape. However, each of the laterally extending framing members 44 may include three laterally extending portions: a first portion 50, a second portion 52, and a third portion 54. The first portion 50 is adjacent the first longitudinal side portion 14, the second portion 52 is adjacent the second longitudinal side portion 16, and the third portion 54 is located between the first and second portions 50 and 52.
  • A first pair of spaced apart longitudinally extending support surfaces 56A and 56B are supported by the first portion 50 of the laterally extending framing members 44. A second pair of spaced apart longitudinally extending support surfaces 58A and 58B are supported by the second portion 52 of the laterally extending framing members 44. In the embodiment illustrated, the third portion 54 of the laterally extending framing members 44 is flanked by the longitudinally extending support surfaces 56B and 58B.
  • FIG. 4 provides a longitudinal cross-section of the data center 10. For illustrative purposes, the first end portion 18 and the personnel door 24 have been omitted to provide a better view of the components inside the container 12. The first longitudinal side portion 14, the second longitudinal side portion 16, the first end portion 18 (see FIG. 1), the second end portion 20, the roof portion 30, and the floor portion 32 define an enclosed hollow interior portion 60 accessible to a user (such as a technician) via the personnel door 24 (see FIG. 1).
  • Turning to FIGS. 3 and 5, inside the interior portion 60, a plurality of racks or carriages 70 are arranged along each of the first and second longitudinal side portions 14 and 16. The first pair of spaced apart longitudinally extending support surfaces 56A and 56B (see FIGS. 2 and 3) supported by the first portions 50 of the laterally extending framing members 44 support the plurality of carriages 70 (see FIG. 3) extending along the first longitudinal side portion 14. The second pair of spaced apart longitudinally extending support surfaces 58A and 58B supported by the second portions 52 of the laterally extending framing members 44 support the plurality of carriages 70 (see FIGS. 3 and 4) extending along the second longitudinal side portion 16.
  • A central aisle portion 72 is defined between the carriages 70 and above the third portions 54 of the laterally extending framing members 44. In the central aisle portion 72, the third portions 54 of the laterally extending framing members 44 support a walkway 74. Optionally, the walkway 74 may include a perforated portion 76 and one or more raceways or wire management channels 78A and 78B extending longitudinally alongside the perforated portion 76. Optionally, one or more raceways or wire management channels (not shown) may extend along the roof portion 30 in the central aisle portion 72.
  • The perforated portion 76 may be constructed using a gas permeable, porous, or perforated material. For example, the perforated portion 76 may be constructed using perforated tiles 80 that permit air to flow through the tiles, from above the tiles to below the tiles and into the lower plenums 46. The perforated tiles 80 may be any standard perforated computer room tiles known in the art. For example, suitable tiles include manufacturing part number 20-0357 sold by Tate Access Floors, Inc. of Jessup, Md.
  • Each of the wire management channels 78A and 78B has an open top portion 82 and one or more removable covers 84 affixed thereupon. Each of the covers 84 is couplable to the open top portion 82 of each of the wire management channels 78A and 78B. By way of a non-limiting example, the covers 84 may couple to the open top portion 82 of the channels 78A and 78B via a friction connection, snap fit connection, and the like.
  • Optionally, the carriages 70 may be coupled to the first pair of spaced apart longitudinally extending support surfaces 56A and 56B and the second pair of spaced apart longitudinally extending support surfaces 58A and 58B by isolators or isolating couplers 86 configured to absorb movement of the container 12 relative to the carriages 70. The isolating couplers 86 help prevent damage to any computing equipment mounted to the carriages 70 that may be caused by the movement of the container 12 occurring when the container is moved to a use location, during a seismic event (e.g., an earthquake), and the like. As illustrated in FIG. 5, each of the carriages 70 may also be coupled to one of the first and second longitudinal side portions 14 and 16 by isolating couplers 86 to prevent the carriages from toppling over or bumping into the first and second longitudinal side portions 14 and 16 of the container 12 during transport, a seismic event, and the like.
  • In the embodiment illustrated in FIG. 4, five carriages 70 are arranged along each of the first and second longitudinal side portions 14 and 16. However, this is not a requirement and different numbers of carriages 70 may be arranged along the first and/or second longitudinal side portions 14 and 16 depending upon the dimensions used to construct both the carriages 70 and the container 12. By way of a non-limiting example, five carriages 70 may be arranged along each of the first and second longitudinal side portions 14 and 16 when the container 12 side portions 14 and 16 are each 40 feet long. By way of an additional non-limiting example, two carriages 70 may be arranged along each of the first and second longitudinal side portions 14 and 16 when the container 12 side portions 14 and 16 are each 20 feet long.
  • As may best be viewed in FIG. 5, a first upper plenum 90A is provided adjacent to the first longitudinal side portion 14 and the roof portion 30 and a second upper plenum 90B is provided adjacent to the second longitudinal side portion 16 and the roof portion 30. Air disposed in the first upper plenum 90A is cooled by a vertical cooling system 100A (described in greater detail below). Air disposed in the second upper plenum 90B is cooled by a vertical cooling system 100B substantially similar to the vertical cooling system 100A. The cooled air flows downwardly from the first and second upper plenums 90A and 90B into the central aisle portion 72 of the interior portion 60 of the container 12 and toward the walkway 74. The central aisle portion 72 essentially serves as a duct to receive and combine the cooled air from both of the vertical cooling systems 100A and 100B. In other words, the vertical cooling systems 100A and 100B flood the central aisle portion 72 of the interior portion 60 of the container 12 between the carriages 70 with cooled air. By way of a non-limiting example, the air in the central aisle portion 72 of the interior portion 60 of the container 12 may have a temperature of about 75 degrees F. to about 79 degrees F., and in some implementations about 77 degrees F.
  • The combined cooled air passes through the perforated portion 76 of the walkway 74 and into the laterally extending lower plenums 46. The cooled air inside the lower plenums 46 flows laterally along the laterally extending framing members 44 toward both the first and second longitudinal side portions 14 and 16. As described below, the cooled air is drawn up into the carriages 70, flows upwardly therethrough, and returns to the first and second upper plenums 90A and 90B above the carriages 70 whereat it is cooled again by the vertical cooling systems 100A and 100B, respectively.
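The recirculating loop described above can be sized with a simple heat balance relating equipment heat load, airflow, and the supply/return temperature rise. The sketch below uses the common standard-air rule of thumb (CFM ≈ 3.16 × watts / ΔT°F); the heat load, carriage count, and temperatures are illustrative assumptions, not figures from the specification:

```python
# Rough airflow sizing for a recirculating cooling loop like the one above.
# All numeric inputs below are illustrative assumptions.

def required_airflow_cfm(heat_load_kw: float, delta_t_f: float) -> float:
    """Approximate airflow (CFM) needed to remove heat_load_kw of
    equipment heat at a supply-to-return temperature rise of delta_t_f.
    Uses the standard-air relation CFM ~ 3.16 * watts / delta_T_F."""
    watts = heat_load_kw * 1000.0
    return 3.16 * watts / delta_t_f

# Hypothetical example: 10 carriages at 12 kW each, 77 F supply air
# returning to the upper plenums at 95 F.
total_kw = 10 * 12.0
airflow = required_airflow_cfm(total_kw, delta_t_f=95.0 - 77.0)
print(f"approx. {airflow:,.0f} CFM total")
```

Under these assumed numbers the loop would need on the order of 21,000 CFM in aggregate, which is why the airflow is distributed across fans in every carriage rather than a single blower.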
  • The vertical cooling systems 100A and 100B are mechanically separate and operate independently of one another. If one of the vertical cooling systems 100A and 100B is not functioning, the other functional vertical cooling system continues to cool the air flowing into the central aisle portion 72 and hence into the lower plenums 46 for distribution to both the carriages 70 at the first longitudinal side portion 14 and the carriages at the second longitudinal side portion 16, without regard to which vertical cooling system is not functioning. In this manner, the data center 10 may be cooled by one of the vertical cooling systems 100A and 100B alone. Both of the vertical cooling systems 100A and 100B may be coupled to a common power source or separate power sources. Further, the vertical cooling systems 100A and 100B may be coupled to a common cooled water supply or source 310 (see FIG. 10).
  • Electrical System
  • FIG. 6 provides a front view of one of the carriages 70 storing computing equipment 102. The particular computing equipment 102 received inside the carriage 70 may include any computing devices (e.g., blade-type servers, backplanes therefor, and the like) as well as any other type of rack-mounted electronic equipment known in the art. The structure of the carriages 70 is described in detail below.
  • Turning to FIGS. 7A, 7B and 8A, an electrical system 110 supplies electric power to the computing equipment 102 (see FIG. 6) housed by the carriages 70. For ease of illustration, the computing equipment 102 has been omitted from FIGS. 7A and 7B. One or more electric utility lines 112A and 112B (see FIG. 8A) supply power to the electrical system 110. By way of a non-limiting example, each of the electric utility lines 112A and 112B may provide about 600 Amperes WYE of power to the electrical system 110. A WYE power system allows for the implementation of standard voltages used in the computing equipment industry, such as 110 VAC and 208 VAC. In a preferred embodiment, 208 VAC is supplied to a plurality of power receptacles 132 to allow for increased efficiency of the internal power supplies of the individual pieces of computing equipment, thereby reducing the overall power consumption of the data center. Additionally, 110 VAC is supplied to a plurality of power receptacles to support computing equipment that cannot accept 208 VAC power input.
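As a rough illustration of the capacity such a feed provides: the specification states only the 600 A rating and the 110/208 VAC equipment voltages, so the 208Y/120 V service assumed below is illustrative, as is the unity power factor:

```python
import math

# Approximate capacity of one 600 A three-phase WYE utility feed.
# The 208Y/120 V service level and unity power factor are assumptions;
# the specification gives only the 600 A figure and 110/208 VAC outputs.

def three_phase_kw(line_to_line_v: float, amps: float,
                   power_factor: float = 1.0) -> float:
    """Real power (kW) of a balanced three-phase feed:
    P = sqrt(3) * V_LL * I * PF."""
    return math.sqrt(3) * line_to_line_v * amps * power_factor / 1000.0

feed_kw = three_phase_kw(208.0, 600.0)
print(f"one utility line: approx. {feed_kw:.0f} kW")  # ~216 kW
```

Two such lines would thus offer on the order of 430 kW before derating, which frames why the breaker panels split the load across many 122A-N branch circuits.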
  • The electrical system 110 includes one or more power distribution panels 120A and 120B each having a plurality of circuit breakers 122A-M, and 122A-N, respectively, that protect the various powered components (including the vertical cooling systems 100A and 100B, the computing equipment 102, and the like) within the container 12 from power surges, such as an excess in current draw due to low voltage, a power cable interconnect fault, or any other condition that causes an excess current draw. By way of a non-limiting example, the circuit breakers 122A-M of the power distribution panel 120A and the circuit breakers 122A-N of the power distribution panel 120B may have a fault rating of less than 22 KAIC (Thousand Ampere Interrupting Capacity).
  • The utility line 112A is coupled to the electrical system 110 through a disconnect switch 124A configured to selectively disconnect the flow of current from the utility line 112A to the power distribution panels 120A and 120B. For example, the disconnect switch may be configured for 600 Amps AC. The utility line 112B may be coupled to a separate disconnect switch 124B configured to selectively disconnect the flow of current from the utility line 112B.
  • In the embodiment depicted, the power distribution panel 120A provides power to the vertical cooling system 100A and the power distribution panel 120B provides power to the vertical cooling system 100B. Each of the power distribution panels 120A and 120B also provides power to the carriages 70 along both the first and second longitudinal side portions 14 and 16 of the container 12. In FIG. 7B, the five carriages 70 extending along the first longitudinal side portion 14 of the container 12 have been labeled “CARR. #9,” “CARR. #7,” “CARR. #5,” “CARR. #3,” and “CARR. #1,” and the five carriages 70 extending along the second longitudinal side portion 16 of the container 12 have been labeled “CARR. #8,” “CARR. #6,” “CARR. #4,” “CARR. #2,” and “CARR. #0.”
  • A plurality of electrical conductors 130 are connected to the circuit breakers 122A-M of the power distribution panel 120A and the circuit breakers 122A-N of the power distribution panel 120B. Each of the electrical conductors 130 coupled to the circuit breakers 122C-G and 122I-M of the power distribution panel 120A extends along the first longitudinal side portion 14 behind the carriages 70, and each of the electrical conductors 130 coupled to the circuit breakers 122C-G and 122I-M of the power distribution panel 120B extends along the second longitudinal side portion 16 behind the carriages 70. The electrical conductors 130 extending along the first and second longitudinal side portions 14 and 16 transport electricity to a plurality of power receptacles 132, which may be mounted to the first and second longitudinal side portions 14 and 16, or to the carriages 70. For ease of illustration, in FIG. 7A, the electrical conductors 130 conducting electricity to selected power receptacles 132 have been omitted.
  • Depending upon the implementation details and as appropriate to satisfy power needs, two or more power receptacles 132 may be included for each carriage 70. For ease of illustration, two power receptacles 132 have been illustrated in FIG. 7B for each carriage 70. In the embodiment illustrated, the power receptacles 132 for the carriage “CARR. #8” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122C of the power distribution panels 120A and 120B. The power receptacles 132 for the carriage “CARR. #6” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122D of the power distribution panels 120A and 120B. The power receptacles 132 for the carriage “CARR. #4” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122E of the power distribution panels 120A and 120B. The power receptacles 132 for the carriage “CARR. #2” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122F of the power distribution panels 120A and 120B. The power receptacles 132 for the carriage “CARR. #0” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122G of the power distribution panels 120A and 120B.
  • Turning to the carriages 70 along the first longitudinal side portion 14, the power receptacles 132 for the carriage “CARR. #9” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122I of the power distribution panels 120A and 120B. The power receptacles 132 for the carriage “CARR. #7” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122J of the power distribution panels 120A and 120B. The power receptacles 132 for the carriage “CARR. #5” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122K of the power distribution panels 120A and 120B. The power receptacles 132 for the carriage “CARR. #3” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122L of the power distribution panels 120A and 120B. The power receptacles 132 for the carriage “CARR. #1” are coupled one each (via a pair of electrical conductors 130) to the circuit breakers 122M of the power distribution panels 120A and 120B.
  • The electrical system 110 may include a separate power supply 133 (e.g., a 480 VAC power supply) for each of the power receptacles 132. Each of the power supplies 133 may be coupled between one of the circuit breakers 122C-G and 1221-M of the power distribution panels 120A and 120B and the power receptacles 132. The power supplies 133 are coupled to a controller 134 (described below). The controller 134 sends instructions to the power supplies 133 instructing them to provide power to one or more of their respective power receptacles 132 or discontinue sending power to one or more of their respective power receptacles 132. In this manner, the controller 134 controls which of the power receptacles 132 are powered and which are not.
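The receptacle switching described above can be modeled as a simple state map. The sketch below is illustrative only: the receptacle identifiers are hypothetical, and in the actual system the instructions would be sent from the controller 134 to the power supplies 133 over the private network rather than made as direct method calls.

```python
class ReceptacleController:
    """Illustrative model of the controller 134 commanding the power
    supplies 133 to power or de-power individual receptacles 132.
    Receptacle identifiers are hypothetical placeholders."""

    def __init__(self, receptacle_ids):
        # Track the commanded state of each receptacle; all start unpowered.
        self.powered = {rid: False for rid in receptacle_ids}

    def set_power(self, rid, on):
        # In the actual system this would be an instruction sent to the
        # power supply 133 feeding receptacle `rid`.
        self.powered[rid] = bool(on)

    def powered_receptacles(self):
        # Report which receptacles are currently commanded on.
        return sorted(rid for rid, on in self.powered.items() if on)
```

In this way the controller retains a record of which receptacles are powered, mirroring the statement that "the controller 134 controls which of the power receptacles 132 are powered and which are not."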
  • Further, the circuit breaker 122A of the power distribution panel 120A is coupled by an electrical conductor 130 to the vertical cooling system 100A and the circuit breaker 122B of the power distribution panel 120B is coupled by an electrical conductor 130 to the vertical cooling system 100B. Optionally, the circuit breaker 122B of the power distribution panel 120A may be coupled to the vertical cooling system 100B and the circuit breaker 122N of the power distribution panel 120B may be coupled to the vertical cooling system 100A.
  • The circuit breaker 122H of the power distribution panel 120B may be coupled by an electrical conductor 130 to an optional humidifier 123. Additionally, the circuit breaker 122B of the power distribution panel 120A may be coupled by an electrical conductor 130 to an optional dehumidifier 125. The optional humidifier 123 and dehumidifier 125 may include a humidity sensor (not shown) configured to generate a humidity signal indicating the humidity inside the container 12. The controller 134 may be coupled to the optional humidifier 123 and dehumidifier 125 and configured to receive the humidity signal and interpret it to determine the humidity inside the container 12. The controller 134 may send instructions to the humidifier 123 and dehumidifier 125 instructing them to increase or decrease the humidity inside the container 12 based on the humidity signal. In response to the instructions from the controller 134, the humidifier 123 may increase its water vapor output to increase the humidity of the air inside the container 12, or the dehumidifier 125 may increase its dry air output to decrease the humidity of the air inside the container 12. Optionally, the functions of the humidifier 123 and dehumidifier 125 may be combined into a single humidity control unit (not shown). The controller 134 may be coupled to the humidity control unit and may send instructions to the humidity control unit instructing it to increase or decrease the humidity inside the container 12 based on the humidity signal.
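The humidity-control decision the controller 134 might make can be sketched as a simple dead-band check. The 40-60% band below is an assumed set point for illustration; the specification does not give target humidity values.

```python
def adjust_humidity(humidity_pct, low=40.0, high=60.0):
    """Decide which instruction the controller 134 might send based on the
    humidity signal. The low/high thresholds are assumed set points, not
    values from the specification."""
    if humidity_pct < low:
        # Too dry: instruct the humidifier 123 to raise its water vapor output.
        return "humidifier: increase water vapor output"
    if humidity_pct > high:
        # Too humid: instruct the dehumidifier 125 to raise its dry air output.
        return "dehumidifier: increase dry air output"
    # Within the dead band: no instruction needed.
    return "no action"
```

A combined humidity control unit would implement the same decision with both actuators behind one interface.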
  • Referring to FIGS. 8A-8C, optionally, the electrical system 110 may include one or more uninterruptible power supplies (“UPS”) 114, continuous power supplies (“CPS”), backup batteries, and the like. The UPS 114 provides power to the various powered components of the data center 10, including the vertical cooling systems 100A and 100B, the computing equipment 102, and the like when power to the utility line 112B is interrupted. In the embodiment illustrated, the electrical system 110 includes a single UPS 114 configured to provide power to all of the carriages 70 and other electrical equipment (e.g., the cooling systems 100A and 100B) located inside of the data center 10. The UPS 114 may include one or more batteries 115.
  • One or more carriages 70 may be omitted from the data center 10 to provide physical space inside the container 12 for the UPS 114. By way of a non-limiting example, a single UPS 114 may fit within the same footprint or spatial envelope occupied by one of the carriages 70. By way of another non-limiting example, a single UPS 114 may fit within the same footprint or spatial envelope occupied by a pair of laterally adjacent carriages 70. In such embodiments, the UPS 114 may fit within the spatial envelope of a first one of the carriages 70 and the batteries 115 of the UPS 114 may occupy the same spatial envelope as a second one of the carriages 70 laterally adjacent to the first. Thus, the data center 10 may be configured based on the user's desires with respect to computing equipment 102 and the number of carriages 70 required thereby versus reliability (i.e., the inclusion or exclusion of one or more optional UPS 114).
  • The UPS 114 may receive electricity from the utility line 112B and/or the utility line 112A. The UPS 114 is coupled to the power distribution panels 120A and 120B through a disconnect switch 124C. In the implementation illustrated, a UPS bypass switch 124D is provided. During normal operations, the switches 124A, 124B, and 124C are closed and the UPS bypass switch 124D is open. The UPS 114 may be bypassed by opening switches 124A, 124B, and 124C and closing the UPS bypass switch 124D. The controller 134 may be coupled to the switches 124A, 124B, 124C, and 124D and configured to open them to cut off power to the power distribution panels 120A and 120B. The dashed lines in FIG. 8B illustrate control lines coupling the controller 134 to the switches 124A, 124C, and 124D. The control lines carry instructions from the controller 134 instructing the switches 124A, 124C, and 124D to open to cut all power to the power distribution panels 120A and 120B. Another control line (not shown) may be used to connect the controller 134 to the disconnect switch 124B.
  • The UPS 114 is configured to detect when power to the power distribution panels 120A and 120B has been interrupted and begin discharging power thereto to avoid or reduce the duration of any loss of power to the other components of the electrical system 110. In the embodiment depicted, power received from the utility line 112B (through the disconnect switch 124B) is routed by the UPS 114 through the disconnect switch 124C to the power distribution panels 120A and 120B. When the UPS 114 detects the utility line 112B is no longer carrying an electrical current, the UPS 114 may be configured to begin discharging electricity from the batteries 115 to the power distribution panels 120A and 120B or, alternatively, to route power from the utility line 112A to the power distribution panels 120A and 120B.
  • In the embodiment illustrated in FIGS. 8A-8C, the UPS 114 includes a static switch 116. Upon loss of power in the utility line 112B, the static switch 116 may transfer the load (e.g., the computing equipment 102) to the utility line 112A. If the utility line 112A is also not providing power, the UPS 114 will discharge electricity from the batteries 115 to the power distribution panels 120A and 120B of the electrical system 110. Alternatively, upon loss of power in the utility line 112B, the UPS 114 may begin discharging electricity from the batteries 115 to the power distribution panels 120A and 120B of the electrical system 110. When the UPS 114 has discharged all of its stored energy, the static switch 116 will transfer the load (e.g., the computing equipment 102) to the utility line 112A. Coupling the static switch 116 of the UPS 114 to the utility line 112A provides greater fault tolerance than coupling the UPS 114 to the utility line 112B alone.
  • Tables A and B below provide a pair of non-limiting examples of how the static switch 116 may select which power source, the utility line 112A, the utility line 112B, or the batteries 115, supplies power to the power distribution panels 120A and 120B. In Tables A and B, the term “YES” indicates the power source is providing power at the static switch 116 and the term “NO” indicates the power source is not providing power at the static switch 116.
  • TABLE A

    Utility     Utility     Batteries   Supplies power to power
    Line 112A   Line 112B   115         distribution panels 120A and 120B
    YES         YES         YES         Utility Line 112B
    YES         YES         NO          Utility Line 112B
    YES         NO          YES         Utility Line 112A
    YES         NO          NO          Utility Line 112A
    NO          YES         YES         Utility Line 112B
    NO          YES         NO          Utility Line 112B
    NO          NO          YES         Batteries 115
    NO          NO          NO          None
  • TABLE B

    Utility     Utility     Batteries   Supplies power to power
    Line 112A   Line 112B   115         distribution panels 120A and 120B
    YES         YES         YES         Utility Line 112A
    YES         YES         NO          Utility Line 112A
    YES         NO          YES         Utility Line 112A
    YES         NO          NO          Utility Line 112A
    NO          YES         YES         Batteries 115
    NO          YES         NO          Utility Line 112B
    NO          NO          YES         Batteries 115
    NO          NO          NO          None
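The selection logic of Tables A and B can be expressed as a pair of priority rules: Table A prefers utility line 112B and falls back to utility line 112A before the batteries 115, while Table B prefers utility line 112A and falls back to the batteries 115 before utility line 112B. A minimal sketch:

```python
def select_source_table_a(line_a_ok, line_b_ok, batteries_ok):
    """Source selection matching Table A: prefer utility line 112B, then
    utility line 112A, then the batteries 115."""
    if line_b_ok:
        return "Utility Line 112B"
    if line_a_ok:
        return "Utility Line 112A"
    if batteries_ok:
        return "Batteries 115"
    return "None"


def select_source_table_b(line_a_ok, line_b_ok, batteries_ok):
    """Source selection matching Table B: prefer utility line 112A, then the
    batteries 115, and use utility line 112B only as a last resort."""
    if line_a_ok:
        return "Utility Line 112A"
    if batteries_ok:
        return "Batteries 115"
    if line_b_ok:
        return "Utility Line 112B"
    return "None"
```

Each function reproduces its table row for row; the boolean arguments correspond to the YES/NO columns indicating whether each source is providing power at the static switch 116.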
  • Referring to FIG. 5, the electrical system 110 also provides power to a lighting system 140. The lighting system 140 may include a plurality of light emitting diodes (“LEDs”) 142 installed inside the interior portion 60 of the container 12 on the roof portion 30 within the central aisle portion 72 above the walkway 74 and between the upper plenums 90A and 90B. The LEDs 142 may provide power and/or space efficiency over other types of light emitting devices. Alternatively, the lighting system 140 may include fluorescent lights (not shown) installed in the central aisle portion 72 above the walkway 74. In such embodiments, the electrical system 110 may include a 2 KVA lighting transformer (not shown). The lighting system 140 may include emergency lights (not shown) located over the personnel door 24 for emergency egress upon loss of power. The controller 134 may be coupled to the lighting system 140 and configured to turn the LEDs 142 on and off. The lighting system 140 may also include a motion sensing unit 153 installed inside the interior portion 60 of the container 12. The motion sensing unit 153 may generate a motion signal indicating the presence of motion inside the container 12. The controller 134 may be coupled to the optional motion sensing unit 153 and configured to receive the motion signal and interpret it to determine the presence of motion inside the container 12. The controller 134 may send instructions to the lighting system 140 to turn the LEDs 142 on when motion is detected. The controller 134 may also send instructions to the lighting system 140 to turn the LEDs 142 off after a pre-determined time from the cessation of motion inside the container 12. By way of a non-limiting example, the controller 134 may instruct the lighting system 140 to turn the LEDs 142 off after the presence of motion inside the container 12 has not been detected for 10 minutes.
The motion signal may also be communicated to an intrusion detection system 196.
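The motion-triggered lighting behavior can be sketched as a small state machine. The 10-minute timeout follows the example in the text; the timestamp-driven interface is an assumption for illustration.

```python
class LightingController:
    """Sketch of the motion-triggered lighting behavior: the LEDs 142 turn
    on when motion is reported and turn off after a quiet period with no
    motion detected. The timestamp-based interface is assumed."""

    TIMEOUT_S = 10 * 60  # 10 minutes without motion, per the example

    def __init__(self):
        self.leds_on = False
        self.last_motion = None

    def on_motion(self, now):
        # Motion signal received from the motion sensing unit 153.
        self.last_motion = now
        self.leds_on = True

    def tick(self, now):
        # Periodic check: turn the LEDs off once the quiet period elapses.
        if self.leds_on and self.last_motion is not None:
            if now - self.last_motion >= self.TIMEOUT_S:
                self.leds_on = False
```

The same motion events could simultaneously feed the intrusion detection system 196 mentioned above.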
  • Referring to FIG. 8D, to support various management functions within the container 12, a 24 VDC system 180 may be implemented. The 24 VDC system may provide power to various controllers associated with the data center. By way of non-limiting examples, the controller functions may be for power monitoring and management 190 such as voltage and current, water supply monitoring 192 such as pressure, temperature and flow rate, various system alarms such as fire detection 184, fire suppression 186 such as DuPont's FM200 Fire Suppression System, flood detection 188, as well as motion sensing 153, lighting 140, intrusion detection 196, and personnel door 24 control. The 24 VDC system may use a dedicated UPS 194 to allow for continued monitoring and management in the event that AC input power to the container 12 is lost or interrupted. In a preferred embodiment, the UPS 194 will have enough capacity to provide power to the 24 VDC system 180 for a minimum of 1 hour. It is to be appreciated that multiple DC systems, each outputting a different DC voltage such as, for example, 12 VDC or 48 VDC, may be implemented to accomplish all management and control functions. It is also to be appreciated that each DC system may use a single dedicated UPS, a single UPS may be used to supply power to all DC systems, or multiple DC systems may be provided with power from one of a plurality of DC system UPSs.
  • Communication Network
  • Returning to FIGS. 7A and 8A, the container 12 may include a network connection 150, such as a modem, router, and the like, coupled to an external network 152, such as the Internet. The network connection 150 may be connected to the external network 152 by any suitable connection known in the art, including a wireless connection, a segment of copper cable, a segment of fiber optic cable, and the like. For example, the container 12 may be coupled to an external network implemented in a neighboring building by one or more network cable connections (e.g., 48 CAT6 GigE network connections).
  • The container 12 may also include an internal or private network 154, such as a local area network (“LAN”), used to route data within the data center 10 between the various pieces of computing equipment 102. By way of a non-limiting example, the private network 154 may be implemented as an Ethernet network.
  • Network cabling (not shown) may couple the computing equipment 102 in the carriages 70 to the various network components of the private network 154. The network cabling may include any suitable cables known in the art, including copper cables, fiber optic cables, and the like. The network cabling may be coupled along the first and second longitudinal side portions 14 and 16 as appropriate to effect a connection with the computing equipment 102 residing in the carriages 70. Further, the network cabling may reside inside the wire management channels 78A and 78B. Alternatively, the computing equipment 102 in the carriages 70 may be coupled to the various components of the private network 154 via wireless connections.
  • The controller 134 is also coupled to the private network 154. The electrical system 110 may also be connected to the private network 154. For example, each of the power sources 133 (coupled to the power receptacles 132) may be coupled to the private network 154. In such embodiments, the controller 134 may send instructions to the power sources 133 over the private network 154. Further, the lighting system 140 may be coupled to the private network 154 and the controller 134 may send instructions to the lighting system 140 over the private network 154. Other components, such as the optional humidifier 123, dehumidifier 125, and the vertical cooling systems 100A and 100B may be coupled to the private network 154 for the purposes of communicating with the controller 134 and/or receiving instructions therefrom.
  • The network connection 150 may be coupled to the private network 154 for the purposes of providing communication between the private network 154 and the external network 152. Methods and devices for implementing the private network 154, coupling the computing equipment 102 to the private network 154, and coupling the private network 154 to the external network 152 are well-known in the art and will not be described in detail herein.
  • Controller
  • As is appreciated by those of ordinary skill in the art, the controller 134 is coupled to and/or includes a memory 136. The memory 136 includes instructions executable by the controller 134. The controller 134 may also be optionally coupled to one or more temperature sensors 137 disposed inside the interior portion 60 of the container 12 each configured to send a temperature signal to the controller 134. The memory 136 may include instructions that when executed by the controller 134 instruct the controller to interpret the temperature signal received from each of the temperature sensors 137 to obtain a temperature measurement. The memory 136 may also store the temperature measurement(s) obtained from the temperature signal(s), the temperature signal received from each of the temperature sensors 137, and the like.
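Interpreting a temperature signal into a stored measurement might look like the following sketch. The linear 10 mV/°C scaling is an invented, purely illustrative transfer function; the specification does not describe the sensor signal format.

```python
class TemperatureMonitor:
    """Sketch of the controller 134 interpreting temperature signals from
    the sensors 137 and storing the resulting measurements, as the memory
    136 is described as doing. The 10 mV/degC scaling is assumed."""

    def __init__(self):
        self.measurements = {}  # sensor id -> latest temperature (deg C)

    def on_temperature_signal(self, sensor_id, raw_millivolts):
        # Convert the raw signal to a temperature (assumed linear scaling)
        # and store the measurement keyed by sensor.
        self.measurements[sensor_id] = raw_millivolts / 10.0
        return self.measurements[sensor_id]
```

The stored measurements are then available for display on the user interface 138 or reporting to remote computing devices.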
  • The controller 134 may control both the computing equipment 102 (see FIG. 6) and the environment inside the container 12 over the private network 154. In embodiments in which the controller 134 is coupled to the network connection 150 to the external network 152, one or more remote computing devices (not shown) coupled to the external network 152 may communicate with the controller 134. For example, the remote computing devices may receive temperature information from the controller 134. Similarly, the remote computing devices may receive humidity information from the controller 134 that the controller received from the optional humidifier 123 and dehumidifier 125. Further, the remote computing devices may send instructions to the controller 134 instructing it to send instructions to the optional humidifier 123 and dehumidifier 125 to increase or decrease the humidity inside the container 12. The remote computing devices may also instruct the controller 134 to send instructions powering up or powering down selected power sources 133 (coupled to selected power receptacles 132). Further, the remote computing devices may also instruct the controller 134 to turn on or off the LEDs 142 of the lighting system 140.
  • The controller 134 may monitor environmental systems inside the container 12. For example, the vertical cooling systems 100A and 100B may each include a cooling system processor or controller 380 (described below). The controller 134 may be coupled to the cooling system controller 380 for the purposes of receiving information (e.g., alerts, warnings, system faults, and the like) therefrom. The controller 134 may send the information it receives to the remote computing device(s). For example, the controller 134 may transmit an alert to the remote computing device(s) indicating a problem has occurred (e.g., the flow of cooled water has stopped, the temperature of the flow of refrigerant is too high to adequately cool the computing equipment 102, and the like). Further, the controller 134 may send instructions to the cooling system controller 380 instructing it to operate or not operate based on the temperature inside the container 12.
  • The memory 136 may include instructions for monitoring the electrical system 110 and instructing the controller 134 to report information related to power availability and consumption to the remote computing device(s) (not shown) coupled to the external network 152. Further, the controller 134 may receive instructions from the remote computing device(s), such as an instruction to power down the electrical system 110 (e.g., open switches 124A, 124B, 124C, and 124D), power selected power sources 133 (coupled to one or more power receptacles 132), turn off the power to selected power sources 133 (coupled to one or more power receptacles 132) and the like.
  • The controller 134 may monitor and/or control the computing equipment 102 (see FIG. 6). For example, the memory 136 may include instructions for monitoring the UPS 114, individual pieces of computing equipment 102 (e.g., individual blade servers), and the like. Further, the controller 134 may receive instructions from the remote computing device(s), instructing the controller to turn individual pieces of computing equipment 102 on or off, provide data thereto, and the like.
  • The controller 134 may include a user interface 138 configured to display the temperature measurement(s) obtained from the temperature signal received from each of the temperature sensors 137, and any data received from other systems inside the container 12.
  • Carriage
  • An exemplary embodiment of the carriage 70 is provided in FIGS. 5, 6, and 9. As mentioned above, the carriage 70 is configured to store computing equipment 102, which may include a plurality of computing devices (e.g., blade-type servers) as well as any other type of rack mounted electronic equipment known in the art. The carriage 70 has a substantially open base portion 210 opposite a substantially open top portion 212. The carriage 70 also has a substantially open front portion 214 into which computing equipment 102, fans, cabling, rack mountable equipment, accessories, and the like are received for storage and use therein. Opposite the open front portion 214, the carriage 70 has a back portion 216.
  • Cabling and wiring, such as electrical wiring, communication cables, and the like, may enter the carriage 70 through the back portion 216, which may be open and/or may include one or more apertures 215 configured to permit one or more cables or wires to pass therethrough. As mentioned above, the electrical conductors 130 and optional communication cabling (not shown) may extend along the first and second longitudinal side portions 14 and 16. Further, the power receptacles 132 (see FIG. 7) are positioned adjacent to the back portions 216 of the carriages 70 along the first and second longitudinal side portions 14 and 16. Such power receptacles 132 and communication cabling may be coupled to the computing equipment 102 in the carriage 70 through its back portion 216.
  • As is appreciated by those of ordinary skill in the art, an amount of computing equipment 102 housed in the interior portion 60 of the container 12 is determined at least in part by the number of carriages 70 and the capacity of each to house computing equipment 102. The carriage 70 includes a frame 220 to which computing equipment 102, fans, cabling, rack mountable equipment, accessories, and the like may be mounted or otherwise attached. The frame 220 is configured to permit air to flow into the open base portion 210, up through the carriage 70, through and around the computing equipment 102 and other items therein, and out the open top portion 212.
  • The frame 220 includes a plurality of spaced apart upright support members 222A-H, defining one or more upright equipment receiving areas 224A-C. The embodiment depicted has three equipment receiving areas 224A-C, defined by four upright support members 222A-D arranged along the front portion 214 of the carriage 70 and four upright support members 222E-H arranged along the back portion 216 of the carriage 70. Upright support member 222C may be removable, as opposed to support members 222A-B and 222D-H, which are fixed in place. The removal of upright support member 222C and the associated front to back extending members 236 may allow for the installation of any configuration of computer equipment spanning equipment receiving areas 224B and 224C without any modification. By way of a non-limiting example, upright support member 222C and the associated front to back extending members 236 may be removed to allow the installation of a custom designed server chassis oriented longitudinally along side portions 14 and 16. Also, removing upright support member 222C and the associated front to back extending members 236 may allow for the onsite installation of customer equipment without any modification of the carriage 70. Those of ordinary skill in the art appreciate that carriages having a different number of upright equipment receiving areas may be constructed by applying ordinary skill in the art to the present teachings and such embodiments are within the scope of the present teachings.
  • The upright support members 222A-H are coupled together at the open top portion 212 of the carriage 70 by a vented top plate 226 having apertures 228A-F in communication with the equipment receiving areas 224A-C through which heated air may exit the equipment receiving areas 224A-C and be passed to the corresponding first or second upper plenum 90A or 90B positioned thereabove. Apertures 228A-B may be joined together to create one large aperture. Similarly, apertures 228C-D and 228E-F may be joined together. Joining the apertures together may be done to support some HVAC devices. The upright support members 222A-H are coupled together at the open base portion 210 along the front portion 214 of the carriage 70 by a front rail 230 and at the open base portion 210 along the back portion 216 of the carriage 70 by a back rail 232.
  • The four upright support members 222A-D aligned along the front portion 214 of the carriage 70 may be coupled to the four upright support members 222E-H aligned along the back portion 216 of the carriage 70 by any desired number of front-to-back extending members 236. The members 236 may provide structural stability to the carriage 70. Further, the members 236 may provide attachment points to which computing equipment 102, fans, cabling, rack mountable equipment, accessories, and the like may be coupled. Further, the upright support members 222E-H along the back portion 216 may be coupled together by any number of members 238 extending therebetween. The members 238 may provide stability and/or attachment points to which computing equipment 102, fans, cabling, rack mountable equipment, accessories, and the like may be coupled. Optionally, apertures 239 in the members 238 are configured to provide throughways for wiring, cabling, and the like.
  • The upright support members 222A-D along the front portion 214 of the carriage 70 may include openings 240A-F each configured to receive computing equipment, such as a rectifier, network switching device (e.g., routers), and the like. In the embodiment illustrated in FIG. 6, two of the openings 240E and 240F each house a rectifier 242 and four of the openings 240A-D each house a network switching device 244. By way of an example, the rectifier 242 may be configured to rectify from about 480 VAC to about 48 VDC. Referring to FIG. 7B, the power receptacle 132 coupled to the power distribution panel 120A may be coupled to one of the rectifiers 242 and the power receptacle 132 coupled to the other power distribution panel 120B may be coupled to the other of the rectifiers 242. In this manner, each of the rectifiers 242 receives power from a different power distribution panel 120A or 120B.
  • Turning to FIG. 9, optionally, the upright support members 222E-H along the back portion 216 of the carriage 70 may include one or more openings 241 substantially similar to the openings 240A-F and aligned with one or more corresponding opening 240A-F of the upright support members 222A-D.
  • One or more open-ended conduits 250A-F may extend between the upright support members 222A-D along the front portion 214 and the upright support members 222E-H along the back portion 216. Each of these conduits 250A-F has an open front end portion 251 opposite an open back end portion 253 (see FIG. 3). Each conduit 250A-F may be configured to provide a throughway for cabling (not shown) from the front portion 214 of the carriage 70 to the back portion 216 of the carriage 70. By way of a non-limiting example, the cabling may include Category 6 (“Cat-6”) cable for Ethernet connections. Turning to FIG. 6, one or more network connections 252A-F, such as an Ethernet jack, may be located adjacent the front portion 214 of the carriage 70 and coupled to cables (not shown) extending through the conduits 250A-F.
  • As illustrated in FIG. 6, the equipment receiving areas 224A-C may each be divided into four sections “S1-S4” (for a total of 12 sections per carriage 70). Each section “S1-S4” may use twenty-four Ethernet connections; however, this is not a requirement. Alternatively, the equipment receiving areas 224A-C may each be divided into five sections “S1-S5” (for a total of 15 sections per carriage 70), where section S5 (not shown) may be used to implement a multiport networking device. By way of a non-limiting example, the networking device may contain twenty-four Ethernet ports or another suitable type of communication port. By way of a non-limiting example, each blade slot may have two Ethernet ports. However, as is appreciated by those of ordinary skill in the art, each blade slot may include more than two Ethernet ports. For example, more than one Ethernet port may be located in a front portion of a blade server and more than one Ethernet port may be located in a back portion of a blade server. The equipment receiving areas 224A-C are not limited to use with blade servers having a particular number of Ethernet ports. Further, the equipment receiving areas 224A-C are not limited to use with blade servers having Ethernet ports and may be used with blade servers having other types of communication ports.
  • As illustrated in FIGS. 5 and 6, a plurality of air moving assemblies 260, each having a plurality of air moving devices 264 (e.g., fans) oriented to blow air upwardly through the equipment receiving areas 224A-C, are mounted between the upright support members 222A-H of the carriage 70. Each of the air moving assemblies 260 includes a frame 262 configured to be mounted inside one of the equipment receiving areas 224A-C. The frame 262 houses the plurality of air moving devices 264, each of which is oriented to move air in substantially the same upward direction. In the embodiment depicted in FIGS. 5 and 6, the carriage 70 includes nine air moving assemblies 260. However, this is not a requirement. The number of air moving assemblies mounted inside each of the equipment receiving areas 224A-C may be determined based at least in part on the amount of air circulation required to cool the computing equipment received therein. The air moving assemblies 260 each receive power from the power conductors 130 (see FIG. 7) carrying power to the carriages 70 and powering the computing equipment 102 housed therein.
  • Computing equipment, or the like, that is mounted in the region between upright support members 222B and 222F, or 222C and 222G may not receive adequate air flow due to the front to back extending members 236 blocking the path for air flow through the region. When equipment is installed in these regions, one or more air moving assemblies 260 may be installed transversely between the upright support members 222 associated with the equipment to allow for the heated air produced by the equipment to be moved longitudinally into an upright equipment receiving area 224A-C where it will mix with the air flow created by the vertical cooling system.
  • The upright equipment receiving areas 224A-C may be customized to receive a predetermined collection of computing equipment (e.g., a predetermined number of blade servers). For example, the upright equipment receiving areas 224A-C may be configured to receive blade servers 103 in an upright orientation. Alternatively, the upright equipment receiving areas 224A-C may be configured to receive blade servers in a horizontal orientation. Additionally, the upright equipment receiving areas 224A-C may be configured to receive computing equipment in a longitudinal orientation. When computing equipment is to be installed longitudinally, it may be necessary to remove upright support member 222C and the associated front to back extending members 236 to create the required spatial envelope for the computing equipment to occupy.
  • In some embodiments, standard 19″ rack mount computer gear (not shown) may be mounted inside the upright equipment receiving areas 224A-C. The fans inside the rack mount computer gear will draw air into the upright equipment receiving areas 224A-C from the central aisle portion 72 of the interior portion 60 of the container 12. This air will pass through the rack mount computer gear, be heated thereby, and exit from the rack mount computer gear adjacent to the back portion 216 of the carriage 70. The heated air may exit the rack mount computer gear inside the carriage 70 or between the back portion 216 of the carriage 70 and an adjacent one of the first and second longitudinal side portions 14 and 16. In such embodiments, the air moving assemblies 260 will direct the heated air inside the carriage 70 upwardly toward the open top portion 212 of the carriage 70. Further, the air moving assemblies 260 will help draw heated air outside the carriage 70 into the upright equipment receiving areas 224A-C whereat the air moving assemblies 260 will direct the heated air upwardly toward the open top portion 212 of the carriage 70. The rack mount computer gear may be mounted inside the upright equipment receiving areas 224A-C in any orientation. For example, the rack mount computer gear may be mounted inside the upright equipment receiving areas 224A-C in a manner resembling blade servers. Furthermore, an alternate embodiment of the carriage 70 may be used, in which the rack mount computer gear may be mounted to extend longitudinally inside the container 12.
  • The rack mount computer gear may be mounted inside the equipment receiving areas 224A-C using a slide-out rail system (not shown). The use of a slide-out rail system may allow for any manufacturer's computer hardware to be adapted for use in the data center 10. The slide-out rail system will allow for the computer gear to be pulled out from the equipment receiving areas 224A-C to a distance of, for example, 6 inches past the front portion 214 of the carriages 70. This will allow for unrestricted service access to all areas of that individual piece of computing equipment and associated external connections. To support the use of a slide-out rail system, an articulated cable management tray system (not shown) may be used to manage and control the movement of the various cables (e.g., data, power) associated with an individual piece of computing equipment when the piece of computing equipment is pulled out of and pushed into the equipment receiving areas 224A-C. One or more power strips may be attached to the slide-out rail system to provide electrical power to the computing equipment associated with the rail system. The power strip input is connected to one of the plurality of power receptacles 132. By way of a non-limiting example, the power strip may be supplied with 208 VAC single phase power. When a plurality of power strips are attached to a rail system, at least one power strip is connected to a power receptacle 132 receiving power from power distribution panel 120A, and at least one power strip is connected to a power receptacle 132 receiving power from power distribution panel 120B. This allows for the computing equipment to be supplied with power from redundant sources.
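The redundancy rule described above — when multiple power strips feed a rail system, at least one must draw from panel 120A and at least one from panel 120B — can be expressed as a simple check. This is an illustrative sketch; the function and data shape are assumptions, not from the patent:

```python
# Hypothetical sketch of the power-redundancy rule described above: a rail
# system with multiple power strips must have at least one strip fed from
# power distribution panel 120A and at least one from panel 120B.

def is_redundantly_powered(strip_panels):
    """strip_panels: list of panel IDs ('120A' or '120B'), one per power strip."""
    if len(strip_panels) < 2:
        return False  # a single strip cannot provide redundant power sources
    return '120A' in strip_panels and '120B' in strip_panels

print(is_redundantly_powered(['120A', '120B']))  # True: both panels represented
print(is_redundantly_powered(['120A', '120A']))  # False: one panel failure kills power
```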
  • The isolating couplers 86 may be coupled to the upright support members 222A-H along the base portion 210 of the carriage 70. Alternatively, the isolating couplers 86 may be mounted to the front rail 230, the back rail 232, and/or the front to back extending members 236 located along the base portion 210 of the carriage 70. As may best be viewed in FIG. 5, the isolating couplers 86 may also couple one or more of the upright support members 222F-G to one of the first and second longitudinal side portions 14 and 16 of the container 12.
  • Vertical Cooling System
  • Referring to FIG. 5, as mentioned above, the vertical cooling system 100A cools air flowing up through the carriages 70 arranged along the first longitudinal side portion 14 and the vertical cooling system 100B cools air flowing up through the carriages 70 arranged along the second longitudinal side portion 16. The vertical cooling system 100B is substantially identical to the vertical cooling system 100A. Therefore, for illustrative purposes, only the vertical cooling system 100B will be described in detail.
  • Turning to FIG. 10, the vertical cooling system 100B includes two fluid flows: a flow of refrigerant and a flow of chilled or cooled water. Within the vertical cooling system 100B, the flow of refrigerant is cooled by transferring its heat to the flow of cooled water. The vertical cooling system 100B includes a water/refrigerant heat exchanger 300 configured to transfer heat from the flow of refrigerant to the flow of cooled water. The water/refrigerant heat exchanger 300 may be implemented using any heat exchanger known in the art. By way of a non-limiting example, a suitable heat exchanger includes a Liebert XDP Water-Based Coolant Pumping Unit, which may be purchased from Directnet, Inc. doing business as 42U of Broomfield, Colo.
  • The flow of cooled water is received as a continuous flow from an external supply or source 310 of cooled water. By way of a non-limiting example, the flow of cooled water received may have a temperature of about 45 degrees Fahrenheit to about 55 degrees Fahrenheit. Optionally, the flow of cooled water may reside in a closed loop 312 that returns the heated, previously cooled water to the external source 310 of cooled water to be cooled again. The closed loop 312 and the water/refrigerant heat exchanger 300 are spaced apart from the carriages 70 and the refrigerant is brought thereto. Thus, the closed loop 312 flow of cooled water and the water/refrigerant heat exchanger 300 are segregated from the computing equipment 102 of the data center 10.
  • The flow of cooled water is transported to the container 12 by a first water line 318 and is transported away from the container 12 by a second water line 320. The container 12 includes a T-shaped inlet valve 330 that directs a portion of the flow of cooled water received from the first water line 318 to each of the vertical cooling systems 100A and 100B (see FIG. 5). The container 12 includes a T-shaped outlet valve 332 that directs the flow of return water received from both of the vertical cooling systems 100A and 100B (see FIG. 5) to the second water line 320.
  • An inlet pipe 334 is coupled between one outlet port of the inlet valve 330 and the water/refrigerant heat exchanger 300 of the vertical cooling system 100B. The inlet pipe 334 carries a portion of the flow of cooled water to the water/refrigerant heat exchanger 300. A similar inlet pipe (not shown) is coupled between the other outlet port of the inlet valve 330 and the water/refrigerant heat exchanger 300 of the vertical cooling system 100A.
  • An outlet pipe 336 is coupled between the water/refrigerant heat exchanger 300 of the vertical cooling system 100B and one inlet port of the outlet valve 332. The outlet pipe 336 carries the flow of return water from the water/refrigerant heat exchanger 300 to the outlet valve 332. A similar outlet pipe (not shown) is coupled between the water/refrigerant heat exchanger 300 of the vertical cooling system 100A and the other inlet port of the outlet valve 332.
  • The flow of cooled water flowing within the inlet pipe 334 may cool the inlet pipe below the condensation temperature of moisture in the air within the interior portion 60 of the container 12. Thus, water may condense on the inlet pipe 334 and drip therefrom. Similarly, the flow of return water flowing within the outlet pipe 336 may cool the outlet pipe below the condensation temperature of moisture in the air within the interior portion 60 of the container 12 causing water to condense on the outlet pipe and drip therefrom.
  • A basin or drip pan 340 may be positioned below the inlet and outlet pipes 334 and 336. Any condensed water dripping from the inlet and outlet pipes 334 and 336 may drip into the drip pan 340. The drip pan 340 includes an outlet or drain 342 through which condensed water exits the drip pan 340. The drain 342 may extend through the floor portion 32 of the container 12 and may be in open communication with the environment outside the container 12. As is appreciated by those of ordinary skill in the art, external piping, hoses, and the like may be coupled to the drain for the purposes of directing the condensed water away from the container 12.
  • Together the inlet pipe 334 and drip pan 340 form a passive dehumidification system 350 that limits the humidity inside the container 12 without consuming any additional electrical power beyond that consumed by the vertical cooling systems 100A and 100B (see FIG. 5). In some implementations, the passive dehumidification system 350 includes the outlet pipe 336. The amount of dehumidification provided by the passive dehumidification system 350 may be determined at least in part by the surface area of the components (e.g., the inlet pipe 334, the outlet pipe 336, the water/refrigerant heat exchanger 300, the inlet valve 330, the outlet valve 332, and the like) upon which water condenses.
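The condensation behavior above follows from basic psychrometrics: the 45-55 °F chilled-water flow can pull the pipe surface below the dew point of the container air. A minimal sketch using the Magnus dew-point approximation; the 24 °C / 50% relative-humidity interior conditions are assumed purely for illustration:

```python
import math

# Minimal sketch of why water condenses on the inlet pipe 334: the pipe
# surface, cooled by 45-55 F supply water, can sit below the dew point of the
# container air. Uses the Magnus approximation; interior conditions of
# 24 C / 50% RH are assumed for illustration only.

def dew_point_c(temp_c, rel_humidity_pct):
    """Magnus-formula dew point approximation (valid roughly 0-60 C)."""
    a, b = 17.27, 237.7
    gamma = a * temp_c / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return b * gamma / (a - gamma)

def fahrenheit_to_celsius(f):
    return (f - 32.0) * 5.0 / 9.0

interior_dew_point = dew_point_c(24.0, 50.0)   # ~12.9 C for the assumed interior air
pipe_surface = fahrenheit_to_celsius(45.0)     # ~7.2 C (coldest supply water)
print(pipe_surface < interior_dew_point)       # True: moisture condenses and drips
```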
  • Within the vertical cooling system 100B, the flow of refrigerant flows through a closed loop 352. The closed loop 352 includes a thermally insulated refrigerant supply manifold 354 and a thermally insulated refrigerant return manifold 356. The refrigerant supply manifold 354 carries cooled refrigerant to a plurality of thermally insulated supply conduits 360, each coupled to one of a plurality of refrigerant/air heat exchangers 370. In the embodiment illustrated, two heat exchangers 370 are provided for each carriage 70. However, this is not a requirement. A plurality of thermally insulated return conduits 372, each coupled to one of the plurality of heat exchangers 370, carry heated refrigerant from the plurality of heat exchangers 370 to the refrigerant return manifold 356. The thermal insulation applied to the supply manifold, return manifold, supply conduits, and return conduits prevents condensation from dripping onto the servers located below the manifolds and conduits. Because the embodiment illustrated includes two heat exchangers 370 for each carriage 70, the plurality of supply conduits 360 and the plurality of return conduits 372 each include ten conduits. The refrigerant return manifold 356 carries heated refrigerant received from the heat exchangers 370 back to the water/refrigerant heat exchanger 300 to be cooled again by the flow of cooled water therein.
  • The refrigerant supply manifold 354, supply conduits 360, the refrigerant return manifold 356, and return conduits 372 may include one or more flow regulators or valves 358 configured to control or restrict the flow of the refrigerant therethrough. In the embodiment depicted in FIG. 10, the refrigerant supply manifold 354 includes one valve 358 before the first supply conduit 360 regulating the flow of refrigerant into the supply conduits 360. In the embodiment depicted in FIG. 10, the supply conduits 360 each include one valve 358 regulating the flow of refrigerant to each of the heat exchangers 370. By selectively adjusting the flow of refrigerant through the valves 358, the amount of cooling supplied to each of the heat exchangers 370 may be adjusted.
  • The vertical cooling system 100B may include one or more temperature sensors 376 coupled to the refrigerant supply manifold 354, the supply conduits 360, the refrigerant return manifold 356, and/or the return conduits 372. Each of the temperature sensors 376 may be used to monitor the temperature of the flow of refrigerant and generate a temperature signal. As mentioned above, the vertical cooling system 100B may include the cooling system controller 380, which may be located inside the cooling unit 300. The cooling system controller 380 may be coupled to the inlet valve 330 and the temperature sensor(s) 376. In such embodiments, the cooling system controller 380 is configured to increase or decrease a flow rate of the cooled water through the first water line 318 and the inlet valve 330 based upon the temperature signal(s) received from the temperature sensor(s) 376 for the purpose of decreasing or increasing the temperature of the flow of refrigerant within the closed loop 352 of the vertical cooling system 100B. In this manner, the temperature of the flow of refrigerant within the closed loop 352 may be adjusted by modifying the flow rate of the cooled water used to cool the flow of refrigerant.
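The control behavior described above can be sketched as a simple proportional rule: raise the chilled-water flow when the refrigerant runs warm, lower it when the refrigerant runs cold. The patent does not specify a control law, so the setpoint, gain, and limits below are illustrative assumptions:

```python
# Illustrative sketch of the cooling system controller 380 behavior described
# above: adjust the chilled-water flow rate based on the refrigerant
# temperature signal. The proportional rule, setpoint, gain, and flow limits
# are assumptions -- the patent specifies none of them.

def adjust_water_flow(current_flow_gpm, refrigerant_temp_f,
                      setpoint_f=60.0, gain_gpm_per_deg=0.5,
                      min_flow=0.0, max_flow=40.0):
    """Return a new chilled-water flow rate based on the temperature signal."""
    error = refrigerant_temp_f - setpoint_f           # positive when too warm
    new_flow = current_flow_gpm + gain_gpm_per_deg * error
    return max(min_flow, min(max_flow, new_flow))     # clamp to valve limits

print(adjust_water_flow(10.0, 66.0))  # refrigerant too warm -> 13.0 (more cooling water)
print(adjust_water_flow(10.0, 56.0))  # refrigerant too cold -> 8.0 (less cooling water)
```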
  • If any of the refrigerant leaks from the vertical cooling system 100B, it does so in a gas or vapor form. Thus, even if a refrigerant leak occurs, it does not leak or drip onto the computing equipment 102. The refrigerant supply manifold 354, supply conduits 360, the refrigerant return manifold 356, and return conduits 372 in which the refrigerant circulates have a temperature above the condensation temperature of the moisture in the air within the interior portion 60 of the container 12. Thus, water does not condense on the refrigerant supply manifold 354, supply conduits 360, the refrigerant return manifold 356, and return conduits 372. As a result, the flow of refrigerant does not expose the computing equipment 102 to dripping water (from condensation).
  • Referring to FIG. 4, each of the heat exchangers 370 has a coil assembly 373. The refrigerant flows from the supply conduits 360 into each of the heat exchangers 370 and circulates through its coil assembly 373. The air above the carriages 70 is warm, having been heated by the computing equipment 102. The heated air travels upward through the heat exchangers 370 and is cooled by the refrigerant. As may best be viewed in FIGS. 4 and 5, each of the heat exchangers 370 is implemented as a radiator style evaporator with its coil assembly 373 arranged at an angle relative to the front portion 214 and the open top portion 212 of the carriages 70. As is appreciated by those of ordinary skill in the art, the coil assembly 373 has one or more cooling surfaces (not shown) whereat heat is exchanged between the air external to the coil assembly 373 and the refrigerant flowing inside the coil assembly 373. The coil assembly 373 of the heat exchangers 370 may be angled to maximize an amount of cooling surface for the space available for positioning of the heat exchangers, thereby providing a maximum amount of cooling capacity. For example, an inside angle “A” defined between the front portion 214 of the carriages 70 and the coil assembly 373 may range from about 144 degrees to about 158 degrees. Thus, an angle of about 144 degrees to about 158 degrees may be defined between the coil assembly 373 and the open top portions 212 of the carriages 70.
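The surface-area benefit of angling the coil can be seen with simple planar geometry: a coil meeting the opening at an inside angle A spans 1/cos(180° − A) times the opening's width, fitting that much more cooling surface over the same opening. This is a sketch under that simplified geometric assumption:

```python
import math

# Sketch of why the angled coil assembly 373 provides more cooling surface:
# a coil meeting its opening at an inside angle A is tilted (180 - A) degrees
# from the opening plane, so it spans 1/cos(180 - A) times the opening width.
# This simple planar-geometry model is an illustrative assumption.

def coil_area_multiplier(inside_angle_deg):
    """Cooling-surface multiplier relative to a coil lying flat in the opening."""
    tilt_from_opening = 180.0 - inside_angle_deg      # acute tilt angle, degrees
    return 1.0 / math.cos(math.radians(tilt_from_opening))

print(round(coil_area_multiplier(158.0), 3))  # 1.079 (shallow tilt, ~8% more surface)
print(round(coil_area_multiplier(144.0), 3))  # 1.236 (steeper tilt, ~24% more surface)
```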
  • The cooling capacity of the heat exchanger 370 may also depend at least in part on the amount of refrigerant flowing in its coil assembly 373. As mentioned above, by adjusting the valves 358, the amount of refrigerant flowing from each of the supply conduits 360 into each of the heat exchangers 370 may be adjusted. In this manner, the cooling capacity of the vertical cooling system 100B may be customized for each carriage 70, a portion of each carriage, and the like. Further, the cooling capacity may be determined at least in part based on the amount of heat expected to be produced by the computing equipment 102 mounted within each of the carriages, portions of the carriages, and the like. By way of a non-limiting example, the flow of refrigerant from the supply conduits 360 into the heat exchangers 370 may be customized for a particular distribution of computing equipment 102 (e.g., blade servers) within the container 12. Further, the valves 358 in the refrigerant supply manifold 354 may be used to control the flow of refrigerant to all of the heat exchangers 370 of the vertical cooling system 100B. Similarly, a valve (not shown) in the refrigerant return manifold 356 may be used to restrict the flow of refrigerant from all of the heat exchangers 370 of the vertical cooling system 100B.
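One way to customize refrigerant flow for a particular equipment distribution, as described above, is to apportion the flow to each heat exchanger by the heat load expected below it. The proportional-split rule and the kW figures are illustrative assumptions:

```python
# Hedged sketch of customizing refrigerant flow via the valves 358: apportion
# the total flow to each heat exchanger 370 by the share of heat expected from
# the equipment below it. The split rule and kW figures are illustrative.

def allocate_refrigerant(total_flow, heat_loads_kw):
    """Split total refrigerant flow across heat exchangers by heat-load share."""
    total_heat = sum(heat_loads_kw)
    return [total_flow * load / total_heat for load in heat_loads_kw]

# Example: three carriage positions producing 20 kW, 10 kW, and 10 kW.
flows = allocate_refrigerant(100.0, [20.0, 10.0, 10.0])
print(flows)  # [50.0, 25.0, 25.0] -- the hottest position gets half the flow
```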
  • A plurality of bent ducts or conduits 390 may be coupled between each of the heat exchangers 370 and at least a portion of the open top portion 212 of an adjacent carriage 70 to direct heated air rising from the carriage 70 into the heat exchanger 370. In the embodiment illustrated, one bent conduit 390 is coupled between a single heat exchanger 370 and a portion (e.g., approximately half) of the open top portion 212 of an adjacent carriage 70. Each bent conduit 390 has a bent portion 392 and defines a bent travel path for the heated air expelled from the carriage 70 into the heat exchanger 370. By directing the heated air rising from the carriage 70 along the roof portion 30 of the container 12, the bent portions 392 help prevent the formation of a back pressure in the upper plenums 90A and 90B along the roof portion 30 that could push the heated air back into the open top portions 212 of the carriages 70. In the embodiment depicted, the bent conduit 390 includes an internal baffle 394 that bifurcates the bent conduit 390 along the bent travel path.
  • A sealing member 396 is positioned between the back portions 216 of the carriages 70 and the first and second longitudinal side portions 14 and 16. Similarly, a sealing member 397 is positioned between the front portions 214 of the carriages 70 and the heat exchangers 370. The sealing members 396 and 397 help seal the upper plenums 90A and 90B from the remainder of the interior portion 60 of the container 12. The sealing members 396 and 397 may be constructed from any suitable material known in the art including foam.
  • The air cooled by the heat exchangers 370 is pushed therefrom by the air moving assemblies 260 and flows downwardly from the angled heat exchangers 370 toward the walkway 74 on the floor portion 32 of the container 12. As discussed above, the walkway 74 includes the perforated portion 76 that permits air to flow therethrough and into the lower plenums 46. If the laterally extending framing members 44 are implemented with a C-shaped cross-sectional shape, air may flow laterally inside the open inside portion 47 of the laterally extending framing members 44. In other words, the open inside portion 47 of the C-shaped laterally extending framing members 44 may be considered part of an adjacent lower plenum 46.
  • Once inside one of the lower plenums 46, the air may flow beneath the carriages 70. Because the laterally extending framing members 44 extend from beneath the walkway 74 to beneath the carriages 70 arranged along both the first and second longitudinal side portions 14 and 16, air is directed laterally by the laterally extending framing members 44 from beneath the walkway 74 toward and below the carriages 70. Once beneath the carriages 70, the air is drawn upward by the air moving assemblies 260 of the carriages and into the carriages 70, and through and around the computing equipment 102. As the air is heated by the computing equipment 102, the heated air rises up through the carriage 70, and into the bent conduit 390, which directs the heated air into the heat exchangers 370 associated with the carriage to be cooled again.
  • As mentioned above, each of the carriages 70 includes air moving devices 264 (see FIG. 5). An amount of power consumed by the air moving devices 264 to adequately cool the computing equipment 102 may be determined at least in part by how well air flows from the carriages 70 and into the heat exchangers 370. Thus, the shape of the bent conduits 390 in the upper plenums 90A and 90B may determine at least in part the amount of power consumed by the air moving devices 264. Accordingly, the bent conduits 390 may be configured to reduce or minimize the amount of power consumed by the air moving devices 264.
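The leverage that duct shape has over fan power follows from the standard fan affinity laws: airflow scales with fan speed, while fan power scales with the cube of fan speed. So a conduit path that delivers the required airflow at lower fan speed cuts power disproportionately. The specific numbers below are illustrative:

```python
# Sketch of why the shape of the bent conduits 390 affects fan power: by the
# fan affinity laws, airflow scales linearly with fan speed while power scales
# with the cube of fan speed. The 90% speed figure below is illustrative.

def relative_fan_power(speed_ratio):
    """Fan power relative to baseline, for a given fan speed ratio (affinity law)."""
    return speed_ratio ** 3

# If better-shaped conduits let the fans deliver the required airflow at 90%
# of their former speed, fan power drops to roughly 73% of baseline.
print(round(relative_fan_power(0.9), 3))  # 0.729
```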
  • If the container 12 is located in an environment in which the air outside the container has a temperature suitable for cooling the computing equipment 102 (see FIG. 6) mounted inside the carriages 70, the container may include openings through which air from the outside environment may flow into the container to cool the computing equipment 102. The container may also include openings through which air heated by the computing equipment 102 may exit the container into the outside environment. In such embodiments, some of the air cooling components of the vertical cooling systems 100A and 100B (see FIG. 5) may be omitted from the data center 10.
  • FIG. 11 provides a data center 400 for use in an environment having a temperature suitable for cooling the computing equipment 102 (see FIG. 6) mounted inside the carriages 70. For ease of illustration, like reference numerals have been used to identify like components of the data center 400 and the data center 10 (see FIG. 5). The data center 400 includes a container 402, substantially similar to the container 12 (see FIG. 5). For ease of illustration, only aspects of the container 402 that differ from those of container 12 will be described in detail.
  • The container 402 includes a first plurality of upper openings 410A, a second plurality of upper openings 410B, a first plurality of lower openings 412A, and a second plurality of lower openings 412B. The first plurality of upper openings 410A and the first plurality of lower openings 412A extend along the first longitudinal side portion 14 of the container 402. The second plurality of upper openings 410B and the second plurality of lower openings 412B extend along the second longitudinal side portion 16 of the container 402. The first and second plurality of upper openings 410A and 410B provide open communication between the upper plenums 90A and 90B, respectively, and the environment outside the container 402. The first and second plurality of lower openings 412A and 412B provide open communication between the lower plenums 46 and the environment outside the container 402.
  • Cool air is drawn into the lower plenums 46 by the air moving assemblies 260 mounted inside the carriages 70 through the first and second plurality of lower openings 412A and 412B. Air heated by the computing equipment 102 (see FIG. 6) is pushed from the upper plenums 90A and 90B by the air moving assemblies 260 through the first and second plurality of upper openings 410A and 410B, respectively. In this embodiment, the humidity of the air inside the container 402 is controlled by controlling the humidity of the air outside the container 402.
  • Optionally, the data center 400 includes louvers 420. In the embodiment illustrated in FIG. 11, a single louver 420 is received inside each of the first and second plurality of upper openings 410A and 410B and a single louver 420 is received inside each of the first and second plurality of lower openings 412A and 412B. However, this is not a requirement.
  • In alternate implementations discussed below, the louvers 420 may cover the first and second plurality of upper openings 410A and 410B and the first and second plurality of lower openings 412A and 412B. By way of a non-limiting example, a first louver may cover a single one of the first plurality of upper openings 410A and a second different louver may cover a single one of the second plurality of upper openings 410B. Similarly, a third louver may cover a single one of the first plurality of lower openings 412A and a fourth louver may cover a single one of the second plurality of lower openings 412B. By way of another non-limiting example, a single louver may cover more than one of the first plurality of upper openings 410A, more than one of the second plurality of upper openings 410B, more than one of the first plurality of lower openings 412A, or more than one of the second plurality of lower openings 412B.
  • The louvers 420 may be selectively opened and closed to selectively transition the data center 400 between an open system state in which at least one of the louvers 420 is open and a closed system state in which all of the louvers 420 are closed. Based on the external environmental factors, the data center 400 may operate in the open system state to exploit “free air” cooling when appropriate and switch to the closed system state when necessary (e.g., the temperature of the air in the outside environment is too hot or too cold, the air in the outside environment is too humid, the air in the outside environment includes too many contaminants, and the like).
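The open/closed decision described above can be sketched as a simple rule over the outside-environment conditions. The patent lists only the kinds of conditions that force the closed state (too hot, too cold, too humid, too contaminated); the thresholds below are illustrative assumptions:

```python
# Illustrative sketch of switching the data center 400 between the open system
# state ("free air" cooling) and the closed system state. All thresholds are
# assumptions; the patent names the condition types but gives no values.

def choose_system_state(outside_temp_f, outside_humidity_pct, contaminant_index):
    too_cold = outside_temp_f < 40.0
    too_hot = outside_temp_f > 85.0
    too_humid = outside_humidity_pct > 80.0
    too_dirty = contaminant_index > 100.0
    if too_cold or too_hot or too_humid or too_dirty:
        return 'closed'   # close all louvers 420; rely on internal cooling
    return 'open'         # at least one louver open; exploit "free air" cooling

print(choose_system_state(70.0, 45.0, 20.0))   # 'open'
print(choose_system_state(95.0, 45.0, 20.0))   # 'closed' (too hot outside)
```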
  • Optionally, as illustrated in FIGS. 11 and 12, the data center 400 may omit the source 310 of cooled water, the chilled water/refrigerant heat exchanger 300, the refrigerant supply manifold 354, the refrigerant return manifold 356, the supply conduits 360, the return conduits 372, the refrigerant/air heat exchangers 370, the bent conduits 390, the T-shaped inlet valve 330, the T-shaped outlet valve 332, the first water line 318, the second water line 320, the inlet pipe 334, and the outlet pipe 336. In such embodiments, the data center 400 may remain in the open system state during operation and transition to a closed system state only when the computing equipment 102 (see FIG. 6) is powered down.
  • In some implementations, the louvers 420 are configured such that all of the louvers 420 are either open or closed at the same time. For example, each of the louvers 420 may include a plurality of blades 422 (illustrated in an open position) selectively openable and closable by a control switch (not shown). When the switch is placed in the closed position, all of the blades 422 of the louvers 420 are closed and when the switch is in the open position all of the blades 422 of the louvers 420 are open.
  • Optionally, the data center 400 includes one or more covers, chimneys, or similar structures (not shown) configured to allow air to flow from the first and second plurality of upper openings 410A and 410B and, at the same time, prevent precipitation (rain, snow, etc.) from entering the container 402 through the first and second plurality of upper openings 410A and 410B.
  • Referring to FIG. 12, an alternate embodiment of the louvers 420 is provided. Louvers 430 are configured to be coupled to the roof portion 30 of the container 402 adjacent the second plurality of upper openings 410B and to extend outwardly away from the roof portion 30 of the container 402. The louvers 430 are further configured to be coupled to the roof portion 30 of the container 402 adjacent the first plurality of upper openings 410A (see FIG. 11) and to extend outwardly away from the roof portion 30 of the container 402. The louvers 430 are also configured to be coupled to the floor portion 32 of the container 402 adjacent one or more of the second plurality of lower openings 412B and to extend outwardly away from the floor portion 32 of the container 402. The louvers 430 are further configured to be coupled to the floor portion 32 of the container 402 adjacent one or more of the first plurality of lower openings 412A (see FIG. 11) and to extend outwardly away from the floor portion 32 of the container 402.
  • Each of the louvers 430 includes an assembly (not shown) configured to selectively open to provide air flow between the interior portion 60 of the container 402 and the outside environment and to selectively close to cut off air flow between the interior portion 60 of the container 402 and the outside environment. The louvers 430 may be configured to be opened and closed at the same time using any method known in the art. Further, each of the louvers 430 may include a filter (not shown) configured to prevent contaminants and particulate matter (e.g., dust, insects, and the like) from entering the interior portion 60 of the container 402.
  • FIGS. 13 and 14 provide a data center 450 for use in an environment having a temperature suitable for cooling the computing equipment 102 (see FIG. 6) mounted inside the carriages 70. For ease of illustration, like reference numerals have been used to identify like components of the data center 450 and the data centers 10 and 400. The data center 450 includes a container 452, substantially similar to the container 12 (see FIG. 1). For ease of illustration, only aspects of the container 452 that differ from those of container 12 will be described in detail.
  • Like the data center 400 (see FIGS. 11 and 12), the data center 450 includes the first and second plurality of upper openings 410A and 410B. However, the data center 450 omits the first and second plurality of lower openings 412A and 412B. Instead, the data center 450 includes a first plurality of side openings 456A and a second plurality of side openings 456B. The first plurality of side openings 456A extends along the first longitudinal side portion 14 of the container 452 and the second plurality of side openings 456B extends along the second longitudinal side portion 16 of the container 452.
  • The first and second plurality of side openings 456A and 456B provide open communication between the environment outside the container 452 and the lower plenums 46 (see FIG. 11). Cool air is drawn into lower plenums 46 by the air moving assemblies 260 (see FIG. 11) through the first and second plurality of side openings 456A and 456B. Air heated by the computing equipment 102 (see FIG. 6) is pushed from the upper plenums 90A and 90B (see FIG. 11) by the air moving assemblies 260 through the first and second plurality of upper openings 410A and 410B. In this embodiment, the humidity of the air inside the container 452 is controlled by controlling the humidity of the air outside the container 452.
  • In FIG. 13, a louver 420 is received inside each of the first and second plurality of upper openings 410A and 410B and the first and second plurality of side openings 456A and 456B are covered by louvers 560 substantially similar to the louvers 420. In FIG. 14, the first and second plurality of upper openings 410A and 410B are illustrated without louvers and the first and second plurality of side openings 456A and 456B are covered by louver assemblies 562 that extend outwardly away from the container 452.
  • Instead of blades, the louver assemblies 562 include openings or slots 564. Each of the louver assemblies 562 includes an assembly (not shown) configured to selectively open to provide air flow between the interior portion 60 of the container 452 and the outside environment and to selectively close to cut off air flow between the interior portion 60 of the container 452 and the outside environment. The louver assemblies 562 may be configured to be opened and closed at the same time using any method known in the art. Further, each of the louver assemblies 562 may include a filter (not shown) configured to prevent particulate matter (e.g., dust, insects, and the like) from entering the interior portion 60 of the container 452.
  • The foregoing described embodiments depict different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
  • Modular Embodiment
  • Aspects of the modular embodiment relate to a data center, comprising modules that perform specific functions associated with the operation of a data center, where the modules can be connected together to form a functional data center to satisfy specific use requirements. Many of the functions and individual components used in the data center contained within a container are used in the modular embodiment and function in an identical or similar manner. Only the differences between the two embodiments will be addressed in the following description.
  • In a typical embodiment, the modular data center will consist of at least one facilities module 650, at least one computing equipment module 652, and one end cap with personnel door 660. A complete modular data center of a preferred embodiment will function identically to a data center contained within a container. The environment inside the data center will be climate controlled to provide a suitable environment for the operation of computing equipment and associated hardware. The external support services may include at least one data connection 152, at least one power connection 112A, and at least one supply of cool water 310. The data center modules may be preconfigured with the desired computing equipment and support interfaces to minimize setup time, cost, and technical knowledge. The modular data center may provide an efficient self-contained solution suitable for applications in standard office spaces and other work environments where the availability of space and support services to implement a standard data center may be limited or not available.
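The minimum composition just described (at least one facilities module 650, at least one computing equipment module 652, and one end cap with personnel door 660) can be expressed as a simple validation check. All names in this sketch are illustrative and not part of the patent disclosure.

```python
# Illustrative module-type labels for the modular data center sketch.
FACILITIES = "facilities_module_650"
COMPUTING = "computing_equipment_module_652"
END_CAP_DOOR = "end_cap_with_door_660"
END_CAP_PLAIN = "end_cap_658_661"


def is_valid_modular_data_center(modules):
    """Check the minimum configuration described in the text: at least
    one facilities module, at least one computing equipment module, and
    one end cap with a personnel door."""
    return (modules.count(FACILITIES) >= 1
            and modules.count(COMPUTING) >= 1
            and modules.count(END_CAP_DOOR) >= 1)
```

Larger configurations simply append additional computing equipment modules to the list before validation.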
  • The facilities module 650, computing equipment modules 652, and end caps 660 and 661 are designed to be connected together to form a continuous barrier separating the interior environment from the exterior environment. This will allow for the interior temperature, humidity, and air flow to be maintained at the optimum levels required for the efficient operation of the computing equipment. The outward facing portions of the frame members 602, 604, 606, 608, and 610 provide an exterior mating surface 628 which is used to mate the modules together or mount modular walls 640. Each exterior mating surface 628 is smooth, straight, and uniform, which allows pairs of exterior mating surfaces 628 to come into full contact along the length of the mating surfaces 628. The exterior mating surfaces 628 of a facilities module 650, a computing equipment module 652, end cap 660, or modular wall 640, when in full contact, will form a continuous barrier between the external environment and internal environment. A gasket-like device, or other similar device known in the industry, may be inserted between the external mating surfaces 628 to facilitate the forming of a barrier between the interior and exterior environment. After the modules are set in place and mated together such that a barrier is formed, a ‘C’-style clamp may be applied at a plurality of locations around the mating surfaces to hold the modules in place and maintain continuity between the mating surfaces 628. It is to be appreciated that any method known in the industry may be used to hold the modules together in close proximity, thereby maintaining continuity between the sealing surfaces and preserving the environmental barrier. By way of non-limiting example, standard nuts, bolts, and washers may be used in conjunction with matching pre-drilled holes through the mating surfaces to hold the modules and/or end cap together.
  • Modular Wall and Base Frame Design
  • The modular walls 640 of the modular data center consist of three layers: an inner wall 642, an outer wall 646, and an insulation layer 644 located between the inner wall 642 and the outer wall 646. The three layers are connected together, by any method known in the industry, to form the modular wall 640. The inner wall 642 has a mating surface 628 located around the outside perimeter. The mating surface is of a width that allows a modular wall 640 to fully engage the exterior mating surface 628 of the modules. By way of non-limiting example, the width of the modular wall's mating surface 628 is 2 inches.
  • A modular wall 640 is connected to the exterior mating surface 628 of a module frame whereby a continuous barrier is formed between the internal and external environments. The modular wall 640 is connected to the external mating surface 628 of a module by any method generally known in the industry. By way of non-limiting example, the modular wall 640 may be connected to the exterior surface 628 by way of screws, washers, and threaded inserts that use predrilled holes through the exterior mating surface 628 of the basic frame 600 and in the modular wall 640.
  • The facilities module and the computing equipment module each comprise a base frame 600. The frame consists of two lower longitudinally extending frame members 604, two lower transversely extending frame members 602, two upper longitudinally extending frame members 608, and two upper transversely extending frame members 606. The frame also consists of four vertically extending frame members 610. The twelve extending frame members, when combined together, form the base frame 600, which provides the necessary structural support required by the additional interior frame members, computing equipment, and other hardware. Corner support braces 612 may be used to provide additional structural support for the base frame 600. Each intersection of frame members may contain up to three corner braces 612. The base frame 600 will have additional support members 618, 622 and 624 added to it as necessary depending on the type of module to be built and the use requirements associated with the module.
  • Facilities Module
  • Referring to FIGS. 20 and 25, the facilities module 650 consists of a base frame 600. Connected to the base frame 600 is a first side modular wall 653, opposite a second side modular wall 654. The module also contains an upper modular wall 655 and a lower modular wall 656. Also connected to the base frame 600 is an end modular wall 658. The side opposite the end modular wall 658 is open and contains the external mating surface 628 (not shown) to allow for the connection to a computing equipment module 652.
  • Similar to the data center contained within a container, the facilities module may contain one or more of the following: water/refrigerant heat exchanger 300, inlet T-shaped valve 330, outlet T-shaped valve 332, a basin or drip pan 340, power distribution panel 1208, disconnect switch, humidifier 123, dehumidifier 125, humidity control unit, controller unit 134, power supplies, lighting system, internal private network, UPS, and DC control system. It is to be appreciated that the UPS may be located in the computing equipment module depending on the operational requirements of the modular data center. The functions performed by the above mentioned components in a modular data center are similar, if not identical, to the functions performed by the components in a data center contained within a container.
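The facilities module houses the humidifier 123, dehumidifier 125, and a humidity control unit. A minimal sketch of a dead-band control rule such a unit might apply is shown below; the setpoints and function name are illustrative assumptions, not values from the patent.

```python
def humidity_action(relative_humidity, low=40.0, high=60.0):
    """Hypothetical control rule for the humidity control unit: run the
    humidifier 123 below the low setpoint, run the dehumidifier 125
    above the high setpoint, and run neither inside the dead band.
    The 40-60% band is an illustrative assumption."""
    if relative_humidity < low:
        return "humidify"
    if relative_humidity > high:
        return "dehumidify"
    return "idle"
```

A dead band between the two setpoints prevents the humidifier and dehumidifier from cycling against each other near a single target value.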
  • As an alternative to refrigerant being circulated to individual refrigerant/air heat exchangers located above each of the upright equipment receiving areas of each module, all cooled air may be generated within the facilities module and then supplied to each module to remove heat generated by the computing equipment and associated hardware.
  • An additional alternative to the above mentioned cooling system is a water/refrigerant heat exchanger located within each computing equipment module. The cooled water will be supplied to the computing equipment module's 652 water/refrigerant heat exchanger via the facilities module 650. The refrigerant will circulate within a closed loop that includes a refrigerant/air heat exchanger. The complete refrigerant loop will be contained within each module for ease of modular data center 699 assembly and maintenance.
  • Computing Equipment Module
  • Referring to FIGS. 17, 19, 23 and 24, the computing equipment module consists of a base frame 600. Connected to the base frame 600 is a first side modular wall 653, opposite a second side modular wall 654. The module also contains an upper modular wall 655 and a lower modular wall 656. The ends of the module are open to allow for the module to be connected to another computing equipment module 652, a facilities module 650, or an end cap 660 or 661.
  • The computing equipment module contains transversely extending C-shaped frame members 614 that are laterally spaced apart to form a series of lower air plenums 616. The lower air plenums 616 allow air to flow from the center aisle 615 down through the perforated floor 663, transversely through the lower air plenums 616, upward into and through the equipment receiving area 670, into the upper air plenum 617, and then back into the center aisle 615. Above the transversely extending C-shaped frame members 614 are located four longitudinally extending floor support members 618, laterally spaced apart, which are supported by the transversely extending C-shaped frame members 614. Four longitudinally extending side support members 622, two on each side and laterally spaced apart, are mounted to the base frame 600. Two longitudinally extending top support members 624, laterally spaced apart, are mounted to the base frame 600. The computing equipment receiving areas 670, or other module hardware, will be mounted to, or supported by, the longitudinally extending frame members 618, 622 and 624. Also mounted above the transversely extending frame members 614, adjacent to the perforated floor 663 and in front of the equipment receiving areas 670, are cable conduits with covers 620 to allow for the efficient and manageable routing of various cables between modules.
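The closed circulation loop just described can be captured as an ordered sequence of stages; the labels below are paraphrased from the text and the loop check is an illustrative sketch.

```python
# Air circulation path in the computing equipment module, as described:
# center aisle -> perforated floor -> lower plenums -> equipment
# receiving area -> upper plenum -> back to the center aisle.
AIR_PATH = [
    "center aisle 615",
    "perforated floor 663",
    "lower air plenums 616",
    "equipment receiving area 670",
    "upper air plenum 617",
    "center aisle 615",
]


def is_closed_loop(path):
    """The described circulation is a closed loop: air returns to the
    stage where it started."""
    return len(path) > 1 and path[0] == path[-1]
```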
  • The module may also contain four vertically extending support members (not shown), two on each side, which are laterally spaced apart and mounted adjacent to the first and second side portions, to provide additional support for the equipment receiving area or other module hardware. Additionally, the module may contain two transversely extending frame members (not shown), laterally spaced apart and mounted adjacent to the upper or roof portion of the module, which can be used to provide additional support for the vertical cooling system or other module hardware.
  • Referring to FIGS. 18 and 19, the equipment receiving area 670 may consist of an equipment receiving carriage 630. Similar to the data center 10 contained within a container, the function of the equipment receiving area is to store computing equipment 102 or other associated hardware to support data center functions such as air moving assemblies 260. The design of the carriage is very similar to the carriage 70 described above. The equipment receiving carriage 630 of the modular data center 699 consists of a front upright support 632A, a rear upright support 632B, front to back extending members 672 that are connected between the front and rear upright supports 632A-B, front carriage vertical support 680A, rear carriage vertical support 680B, carriage front to rear extending members 682, front carriage rail 678A, and a rear carriage rail 678B. The front and rear carriage rails 678A-B of the equipment receiving carriage 630 may be mounted to isolators 86 or directly to longitudinally extending floor support members 618. The back of the equipment receiving carriage 630 may be mounted to isolators 86 or directly to longitudinally extending side support members 622A-B.
  • The front upright support 632A may contain openings 636A-C, which allow for the mounting of networking equipment 244 or other computing hardware to support the computing equipment 102 located in areas S1-S4. The front to rear extending members 672 form transverse cable conduits 634A-C that may be used to route and manage the various cables associated with connecting computing and networking equipment.
  • Each computing equipment module may contain a center aisle portion 615 which exists between the front edges of the equipment receiving areas 670. The center aisle portion 615 will be wide enough to allow for the computing equipment 102, which is mounted within the equipment receiving carriage 630, to be “racked” out to allow for inspection and maintenance.
  • Referring to FIG. 23, an alternative embodiment of the computing equipment module contains an insulated personnel door 648 and is generally designated 662. The personnel door 648 will allow access to the center aisle portion 615. In this embodiment, only one equipment receiving area 670 is located within the module space and the personnel door 648 is located opposite the equipment receiving area 670. This module embodiment may be used when the space where the modular data center 699 is to be located is not sufficient to allow for personnel access to the outside end areas. This module may replace any regular computing equipment module 652 that is part of the modular data center.
  • End Cap
  • Referring to FIG. 22, the end cap with personnel door is similar in construction to the end portion of the facilities module and is generally designated 660. The end cap 660 consists of a lower transversely extending member 602, an upper transversely extending member 606, and two vertically extending frame members 610. Additionally, the frame contains corner braces 612 at the intersection of the transversely extending frame members 602 and 606, and the vertically extending frame members 610. The end cap 660 has an exterior mating surface 628 which is used to mate the end cap 660 to the external mating surface 628 of a facilities module 650, thereby forming a continuous barrier between the inside and outside environment. An end modular wall with personnel door 657 is connected to the exterior mating surface 628 on the outside of the end cap frame to create a continuous barrier between the inside and outside environment.
  • If the alternative embodiment of the computing equipment module 662, where an equipment receiving area 670 is removed to allow for a personnel door 648, is implemented, the end cap may not include a personnel door and is generally designated 661. In this implementation, the end modular wall with personnel door 657 is replaced by end modular wall 658.
  • Split Module
  • To support the need to locate a modular data center 699 where access to the area for installation is supported by a standard freight elevator of limited size or is limited by other factors that will prevent the delivery of standard size modules, the modules may be manufactured such that the modules can be separated along the longitudinal centerline. Allowing for longitudinal separation would enable the individual modules to be separated prior to loading them onto an elevator or moving them through a space of restricted size, and then reassembled in the designated data center space. The functionality of the modular data center 699 would not be limited by the split module design.
  • While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this invention and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this invention. Furthermore, it is to be understood that the invention is solely defined by the appended claims. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. 
In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations).
  • Accordingly, the invention is not limited except as by the appended claims.

Claims (1)

1. A modular data center comprising:
a facilities module having a base frame of longitudinal, vertical, and transverse frame members which form a rectangular shaped volume and define two side portions, two end portions, a top portion and a bottom portion wherein said frame members have external mating surfaces, a chilled water inlet connection for receiving chilled water from an external source, a chilled water outlet connection for exiting the chilled water from the interior portion of the modular data center, one or more water/refrigerant heat exchangers that receive said chilled water from said chilled water inlet and returns said chilled water to said chilled water outlet, said water/refrigerant heat exchangers also receiving refrigerant from and returning refrigerant to air/refrigerant heat exchangers located internally to the modular data center, a power connection to receive power from an external source, a power distribution system consisting of said power connection, switches, breakers, and electrical panels which control and distribute power to the equipment located internally to the modular data center, an environment control system for sensing and controlling the environmental conditions internal to the modular data center, a network connection, a controller for controlling and modifying conditions internal to the modular data center, modular walls comprising an outer layer, a middle insulation layer, and an inner layer having a mating surface which is in direct communication with said side portions, one said end portion, said top portion, and said bottom portion;
a computing equipment module having a base frame of longitudinal, vertical and transverse frame members which form a rectangular shaped volume and define two side portions, two end portions, a top portion and a bottom portion wherein said frame members have external mating surfaces, modular walls comprising an outer layer, a middle insulation layer, and an inner layer having a mating surface, where said mating surfaces of said modular walls are in direct communication with said side portions, said top portion, and said bottom portion, an upper plenum, a lower plenum, one or more equipment receiving areas located between said upper and said lower plenum and adjacent to said side portions, a center aisle section located between said equipment receiving areas, a floor section located at the bottom of said center aisle section and above said lower plenum, a lighting system, a carriage located in the equipment receiving areas and having a plurality of upwardly and longitudinally directed air moving devices, a cable tray system for routing cables between modules, one or more air/refrigerant heat exchangers where the refrigerant lines of said air/refrigerant heat exchangers are in communication with one of said water/refrigerant heat exchangers; and
an end cap having a frame consisting of two vertically extending frame members and two transversely extending frame members wherein said frame members have external mating surfaces, a modular wall comprising an outer layer, a middle insulation layer, and an inner layer having a mating surface, wherein said mating surfaces of said modular wall are in direct communication with said frame, and a door located integrally to said modular wall for personnel access.
US13/195,817 2008-12-31 2011-08-01 Data center Abandoned US20120147552A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/195,817 US20120147552A1 (en) 2008-12-31 2011-08-01 Data center
CN2012102718440A CN103257952A (en) 2011-08-01 2012-08-01 Data center

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/347,415 US7990710B2 (en) 2008-12-31 2008-12-31 Data center
US13/195,817 US20120147552A1 (en) 2008-12-31 2011-08-01 Data center

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/347,415 Division US7990710B2 (en) 2008-12-31 2008-12-31 Data center

Publications (1)

Publication Number Publication Date
US20120147552A1 true US20120147552A1 (en) 2012-06-14

Family

ID=42284683

Family Applications (8)

Application Number Title Priority Date Filing Date
US12/347,415 Expired - Fee Related US7990710B2 (en) 2008-12-31 2008-12-31 Data center
US13/159,222 Expired - Fee Related US8833094B2 (en) 2008-12-31 2011-06-13 Data center
US13/166,786 Expired - Fee Related US8842430B2 (en) 2008-12-31 2011-06-22 Data center
US13/195,817 Abandoned US20120147552A1 (en) 2008-12-31 2011-08-01 Data center
US13/195,805 Expired - Fee Related US8842420B2 (en) 2008-12-31 2011-08-01 Data center
US13/195,814 Abandoned US20120140415A1 (en) 2008-12-31 2011-08-01 Data center
US14/493,082 Abandoned US20150011152A1 (en) 2008-12-31 2014-09-22 Data center
US14/494,530 Expired - Fee Related US9116536B2 (en) 2008-12-31 2014-09-23 Data center


Country Status (1)

Country Link
US (8) US7990710B2 (en)


TW201405289A (en) * 2012-07-16 2014-02-01 Hon Hai Prec Ind Co Ltd Container data center
CN103577383A (en) * 2012-07-31 2014-02-12 希莱斯凯尔公司 Data center
CN103577384A (en) * 2012-08-01 2014-02-12 希莱斯凯尔公司 Data center
US9258930B2 (en) 2012-09-04 2016-02-09 Amazon Technologies, Inc. Expandable data center with side modules
US8833001B2 (en) 2012-09-04 2014-09-16 Amazon Technologies, Inc. Expandable data center with movable wall
US9618991B1 (en) 2012-09-27 2017-04-11 Google Inc. Large-scale power back-up for data centers
CN104685984A (en) 2012-09-28 2015-06-03 惠普发展公司,有限责任合伙企业 Cooling assembly
CN103729043A (en) * 2012-10-12 2014-04-16 英业达科技有限公司 Servo system
US8931221B2 (en) * 2012-11-21 2015-01-13 Google Inc. Alternative data center building designs
US9713289B2 (en) * 2013-01-28 2017-07-18 Ch2M Hill Engineers, Inc. Modular pod
CN104919914B (en) 2013-01-31 2017-10-27 慧与发展有限责任合伙企业 Component, system and the method for removing heat for providing liquid cooling
US9198310B2 (en) 2013-03-11 2015-11-24 Amazon Technologies, Inc. Stall containment of rack in a data center
CA2914797A1 (en) * 2013-05-06 2014-11-13 Green Revolution Cooling, Inc. System and method of packaging computing resources for space and fire-resistance
US9351430B2 (en) 2013-06-13 2016-05-24 Microsoft Technology Licensing, Llc Renewable energy based datacenter cooling
US9851726B2 (en) 2013-09-04 2017-12-26 Panduit Corp. Thermal capacity management
CN203691803U (en) * 2013-12-27 2014-07-02 中兴通讯股份有限公司 Plug-in box and terminal
US20170223866A1 (en) * 2014-01-08 2017-08-03 Nautilus Data Technologies, Inc. Thermal containment system with integrated cooling unit for waterborne or land-based data centers
US11297742B2 (en) * 2014-01-08 2022-04-05 Nautilus True, Llc Thermal containment system with integrated cooling unit for waterborne or land-based data centers
CN104955286A (en) * 2014-03-27 2015-09-30 鸿富锦精密电子(天津)有限公司 Cabinet
US20150305204A1 (en) * 2014-04-18 2015-10-22 Hon Hai Precision Industry Co., Ltd. Data center with cooling system
WO2015175693A1 (en) 2014-05-13 2015-11-19 Green Revolution Cooling, Inc. System and method for air-cooling hard drives in liquid-cooled server rack
US10465492B2 (en) 2014-05-20 2019-11-05 KATA Systems LLC System and method for oil and condensate processing
US9357681B2 (en) * 2014-05-22 2016-05-31 Amazon Technologies, Inc. Modular data center row infrastructure
US9483090B1 (en) * 2014-07-21 2016-11-01 Google Inc. Self-contained power and cooling domains
KR101491418B1 (en) * 2014-07-31 2015-02-12 주식회사 유니트하우스 Container house
CN105451504B (en) * 2014-08-19 2018-02-23 阿里巴巴集团控股有限公司 Computer room, data center and data center systems
US9431798B2 (en) * 2014-09-17 2016-08-30 Rosendin Electric, Inc. Various methods and apparatuses for a low profile integrated power distribution platform
US9414531B1 (en) * 2014-09-24 2016-08-09 Amazon Technologies, Inc. Modular data center without active cooling
US10129611B2 (en) 2014-09-27 2018-11-13 Rf Code, Inc. System and method for monitoring sensor output
CN105578835B (en) * 2014-10-14 2018-01-16 鸿富锦精密工业(深圳)有限公司 Container data center
US10582635B1 (en) 2015-02-04 2020-03-03 Amazon Technologies, Inc. Portable data center
WO2016131138A1 (en) * 2015-02-17 2016-08-25 Vert.Com Inc Modular high-rise data centers and methods thereof
US9701323B2 (en) 2015-04-06 2017-07-11 Bedloe Industries Llc Railcar coupler
CN104909079B (en) * 2015-05-25 2017-06-09 浪潮电子信息产业股份有限公司 A kind of embedded AHU frame type containers data center systems
JP6613665B2 (en) * 2015-07-10 2019-12-04 富士通株式会社 Electronics
TW201714042A (en) * 2015-10-13 2017-04-16 鴻海精密工業股份有限公司 Container data center
CN106604600A (en) * 2015-10-14 2017-04-26 鸿富锦精密工业(深圳)有限公司 Container-type data center
KR102159975B1 (en) 2016-01-07 2020-09-28 주식회사 엘지화학 Container
WO2017131722A1 (en) * 2016-01-28 2017-08-03 Hewlett Packard Enterprise Development Lp Enclosure monitoring devices having battery backup
US9801308B2 (en) * 2016-03-09 2017-10-24 Dell Products Lp Managing cable connections and air flow in a data center
US9795062B1 (en) 2016-06-29 2017-10-17 Amazon Technologies, Inc. Portable data center for data transfer
US10965525B1 (en) * 2016-06-29 2021-03-30 Amazon Technologies, Inc. Portable data center for data transfer
US10398061B1 (en) * 2016-06-29 2019-08-27 Amazon Technologies, Inc. Portable data center for data transfer
CN107663954A (en) * 2016-07-28 2018-02-06 苏州安瑞可机柜系统有限公司 A kind of modular server computer room
CN106659086B (en) * 2016-12-30 2019-08-13 华为数字技术(苏州)有限公司 A kind of refrigerated container and container data center
WO2018132866A1 (en) * 2017-01-17 2018-07-26 The Data Exchange Network Ltd. High-density modular data centre
WO2018145201A1 (en) 2017-02-08 2018-08-16 Upstream Data Inc. Blockchain mine at oil or gas facility
US20180295750A1 (en) * 2017-04-04 2018-10-11 Scalematrix Modular rack with adjustable size structures
US11054457B2 (en) 2017-05-24 2021-07-06 Cisco Technology, Inc. Safety monitoring for cables transmitting data and power
US10809134B2 (en) 2017-05-24 2020-10-20 Cisco Technology, Inc. Thermal modeling for cables transmitting data and power
US11431420B2 (en) 2017-09-18 2022-08-30 Cisco Technology, Inc. Power delivery through an optical system
US10541758B2 (en) 2017-09-18 2020-01-21 Cisco Technology, Inc. Power delivery through an optical system
DE202017006578U1 (en) * 2017-12-22 2019-03-25 Thomas Roggenkamp climate chamber
US11093012B2 (en) 2018-03-02 2021-08-17 Cisco Technology, Inc. Combined power, data, and cooling delivery in a communications network
US20200033837A1 (en) * 2018-03-04 2020-01-30 Cube Hydro Partners, LLC Cooling system and method for cryptocurrency miners
US10732688B2 (en) 2018-03-09 2020-08-04 Cisco Technology, Inc. Delivery of AC power with higher power PoE (power over ethernet) systems
US10281513B1 (en) 2018-03-09 2019-05-07 Cisco Technology, Inc. Verification of cable application and reduced load cable removal in power over communications systems
US10631443B2 (en) 2018-03-12 2020-04-21 Cisco Technology, Inc. Splitting of combined delivery power, data, and cooling in a communications network
US10672537B2 (en) 2018-03-30 2020-06-02 Cisco Technology, Inc. Interface module for combined delivery power, data, and cooling at a network device
US10958471B2 (en) 2018-04-05 2021-03-23 Cisco Technology, Inc. Method and apparatus for detecting wire fault and electrical imbalance for power over communications cabling
US20190343024A1 (en) * 2018-05-01 2019-11-07 DCIM Solutions, LLC Mini-split hvac ducted return and supply system
US10735105B2 (en) 2018-05-04 2020-08-04 Cisco Technology, Inc. High power and data delivery in a communications network with safety and fault protection
US10588241B2 (en) 2018-05-11 2020-03-10 Cisco Technology, Inc. Cooling fan control in a modular electronic system during online insertion and removal
US11038307B2 (en) 2018-05-25 2021-06-15 Cisco Technology, Inc. Cable power rating identification for power distribution over communications cabling
US11359865B2 (en) * 2018-07-23 2022-06-14 Green Revolution Cooling, Inc. Dual Cooling Tower Time Share Water Treatment System
US10582639B1 (en) 2018-09-14 2020-03-03 Cisco Technology, Inc. Liquid cooling distribution in a modular electronic system
US11191185B2 (en) 2018-09-14 2021-11-30 Cisco Technology, Inc. Liquid cooling distribution in a modular electronic system
WO2020076320A1 (en) * 2018-10-11 2020-04-16 General Electric Company Systems and methods for mechanical load isolation in transportable storage container
US10763749B2 (en) 2018-11-14 2020-09-01 Cisco Technology, Inc. Multi-resonant converter power supply
CN109289424A (en) * 2018-11-14 2019-02-01 苏州安瑞可信息科技有限公司 A kind of warehouse style micromodule
USD915766S1 (en) * 2018-11-15 2021-04-13 Bitmain Technologies Inc. Container
US11061456B2 (en) 2019-01-23 2021-07-13 Cisco Technology, Inc. Transmission of pulse power and data over a wire pair
US10790997B2 (en) 2019-01-23 2020-09-29 Cisco Technology, Inc. Transmission of pulse power and data in a communications network
EP3924801A4 (en) 2019-02-15 2022-11-16 Scot Arthur Johnson Transportable datacenter
WO2020163968A2 (en) 2019-02-15 2020-08-20 Scot Arthur Johnson Transportable datacenter
US10680836B1 (en) 2019-02-25 2020-06-09 Cisco Technology, Inc. Virtualized chassis with power-over-Ethernet for networking applications
US11456883B2 (en) 2019-03-13 2022-09-27 Cisco Technology, Inc. Multiple phase pulse power in a network communications system
US10849250B2 (en) 2019-03-14 2020-11-24 Cisco Technology, Inc. Integration of power, data, cooling, and management in a network communications system
US11212937B2 (en) 2019-03-21 2021-12-28 Cisco Technology, Inc. Method and system for preventing or correcting fan reverse rotation during online installation and removal
AU2020276342A1 (en) 2019-05-15 2021-12-16 Upstream Data Inc. Portable blockchain mining system and methods of use
US11063630B2 (en) 2019-11-01 2021-07-13 Cisco Technology, Inc. Initialization and synchronization for pulse power in a network system
US11252811B2 (en) 2020-01-15 2022-02-15 Cisco Technology, Inc. Power distribution from point-of-load with cooling
US11088547B1 (en) 2020-01-17 2021-08-10 Cisco Technology, Inc. Method and system for integration and control of power for consumer power circuits
US11853138B2 (en) 2020-01-17 2023-12-26 Cisco Technology, Inc. Modular power controller
US11490541B2 (en) 2020-01-29 2022-11-01 Daedalus Industrial Llc Building management system container and skids
US11438183B2 (en) 2020-02-25 2022-09-06 Cisco Technology, Inc. Power adapter for power supply unit
US11637497B2 (en) 2020-02-28 2023-04-25 Cisco Technology, Inc. Multi-phase pulse power short reach distribution
US11320610B2 (en) 2020-04-07 2022-05-03 Cisco Technology, Inc. Integration of power and optics through cold plate for delivery to electronic and photonic integrated circuits
US11307368B2 (en) 2020-04-07 2022-04-19 Cisco Technology, Inc. Integration of power and optics through cold plates for delivery to electronic and photonic integrated circuits
US11785748B2 (en) * 2020-05-29 2023-10-10 Baidu Usa Llc Backup cooling for a data center and servers
USD982145S1 (en) 2020-10-19 2023-03-28 Green Revolution Cooling, Inc. Cooling system enclosure
USD998770S1 (en) 2020-10-19 2023-09-12 Green Revolution Cooling, Inc. Cooling system enclosure
US11805624B2 (en) 2021-09-17 2023-10-31 Green Revolution Cooling, Inc. Coolant shroud
US11925946B2 (en) 2022-03-28 2024-03-12 Green Revolution Cooling, Inc. Fluid delivery wand
CN115183418B (en) * 2022-05-31 2023-07-28 国网浙江省电力有限公司嘉兴供电公司 Indoor temperature regulation and control method and system for intelligent building
CN115912086B (en) * 2022-11-29 2023-06-23 浙江华发电气有限公司 High-low voltage intelligent power distribution cabinet

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3925679A (en) * 1973-09-21 1975-12-09 Westinghouse Electric Corp Modular operating centers and methods of building same for use in electric power generating plants and other industrial and commercial plants, processes and systems
US5150277A (en) * 1990-05-04 1992-09-22 At&T Bell Laboratories Cooling of electronic equipment cabinets
US5544012A (en) * 1993-12-28 1996-08-06 Kabushiki Kaisha Toshiba Cooling system for cooling electronic apparatus
US5642827A (en) * 1993-12-02 1997-07-01 Maersk Container Industri As Refrigerated container and a gable frame
US5966956A (en) * 1996-11-20 1999-10-19 Shelter Technologies, Inc. Portable refrigerated storage unit
US20040075984A1 (en) * 2002-10-03 2004-04-22 Bash Cullen E. Cooling of data centers
US20050099770A1 (en) * 2003-03-19 2005-05-12 James Fink Data center cooling system
US20050207116A1 (en) * 2004-03-22 2005-09-22 Yatskov Alexander I Systems and methods for inter-cooling computer cabinets
US20070044411A1 (en) * 2005-05-09 2007-03-01 Meredith Walter D Panel structures
US7278273B1 (en) * 2003-12-30 2007-10-09 Google Inc. Modular data center
US20080223051A1 (en) * 2004-08-11 2008-09-18 Lawrence Kates Intelligent thermostat system for monitoring a refrigerant-cycle apparatus
US7738251B2 (en) * 2006-06-01 2010-06-15 Google Inc. Modular computing environments
US7883023B1 (en) * 2007-01-29 2011-02-08 Hewlett-Packard Development Company, L.P. Fluid moving device having a fail-safe operation

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5830057A (en) * 1996-10-17 1998-11-03 Coldwall Technologies Limited Integrated temperature-controlled container
US5758464A (en) * 1997-01-30 1998-06-02 Celotex Corporation Insulation system for metal furred walls
US5991163A (en) * 1998-11-12 1999-11-23 Nexabit Networks, Inc. Electronic circuit board assembly and method of closely stacking boards and cooling the same
FI108962B (en) * 1999-08-20 2002-04-30 Nokia Corp Cabinet cooling system
US6557357B2 (en) * 2000-02-18 2003-05-06 Toc Technology, Llc Computer rack heat extraction device
US7718889B2 (en) * 2001-03-20 2010-05-18 American Power Conversion Corporation Adjustable scalable rack power system and method
US6658091B1 (en) * 2002-02-01 2003-12-02 @Security Broadband Corp. LIfestyle multimedia security system
US6836030B2 (en) 2002-05-31 2004-12-28 Verari Systems, Inc. Rack mountable computer component power distribution unit and method
US6867966B2 (en) 2002-05-31 2005-03-15 Verari Systems, Inc. Method and apparatus for rack mounting computer components
US6909611B2 (en) 2002-05-31 2005-06-21 Verari System, Inc. Rack mountable computer component and method of making same
US6801428B2 (en) 2002-05-31 2004-10-05 Racksaver, Inc. Rack mountable computer component fan cooling arrangement and method
US6826036B2 (en) * 2002-06-28 2004-11-30 Hewlett-Packard Development Company, L.P. Modular power distribution system for use in computer equipment racks
US6938433B2 (en) * 2002-08-02 2005-09-06 Hewlett-Packard Development Company, L.P. Cooling system with evaporators distributed in series
US6603660B1 (en) * 2002-08-12 2003-08-05 Netrix Technologies, Inc. Remote distribution frame
US7127865B2 (en) * 2002-10-11 2006-10-31 Douglas Robert B Modular structure for building panels and methods of making and using same
US6775137B2 (en) * 2002-11-25 2004-08-10 International Business Machines Corporation Method and apparatus for combined air and liquid cooling of stacked electronics components
US7170745B2 (en) * 2003-04-30 2007-01-30 Hewlett-Packard Development Company, L.P. Electronics rack having an angled panel
US6819563B1 (en) * 2003-07-02 2004-11-16 International Business Machines Corporation Method and system for cooling electronics racks using pre-cooled air
US6981915B2 (en) * 2004-03-15 2006-01-03 Hewlett-Packard Development Company, L.P. Airflow volume control system
US7330350B2 (en) * 2004-06-04 2008-02-12 Cray Inc. Systems and methods for cooling computer modules in computer cabinets
US8596079B2 (en) * 2005-02-02 2013-12-03 American Power Conversion Corporation Intelligent venting
US7624740B2 (en) * 2005-07-01 2009-12-01 Philip Morris Usa Inc. Controlled ventilation air curing system
US7365973B2 (en) * 2006-01-19 2008-04-29 American Power Conversion Corporation Cooling system and method
US20090126600A1 (en) * 2006-03-15 2009-05-21 Zupancich Ronald J Insulated cargo container and methods for manufacturing same using vacuum insulated panels and foam insulated liners
CA2653808C (en) * 2006-06-01 2014-10-14 Exaflop Llc Controlled warm air capture
US7856838B2 (en) * 2006-09-13 2010-12-28 Oracle America, Inc. Cooling air flow loop for a data center in a shipping container
US7854652B2 (en) * 2006-09-13 2010-12-21 Oracle America, Inc. Server rack service utilities for a data center in a shipping container
US7511960B2 (en) * 2006-09-13 2009-03-31 Sun Microsystems, Inc. Balanced chilled fluid cooling system for a data center in a shipping container
US7724513B2 (en) * 2006-09-25 2010-05-25 Silicon Graphics International Corp. Container-based data center
GB2446454B (en) * 2007-02-07 2011-09-21 Robert Michael Tozer Cool design data centre
US7430118B1 (en) * 2007-06-04 2008-09-30 Yahoo! Inc. Cold row encapsulation for server farm cooling system
US8721409B1 (en) * 2007-07-31 2014-05-13 Amazon Technologies, Inc. Airflow control system with external air control
US9435552B2 (en) * 2007-12-14 2016-09-06 Ge-Hitachi Nuclear Energy Americas Llc Air filtration and handling for nuclear reactor habitability area
US8395621B2 (en) * 2008-02-12 2013-03-12 Accenture Global Services Limited System for providing strategies for increasing efficiency of data centers
US8033122B2 (en) * 2008-03-04 2011-10-11 American Power Conversion Corporation Dehumidifier apparatus and method
US7961463B2 (en) * 2008-04-02 2011-06-14 Microsoft Corporation Power efficient data center
US8251785B2 (en) * 2008-10-31 2012-08-28 Cirrus Logic, Inc. System and method for vertically stacked information handling system and infrastructure enclosures
US8141374B2 (en) * 2008-12-22 2012-03-27 Amazon Technologies, Inc. Multi-mode cooling system and method with evaporative cooling
US7990710B2 (en) * 2008-12-31 2011-08-02 Vs Acquisition Co. Llc Data center
US8902569B1 (en) * 2012-07-27 2014-12-02 Amazon Technologies, Inc. Rack power distribution unit with detachable cables

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10888034B2 (en) 2007-06-14 2021-01-05 Switch, Ltd. Air handling unit with a canopy thereover for use with a data center and method of using the same
US11889630B2 (en) 2007-06-14 2024-01-30 Switch, Ltd. Data center facility including external wall penetrating air handling units
US9999166B1 (en) 2007-06-14 2018-06-12 Switch, Ltd. Integrated wiring system for a data center
US11622484B2 (en) 2007-06-14 2023-04-04 Switch, Ltd. Data center exterior wall penetrating air handling technology
US10356968B2 (en) 2007-06-14 2019-07-16 Switch, Ltd. Facility including externally disposed data center air handling units
US10356939B2 (en) 2007-06-14 2019-07-16 Switch, Ltd. Electronic equipment data center or co-location facility designs and methods of making and using the same
US10178796B2 (en) 2007-06-14 2019-01-08 Switch, Ltd. Electronic equipment data center or co-location facility designs and methods of making and using the same
US11275413B2 (en) 2007-06-14 2022-03-15 Switch, Ltd. Data center air handling unit including uninterruptable cooling fan with weighted rotor and method of using the same
US20110172829A1 (en) * 2008-07-07 2011-07-14 Edouard Serras Method and device for adjusting the temperature and hygrometry inside a building
US8594849B2 (en) * 2008-07-07 2013-11-26 Edouard Serras Method and device for adjusting the temperature and hygrometry inside a building
US20150130352A1 (en) * 2008-12-31 2015-05-14 Stephen V. R. Hellriegel Data center
US8842420B2 (en) * 2008-12-31 2014-09-23 Cirrascale Corporation Data center
US9116536B2 (en) * 2008-12-31 2015-08-25 Cirrascale Corporation Data center
US20120173894A1 (en) * 2008-12-31 2012-07-05 David Driggers Data center
US9864669B1 (en) * 2012-02-22 2018-01-09 Amazon Technologies, Inc. Managing data center resources
US9529940B2 (en) 2012-06-19 2016-12-27 AEP Transmission Holding Company, LLC Modular substation protection and control system
RU2509329C1 (en) * 2012-10-22 2014-03-10 Федеральное государственное бюджетное образовательное учреждение высшего профессионального образования "Воронежский государственный университет инженерных технологий" (ФГБОУ ВПО "ВГУИТ") Universal information processing system
US20160157387A1 (en) * 2013-03-15 2016-06-02 Switch Ltd Data Center Facility Design Configuration
US10408712B2 (en) 2013-03-15 2019-09-10 Vertiv Corporation System and method for energy analysis and predictive modeling of components of a cooling system
US9795061B2 (en) * 2013-03-15 2017-10-17 Switch, Ltd. Data center facility design configuration
US10172261B2 (en) * 2013-10-03 2019-01-01 Vertiv Corporation System and method for modular data center
US9850655B2 (en) 2013-10-03 2017-12-26 Liebert Corporation System and method for modular data center
US9814163B2 (en) * 2014-01-09 2017-11-07 Nautilus Data Technologies, Inc. Modular data center deployment method and system for data center vessels
US20170202113A1 (en) * 2014-01-09 2017-07-13 Nautilus Data Technologies, Inc. Modular data center deployment method and system for data center vessels
CN106165019A (en) * 2014-01-09 2016-11-23 鹦鹉螺数据科技有限公司 Modular data center deployment method and system for waterborne data center vessels
US9439322B1 (en) * 2014-01-09 2016-09-06 Nautilus Data Technologies, Inc. Modular data center deployment method and system for waterborne data center vessels
US11825627B2 (en) 2016-09-14 2023-11-21 Switch, Ltd. Ventilation and air flow control with heat insulated compartment
EP3480135A1 (en) * 2017-11-06 2019-05-08 Kärcher Futuretech GmbH Mobile container building for military, humanitarian and/or expedition-like use
US10640998B2 (en) 2017-11-06 2020-05-05 Kaercher Futuretech Gmbh Mobile container building for personnel deployed in military, humanitarian and/or expeditionary operations
CN111857298A (en) * 2020-07-31 2020-10-30 广州狸园科技有限公司 Heat abstractor for computer based on big data service

Also Published As

Publication number Publication date
US20120173894A1 (en) 2012-07-05
US20150011152A1 (en) 2015-01-08
US20100165565A1 (en) 2010-07-01
US20120134104A1 (en) 2012-05-31
US8833094B2 (en) 2014-09-16
US20150130352A1 (en) 2015-05-14
US7990710B2 (en) 2011-08-02
US8842420B2 (en) 2014-09-23
US20120127656A1 (en) 2012-05-24
US20120140415A1 (en) 2012-06-07
US9116536B2 (en) 2015-08-25
US8842430B2 (en) 2014-09-23

Similar Documents

Publication Publication Date Title
US20120147552A1 (en) Data center
US11889630B2 (en) Data center facility including external wall penetrating air handling units
US10624242B2 (en) System and method of packaging computing resources for space and fire-resistance
US20210092866A1 (en) Fluid-cooled data centres without air conditioning, and methods for operating same
US8469782B1 (en) Data center air handling unit
US8180495B1 (en) Air handling control system for a data center
US9392733B2 (en) Data center cooling
ES2392076T3 (en) Cooling system and method
US20210219472A1 (en) Tri-redundant data center power supply system
US11889654B2 (en) Security panels for use in data centers

Legal Events

Date Code Title Description
AS Assignment

Owner name: VS ACQUISITION CO LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BARANDIARAN, ALEIX;REEL/FRAME:027711/0922

Effective date: 20120215

AS Assignment

Owner name: CIRRASCALE CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VS ACQUISITION CO LLC;REEL/FRAME:027967/0791

Effective date: 20120329

AS Assignment

Owner name: VINDRAUGA CORPORATION, A CALIFORNIA CORPORATION, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:CIRRASCALE CORPORATION, A CALIFORNIA CORPORATION;REEL/FRAME:027982/0917

Effective date: 20120330

Owner name: VINDRAUGA CORPORATION, A CALIFORNIA CORPORATION, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:CIRRASCALE CORPORATION, A CALIFORNIA CORPORATION;REEL/FRAME:027982/0891

Effective date: 20120330

AS Assignment

Owner name: VINDRAUGA CORPORATION, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:CIRRASCALE CORPORATION;REEL/FRAME:033568/0634

Effective date: 20140814

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION