WO2022041083A1 - Data center and capacity expansion method - Google Patents

Data center and capacity expansion method

Info

Publication number
WO2022041083A1
Authority
WO
WIPO (PCT)
Prior art keywords
power
module
data center
equipment
computing
Prior art date
Application number
PCT/CN2020/111928
Other languages
English (en)
French (fr)
Inventor
Peng Yonghui (彭永辉)
Original Assignee
Huawei Digital Power Technologies Co., Ltd. (华为数字能源技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Digital Power Technologies Co., Ltd.
Priority to PCT/CN2020/111928 priority Critical patent/WO2022041083A1/zh
Priority to EP20950753.2A priority patent/EP4191364A4/en
Priority to CN202080005884.3A priority patent/CN114503052A/zh
Publication of WO2022041083A1 publication Critical patent/WO2022041083A1/zh
Priority to US18/174,080 priority patent/US20230199998A1/en

Classifications

    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05KPRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00Constructional details common to different types of electric apparatus
    • H05K7/14Mounting supporting structure in casing or on frame or rack
    • H05K7/1485Servers; Data center rooms, e.g. 19-inch computer racks
    • H05K7/1488Cabinets therefor, e.g. chassis or racks or mechanical interfaces between blades and support structures
    • H05K7/1492Cabinets therefor, e.g. chassis or racks or mechanical interfaces between blades and support structures having electrical distribution arrangements, e.g. power supply or data communications
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05KPRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00Constructional details common to different types of electric apparatus
    • H05K7/14Mounting supporting structure in casing or on frame or rack
    • H05K7/1485Servers; Data center rooms, e.g. 19-inch computer racks
    • H05K7/1498Resource management, Optimisation arrangements, e.g. configuration, identification, tracking, physical location
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05KPRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00Constructional details common to different types of electric apparatus
    • H05K7/20Modifications to facilitate cooling, ventilating, or heating
    • H05K7/20709Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K7/20763Liquid cooling without phase change
    • H05K7/20781Liquid cooling without phase change within cabinets for removing heat from server blades
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05KPRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00Constructional details common to different types of electric apparatus
    • H05K7/20Modifications to facilitate cooling, ventilating, or heating
    • H05K7/20709Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K7/20763Liquid cooling without phase change
    • H05K7/2079Liquid cooling without phase change within rooms for removing heat from cabinets

Definitions

  • the present application relates to the technical field of power electronics, and in particular, to a data center and a capacity expansion method.
  • DC: data center
  • Power equipment and various types of computing equipment, such as servers, monitoring equipment, management equipment, and security equipment, can be deployed in a data center room. The power equipment supplies power to the computing devices, which are deployed in different cabinets in the data center room according to their respective types; the information and communication processes of the data center are carried out over fixed connections between them.
  • The construction period of a data center is about 1 to 2 years, while its service life can exceed 10 years.
  • The power and space occupied by the equipment are determined in the initial design of the data center, which means the equipment power in the data center stays fixed over the roughly 10-year life cycle unless the equipment is upgraded.
  • With the rapid development of data centers, the equipment power they require is gradually increasing. For example, the equipment power density of mainstream data centers is currently about 8 to 15 kW per cabinet, and within the next 1 to 2 years it is expected to reach 30 kW per cabinet or higher.
  • As a result, the equipment power set in the initial design can no longer meet the gradually increasing power demand.
  • The computing equipment of the data center therefore needs to be upgraded, for example by increasing device power through software or hardware.
  • However, the existing power equipment in the data center may no longer support the increased equipment power, so new power equipment must be added (referred to as capacity expansion of the data center).
  • During capacity expansion, the existing power equipment and each computing device are connected through fixed wiring, so adding new power equipment requires adding new wiring between the added equipment and each computing device. This makes the wiring in the data center very dense and increases the construction cost of the data center.
  • the present application provides a data center, which is used to avoid the phenomenon of dense wiring during expansion of the data center.
  • In a first aspect, the present application provides a data center. The data center includes N computing devices and K power modules. Each of the K power modules may include a first power device and a bus, and the bus includes a first electrical connection terminal, N intermediate terminals, and a second electrical connection terminal.
  • the first electrical connection terminal is used to connect the first power equipment
  • the N intermediate terminals are respectively used to connect the N computing devices
  • A plug-in node is also provided on the bus between the (I-1)-th and the I-th of the N intermediate terminals, and the plug-in node divides the bus into a first sub-line and a second sub-line.
  • K is a positive integer
  • N and I are positive integers greater than or equal to 2
  • I ≤ N
  • Before the data center is expanded, the plug-in node of each power module connects the first sub-line and the second sub-line, and the second electrical connection end of at least one power module is idle, so that the first power device in the power module can supply power to the N computing devices through the connected first and second sub-lines.
  • When the data center is expanded, the plug-in node of at least one power module disconnects the first sub-line from the second sub-line, and the second electrical connection end of that power module is connected to the expanded second power device.
  • The first power device in the power module then supplies power to L computing devices among the N computing devices through the first sub-line,
  • while the expanded second power device supplies power to the remaining N−L computing devices through the second sub-line.
  • With the plug-in node and the idle electrical connection terminal on the bus, capacity-expanding power equipment can be added directly at the idle terminal when expansion is required, and the plug-in node can be used to disconnect the bus.
  • The expanded power device then supplies power to computing devices through one part of the disconnected bus, while the original power device supplies power through the other part.
  • Before expansion, the original power device alone supplies the N computing devices.
  • After the power supply expansion, the original power equipment and the expanded power equipment jointly supply power to the N computing devices.
  • This design does not require adding new wiring when the data center is expanded; the existing wiring can simply be disconnected to support the power supply of the expanded power equipment.
  • In other words, the wiring in the initial design of the data center is the data center's final wiring.
  • As long as the wiring in the initial design is not dense, the wiring will not become too dense even when the data center is expanded, which helps solve the technical problem of dense wiring during expansion.
  • Moreover, this method does not require rebuilding the data center during expansion, so it also helps reduce the construction cost of the data center.
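The before/after switching logic described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the patent's implementation; the class and attribute names (`PowerModule`, `plug_closed`, `source_for`) are invented here. It models one bus whose plug-in node sits after intermediate terminal I-1, so that opening the node hands devices I..N over to the expanded device on the second terminal.

```python
# Illustrative model (names are hypothetical) of one power module's bus:
# the plug-in node is closed before expansion, so device A1 on the first
# electrical connection terminal feeds all N computing devices; opening
# the node and attaching A2 splits the bus into two powered sub-lines.
class PowerModule:
    def __init__(self, n_devices, plug_after):
        # plug_after = I-1: the plug-in node sits between intermediate
        # terminals I-1 and I (1-based device numbering)
        self.n = n_devices
        self.plug_after = plug_after
        self.plug_closed = True          # first and second sub-lines joined
        self.expanded_device = None      # second terminal idle before expansion

    def source_for(self, device_idx):
        """Return which power source feeds the given device (1-based)."""
        if self.plug_closed or device_idx <= self.plug_after:
            return "A1"                  # original first power device
        return self.expanded_device      # expanded device on second terminal

    def expand(self, new_device):
        self.plug_closed = False         # open the plug-in node
        self.expanded_device = new_device


m = PowerModule(n_devices=6, plug_after=3)
assert all(m.source_for(i) == "A1" for i in range(1, 7))   # before expansion
m.expand("A2")
assert [m.source_for(i) for i in range(1, 7)] == ["A1"] * 3 + ["A2"] * 3
```

Note how no "wiring" (sub-line) is added during `expand`: only the node state and the second terminal change, mirroring the design's claim that the initial wiring is final.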
  • In one possible design, the N computing devices may be placed in one or more computing equipment module boxes, the first power devices of the K power modules may be placed in one or more power equipment module boxes, and the computing equipment module boxes are deployed on the same floor of the data center as the power equipment module boxes.
  • In this way, the N computing devices can be powered directly by the power devices deployed on the same floor. Since the path between computing devices and power devices on the same floor is short, this helps reduce wiring cost and simplify wiring complexity.
  • Further, the one or more computing equipment module boxes can be arranged in parallel to form a computing equipment module box group; in this case, the one or more power equipment module boxes can be placed on one or both sides of the group.
  • The wiring from the power equipment to the computing devices can then be routed from a fixed side position or from both side positions.
  • This helps balance the wiring deployment, facilitates wiring management, and avoids wiring confusion.
  • The side lead-out method also helps shorten the path between the power equipment and the computing equipment, effectively reducing wiring cost.
  • In one possible design, the data center may further include a water inlet main pipe, a water outlet main pipe, and L refrigeration devices, where the L refrigeration devices are used to cool the N computing devices.
  • P water inlets may be arranged on the water inlet main pipe
  • L of the P water inlets are respectively connected to the water inlet ends of the L refrigeration devices
  • P water outlets may be arranged on the water outlet main pipe
  • L of the P water outlets are respectively connected to the water outlet ends of the L refrigeration devices
  • P and L are positive integers, and P > L.
  • Before expansion, the data center realizes the cooling function of the L refrigeration devices through the water inlet operation of the L used water inlets and the water outlet operation of the L used water outlets.
  • When the data center is expanded, one or more of the idle P−L water inlets can be connected to the water inlet ends of one or more expanded refrigeration devices, and one or more of the idle P−L water outlets can be connected to their water outlet ends. In this way, the data center realizes the cooling function of the L original refrigeration devices plus at least one expanded refrigeration device.
  • That is, the data center also supports adding any number of expanded cooling devices while adding expanded power equipment.
  • This improves the cooling effect when equipment power increases, without deploying too many cooling devices in advance, thereby also helping reduce data center capital expenditure.
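The reserved-port idea above can be sketched as a tiny bookkeeping model. All names here (`Manifold`, `attach`, `idle_ports`) are illustrative, not from the patent; the only property taken from the text is that the manifold carries P ports of which only L are used initially, so expanded refrigeration units attach to idle ports without new piping.

```python
# Illustrative sketch (hypothetical names) of a water main with P ports,
# L of which are connected to the original refrigeration devices; the
# remaining P - L ports stay idle until cooling capacity is expanded.
class Manifold:
    def __init__(self, total_ports, initial_units):
        assert total_ports > initial_units          # P > L per the design
        self.ports = [None] * total_ports
        for i in range(initial_units):              # connect the original L units
            self.ports[i] = f"unit-{i + 1}"

    def idle_ports(self):
        return [i for i, u in enumerate(self.ports) if u is None]

    def attach(self, unit_name):
        idx = self.idle_ports()[0]                  # use the first idle port
        self.ports[idx] = unit_name
        return idx


inlet = Manifold(total_ports=8, initial_units=5)    # e.g. P = 8, L = 5
assert len(inlet.idle_ports()) == 3                 # P - L ports held in reserve
inlet.attach("expanded-unit-1")
assert len(inlet.idle_ports()) == 2
```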
  • The L refrigeration devices can be placed in one or more refrigeration equipment module boxes. When there are two or more such boxes, they can be placed side by side with the sides of any two adjacent boxes joined. In this way, there is no extra space between the refrigeration equipment module boxes, which helps reduce space waste in the data center and improve its space utilization.
  • The one or more refrigeration equipment module boxes and the one or more computing equipment module boxes may be arranged on the same floor of the data center, with the refrigeration equipment module boxes placed on one side of the end wall of the computing equipment module boxes. The end wall is arranged on the side perpendicular to the parallel-arranged computing equipment module boxes and may be provided with air supply channels and return air channels, through which the L refrigeration devices in the refrigeration equipment module boxes cool the N computing devices.
  • In this layout, the refrigeration devices in the refrigeration equipment module boxes are separated from the computing devices by only the width of the end wall, so the cold air they emit can be delivered directly to the computing equipment module boxes over a short distance, helping achieve a good cooling effect.
  • The bus of a power module can exit from the power equipment module box where its first power device is located, traverse the one or more refrigeration equipment module boxes, lead out N sub-lines at the end wall to connect the N computing devices respectively, and then continue on to terminate in the power equipment module box where the first power device of another power module is located.
  • In other words, the first electrical connection end of the power module is located in the power equipment module box of its own first power device, while its idle second electrical connection end is located in the power equipment module box of the other power module's first power device.
  • This wiring makes full use of the space in the power equipment module boxes and the refrigeration equipment module boxes, so the space utilization of the data center is further improved. Furthermore, when there are two power devices, placing them in two separate power equipment module boxes also helps balance the device resources deployed across the data center.
  • The one or more refrigeration equipment module boxes placed side by side form a refrigeration equipment module box group.
  • The data center may also include one or more pipe wells.
  • The one or more pipe wells can be arranged on one or both sides of the refrigeration equipment module box group.
  • The pipe wells assist power expansion and cooling expansion, and are used to accommodate the power wiring and pipeline routing on the left and right sides of the refrigeration equipment module boxes.
  • the present application provides a method for expanding the capacity of a data center.
  • the method can be applied to the data center according to any of the designs in the first aspect.
  • The method includes: first detecting the device power of the N computing devices in the data center; if the device power is greater than a preset expansion power threshold, determining at least one power module from the K power modules, and, for each power module in the at least one power module, controlling its
  • first power device to be in a power-off state and disconnecting its plug-in node.
  • After detecting that the expanded second power device is connected to the idle second electrical connection end of the power module, the first power device and the expanded second power device are controlled to be in the power supply state.
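The sequencing of the expansion method can be summarized in a small control-flow sketch. This is only a minimal illustration: the function name, threshold, and module attributes are all hypothetical; the source specifies only the ordering (power off the first device, open the plug-in node, connect the expanded device, then power both on).

```python
# Minimal control-flow sketch (hypothetical names) of the expansion method:
# if measured device power exceeds the preset threshold, run the four-step
# sequence described in the text on the selected power module.
from types import SimpleNamespace


def expand_power_module(module, new_device, device_power, threshold):
    """Apply the expansion sequence when device power exceeds the threshold."""
    if device_power <= threshold:
        return False                       # below threshold: no expansion needed
    module.power_state = "off"             # 1. power off the first power device
    module.plug_closed = False             # 2. disconnect the plug-in node
    module.second_terminal = new_device    # 3. attach the expanded device to the idle end
    module.power_state = "on"              # 4. bring both power devices online
    return True


m = SimpleNamespace(power_state="on", plug_closed=True, second_terminal=None)
assert expand_power_module(m, "A2", device_power=30.0, threshold=15.0)
assert m.plug_closed is False and m.second_terminal == "A2" and m.power_state == "on"
```

Step 3 is gated on detecting the new device's connection in the actual method; here that detection is collapsed into the assignment for brevity.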
  • In this way, the data center can directly use the original wiring to achieve capacity expansion without adding new wiring, thus avoiding dense wiring during expansion, saving the space occupied by wiring, and increasing the space utilization of the data center.
  • Moreover, the wiring in the initial construction of the data center is the final wiring, so even if the data center is expanded later it does not need to be rebuilt and no wiring needs to be added, which also helps reduce the construction cost of the data center.
  • In one possible design, when the data center also includes a water inlet main pipe, a water outlet main pipe, and L refrigeration devices, the temperature of the N computing devices can additionally be detected after controlling the first power device and the expanded second power device to be in the power supply state. If the detected temperature is greater than a preset temperature threshold, the water inlet operation of the inlet main pipe and the water outlet operation of the outlet main pipe can be suspended; then, after it is detected that the expanded refrigeration device is connected to the inlet main pipe and the outlet main pipe, the water inlet and water outlet operations are resumed.
  • the data center can also expand the cooling capacity while expanding the power capacity, thereby helping to match the cooling effect of the computing equipment with the power of the computing equipment.
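The cooling-expansion branch follows the same pattern as the power branch and can be sketched analogously. Again, every name and the threshold value are illustrative assumptions; the source specifies only the ordering (detect over-temperature, pause the water mains, connect the expanded unit, resume flow).

```python
# Companion sketch (hypothetical names) for the cooling-expansion branch:
# if the computing devices run hot after the power expansion, pause the
# water mains, attach the expanded refrigeration unit, then resume flow.
from types import SimpleNamespace


def expand_cooling(plant, new_unit, temperature, temp_threshold):
    """Apply the cooling-expansion sequence when temperature exceeds the threshold."""
    if temperature <= temp_threshold:
        return False
    plant.water_flowing = False       # suspend inlet/outlet main pipe operation
    plant.units.append(new_unit)      # connect the expanded refrigeration unit
    plant.water_flowing = True        # resume water inlet/outlet operation
    return True


plant = SimpleNamespace(water_flowing=True, units=["unit-1"])
assert expand_cooling(plant, "expanded-unit", temperature=40.0, temp_threshold=30.0)
assert plant.water_flowing and len(plant.units) == 2
```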
  • FIG. 1 exemplarily shows a schematic diagram of an optional data center deployment structure;
  • FIG. 2 exemplarily shows a schematic diagram of the deployment architecture of the data center provided in Embodiment 1 of the present application;
  • FIG. 3A exemplarily shows a schematic diagram of a connection state of a data center before capacity expansion in Embodiment 1 of the present application;
  • FIG. 3B exemplarily shows a schematic diagram of a connection state of a data center after capacity expansion in Embodiment 1 of the present application;
  • FIG. 4 exemplarily shows a schematic diagram of the deployment architecture of the data center provided in Embodiment 2 of the present application;
  • FIG. 5A exemplarily shows a schematic diagram of a connection state of a data center before capacity expansion in Embodiment 2 of the present application;
  • FIGS. 5B to 5E exemplarily show schematic diagrams of connection states of the data center after capacity expansion in Embodiment 2 of the present application;
  • FIG. 6 exemplarily shows a schematic diagram of the overall architecture of a data center provided in Embodiment 3 of the present application;
  • FIGS. 7A and 7B exemplarily show schematic diagrams of the deployment architecture of a refrigeration equipment module box during initial construction;
  • FIGS. 8A and 8B exemplarily show schematic diagrams of the deployment architecture of another refrigeration equipment module box during initial construction;
  • FIG. 9A exemplarily shows a schematic diagram of the overall architecture of a data center provided in an embodiment of the present application during initial construction;
  • FIG. 9B exemplarily shows a schematic diagram of the overall architecture of a data center after capacity expansion provided by an embodiment of the present application.
  • In a traditional expansion method, the wiring is rebuilt so that it simultaneously includes the wiring from the original power equipment to the computing devices and the wiring from the new power equipment to the computing devices.
  • this traditional expansion method requires re-planning and building the space of the data center room, which is not only time-consuming and labor-intensive, but also makes the data center unavailable for a long time, which seriously affects the normal service capability of the data center.
  • FIG. 1 exemplarily shows a schematic diagram of an optional data center deployment structure.
  • In this deployment, expansion space and wiring for power equipment are reserved in the initial design.
  • The reserved expansion space for power equipment includes the expansion power positions of channel A and channel B in FIG. 1,
  • and the reserved wiring includes wiring route 1 and wiring route 2 in FIG. 1.
  • When expansion is needed, the reserved capacity can be used directly.
  • However, the reserved cables not only occupy a large space in the data center and reduce its space utilization, but also increase the construction cost of the data center.
  • In addition, this method requires adding new wiring between the newly added power equipment and each computing device. To ensure electrical safety, each computing device must be taken offline during the expansion, so this method also makes the data center unavailable for a long time, resulting in poor service capability.
  • the present application provides a data center, which is used to avoid the phenomenon of dense wiring when expanding the data center, and further realize the online capacity expansion of the data center.
  • In the embodiments of the present application, going online means supplying power,
  • and going offline means powering off.
  • When a power device is online, it can provide power to other devices; when it is offline, it cannot.
  • For example, a switch can be set between the power device and the computing device: when the switch is turned on, the power device is online,
  • and when the switch is turned off, the power device is offline.
  • Alternatively, a switch is provided between the internal power supply of the power device and its output port; turning the switch on brings the power device online, and turning it off takes the power device offline.
  • FIG. 2 exemplarily shows a schematic diagram of the deployment architecture of the data center provided in Embodiment 1 of the present application.
  • the data center may include one power module and N computing devices, that is, computing device 1 and computing device 2 , ..., computing device I-1, computing device I, ..., computing device N-1, and computing device N.
  • The power module may include a bus and a first power device (A1). The two ends of the bus are a first electrical connection end p 1 and a second electrical connection end p 2 ; the first electrical connection end p 1 corresponds to the power device A1, while the second electrical connection end p 2 may correspond to different states before and after the expansion of the data center.
  • the bus respectively leads out N sub-lines at the N intermediate ends to connect the computing device 1 to the computing device N.
  • A plug-in node is also provided between the intermediate terminal k I-1 and the intermediate terminal k I of the bus, and the plug-in node divides the bus into a first sub-line and a second sub-line (not shown in FIG. 2).
  • the first sub-line includes a bus segment from the first electrical connection end p 1 to the plug node
  • the second sub line includes a bus segment from the plug node to the second electrical connection end p 2 .
  • FIG. 3A exemplarily shows a schematic diagram of a connection state of a data center before capacity expansion in Embodiment 1 of the present application
  • FIG. 3B exemplarily shows a schematic diagram of a connection state of a data center after capacity expansion in Embodiment 1 of the present application.
  • the data center is designed according to the architecture shown in FIG. 2 during initial construction, wherein the electrical energy of the power device A1 can support the initial device power required by N computing devices.
  • Before expansion, the second electrical connection terminal p 2 can be set to an idle state (for example, an insulated plug can be snapped onto the second electrical connection terminal p 2 , or it can be covered with insulating material to avoid electric shock).
  • Before the data center is expanded, the power device A1 can be kept offline temporarily while the first sub-line and the second sub-line are plugged together at the plug-in node to connect them; after that, the power device A1 is brought online.
  • In this way, the power device A1 supplies power to computing device 1 through computing device N through the connected first and second sub-lines.
  • When the data center is expanded, the power device A1 can be taken offline first, then the first sub-line and the second sub-line are disconnected at the plug-in node, and the second power device for capacity expansion is added to the data center
  • (A2; to distinguish the original power equipment from the power equipment used for capacity expansion, the latter is hereinafter referred to as the expansion power device). The second electrical connection end p 2 is connected to the expansion power device A2, and finally the power device A1 and the expansion power device A2 are brought online.
  • In this way, the power device A1 supplies power to computing device 1 through computing device I-1 through the first sub-line,
  • and the expansion power device A2 supplies power to computing device I through computing device N through the second sub-line.
  • It should be noted that bringing the power device A1 and the expansion power device A2 online last is only an optional implementation.
  • In another optional implementation, once the first sub-line and the second sub-line are disconnected at the plug-in node, the power of the power device A1 can no longer be transmitted to the second sub-line,
  • so the power device A1 can be brought back online immediately,
  • restoring the power supply to computing device 1 through computing device I-1 in a timely manner.
  • After that, the expansion power device A2 is added to the data center, the second electrical connection terminal p 2 is connected to the expansion power device A2, and finally the expansion power device A2 is brought online.
  • Although the connection between the expansion power device A2 and the second sub-line is added after the power device A1 goes online, the second sub-line has already been disconnected from the power device A1, so this operation is not affected by the power device A1; that is, the operations performed after the power device A1 goes online comply with the electrical safety specification.
  • The capacity expansion solution in Embodiment 1 is described below with a specific scenario.
  • Suppose the data center includes 6 computing devices, the plug-in node is set between the intermediate terminal k 3 and the intermediate terminal k 4 , and both the power device A1 and the expansion power device A2 can provide 6 kW of electrical power. Then:
  • Before the expansion of the data center, the power device A1 provides a total of 6 kW of electrical power for the 6 computing devices. If the 6 computing devices share the electrical power equally, each computing device obtains 1 kW;
  • After the expansion of the data center, the power device A1 provides 6 kW of electrical power for computing devices 1 to 3, while the expansion power device A2 provides 6 kW for computing devices 4 to 6. If the six computing devices share the electrical power equally, each computing device obtains 2 kW.
  • That is, the capacity expansion solution in Embodiment 1 supports doubling the power available to the computing equipment.
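The arithmetic in this scenario can be checked directly (a trivial worked example using only the figures stated above: two 6 kW sources and six devices):

```python
# Worked check of the Embodiment 1 scenario: A1 alone supplies 6 kW to six
# devices before expansion; after expansion A1 and A2 each supply 6 kW to
# three devices, so per-device power doubles.
N_DEVICES = 6
SUPPLY_KW = 6.0

per_device_before = SUPPLY_KW / N_DEVICES          # A1 alone feeds all six
per_device_after = (2 * SUPPLY_KW) / N_DEVICES     # A1 + A2 feed three each

assert per_device_before == 1.0    # 1 kW per device before expansion
assert per_device_after == 2.0     # 2 kW per device: power is doubled
```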
  • In this way, the data center can directly use the original wiring to realize capacity expansion without adding new wiring, thereby avoiding dense wiring during the expansion of the data center, saving the space occupied by wiring, and increasing the space utilization of the data center.
  • Moreover, the wiring in the initial construction of the data center is the final wiring, so even if the data center is expanded later it does not need to be rebuilt and no wiring needs to be added, which also helps reduce the construction cost of the data center.
  • FIG. 4 exemplarily shows a schematic diagram of the deployment architecture of the data center provided in Embodiment 2 of the present application.
  • The data center may include two power modules (i.e., power module 1 and power module 2) and N computing devices, i.e., computing device 1, computing device 2, ..., computing device I-1, computing device I, ..., computing device N-1, and computing device N.
  • The power module 1 includes a bus 1 and a power device A11. The two ends of the bus 1 are an electrical connection end p 11 and an electrical connection end p 12 .
  • The electrical connection end p 11 corresponds to the power device A11, and the electrical connection end p 12 may correspond to different states before and after the data center expansion.
  • the bus 1 is also provided with N intermediate terminals, namely intermediate terminal k 11 , intermediate terminal k 12 , ..., intermediate terminal k 1(I-1) , intermediate terminal k 1I , ..., intermediate terminal k 1(N- 1) and the intermediate terminals k 1N , the bus 1 respectively leads out N sub-lines at the N intermediate terminals to connect the computing device 1 to the computing device N.
  • The power module 2 includes a bus 2 and a power device A21. The two ends of the bus 2 are an electrical connection end p 21 and an electrical connection end p 22 .
  • The electrical connection end p 21 corresponds to the power device A21, and the electrical connection end p 22 may correspond to different states before and after the data center expansion.
  • the bus 2 may also be provided with N intermediate terminals, namely intermediate terminal k 21 , intermediate terminal k 22 , . . . , intermediate terminal k 2(I-1) , intermediate terminal k 2I , ..., intermediate terminal k 2(N -1) and the intermediate terminal k 2N , the bus 2 respectively leads out N sub-lines at the N intermediate terminals to connect the computing device 1 to the computing device N.
  • a plug-in node 1 may also be provided between the intermediate terminal k 1 (I-1) and the intermediate terminal k 1I of the bus 1, and the plug-in node 1 divides the bus 1 into a sub-line 1 and a sub-line 2 (not shown in FIG. 4 ), the sub-line 1 includes a bus segment from the electrical connection terminal p 11 to the plug-in node 1, and the sub-line 2 includes a bus from the plug-in node 1 to the electrical connection terminal p 12 part.
  • Similarly, a plug-in node 2 may also be provided between the intermediate terminal k 2(I-1) and the intermediate terminal k 2I of the bus 2, and the plug-in node 2 divides the bus 2 into a sub-line 3 and a sub-line 4 (not shown in FIG. 4).
  • the sub-line 3 includes a bus segment from the electrical connection terminal p 21 to the plug node 2
  • the sub-line 4 includes a bus segment from the plug node 2 to the electrical connection terminal p 22 .
  • setting plug-in node 1 between the intermediate end k1(I-1) and the intermediate end k1I, and plug-in node 2 between the intermediate end k2(I-1) and the intermediate end k2I, is only an optional implementation manner. In this implementation, both plug-in node 1 and plug-in node 2 separate computing device 1 to computing device I-1 from computing device I to computing device N. In other optional implementation manners, the computing devices separated by plug-in node 1 and plug-in node 2 may also differ.
  • for example, the plug-in node 1 may be set between the intermediate end k1(I-1) and the intermediate end k1I, and the plug-in node 2 between the intermediate end k2I and the intermediate end k2(I+1), so that plug-in node 1 separates computing device 1 to computing device I-1 from computing device I to computing device N, while plug-in node 2 separates computing device 1 to computing device I from computing device I+1 to computing device N.
  • for another example, the plug-in node 1 may be set between the intermediate end k1(I+1) and the intermediate end k1(I+2), and the plug-in node 2 between the intermediate end k2(I-1) and the intermediate end k2I, so that plug-in node 1 separates computing device 1 to computing device I+1 from computing device I+2 to computing device N, while plug-in node 2 separates computing device 1 to computing device I-1 from computing device I to computing device N.
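The placements described above all follow one rule: a plug-in node inserted between intermediate terminals kJ and k(J+1) separates computing device 1 to computing device J from computing device J+1 to computing device N. A minimal sketch of this rule (the helper name is illustrative, not from this application):

```python
# Hypothetical helper: which computing devices end up on each side of a
# plug-in node placed between intermediate terminals k_J and k_(J+1)
# on a bus that feeds computing devices 1..N.

def split_devices(n, j):
    """Return (devices on the original power-device side,
               devices on the capacity-expanding side)."""
    return list(range(1, j + 1)), list(range(j + 1, n + 1))

# Plug-in node between k_(I-1) and k_I with I = 4 and N = 6:
assert split_devices(6, 3) == ([1, 2, 3], [4, 5, 6])
# Plug-in node between k_I and k_(I+1) with the same I and N:
assert split_devices(6, 4) == ([1, 2, 3, 4], [5, 6])
```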
  • FIGS. 5A to 5E exemplarily show schematic diagrams of the connection states of the data center during capacity expansion in Embodiment 2 of the present application. The following describes the capacity expansion process of the data center in Embodiment 2 with reference to FIGS. 5A to 5E:
  • referring to FIG. 5A, the data center is built according to the architecture shown in FIG. 4 during initial construction, wherein the electrical energy of the power equipment A11 and the power equipment A21 can support the initial equipment power required by the N computing devices. In this case, the electrical connection terminals p12 and p22 can be set to an idle state, and the capacity-expanding power equipment need not yet be added; the sub-line 1 and the sub-line 2 are plugged together at plug-in node 1, and the sub-line 3 and the sub-line 4 at plug-in node 2, so that each bus conducts.
  • the power device A11 supplies power to computing device 1 to computing device N through the connected sub-line 1 and sub-line 2, the power device A21 supplies power to computing device 1 to computing device N through the connected sub-line 3 and sub-line 4, and the total electrical power of the N computing devices is jointly provided by the power device A11 and the power device A21.
  • when the equipment power of the N computing devices increases to the point that the power provided by the power device A11 and the power device A21 can no longer support the upgraded equipment power, it is determined that the data center needs to be expanded. Referring to FIG. 5B, during capacity expansion, in order to ensure the safety of electricity consumption, the power device A11 can be taken offline first, with the power device A21 supplying power to computing device 1 to computing device N for a short time.
  • after that, the sub-line 1 and the sub-line 2 can be disconnected at plug-in node 1, the capacity-expanding power device A12 can be added to power module 1, and the electrical connection terminal p12 can be connected to the capacity-expanding power device A12.
  • after the disconnection, the bus 1 is divided by plug-in node 1 into the sub-line 1 and the sub-line 2: the sub-line 1 connects the power device A11 and computing device 1 to computing device I-1, and the sub-line 2 connects the capacity-expanding power device A12 and computing device I to computing device N. Referring to FIG. 5C, the power equipment A11 and the capacity-expanding power equipment A12 can then be brought online.
  • at this time, the power device A11 can supply power to computing device 1 to computing device I-1 through the sub-line 1, the capacity-expanding power device A12 can supply power to computing device I to computing device N through the sub-line 2, and the power device A21 can supply power to computing device 1 to computing device N through the connected sub-line 3 and sub-line 4. The total electrical power of the N computing devices is jointly provided by the power device A11, the capacity-expanding power device A12, and the power device A21.
  • when the equipment power increases further, the data center needs to be expanded a second time. Since power module 1 is saturated (that is, the space reserved during initial construction can no longer accommodate new capacity-expanding power equipment), only power module 2 can be used for the secondary expansion.
  • referring to FIG. 5D, the power device A21 can be taken offline first, with the power device A11 and the capacity-expanding power device A12 supplying power to computing device 1 to computing device N for a short time.
  • after that, the sub-line 3 and the sub-line 4 can be disconnected at plug-in node 2, the capacity-expanding power device A22 can be added to power module 2, and the electrical connection terminal p22 can be connected to the capacity-expanding power device A22.
  • after the disconnection, the bus 2 is divided by plug-in node 2 into the sub-line 3 and the sub-line 4: the sub-line 3 connects the power device A21 and computing device I to computing device N, and the sub-line 4 connects the capacity-expanding power device A22 and computing device 1 to computing device I-1. Referring to FIG. 5E, the power device A21 and the capacity-expanding power device A22 can then be brought online.
  • at this time, the power device A11 supplies power to computing device 1 to computing device I-1 through the sub-line 1, the capacity-expanding power device A22 supplies power to computing device 1 to computing device I-1 through the sub-line 4, the power device A21 supplies power to computing device I to computing device N through the sub-line 3, and the capacity-expanding power device A12 supplies power to computing device I to computing device N through the sub-line 2. The total electrical power of the N computing devices is jointly provided by the power device A11, the capacity-expanding power device A12, the power device A21, and the capacity-expanding power device A22.
  • FIGS. 5B to 5E exemplarily take expanding power module 1 first and then expanding power module 2 as an example. In other implementation manners, power module 2 may also be expanded first, followed by power module 1.
  • in that case, the power device A21 can be taken offline first, with the power device A11 supplying power to computing device 1 to computing device N for a short time; the bus 2 is then disconnected at plug-in node 2, and the power device A21 and the capacity-expanding power device A22 are brought online. Next, the power device A11 is taken offline, with the power device A21 and the capacity-expanding power device A22 supplying power to computing device 1 to computing device N for a short time; finally, the bus 1 is disconnected at plug-in node 1, and the power device A11 and the capacity-expanding power device A12 are brought online. This implementation manner is similar to the process in FIG. 5B to FIG. 5E, except that the power modules are expanded in a different order, and is not repeated here.
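The staged sequence above (take one power module offline while the other carries the full load, split its bus at the plug-in node, add the capacity-expanding device, bring the module back online, then repeat for the other module) can be modeled as a small simulation. This is an illustrative sketch only: the class and method names are not from this application, both modules are simplified to split at the same midpoint, and the invariant checked is simply that every computing device keeps at least one live power feed at every step.

```python
# Illustrative sketch: model two bus power modules feeding N computing devices
# and verify that during a staged expansion every device always has at least
# one live feed (names hypothetical, not from the patent).

class PowerModule:
    def __init__(self, n_devices):
        self.n = n_devices
        self.split = False          # plug-in node disconnected?
        self.primary_on = True      # original power device (A11 / A21)
        self.expansion_on = False   # capacity-expanding device (A12 / A22)

    def feeds(self, device):
        """Return True if this module currently powers the given device (0-based)."""
        if not self.split:                 # whole bus conducts
            return self.primary_on
        if device < self.n // 2:           # sub-line on the primary side
            return self.primary_on
        return self.expansion_on           # sub-line on the expansion side

def all_devices_fed(modules, n):
    return all(any(m.feeds(d) for m in modules) for d in range(n))

def expand(module, modules, n):
    module.primary_on = False              # take the module offline
    assert all_devices_fed(modules, n)     # the other module carries the load
    module.split = True                    # disconnect at the plug-in node
    module.expansion_on = True             # install the expansion device
    module.primary_on = True               # bring both devices online
    assert all_devices_fed(modules, n)

N = 6
mod1, mod2 = PowerModule(N), PowerModule(N)
expand(mod1, [mod1, mod2], N)              # first-stage expansion
expand(mod2, [mod1, mod2], N)              # second-stage expansion
assert all_devices_fed([mod1, mod2], N)
```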
  • the capacity expansion solution in Embodiment 2 is described below with a specific scenario.
  • assuming the value of N is 6, the plug-in node 1 is arranged between the intermediate end k13 and the intermediate end k14, the plug-in node 2 is arranged between the intermediate end k23 and the intermediate end k24, and the power equipment A11, the capacity-expanding power equipment A12, the power equipment A21, and the capacity-expanding power equipment A22 can each provide 6 kW of electrical power, then:
  • before the data center is expanded, the power equipment A11 and the power equipment A21 provide a total of 12 kW of electrical power for the 6 computing devices. If the 6 computing devices share the electrical power equally, each computing device can obtain 2 kW of electrical power;
  • after the data center is expanded, if only the first-stage expansion is performed, the power equipment A11 provides 6 kW of electrical power for computing device 1 to computing device 3. If computing device 1 to computing device 3 share the electrical power equally, each of them can obtain 2 kW of electrical power, the same as before the expansion. Meanwhile, the capacity-expanding power equipment A12 and the power equipment A21 provide 12 kW of electrical power for computing device 4 to computing device 6. If computing device 4 to computing device 6 share the electrical power equally, each of them can obtain 4 kW of electrical power, so the electrical power of computing device 4 to computing device 6 is doubled;
  • after the data center is expanded a second time, the power equipment A11 and the capacity-expanding power equipment A22 provide 12 kW of electrical power for computing device 1 to computing device 3, while the capacity-expanding power equipment A12 and the power equipment A21 provide 12 kW of electrical power for computing device 4 to computing device 6. If computing device 1 to computing device 3 share the electrical power equally, and computing device 4 to computing device 6 likewise share it equally, each computing device can obtain 4 kW of electrical power, and the total electrical power of computing device 1 to computing device 6 is doubled.
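The per-device figures in this scenario can be verified with a few lines of arithmetic (a worked restatement of the numbers above, nothing more):

```python
# Worked check of the Embodiment 2 scenario: N = 6 computing devices, and
# each of A11, A12, A21, A22 supplies 6 kW, shared equally within each group.

unit_kw = 6

# Before expansion: A11 + A21 jointly feed all 6 devices.
assert (unit_kw + unit_kw) / 6 == 2       # 2 kW per device

# After the first-stage expansion: A11 feeds devices 1-3,
# while A12 + A21 feed devices 4-6.
assert unit_kw / 3 == 2                   # devices 1-3: unchanged, 2 kW each
assert (unit_kw + unit_kw) / 3 == 4       # devices 4-6: doubled, 4 kW each

# After the second-stage expansion: A11 + A22 feed devices 1-3,
# and A12 + A21 feed devices 4-6.
assert (unit_kw + unit_kw) / 3 == 4       # every device now obtains 4 kW
```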
  • it can be seen that the capacity expansion solution in Embodiment 2 can support doubling the power of only part of the computing equipment, and can also support doubling the power of all of the computing equipment.
  • the capacity expansion scheme in the second embodiment also has the following beneficial effects:
  • first, the scheme supports step-by-step capacity expansion according to the equipment power: when the power upgrade amount is within the capacity expansion limit of one power module (that is, after capacity-expanding power equipment is added to that power module, the power provided by the power module supports the upgraded equipment power), only that one power module needs to be expanded; when the power upgrade amount exceeds the capacity expansion limit of one power module, one of the power modules can be expanded first, and the other power module can then undergo a second expansion.
  • second, while one power module is being expanded, the other power module can still supply power to the N computing devices, so the data center does not need to go offline even during capacity expansion. This realizes online upgrade and expansion of the data center, which helps keep the data center always available and improves its service capability.
  • the first embodiment and the second embodiment described above respectively use one power module and two power modules as examples to introduce the deployment architecture and capacity expansion process of the data center.
  • in other embodiments, the data center may also include more than two power modules, such as three or more power modules, and the initial deployment architecture of each power module can refer to Embodiment 1 or Embodiment 2.
  • during capacity expansion, capacity-expanding power equipment can first be added to one of the power modules. If the expanded data center can meet the upgraded equipment power requirement, the expansion can end; if it still cannot, capacity-expanding power equipment is added to the next power module, and so on, until the expanded data center meets the equipment power upgrade requirement, or capacity-expanding power equipment has been added to all of the power modules and no further expansion is possible.
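The module-by-module procedure just described can be sketched as a loop (illustrative names; the per-module power figures in the usage lines are example values, not from this application):

```python
# Sketch of the multi-module expansion loop: add capacity-expanding power
# equipment to one module at a time until the data center's total power meets
# the upgraded requirement, or every module has already been expanded.

def expand_until_met(module_powers, expansion_kw, required_kw):
    """module_powers: current output per module (kW). Returns the updated
    list, or None if all modules are expanded and the requirement is unmet."""
    expanded = [False] * len(module_powers)
    powers = list(module_powers)
    while sum(powers) < required_kw:
        try:
            i = expanded.index(False)      # next unexpanded power module
        except ValueError:
            return None                    # nowhere left to expand
        powers[i] += expansion_kw          # add one expansion power device
        expanded[i] = True
    return powers

assert expand_until_met([6, 6], 6, 15) == [12, 6]   # one module suffices
assert expand_until_met([6, 6], 6, 20) == [12, 12]  # both modules expanded
assert expand_until_met([6, 6], 6, 30) is None      # requirement cannot be met
```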
  • presetting three or more power modules during initial construction can not only achieve the beneficial effects of Embodiment 1 and Embodiment 2 but also improve the capacity expansion capability of the data center. However, three or more power modules may occupy considerable space, which reduces the space utilization rate of the data center and increases its initial construction cost.
  • the foregoing embodiments mainly introduce the deployment of the computing equipment and power equipment related to capacity expansion in the data center; the data center may also include other equipment such as cooling equipment.
  • the following introduces a possible deployment solution of a data center from the overall architecture.
  • each type of equipment in the data center can be deployed in a corresponding container, and the container and the equipment deployed therein are pre-integrated in the factory.
  • This type of deployment is also known as a modular data center (or containerized data center).
  • Modular data centers only need 1% of the construction cost of traditional data centers, and can be relocated directly by moving containers. Therefore, modular data centers have flexible mobility and can greatly reduce the deployment cycle of data centers.
  • FIG. 6 exemplarily shows a schematic diagram of the overall architecture of a data center provided in Embodiment 3 of the present application.
  • the N computing devices may be deployed in one or more computing device module boxes: when deployed in one computing device module box, the N computing devices can be arranged in parallel; when deployed in multiple computing device module boxes, the computing device module boxes can be arranged in parallel.
  • the power equipment may be deployed in one or more power supply and distribution module boxes; for example, the power equipment A11 is deployed in power supply and distribution module box 1, and the power equipment A21 is deployed in power supply and distribution module box 2.
  • the computing device module boxes and the power supply and distribution module boxes may be located on the same layer. When there is one power device, the power supply and distribution module box corresponding to that power device may be deployed on one side of the computing device module boxes; when there are two power devices, the power supply and distribution module boxes corresponding to the two power devices can be deployed on the two sides of the computing device module boxes respectively, as shown in FIG. 6; when there are three or more power devices, their power supply and distribution module boxes can likewise be deployed on both sides of the computing device module boxes.
  • when the data center is deployed as a multi-layer structure, each layer can be independently deployed with power supply and distribution module boxes and computing device module boxes, and the computing devices in the computing device module boxes on each layer can be powered by the power devices on the same layer. In this way, the wiring distance between the computing devices and the power devices is shortened, which not only reduces the deployment cost but also helps improve the convenience of troubleshooting.
  • it should be noted that "the power equipment A11 is deployed in power supply and distribution module box 1, and the power equipment A21 is deployed in power supply and distribution module box 2" is only an optional implementation. In other implementations, the power equipment A11 and the power equipment A21 may also be deployed in the same power supply and distribution module box, for example both in power supply and distribution module box 1, or both in power supply and distribution module box 2. Since deploying different power equipment in the same or in different power supply and distribution module boxes does not affect the operation process of data center expansion, this is not described further here.
  • the data center may also include refrigeration equipment module boxes and auxiliary tube well boxes, such as auxiliary tube well box 1 and auxiliary tube well box 2; the refrigeration equipment module box is used for arranging refrigeration equipment, and the auxiliary tube well box is used for arranging auxiliary tube wells.
  • the auxiliary tube well box can be arranged on one side of the end wall of the power supply and distribution module box and communicate with the side surface of the refrigeration equipment module box.
  • the cooling equipment module boxes can also be arranged on the same layer as the computing equipment module boxes, on one side of the end wall of the computing equipment module boxes. In this way, the cooling equipment and the computing equipment are separated only by the distance of an end wall, so the cold air emitted by the cooling modules can be output directly to the computing device module boxes over a short distance, which helps achieve a better cooling effect.
  • the refrigeration function of the refrigeration equipment may depend on mechanical, electrical and plumbing (MEP) technology, and the type of the refrigeration equipment may be set by those skilled in the art based on experience; for example, it may be a computer room air handler (CRAH).
  • the number of CRAH machines in the cooling equipment module box may match the initial equipment power requirement of the computing equipment; for example, when the initial equipment power requirement of the computing equipment is 6 kW, 5 or 6 CRAH machines can be set. Further, in order to enable the cooling function to support capacity expansion synchronously, expansion space for additional CRAH machines can also be preset in the cooling equipment module box during the initial construction. Two optional deployment methods of the cooling equipment module box are introduced as follows:
  • FIG. 7A exemplarily shows a schematic diagram of the deployment architecture of refrigeration equipment module boxes during initial construction. As shown in FIG. 7A, the data center may be provided with a main water inlet pipe, a main water outlet pipe, and three refrigeration equipment module boxes, the three refrigeration equipment module boxes being connected through end-face combination.
  • each refrigeration equipment module box can be provided with 2 CRAH machines, each including a water inlet end and a water outlet end. The main water inlet pipe is provided with 4 water inlets in each refrigeration equipment module box, 2 of which are connected through pipes to the water inlet ends of the 2 CRAH machines; the main water outlet pipe is provided with 4 water outlets in each refrigeration equipment module box, 2 of which are connected through pipes to the water outlet ends of the 2 CRAH machines.
  • the two water inlets in the main water inlet pipe that are not connected to a CRAH machine can be left idle during the initial construction (for example, blocked with corks to prevent water outflow), and the two water outlets in the main water outlet pipe that are not connected to a CRAH machine can likewise be left idle (for example, blocked with corks).
  • in this case, each refrigeration equipment module box can actually accommodate 4 CRAH machines, while only 2 CRAH machines are placed during the initial construction. FIG. 7B exemplarily shows a schematic diagram of the deployment architecture of the refrigeration equipment module boxes after expansion in this case; as shown in FIG. 7B, after the data center is expanded, the number of CRAH machines in each cooling equipment module box is increased to 4.
  • FIG. 8A exemplarily shows a schematic diagram of the deployment architecture of another set of refrigeration equipment module boxes during initial construction. As shown in FIG. 8A, three refrigeration equipment module boxes may be installed in the data center, and the three refrigeration equipment module boxes communicate through end-face combination.
  • among the three refrigeration equipment module boxes, one is idle, and 4 CRAH machines can be set in each of the other two refrigeration equipment module boxes. The main water inlet pipe is provided with 4 water inlets in each of these two refrigeration equipment module boxes, connected through pipes to the water inlet ends of the 4 CRAH machines; the main water outlet pipe is provided with 4 water outlets in each of these boxes, connected through pipes to the water outlet ends of the 4 CRAH machines.
  • the main water inlet pipe can also include 4 water inlets in the idle refrigeration equipment module box, which are idle during the initial construction, and the main water outlet pipe can likewise include 4 water outlets there, which are also idle during the initial construction. In this case, each of the 3 refrigeration equipment module boxes can actually accommodate 4 CRAH machines, while during the initial construction only the 2 non-idle refrigeration equipment module boxes each hold 4 CRAH machines.
  • FIG. 8B exemplarily shows a schematic diagram of the deployment architecture of the cooling equipment module boxes after expansion in this case. As shown in FIG. 8B, after the data center is expanded, two CRAH machines are newly added to the previously idle cooling equipment module box, so that 10 CRAH machines cool the computing equipment module boxes, which can improve the corresponding cooling effect when the equipment power is increased.
  • it can be seen that the data center supports adding any number of CRAH machines during capacity expansion, that is, the number of CRAH machines can be increased with 1 as the minimum unit. In this way, the number of CRAH machines can match the equipment power, which improves the cooling effect when the equipment power increases without adding too many CRAH machines, helping to reduce the capital expenditure of the data center equipment room.
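As a rough illustration of matching the CRAH count to equipment power with single-machine granularity, one can size the count as the ceiling of the cooling load over a per-machine capacity. Both capacity figures below are assumed placeholders, not values given in this application:

```python
import math

def crah_count(total_load_kw, per_crah_kw, redundancy=0):
    """Minimum CRAH machines for a cooling load, plus optional redundant units.
    per_crah_kw is an assumed per-machine cooling capacity (placeholder)."""
    return math.ceil(total_load_kw / per_crah_kw) + redundancy

# Initial build: 6 devices at 2 kW each, assuming 2 kW of cooling per CRAH.
assert crah_count(6 * 2, 2) == 6

# After expansion the load doubles; CRAHs are added one at a time as needed.
assert crah_count(6 * 4, 2) == 12
```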
  • the power device may be any device or system that implements power output. For example, the power device may include a transformer incoming switch cabinet, a transformer, a low-voltage distribution panel (LVP), an uninterruptible power supply (UPS), and one or more lithium batteries. The input end of the transformer can be connected to the power grid through the transformer incoming switch cabinet, the output end of the transformer can be connected to the input end of the LVP, the output end of the LVP can be connected to the input end of the UPS, and the first output end of the UPS (which can also serve as an input end) is connected to the one or more lithium batteries, while the second output end of the UPS is connected to the electrical connection end.
  • when the transformer incoming switch cabinet is switched on, the transformer can receive the high-voltage electricity input from the grid through its input end, convert the high-voltage electricity into low-voltage electricity suitable for use, and output the low-voltage electricity through its output end to the LVP, which transmits the low-voltage electricity to the UPS. When the grid side has power, the UPS not only uses the power output from the grid side through the LVP to supply the computing equipment, but also uses that power to charge the lithium batteries; in this way, after a power failure occurs on the grid side, the UPS can obtain the electrical energy input by the lithium batteries and use it to supply power to the computing devices. It can be seen that, in this example, the power equipment can supply the computing equipment whether or not the grid side has power, so the availability of the data center is better.
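The grid/UPS/battery behaviour described above can be sketched as a simple decision function (the function name and return labels are illustrative only):

```python
# Illustrative sketch of the power path described above: with grid power the
# UPS feeds the load and charges the battery; on grid failure it draws on
# the battery so the computing equipment stays powered.

def ups_power_path(grid_up, battery_charged):
    """Return (load_source, battery_action) for the current grid state."""
    if grid_up:
        return ("grid via transformer/LVP/UPS", "charging")
    if battery_charged:
        return ("lithium battery via UPS", "discharging")
    return (None, "idle")  # load loses power only if grid AND battery are out

assert ups_power_path(True, False) == ("grid via transformer/LVP/UPS", "charging")
assert ups_power_path(False, True) == ("lithium battery via UPS", "discharging")
assert ups_power_path(False, False)[0] is None
```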
  • the data center in the embodiment of the present application is introduced from the overall architecture.
  • FIG. 9A exemplarily shows a schematic diagram of the overall architecture of a data center provided by an embodiment of the present application during initial construction
  • FIG. 9B exemplarily shows a schematic diagram of the overall architecture of a data center provided by an embodiment of the present application after capacity expansion.
  • as shown in FIG. 9A and FIG. 9B, the data center includes 6 computing devices, 2 power devices, 6 cooling devices, and 2 tube wells, all deployed on the same floor. Each of the 6 computing devices can be set in its corresponding computing device module box, so the data center can include 6 computing device module boxes, which are arranged in parallel to form a computing device module box module.
  • the 2 power devices are deployed in 2 power supply and distribution module boxes, and each power supply and distribution module box also includes reserved expansion space that can accommodate one capacity-expanding power device. The two power supply and distribution module boxes are respectively arranged on the two sides of the computing device module box module.
  • the 6 refrigeration devices are deployed in 3 refrigeration equipment module boxes; each refrigeration equipment module box can include 2 refrigeration devices, and also includes reserved expansion space and corresponding pipelines that can accommodate 2 capacity-expanding refrigeration devices.
  • the three refrigeration equipment module boxes are connected to each other through the left and right sides as shown in the figure to achieve end-face communication, and can also be connected through the lower side as shown to the end wall of the space where the computing equipment module boxes are located. The three end-face-connected refrigeration equipment module boxes constitute a refrigeration equipment module box module, with the two tube wells arranged on its two sides. Each refrigeration equipment module box is provided with an air supply channel and a return air channel on the lower side as shown (not shown in FIG. 9A and FIG. 9B); during cooling, each refrigeration equipment module box sends cold air through the air supply channel and recovers return air through the return air channel.
  • each pipeline in the refrigeration equipment module box can be connected in a flexible connection manner, so that each pipeline can achieve tightness through expansion and contraction at the connections while also having excellent impermeability.
  • the two power supply and distribution module boxes each have their own corresponding bus. Each bus spans power supply and distribution module box 1, tube well 1, the refrigeration equipment module box module, tube well 2, and power supply and distribution module box 2, and leads out sub-lines at the three refrigeration equipment module boxes to supply power for the computing equipment. For example, taking the power equipment on the left as shown in FIG. 9A as an example: the output end of power supply and distribution module box 1 where the power equipment is located is connected to the input end of the bus segment in tube well 1; the output end of the bus segment in tube well 1 is connected to the input end of the bus segment in the refrigeration equipment module box module; the bus segment in the refrigeration equipment module box module leads out 6 sub-lines at its intermediate ends to connect to the small busbars of the 6 computing devices, so as to supply power for the 6 computing devices, and one or more plug-in nodes are provided on this bus segment; the output end of the bus segment in the refrigeration equipment module box module is docked with the input end of the bus segment in tube well 2, and the output end of the bus segment in tube well 2 is led out to power supply and distribution module box 2.
  • referring to FIG. 9A, before the expansion of the data center, there is one power device in power supply and distribution module box 1 on the left as shown in FIG. 9A, and one power device in power supply and distribution module box 2 on the right; these two power devices supply power to the 6 computing devices through the two buses respectively.
  • referring to FIG. 9B, after the expansion of the data center, one capacity-expanding power device is newly added to power supply and distribution module box 1 on the left as shown in FIG. 9B, one capacity-expanding power device is newly added to power supply and distribution module box 2 on the right, and both buses are disconnected at the middle position. In this way, the 2 power devices in power supply and distribution module box 1 on the left supply power to computing device 1 to computing device 3, while the 2 power devices in power supply and distribution module box 2 on the right supply power to computing device 4 to computing device 6.
  • in addition, after the equipment power increases, the cooling effect of the original 6 cooling devices in the cooling equipment module boxes may become insufficient. Therefore, during expansion, new capacity-expanding refrigeration equipment can also be set in the expansion space reserved in the cooling equipment module boxes; for example, as shown in FIG. 9B, 6 capacity-expanding refrigeration devices are added, and the water outlet and water inlet of each newly added device are connected to the reserved water outlets and water inlets respectively.
  • it should be noted that the manner of expanding the power equipment in FIG. 9B is to add a complete machine comprising an LVP, a UPS, and lithium battery cabinets in the power supply and distribution module box to realize the capacity expansion. In other implementations, only some components, such as the lithium battery cabinets, may be added, while the other accessories directly reuse those of the original power equipment; this is not specifically limited in this application.
  • the above method can also perform cooling expansion when expanding the power equipment; during cooling expansion, only the capacity-expanding refrigeration equipment needs to be added and the reserved water inlets and outlets used directly, without re-laying water inlet and outlet pipes. It can be seen that, in the initial construction of the data center, wiring and pipelines are deployed according to the final specifications, so that the original wiring and pipelines can be used directly during subsequent expansion without setting up new ones, which also reduces the cost of data center expansion.


Abstract

A data center and a capacity expansion method, used to avoid dense wiring when expanding a data center. In the data center, a first power device is connected to computing devices through a bus, and a plug-in node and an idle electrical connection end are pre-arranged on the bus. In this way, when expansion is needed, a capacity-expanding second power device can be added directly at the idle electrical connection end, and the bus can be disconnected at the plug-in node, so that the first power device and the capacity-expanding second power device each use one of the two disconnected bus portions to supply power to the computing devices. This approach requires no new wiring when the data center is expanded; instead, the existing wiring is simply disconnected to support power supply from the capacity-expanding power device. Therefore, as long as the wiring of the data center is not dense before expansion, it will not become overly dense after expansion, which helps solve the problem of dense wiring arising when a data center is expanded.

Description

A Data Center and Capacity Expansion Method

Technical Field
This application relates to the field of power electronics technology, and in particular to a data center and a capacity expansion method.

Background
A data center (Data Center, DC) is an information infrastructure that uses Internet communication lines and bandwidth resources to transfer information, accelerate data, present content, compute data, and store resources across regions around the world. A data center equipment room may be deployed with power equipment and various types of computing equipment, such as servers, monitoring equipment, management equipment, and security equipment. The power equipment can supply power to these computing devices, and the computing devices can be deployed in different cabinets of the data center equipment room according to their respective types, realizing the information communication process of the data center through fixed interconnections.
According to the traditional civil-construction design approach, building a data center takes about one to two years, while the lifetime of a data center can exceed ten years. However, the equipment power and occupied space of a data center are already determined at the initial design stage, which means that within its ten-year life cycle, if no equipment upgrade is performed, the equipment power of the data center will remain at the initially designed level. With the rapid development of data centers, the equipment power they require is gradually increasing. For example, the equipment power density of current mainstream data centers is about 8 to 15 kW per cabinet, while within the next one to two years the equipment power density of data centers is expected to reach 30 kW per cabinet or higher.
However, the world is currently in a stage of data explosion. Over the entire life cycle of a data center, the equipment power set at initial design can hardly satisfy the gradually increasing power demand. In this case the computing equipment of the data center needs to be upgraded, for example by increasing the equipment power through software or hardware means. As the equipment power increases, the existing power equipment in the data center may no longer support the increased equipment power, so new power equipment needs to be added to the data center (referred to as data center capacity expansion). However, according to the expansion approach in the prior art, the existing power equipment in the data center is connected to each computing device through fixed wiring; therefore, when new power equipment is added, new wiring must also be laid between the new power equipment and each computing device. This makes the wiring in the data center very dense and invisibly increases the construction cost of the data center.
Summary
This application provides a data center to avoid dense wiring when the data center is expanded.
In a first aspect, this application provides a data center. The data center includes N computing devices and T power modules. Each of the T power modules may include a first power device and a busbar. The busbar includes a first electrical connection end, N intermediate ends, and a second electrical connection end. The first electrical connection end is used to connect the first power device, the N intermediate ends are used to connect the N computing devices respectively, and a plug-in node is further provided on the busbar between the (I-1)-th and the I-th of the N intermediate ends, dividing the busbar into a first sub-line and a second sub-line, where T is a positive integer, N and I are positive integers greater than or equal to 2, and I<N. With this deployment architecture, for at least one of the T power modules: before the data center is expanded, the plug-in node of the power module connects the first sub-line and the second sub-line and the second electrical connection end of the at least one power module is idle, so the first power device in the power module can supply power to the N computing devices through the connected first and second sub-lines. After the data center is expanded, the plug-in node of the at least one power module disconnects the first sub-line from the second sub-line, and the second electrical connection end of the at least one power module is connected to an expansion second power device, so the first power device in the power module can supply power to L of the N computing devices through the first sub-line, while the expansion second power device supplies power to the remaining N-L computing devices through the second sub-line.
In the above design, a plug-in node and an idle electrical connection end are provided on the busbar, so that when expansion is needed, an expansion power device can be added directly at the idle electrical connection end and the plug-in node can be used to disconnect the busbar. The expansion power device then uses one disconnected portion of the busbar to power computing devices while the original power device uses the other portion, converting the arrangement from the original power device powering the N computing devices alone to the original and expansion power devices powering them jointly. Moreover, this design requires no new wiring during expansion; the existing wiring is simply disconnected to support the supply of the expansion power device, so the wiring of the initial design is the final wiring of the data center. With this design, as long as the initial wiring is not dense, the wiring will not become overly dense even after expansion, helping solve the technical problem of dense wiring when expanding a data center. In addition, the data center does not need to be rebuilt for expansion, which also helps reduce its construction cost.
In a possible design, the N computing devices may be placed in one or more computing device module boxes, the first power devices of the T power modules may be placed in one or more power device module boxes, and the computing device module boxes and power device module boxes are arranged on the same floor of the data center. The N computing devices can then be powered directly by power devices deployed on the same floor; since the path between same-floor computing devices and power devices is short, this helps reduce wiring cost and simplify cabling.
In a possible design, when the number of computing device module boxes is greater than or equal to 2, they can be arranged side by side to form a computing device module box group, in which case the power device module boxes can be placed on one side or both sides of the group. The wiring from the power devices to the computing devices can then be led from one or two fixed side positions, which helps balance wiring deployment, makes the wiring easier to manage, and avoids chaotic wiring. Side routing also shortens the path between power devices and computing devices, effectively reducing wiring cost.
In a possible design, the data center may further include a main water inlet pipe, a main water outlet pipe, and L cooling devices used to cool the N computing devices. The main inlet pipe may have P water inlets, L of which connect to the water inlet ends of the L cooling devices, and the main outlet pipe may have P water outlets, L of which connect to the water outlet ends of the L cooling devices, where P and L are positive integers and P>L. In this case, before the data center is expanded, the P-L water inlets other than the L inlets may be idle, as may the P-L water outlets other than the L outlets, and the data center implements the cooling function of the L cooling devices through the inflow at the L inlets and the outflow at the L outlets. After the data center is expanded, one or more of the idle P-L inlets connect to the inlet ends of one or more expansion cooling devices, and one or more of the idle P-L outlets connect to their outlet ends, so the data center implements the cooling function of the L original cooling devices and the at least one expansion cooling device through the inflow at at least L+1 inlets and the outflow at at least L+1 outlets.
With the above design, the data center also supports adding any number of expansion cooling devices while adding expansion power devices, which improves the cooling effect as device power increases without adding excessive cooling devices, thereby also helping reduce the capital expenditure of the data center equipment room.
In a possible design, the L cooling devices can be placed in one or more cooling device module boxes; when their number is greater than or equal to 2, the cooling device module boxes can be placed side by side with the side faces of any two adjacent boxes connected. There is then no extra space between the boxes, which helps reduce wasted space in the data center and improve its space utilization.
In a possible design, the cooling device module boxes and computing device module boxes can be arranged on the same floor of the data center, with the cooling device module boxes placed on the end-wall side of the computing device module boxes, the end wall being on the side face perpendicular to the side faces along which the computing device module boxes are arranged. An air supply duct and an air return duct can be provided on the end wall, through which the L cooling devices in the cooling device module boxes cool the N computing devices. With this design, the cooling devices are separated from the computing devices by only an end wall, so the cold air from the cooling modules can be delivered directly to the computing device module boxes over a short distance, helping achieve a good cooling effect.
In a possible design, when T is a positive integer greater than or equal to 2, for each of the T power modules: the busbar of the power module can exit the power device module box containing its first power device, cross the one or more cooling device module boxes, lead out N sub-lines at the end wall to connect the N computing devices, and continue into the power device module box containing the first power device of another power module, where it terminates. The first electrical connection end of the power module is located in the box containing its own first power device, and its idle second electrical connection end is located in the box containing the other power module's first power device. With this design, the wiring of the data center makes full use of the space in the power device module boxes and cooling device module boxes, considerably improving the space utilization of the data center. Furthermore, when there are two power devices, placing them in two separate power module boxes also helps balance the equipment resources deployed across the different parts of the data center.
In a possible design, the cooling device module boxes placed side by side form a cooling device module box group, in which case the data center may further include one or more pipe shafts arranged on one side or both sides of the group. With this design, the pipe shafts assist power expansion and cooling expansion by accommodating electrical wiring and pipe routing on the left and right sides of the cooling device module boxes.
In a second aspect, this application provides a capacity expansion method for a data center, applicable to the data center of any design of the first aspect. The method includes: first detecting the device power of the N computing devices in the data center; if the device power is greater than a preset expansion device power threshold, determining at least one power module from the T power modules, and for each of the at least one power module, controlling the first power device in the power module to be in a powered-off state and disconnecting the plug-in node of the power module; then, after detecting that an expansion second power device is connected to the idle second electrical connection end of the power module, controlling the first power device and the expansion second power device to be in a power supply state.
With the above design, after the computing devices undergo a power upgrade, the data center can be expanded directly over its original wiring without adding new wiring, avoiding dense wiring during expansion, saving the space occupied by wiring, and increasing the space utilization of the data center. Moreover, the wiring of the initial construction is the final wiring, so even if the data center is expanded later, it does not need to be rebuilt and no wiring needs to be added, which also helps reduce its construction cost.
In a possible design, if the data center further includes a main water inlet pipe, a main water outlet pipe, and L cooling devices, then after the first power device and the expansion second power device are controlled to be in a power supply state, the temperature of the N computing devices may also be detected; if the detected temperature is greater than a preset temperature threshold, the inflow of the main inlet pipe and the outflow of the main outlet pipe can be suspended, and after it is detected that an expansion cooling device is connected to the main inlet and outlet pipes, the inflow and outflow are resumed. With this design, the data center can also perform cooling expansion at the same time as power expansion, helping match the cooling effect on the computing devices to their power.
These and other aspects of this application will be described in detail in the following embodiments.
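The ordering of steps in the expansion method of the second aspect can be sketched as a control sequence. The following Python sketch is illustrative only: the threshold values, event names, and function signature are assumptions made for exposition and do not appear in the application itself, which specifies only the order of the operations.

```python
# Hedged sketch of the second-aspect expansion method. Event names and the
# numeric thresholds are illustrative stand-ins, not from the application.

def expand_data_center(power_kw, power_threshold_kw,
                       temp_c=None, temp_threshold_c=None):
    events = []
    if power_kw > power_threshold_kw:
        # Power expansion: de-energise, split the busbar at the plug-in
        # node, attach the expansion device at the idle end, re-energise.
        events += ["power_off_first_device",
                   "open_plug_in_node",
                   "connect_second_device_at_idle_end",
                   "power_on_both_devices"]
        # Optional cooling expansion, triggered by over-temperature.
        if temp_c is not None and temp_threshold_c is not None \
                and temp_c > temp_threshold_c:
            events += ["pause_water_inlet_and_outlet",
                       "connect_expansion_cooling_device",
                       "resume_water_inlet_and_outlet"]
    return events

steps = expand_data_center(12, 6, temp_c=40, temp_threshold_c=35)
assert steps[0] == "power_off_first_device"
assert steps[-1] == "resume_water_inlet_and_outlet"
assert expand_data_center(5, 6) == []   # below threshold: no expansion
```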
Brief Description of the Drawings
Figure 1 is a schematic diagram of the deployment structure of an optional data center;
Figure 2 is a schematic diagram of the deployment architecture of the data center provided in Embodiment 1 of this application;
Figure 3A is a schematic diagram of the connection state of the data center in Embodiment 1 of this application before expansion;
Figure 3B is a schematic diagram of the connection state of the data center in Embodiment 1 of this application after expansion;
Figure 4 is a schematic diagram of the deployment architecture of the data center provided in Embodiment 2 of this application;
Figure 5A is a schematic diagram of the connection state of the data center in Embodiment 2 of this application before expansion;
Figures 5B to 5E are schematic diagrams of the connection states of the data center in Embodiment 2 of this application after expansion;
Figure 6 is a schematic diagram of the overall architecture of a data center provided in Embodiment 3 of this application;
Figure 7A is a schematic diagram of the deployment architecture of one type of cooling device module box at initial construction;
Figure 7B is a schematic diagram of the deployment architecture of that cooling device module box after expansion;
Figure 8A is a schematic diagram of the deployment architecture of another type of cooling device module box at initial construction;
Figure 8B is a schematic diagram of the deployment architecture of that cooling device module box after expansion;
Figure 9A is a schematic diagram of the overall architecture of a data center provided in an embodiment of this application at initial construction;
Figure 9B is a schematic diagram of the overall architecture of a data center provided in an embodiment of this application after expansion.
Detailed Description of the Embodiments
In recent years, the rapid development of data centers has continually increased the demand for device power within them. In this scenario, if the computing devices in a data center are upgraded to increase device power, the electrical energy provided by the original power devices may no longer support the increased device power, so the data center needs to be expanded. However, a traditional data center fixes the space for its power devices at initial design, and that space cannot accommodate new power devices, so traditional expansion requires re-planning and rebuilding the data center equipment room: first enlarging the space for power devices, then placing new power devices in the enlarged space, and finally rebuilding the wiring from the power devices to the computing devices (adding more wires or widening the cables bundling the wires), so that the rebuilt wiring includes both the original power-device-to-computing-device wiring and the new power-device-to-computing-device wiring. Clearly, this traditional expansion approach requires re-planning and rebuilding the equipment room, which is time-consuming and laborious, and it also leaves the data center unavailable for a long time, seriously affecting its normal service capability.
To address this problem, Figure 1 shows a schematic deployment structure of an optional data center. As shown in Figure 1, this implementation reserves expansion space for power devices and wiring routes at initial design; for example, the reserved power expansion space includes expansion power path A and expansion power path B in Figure 1, and the reserved wiring routes include wiring route 1 and wiring route 2. Under this deployment, after the computing devices are upgraded, if their increased device power can no longer be supplied by the original power paths A and B, expansion power path A can be added for path A directly in the reserved expansion space, with its wiring to the computing devices deployed in reserved wiring route 1, and expansion power path B can be added for path B, with its wiring deployed in reserved wiring route 2. As shown in Figure 1, although this approach uses reserved expansion space and wiring routes to avoid rebuilding the equipment room during expansion, it still requires deploying new wiring between the new power devices and every computing device, making the wiring between the power-device space and the computing-device space very dense (or the cables very wide if one cable bundles multiple wires). Such dense wiring not only occupies considerable space in the data center and lowers its space utilization, but also increases its construction cost. Furthermore, because new wiring must be added between the added power devices and every computing device, every computing device must go offline during the expansion to ensure electrical safety, so this approach also leaves the data center unavailable for a long time, degrading its service capability.
In view of this, this application provides a data center that avoids dense wiring when the data center is expanded and thereby enables online expansion of the data center.
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. It should be understood that in the description of this application, "multiple" means "at least two", and terms such as "first" and "second" are used only to distinguish what is described and should not be understood as indicating or implying relative importance or order. For example, the "first electrical connection end" and "second electrical connection end" mentioned below merely denote connection ends at different positions and differ in neither order nor importance.
It should be noted that in the following embodiments of this application, "bringing online" means supplying power and "taking offline" means cutting power. When a power device is online, it can provide electrical energy to other equipment; when it is offline, it cannot. There are many ways to bring a power device online or take it offline. In one example, when the power device is connected to a computing device, a switch may be placed between them: closing the switch brings the power device online, and opening the switch takes it offline. In another example, a switch is placed between the internal power source of the power device and its output port: closing the switch brings the device online, and opening the switch takes it offline.
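The online/offline convention above can be captured in a minimal sketch: a device supplies other equipment only while an intervening switch is closed. Class and method names here are illustrative stand-ins, not from the application.

```python
# Minimal sketch of "online = supplying, offline = cut off" via a switch.
# Names are illustrative assumptions, not from the application.

class PowerDevice:
    def __init__(self, name: str):
        self.name = name
        self.switch_closed = False   # open switch: the device is offline

    def bring_online(self) -> None:
        # Closing the switch between the source and the output port.
        self.switch_closed = True

    def take_offline(self) -> None:
        self.switch_closed = False

    def can_supply(self) -> bool:
        # The device delivers power to other equipment only while online.
        return self.switch_closed

a1 = PowerDevice("A1")
assert not a1.can_supply()   # offline until the switch is closed
a1.bring_online()
assert a1.can_supply()
a1.take_offline()
assert not a1.can_supply()
```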
Embodiment 1
Figure 2 shows an example deployment architecture of the data center provided in Embodiment 1 of this application. As shown in Figure 2, the data center may include one power module and N computing devices, namely computing device 1, computing device 2, ..., computing device I-1, computing device I, ..., computing device N-1, and computing device N. The power module may include a busbar and a first power device (A1). The two ends of the busbar are a first electrical connection end p1 and a second electrical connection end p2. The first electrical connection end p1 corresponds to power device A1, while the second electrical connection end p2 may be in different states before and after the data center is expanded. N intermediate ends may also be provided on the busbar, namely intermediate ends k1, k2, ..., k(I-1), kI, ..., k(N-1), and kN, from which the busbar leads out N sub-lines connecting computing devices 1 through N. As shown in Figure 2, a plug-in node is further provided between intermediate ends k(I-1) and kI, dividing the busbar into a first sub-line and a second sub-line (not shown in Figure 2): the first sub-line is the busbar segment from the first electrical connection end p1 to the plug-in node, and the second sub-line is the busbar segment from the plug-in node to the second electrical connection end p2.
Figure 3A shows the connection state of the data center in Embodiment 1 before expansion, and Figure 3B shows the connection state after expansion. The expansion process of the data center in Embodiment 1 is described below with reference to Figures 3A and 3B:
In Embodiment 1, the data center is designed at initial construction according to the architecture shown in Figure 2, where the electrical energy of power device A1 can support the initial device power required by the N computing devices. Thus, after the design is completed and before it is put into use, as shown in Figure 3A, since the electrical energy of power device A1 is sufficient, the second electrical connection end p2 can be left idle (for example, capped with an insulating plug or wrapped with insulating material, to avoid electric shock accidents). Moreover, to ensure electrical safety, power device A1 may be kept offline at first; the first and second sub-lines are plugged together at the plug-in node to connect them, and only then is power device A1 brought online. In this case, power device A1 can supply power to computing devices 1 through N via the connected first and second sub-lines.
Further, during use of the data center, if it is detected that the N computing devices have been upgraded, so that their device power has increased and the electrical energy that power device A1 can provide no longer supports the increased device power, it is determined that the data center needs to be expanded. As shown in Figure 3B, during expansion, to ensure electrical safety, power device A1 may first be taken offline, the first and second sub-lines are then disconnected at the plug-in node, an expansion second power device (A2; to distinguish the original power device from the device used for expansion, the latter is hereinafter referred to as the expansion power device) is added to the data center, the second electrical connection end p2 is connected to expansion power device A2, and finally power device A1 and expansion power device A2 are brought online. In this case, power device A1 can supply power to computing devices 1 through I-1 via the first sub-line, while expansion power device A2 supplies power to computing devices I through N via the second sub-line.
It should be noted that "finally bringing power device A1 and expansion power device A2 online" is only one optional implementation. In the embodiments of this application, once the first and second sub-lines are disconnected at the plug-in node, they no longer conduct, so the electrical energy of power device A1 can no longer be transmitted to computing devices I through N. Therefore, in another optional implementation, to avoid keeping all N computing devices unavailable for a long time, power device A1 may be brought back online immediately after the plug-in node disconnects the two sub-lines, promptly restoring the power supply to computing devices 1 through I-1. Expansion power device A2 is then added to the data center, the second electrical connection end p2 is connected to it, and finally expansion power device A2 is brought online. In this case, although the connection between expansion power device A2 and the second sub-line is made after power device A1 goes online, the second sub-line has already been disconnected from power device A1, so operations on the second sub-line are not affected by power device A1; that is, the operations performed after bringing power device A1 online comply with electrical safety regulations.
A specific scenario is used below to illustrate the expansion solution in Embodiment 1. In this scenario, suppose N is 6, the plug-in node is placed between intermediate ends k3 and k4, and power device A1 and expansion power device A2 can each provide 6 kW of electrical power. Then:
Before the data center is expanded, power device A1 provides a total of 6 kW to the six computing devices; if the six computing devices share the power equally, each computing device receives 1 kW;
After the data center is expanded, power device A1 provides 6 kW to computing devices 1 through 3, while expansion power device A2 provides 6 kW to computing devices 4 through 6; if the power is shared equally, each computing device receives 2 kW.
It follows that, when the expansion power device provides the same electrical power as the original power device, the expansion solution of Embodiment 1 supports expansion for computing devices whose power has doubled.
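The arithmetic of this scenario can be written out directly. The even split across devices is an assumption stated in the scenario itself, not a property of the busbar:

```python
# Illustrative arithmetic for the 6 kW Embodiment 1 scenario above.

def per_device_power(module_kw: float, devices: int) -> float:
    # Power available to each computing device under an even split.
    return module_kw / devices

# Before expansion: A1's 6 kW feeds all six computing devices.
before = per_device_power(6, 6)      # 1.0 kW each
# After expansion: the busbar is split between k3 and k4, so A1 feeds
# devices 1-3 and expansion device A2 feeds devices 4-6.
after_a1 = per_device_power(6, 3)    # 2.0 kW each on A1's side
after_a2 = per_device_power(6, 3)    # 2.0 kW each on A2's side

assert before == 1.0 and after_a1 == after_a2 == 2.0   # power per device doubles
```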
In Embodiment 1 above, the data center can be expanded directly over its existing wiring without adding new wiring, thereby avoiding dense wiring during expansion, saving the space occupied by wiring, and increasing the space utilization of the data center. Moreover, the wiring at initial construction is the final wiring, so even if the data center is expanded later, it does not need to be rebuilt and no wiring needs to be added, which also helps reduce the construction cost of the data center.
Embodiment 2
Figure 4 shows an example deployment architecture of the data center provided in Embodiment 2 of this application. As shown in Figure 4, the data center may include two power modules (power module 1 and power module 2) and N computing devices, namely computing device 1, computing device 2, ..., computing device I-1, computing device I, ..., computing device N-1, and computing device N. Power module 1 includes busbar 1 and power device A11. The two ends of busbar 1 are electrical connection ends p11 and p12; p11 corresponds to power device A11, while p12 may be in different states before and after the data center is expanded. Busbar 1 also has N intermediate ends, namely k11, k12, ..., k1(I-1), k1I, ..., k1(N-1), and k1N, from which busbar 1 leads out N sub-lines connecting computing devices 1 through N. Correspondingly, power module 2 includes busbar 2 and power device A21. The two ends of busbar 2 are electrical connection ends p21 and p22; p21 corresponds to power device A21, while p22 may be in different states before and after the data center is expanded. Busbar 2 may also have N intermediate ends, namely k21, k22, ..., k2(I-1), k2I, ..., k2(N-1), and k2N, from which busbar 2 leads out N sub-lines connecting computing devices 1 through N.
Continuing with Figure 4, plug-in node 1 may be provided between intermediate ends k1(I-1) and k1I of busbar 1, dividing busbar 1 into sub-line 1 and sub-line 2 (not shown in Figure 4): sub-line 1 is the busbar segment from electrical connection end p11 to plug-in node 1, and sub-line 2 is the segment from plug-in node 1 to electrical connection end p12. Correspondingly, plug-in node 2 may be provided between intermediate ends k2(I-1) and k2I of busbar 2, dividing busbar 2 into sub-line 3 and sub-line 4 (not shown in Figure 4): sub-line 3 is the segment from electrical connection end p21 to plug-in node 2, and sub-line 4 is the segment from plug-in node 2 to electrical connection end p22.
It should be noted that placing plug-in node 1 between k1(I-1) and k1I and plug-in node 2 between k2(I-1) and k2I is only one optional implementation, in which both plug-in nodes separate computing devices 1 through I-1 from computing devices I through N. In other optional implementations, the two plug-in nodes may partition the computing devices differently. For example, plug-in node 1 may be placed between k1(I-1) and k1I while plug-in node 2 is placed between k2I and k2(I+1), so that plug-in node 1 separates computing devices 1 through I-1 from computing devices I through N, while plug-in node 2 separates computing devices 1 through I from computing devices I+1 through N. Alternatively, plug-in node 1 may be placed between k1(I+1) and k1(I+2) and plug-in node 2 between k2(I-1) and k2I, so that plug-in node 1 separates computing devices 1 through I+1 from computing devices I+2 through N, while plug-in node 2 separates computing devices 1 through I-1 from computing devices I through N. There are many optional implementations, which are not enumerated here.
Figure 5A shows the connection state of the data center in Embodiment 2 before expansion, and Figures 5B through 5E show the connection states after expansion. The expansion process of the data center in Embodiment 2 is described below with reference to Figures 5A through 5E:
In Embodiment 2, the data center is designed at initial construction according to the architecture shown in Figure 4, where the electrical energy of power devices A11 and A21 can support the initial device power required by the N computing devices. Thus, after the design is completed and before it is put into use, as shown in Figure 5A, since the electrical energy of A11 and A21 is sufficient, electrical connection ends p12 and p22 can be left idle. To ensure electrical safety, A11 and A21 may be kept offline at first: sub-lines 1 and 2 are plugged together at plug-in node 1 to connect them, after which A11 is brought online; then sub-lines 3 and 4 are plugged together at plug-in node 2 to connect them, after which A21 is brought online. In this case, A11 supplies power to computing devices 1 through N via the connected sub-lines 1 and 2, A21 supplies power to computing devices 1 through N via the connected sub-lines 3 and 4, and the total electrical power of the N computing devices is provided jointly by A11 and A21.
Further, during use of the data center, if it is detected that the N computing devices have been upgraded, so that their device power has increased beyond what A11 and A21 can provide, it is determined that the data center needs to be expanded. As shown in Figure 5B, during expansion, to ensure electrical safety, A11 may first be taken offline, with A21 supplying computing devices 1 through N for a short time. After A11 is offline, sub-lines 1 and 2 are disconnected at plug-in node 1, expansion power device A12 is added to power module 1, and electrical connection end p12 is connected to A12. In this case, busbar 1 is divided by plug-in node 1 into sub-line 1, which connects A11 to computing devices 1 through I-1, and sub-line 2, which connects expansion power device A12 to computing devices I through N. As shown in Figure 5C, A11 and expansion power device A12 can now be brought online, so that A11 supplies computing devices 1 through I-1 via sub-line 1, A12 supplies computing devices I through N via sub-line 2, and A21 supplies computing devices 1 through N via the connected sub-lines 3 and 4. The total electrical power of the N computing devices is provided jointly by A11, A12, and A21.
Further, if the total electrical energy of A11, A12, and A21 still cannot meet the upgrade needs of the N computing devices, a second expansion of the data center is required. Since power module 1 is already saturated (the space reserved at initial construction can hold no more expansion power devices), only power module 2 can be used for the second expansion. As shown in Figure 5D, to ensure electrical safety during the second expansion, A21 may first be taken offline, with A11 and A12 supplying computing devices 1 through N for a short time. After A21 is offline, sub-lines 3 and 4 are disconnected at plug-in node 2, expansion power device A22 is added to power module 2, and electrical connection end p22 is connected to A22. In this case, busbar 2 is divided by plug-in node 2 into sub-line 3, which connects A21 to computing devices I through N, and sub-line 4, which connects expansion power device A22 to computing devices 1 through I-1. As shown in Figure 5E, A21 and A22 can now be brought online. In this case, A11 supplies computing devices 1 through I-1 via sub-line 1 and A22 supplies computing devices 1 through I-1 via sub-line 4, while A21 supplies computing devices I through N via sub-line 3 and A12 supplies computing devices I through N via sub-line 2. The total electrical power of the N computing devices is provided jointly by A11, A12, A21, and A22.
It should be noted that Figures 5B through 5E illustrate expanding power module 1 first and then power module 2. In another optional implementation, power module 2 may be expanded first: A21 is taken offline, with A11 supplying computing devices 1 through N for a short time; busbar 2 is then disconnected at plug-in node 2, and after A21 and expansion power device A22 are brought online, A11 is taken offline, with A21 and A22 supplying computing devices 1 through N for a short time; finally, busbar 1 is disconnected at plug-in node 1, and A11 and expansion power device A12 are brought online. This implementation is similar to the process in Figures 5B through 5E, differing only in the order in which the power modules are expanded, and is not repeated here.
A specific scenario is used below to illustrate the expansion solution in Embodiment 2. In this scenario, suppose N is 6, plug-in node 1 is placed between intermediate ends k13 and k14, plug-in node 2 is placed between intermediate ends k23 and k24, and power devices A11 and A21 and expansion power devices A12 and A22 can each provide 6 kW of electrical power. Then:
Before the data center is expanded, A11 and A21 together provide 12 kW to the six computing devices; if the six computing devices share the power equally, each computing device receives 2 kW;
After the data center is expanded, if only the first-stage expansion is performed, A11 provides 6 kW to computing devices 1 through 3; shared equally, each of computing devices 1 through 3 receives 2 kW, the same as before expansion. Meanwhile, expansion power device A12 and power device A21 provide 12 kW to computing devices 4 through 6; shared equally, each of computing devices 4 through 6 receives 4 kW, so the power of computing devices 4 through 6 doubles;
If the second-stage expansion is also performed, A11 and expansion power device A22 provide 12 kW to computing devices 1 through 3, while expansion power device A12 and power device A21 provide 12 kW to computing devices 4 through 6; if computing devices 1 through 3 share their power equally and computing devices 4 through 6 likewise share theirs equally, every computing device receives 4 kW, so the total power of computing devices 1 through 6 doubles.
It follows that, when the expansion power devices provide the same electrical power as the original power devices, the expansion solution of Embodiment 2 can support expansion for only a subset of the computing devices whose power has doubled, as well as expansion for all of them.
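The two-stage arithmetic of this scenario can likewise be written out. All modules are 6 kW, splits are even as the text assumes, and the grouping of A21's output with devices 4 through 6 after stage one follows the text's own accounting:

```python
# Illustrative arithmetic for the two-stage Embodiment 2 scenario above.

def share(total_kw: float, devices: int) -> float:
    # Even split of a group's total supply across its devices.
    return total_kw / devices

initial = share(6 + 6, 6)                  # A11 + A21 feed all six: 2 kW each
stage1_devices_1_to_3 = share(6, 3)        # A11 alone: still 2 kW each
stage1_devices_4_to_6 = share(6 + 6, 3)    # A12 + A21: 4 kW each (doubled)
stage2_each = share(6 + 6, 3)              # every group fed by two 6 kW units

assert initial == 2.0
assert stage1_devices_1_to_3 == 2.0
assert stage1_devices_4_to_6 == 4.0
assert stage2_each == 4.0                  # total power per device doubles
```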
In addition to the benefits of Embodiment 1, the expansion solution of Embodiment 2 has the following advantages. On one hand, it supports stage-by-stage expansion according to the device power upgrade: when the upgrade is within the expansion limit of one power module (that is, after an expansion power device is added to one power module, the electrical energy that module provides supports the upgraded device power), only one power module needs to be expanded; when the upgrade exceeds the limit of one power module, one module can undergo a first-stage expansion and the other module a second expansion. On the other hand, with two power modules, one module can continue to supply the N computing devices while the other is being expanded, so the data center does not need to go offline even during expansion. This enables online upgrade and expansion of the data center, helps keep the data center continuously available, and improves its service capability.
Embodiments 1 and 2 above describe the deployment architecture and expansion process of the data center using one power module and two power modules respectively. It should be understood that in other embodiments the data center may include more than two power modules, for example three or more, each with an initial deployment architecture as in Embodiment 1 or Embodiment 2. When expanding the data center, an expansion power device may first be added to one power module, with the corresponding plug-in node disconnecting the corresponding busbar. If the expanded data center meets the upgraded device power requirement, expansion can end; if it still does not, an expansion power device is added to the next power module, and so on, until the requirement is met or all power modules have received expansion power devices and no further expansion is possible. Presetting three or more power modules at initial construction not only achieves the benefits of Embodiments 1 and 2 but also improves the expansion capability of the data center. However, three or more power modules may occupy more space, reducing the space utilization of the data center and increasing its initial construction cost.
The above embodiments mainly describe the deployment of the computing devices and power devices related to expansion. In actual deployment, however, the data center may also include other equipment, such as cooling equipment. Based on the computing devices and power modules of Embodiment 2, a possible overall deployment scheme for a data center is described below. In this scheme, each type of equipment in the data center can be deployed in a corresponding container, with the container and the equipment inside it pre-integrated at the factory. This deployment approach is also called a modular data center (or containerized data center). A modular data center requires only about 1% of the construction cost of a traditional data center and can be relocated simply by moving the containers, so it offers flexible mobility and greatly shortens the deployment cycle of a data center.
Embodiment 3
Figure 6 shows an example overall architecture of a data center provided in Embodiment 3 of this application. As shown in Figure 6, in this data center the N computing devices may be deployed in one or more computing device module boxes: when deployed in a single box, the N computing devices may be arranged side by side; when deployed in multiple boxes, the boxes may be arranged side by side. Correspondingly, the power devices may be deployed in one or more power supply and distribution module boxes; for example, power device A11 is deployed in power supply and distribution module box 1, and power device A21 in power supply and distribution module box 2. The computing device module boxes and the power supply and distribution module boxes may be located on the same floor. When there is only one power device (for example A11 or A21), its box may be deployed on one side of the computing device module boxes; when there are two power devices (for example A11 and A21), their boxes may be deployed on the two sides of the computing device module boxes. It should be understood that when there are three or more power devices, they may also be accommodated in the two power supply and distribution module boxes shown in Figure 6, which are deployed on the two sides of the computing device module boxes.
In the prior art, when a data center is deployed over multiple floors, the power devices are usually deployed on the lowest floor while the computing devices are distributed over the other floors, and lines run from the bottom floor to the other floors to supply the computing devices. Clearly, in this arrangement the distance between the power devices and the computing devices is long and the wiring between them is long; long wiring not only increases deployment cost but also makes troubleshooting inconvenient. In the embodiments of this application, by contrast, when the data center is deployed over multiple floors, each floor can have its own power device module boxes and computing device module boxes, and the computing devices in each floor's computing device module boxes connect to the power devices in the power device module boxes on the same floor. This shortens the wiring distance between the computing devices and the power devices, which not only reduces deployment cost but also makes fault maintenance more convenient.
It should be noted that deploying power device A11 in power supply and distribution module box 1 and power device A21 in power supply and distribution module box 2 is only one optional implementation. In another optional implementation, A11 and A21 may be deployed in the same power supply and distribution module box, for example both in box 1 or both in box 2. Since placing different power devices in the same or different boxes does not affect the expansion procedure of the data center, this is not elaborated further.
In an optional implementation, continuing with Figure 6, in addition to computing device module boxes and power supply and distribution module boxes, the data center may include cooling device module boxes and auxiliary pipe-shaft boxes, for example auxiliary pipe-shaft boxes 1 and 2. The cooling device module boxes house the cooling devices, and the auxiliary pipe-shaft boxes house the auxiliary pipe shafts. An auxiliary pipe-shaft box may be placed on the end-wall side of a power supply and distribution module box and connected to the side of a cooling device module box. When the data center has multiple floors, the cooling device module boxes may also be placed on the same floor as the computing device module boxes, on the end-wall side of the computing device module boxes. In this way, the cooling devices in the cooling device module boxes are separated from the computing devices by only an end wall, so the cold air from the cooling modules can be delivered directly to the computing device module boxes over a short distance, helping achieve a good cooling effect.
In the embodiments of this application, the cooling function of the cooling devices may rely on mechanical, electrical and plumbing (MEP) technology, and the type of cooling device can be chosen by those skilled in the art based on experience, for example one of a computer room air handling (CRAH) unit, a water chiller, an ice machine, and a plating refrigeration machine, without limitation. Since CRAH units offer good cooling performance at low cost, the embodiments of this application use CRAH units as the cooling devices.
In an optional implementation, at the initial construction of the data center, the number of CRAH units in the cooling device module boxes may match the initial device power requirement of the computing devices; for example, when the initial requirement is 6 kW, five or six CRAH units may be installed. Further, so that the cooling function also supports expansion, expansion space for CRAH units may be reserved in the cooling device module boxes to hold new CRAH units during expansion, and the cooling pipes for the new CRAH units may likewise be laid out at initial construction. Two optional deployment approaches for the cooling device module boxes are described below:
Figure 7A shows an example deployment architecture of one type of cooling device module box at initial construction. As shown in Figure 7A, at initial construction the data center may have a main water inlet pipe, a main water outlet pipe, and three cooling device module boxes connected end to end. Each cooling device module box may contain two CRAH units, each with a water inlet end and a water outlet end. The main inlet pipe has four water inlets in each cooling device module box, two of which connect via pipes to the inlet ends of the two CRAH units; the main outlet pipe has four water outlets in each box, two of which connect via pipes to the outlet ends of the two CRAH units. The two inlets of the main inlet pipe not connected to CRAH units can be left idle at initial construction (for example plugged with stoppers to prevent water flow), as can the two unconnected outlets of the main outlet pipe. With the deployment in Figure 7A, each cooling device module box can actually hold four CRAH units, while only two are installed at initial construction. During later expansion, if the heat dissipation provided by two CRAH units is insufficient for the device power of the computing devices, one or two new CRAH units can be added to one or more cooling device module boxes, connecting each new unit's inlet end to an idle water inlet and its outlet end to an idle water outlet. Assuming the device power grows until four CRAH units are needed in every box, Figure 7B shows the resulting post-expansion deployment architecture: after the data center is expanded, the number of CRAH units in each cooling device module box increases to four, and cooling the computing device module boxes with four CRAH units correspondingly improves the cooling effect as device power increases.
Figure 8A shows an example deployment architecture of another type of cooling device module box at initial construction. As shown in Figure 8A, at initial construction the data center may have three cooling device module boxes connected end to end, one of which is idle while each of the other two contains four CRAH units. The main inlet pipe has four water inlets in each of these boxes, connected via pipes to the inlet ends of the four CRAH units, and the main outlet pipe has four water outlets in each box, connected via pipes to the outlet ends of the four CRAH units. In addition, the main inlet pipe may include four water inlets in the idle cooling device module box, left idle at initial construction, and the main outlet pipe may include four water outlets there, likewise left idle. Thus each of the three cooling device module boxes can actually hold four CRAH units, while at initial construction only two boxes are populated with four units each. During later expansion of the data center, if the heat dissipation provided by the two populated boxes is insufficient for the device power of the computing devices, anywhere from one to four CRAH units can be added to the idle box, connecting each new unit's inlet end to an idle water inlet and its outlet end to an idle water outlet. Assuming the device power grows until ten CRAH units are needed, Figure 8B shows the resulting post-expansion deployment architecture: after the data center is expanded, two CRAH units are added to the formerly idle box, and cooling the computing device module boxes with ten CRAH units improves the cooling effect as device power increases.
In both deployment approaches for the cooling device module boxes, the data center supports adding any number of CRAH units during expansion; that is, the number of CRAH units can grow in increments of one. This lets the number of CRAH units added each time match the device power, improving the cooling effect as device power increases without adding excessive CRAH units, which helps reduce the capital expenditure of the data center equipment room. It should be understood that these are only two optional implementations; all schemes that deploy cooling in a modular way and add cooling devices during expansion fall within the protection scope of this application, which is not specifically limited here.
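The step-of-one CRAH growth described above can be sketched as a small sizing helper. The cooling capacity handled per CRAH unit (`kw_per_crah`) is an assumed figure for illustration, not a value given in the application:

```python
import math

# Illustrative sizing helper for step-of-one CRAH expansion.
# kw_per_crah is an assumption, not from the text.

def crah_units_to_add(it_load_kw: float, kw_per_crah: float, installed: int) -> int:
    # Units required to cover the IT load, minus units already installed.
    required = math.ceil(it_load_kw / kw_per_crah)
    return max(0, required - installed)

assert crah_units_to_add(36, 6, 6) == 0   # initial build already covers the load
assert crah_units_to_add(48, 6, 6) == 2   # add two units after a power upgrade
assert crah_units_to_add(60, 6, 6) == 4   # larger upgrades add more, one at a time
```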
In the embodiments of this application, the power device may be any device or system that outputs electrical energy. For example, in one instance, the power device may include a transformer incoming switchgear cabinet, a transformer, a low-voltage distribution panel (LVP), an uninterruptible power supply (UPS), and one or more lithium batteries. The input end of the transformer may connect to the power grid through the incoming switchgear cabinet, the output end of the transformer may connect to the input end of the LVP, the output end of the LVP may connect to the input end of the UPS, the first output end of the UPS (which can also serve as an input end) connects to the one or more lithium batteries, and the second output end connects to the electrical connection end. In specific implementation, with the incoming switchgear cabinet conducting, the transformer receives high-voltage electricity from the grid at its input end, converts it to usable low-voltage electricity, and outputs the low voltage to the LVP, which transmits it to the UPS. When the grid side is live, the UPS not only uses the energy output from the grid side via the LVP to power the computing devices, but also uses that energy to charge the lithium batteries. Thus, if a grid-side fault causes a power outage, the UPS can draw the energy stored in the lithium batteries and use it to power the computing devices. In this example, therefore, the power device can supply the computing devices whether or not the grid side is live, giving the data center good availability.
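The UPS behaviour described in this paragraph can be captured in a minimal decision sketch: grid power feeds the loads and charges the battery, and on a grid outage the battery takes over. Function and state names are illustrative stand-ins, not from the application.

```python
# Minimal sketch of the UPS source-selection behaviour described above.
# Names are illustrative assumptions.

def ups_output_source(grid_available: bool, battery_charged: bool) -> str:
    if grid_available:
        return "grid"      # loads fed from the grid; battery also charging
    return "battery" if battery_charged else "none"

assert ups_output_source(True, False) == "grid"
assert ups_output_source(False, True) == "battery"   # outage: battery backup
assert ups_output_source(False, False) == "none"     # outage, depleted battery
```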
Based on the above, the data center of the embodiments of this application is described below from the perspective of its overall architecture.
Figure 9A shows an example overall architecture of a data center provided in an embodiment of this application at initial construction, and Figure 9B shows the overall architecture after expansion. As shown in Figures 9A and 9B, the data center includes six computing devices, two power devices, six cooling devices, and two pipe shafts, all deployed on the same floor. Each of the six computing devices may be placed in its own computing device module box, so the data center may include six computing device module boxes, arranged side by side to form a computing device module box group. The two power devices are distributed across two power supply and distribution module boxes, each of which also includes reserved expansion space able to hold one expansion power device; the two boxes are placed on the two sides of the computing device module box group. The six cooling devices are distributed across three cooling device module boxes, each containing two cooling devices plus reserved expansion space and corresponding pipes for two expansion cooling devices. The three cooling device module boxes connect end to end through the left and right side faces shown in the figures, and also connect through the lower side face shown in the figures to the end wall of the space where the computing device module boxes are located. The three end-connected cooling device module boxes form a cooling device module box group, with the two pipe shafts placed on its two sides.
The cooling function of the data center is described first. Continuing with Figures 9A and 9B, the main water inlet and outlet pipes span the three cooling device module boxes, and in each box they lead out four water inlets and four water outlets to provide the water circulation for the two cooling devices and the two reserved expansion cooling devices. Each cooling device module box has an air supply duct and an air return duct on the lower side face shown in the figures (not shown in Figures 9A and 9B). During cooling, each box sends cold air through the supply duct into the space where the computing device module boxes are located, then recovers hot air from that space through the return duct into the cooling device module box to regenerate cold air in a cycle. To guarantee the cooling effect, sealing duct assemblies are also used to block all gaps in the computing-device space other than the supply and return ducts, for example sealing the gaps, other than the sub-lines themselves, along the path from the busbar intermediate ends on the end wall to the computing devices. In this way, the computing-device space and the cooling device module boxes form a sealed region, which benefits the cooling effect and cooling utilization. In addition, the pipes in the cooling device module boxes can all be joined using flexible connections, so that each pipe joint not only achieves sealing through elastic deformation but also has excellent leak resistance.
The expansion function of the data center is described next. Continuing with Figures 9A and 9B, the two power supply and distribution module boxes each have their own busbar, which spans power supply and distribution module box 1, pipe shaft 1, the cooling device module box group, pipe shaft 2, and power supply and distribution module box 2, leading out sub-lines at the three cooling device module boxes to power the computing devices. Taking the power device on the left of Figure 9A as an example: the output end of power supply and distribution module box 1 mates with the input end of the busbar segment in pipe shaft 1; the output end of that segment mates with the input end of the busbar segment in the cooling device module box group; that segment leads out six sub-lines at its intermediate ends to connect to the small busbars of the six computing devices, thereby powering them, and also carries one or more plug-in nodes; its output end mates with the input end of the busbar segment in pipe shaft 2, whose output end leads into power supply and distribution module box 2. As shown in Figure 9A, before the data center is expanded, there is one power device in power supply and distribution box 1 on the left of Figure 9A and one in box 2 on the right, each powering the six computing devices through its own busbar. As shown in Figure 9B, after the data center is expanded, one expansion power device has been added to box 1 on the left of Figure 9B and one to box 2 on the right, and both busbars are disconnected at their middle positions, so that the two power devices in box 1 power computing devices 1 through 3 while the two power devices in box 2 power computing devices 4 through 6. After the data center is expanded, since the device power of the computing devices has increased, the cooling effect of the original six cooling devices in the cooling device module boxes may be insufficient; in that case, new expansion cooling devices can also be installed in the expansion space reserved in the cooling device module boxes, for example the six expansion cooling devices newly added in Figure 9B, with each new device's outlet end and inlet end connected to an idle water outlet and an idle water inlet respectively.
It can be understood that the example in Figure 9B achieves expansion by adding complete LVP, UPS, and lithium battery cabinet units inside the power supply and distribution module boxes; in other optional implementations, only the LVP, UPS, and lithium battery cabinet may be added while the other accessories of the original power device are reused directly, which is not specifically limited in this application.
It can be understood that the numbers of computing devices, cooling devices, power devices, pipe shafts, computing device module boxes, cooling device module boxes, power supply and distribution module boxes, auxiliary pipe-shaft boxes, and so on shown in the above embodiments are chosen for ease of understanding, and this application places no specific limits on these numbers.
With the implementation shown in Figures 9A and 9B, when the data center needs to be expanded, only the expansion power devices need to be added and the original wiring can be used directly, with no new wiring, thereby avoiding dense wiring in the data center. Moreover, the above approach also allows cooling expansion while expanding the power devices, and during cooling expansion only the expansion cooling devices need to be added, directly using the reserved water inlets and outlets without opening new inlet or outlet pipes. It follows that this implementation deploys wiring and pipes to final specifications at the initial construction of the data center, so that subsequent expansion can directly reuse the original wiring and pipes without laying new ones, which also reduces the expansion cost of the data center.
The above are only specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (10)

  1. A data center, characterized in that the data center comprises N computing devices and T power modules, wherein each of the T power modules comprises a first power device and a busbar, the busbar comprises a first electrical connection end, N intermediate ends, and a second electrical connection end, the first electrical connection end is connected to the first power device, the N intermediate ends are respectively connected to the N computing devices, a plug-in node is provided on the busbar between the (I-1)-th intermediate end and the I-th intermediate end of the N intermediate ends, and the plug-in node divides the busbar into a first sub-line and a second sub-line; wherein T is a positive integer, N and I are positive integers greater than or equal to 2, and I<N;
    for at least one of the T power modules:
    before the data center is expanded, the plug-in node of the at least one power module connects the first sub-line and the second sub-line, and the second electrical connection end of the at least one power module is idle;
    after the data center is expanded, the plug-in node of the at least one power module disconnects the first sub-line from the second sub-line, and the second electrical connection end of the at least one power module is connected to an expansion second power device.
  2. The data center according to claim 1, characterized in that the N computing devices are placed in one or more computing device module boxes, the first power devices of the T power modules are placed in one or more power device module boxes, and the one or more computing device module boxes and the one or more power device module boxes are arranged on the same floor of the data center.
  3. The data center according to claim 2, characterized in that, when the number of the one or more computing device module boxes is greater than or equal to 2, the one or more computing device module boxes are arranged side by side to form a computing device module box group, and the one or more power device module boxes are placed on one side or both sides of the computing device module box group.
  4. The data center according to any one of claims 1 to 3, characterized in that the data center further comprises a main water inlet pipe, a main water outlet pipe, and L cooling devices, the L cooling devices being used to cool the N computing devices; P water inlets are provided on the main water inlet pipe and P water outlets are provided on the main water outlet pipe, L of the P water inlets are respectively connected to the water inlet ends of the L cooling devices, and L of the P water outlets are respectively connected to the water outlet ends of the L cooling devices; P and L are positive integers, and P>L;
    before the data center is expanded, the P-L water inlets of the P water inlets other than the L water inlets are idle, and the P-L water outlets of the P water outlets other than the L water outlets are idle;
    after the data center is expanded, one or more of the idle P-L water inlets are respectively connected to the water inlet ends of one or more expansion cooling devices, and one or more of the idle P-L water outlets are respectively connected to the water outlet ends of the one or more expansion cooling devices.
  5. The data center according to claim 4, characterized in that the L cooling devices are placed in one or more cooling device module boxes, and when the number of the one or more cooling device module boxes is greater than or equal to 2, the one or more cooling device module boxes are placed side by side, with the side faces of any two adjacent cooling device module boxes connected.
  6. The data center according to claim 5, characterized in that the one or more cooling device module boxes and the one or more computing device module boxes are arranged on the same floor of the data center, and the one or more cooling device module boxes are placed on the end-wall side of the one or more computing device module boxes, wherein the end wall is provided on the side face perpendicular to the side faces along which the one or more computing device module boxes are arranged side by side;
    an air supply duct and an air return duct are provided on the end wall, and the L cooling devices in the one or more cooling device module boxes cool the N computing devices through the air supply duct and the air return duct.
  7. The data center according to claim 6, characterized in that, when T is a positive integer greater than or equal to 2, for each of the T power modules:
    the busbar of the power module exits the power device module box in which the first power device of the power module is located, crosses the one or more cooling device module boxes, leads out N sub-lines at the end wall respectively connected to the N computing devices, and continues into and terminates in the power device module box in which the first power device of another power module is located;
    wherein the first electrical connection end of the power module is located in the power device module box in which the first power device of the power module is located, and the idle second electrical connection end of the power module is located in the power device module box in which the first power device of the other power module is located.
  8. The data center according to any one of claims 5 to 7, characterized in that the one or more cooling device module boxes placed side by side form a cooling device module box group;
    the data center further comprises one or more pipe shafts, and the one or more pipe shafts are provided on one side or both sides of the cooling device module box group.
  9. A capacity expansion method for a data center, characterized in that the method is applied to the data center according to any one of claims 1 to 8, and the method comprises:
    detecting the device power of the N computing devices in the data center;
    if the device power is greater than a preset expansion device power threshold, determining at least one power module from the T power modules, and for each of the at least one power module, performing:
    controlling the first power device in the power module to be in a powered-off state, and disconnecting the plug-in node of the power module;
    after detecting that an expansion second power device is connected to the idle second electrical connection end of the power module, controlling the first power device and the expansion second power device to be in a power supply state.
  10. The method according to claim 9, characterized in that, when the data center further comprises a main water inlet pipe, a main water outlet pipe, and L cooling devices:
    after the controlling the first power device and the expansion second power device to be in a power supply state, the method further comprises:
    detecting the temperature of the N computing devices;
    if the temperature is greater than a preset temperature threshold, suspending the water inflow operation of the main water inlet pipe and the water outflow operation of the main water outlet pipe;
    after detecting that an expansion cooling device is connected to the main water inlet pipe and the main water outlet pipe, resuming the water inflow operation of the main water inlet pipe and the water outflow operation of the main water outlet pipe.
PCT/CN2020/111928 2020-08-27 2020-08-27 Data center and capacity expansion method WO2022041083A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2020/111928 WO2022041083A1 (zh) Data center and capacity expansion method
EP20950753.2A EP4191364A4 (en) 2020-08-27 2020-08-27 DATA CENTER AND EXPANSION METHOD
CN202080005884.3A CN114503052A (zh) Data center and capacity expansion method
US18/174,080 US20230199998A1 (en) 2020-08-27 2023-02-24 Data center and expansion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/111928 WO2022041083A1 (zh) Data center and capacity expansion method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/174,080 Continuation US20230199998A1 (en) 2020-08-27 2023-02-24 Data center and expansion method

Publications (1)

Publication Number Publication Date
WO2022041083A1 true WO2022041083A1 (zh) 2022-03-03

Family

ID=80354456

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111928 WO2022041083A1 (zh) Data center and capacity expansion method

Country Status (4)

Country Link
US (1) US20230199998A1 (zh)
EP (1) EP4191364A4 (zh)
CN (1) CN114503052A (zh)
WO (1) WO2022041083A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090152216A1 (en) * 2007-12-13 2009-06-18 International Business Machines Corporation Rack system providing flexible configuration of computer systems with front access
CN102436610A (zh) * 2011-12-13 2012-05-02 无锡互惠信息技术有限公司 Data center intelligent management system and method based on radio frequency identification
CN102662434A (zh) * 2012-03-23 2012-09-12 华为技术有限公司 Modular data center
CN103595138A (zh) * 2013-11-21 2014-02-19 国网上海市电力公司 Smart microgrid system
US20190171485A1 (en) * 2017-12-05 2019-06-06 Western Digital Technologies, Inc. Data Processing Offload Using In-Storage Code Execution

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2194656B1 (en) * 2008-12-03 2014-06-25 ABB Research Ltd. Electrical power network management system
WO2012109401A1 (en) * 2011-02-09 2012-08-16 Avocent Infrastructure control fabric system and method
US9606316B1 (en) * 2014-05-01 2017-03-28 Amazon Technologies, Inc. Data center infrastructure
US9929554B2 (en) * 2014-06-25 2018-03-27 Amazon Technologies, Inc. Power busway interposer
CN105376986B (zh) * 2014-07-16 2018-01-02 阿里巴巴集团控股有限公司 Modular data center
US9983248B1 (en) * 2015-03-19 2018-05-29 Amazon Technologies, Inc. Incremental data center infrastructure commissioning
US9454189B1 (en) * 2015-04-16 2016-09-27 Quanta Computer Inc. Systems and methods for distributing power in a server system
CN206302197U (zh) * 2016-11-18 2017-07-04 中国电力建设股份有限公司 Off-grid photovoltaic power generation system for a containerized data center
US10340669B1 (en) * 2017-06-28 2019-07-02 Amazon Technologies, Inc. Power distribution loop with flow-through junction locations
US10681836B2 (en) * 2018-04-23 2020-06-09 Dell Products, L.P. Configurable fuse box for modular data center
CN209119844U (zh) * 2018-11-16 2019-07-16 北京中网华通设计咨询有限公司 Power distribution system for micro-module IT cabinets in a data center main equipment room


Also Published As

Publication number Publication date
EP4191364A1 (en) 2023-06-07
EP4191364A4 (en) 2023-10-25
CN114503052A (zh) 2022-05-13
US20230199998A1 (en) 2023-06-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20950753

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020950753

Country of ref document: EP

Effective date: 20230228

NENP Non-entry into the national phase

Ref country code: DE