WO2020103129A1 - Server - Google Patents

Server

Info

Publication number
WO2020103129A1
Authority
WO
WIPO (PCT)
Prior art keywords
sub
accommodating cavity
arithmetic circuit
partition
cavity
Prior art date
Application number
PCT/CN2018/117184
Other languages
English (en)
French (fr)
Inventor
张黎明
Original Assignee
北京比特大陆科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京比特大陆科技有限公司 filed Critical 北京比特大陆科技有限公司
Priority to PCT/CN2018/117184 priority Critical patent/WO2020103129A1/zh
Publication of WO2020103129A1 publication Critical patent/WO2020103129A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00: Details not covered by groups G06F3/00-G06F13/00 and G06F21/00
    • G06F1/16: Constructional details or arrangements
    • G06F1/18: Packaging or power distribution

Definitions

  • This application relates to the field of data processing technology, and in particular to a server.
  • As a commonly used data processing device, a server must be capable of processing large amounts of data. Therefore, how to properly plan the internal structure of a server to improve its data processing capability is receiving increasing attention.
  • the server includes a chassis and an x86 motherboard.
  • the x86 motherboard is provided with a central processing unit (CPU) chip for processing data.
  • the x86 motherboard is installed at the bottom of the chassis.
  • An embodiment of the present disclosure provides a server to solve the problem of low computing power of the server.
  • An embodiment of the present disclosure provides a server, including: a box, a partition, and a data processing device. An accommodating cavity is formed inside the box, and the partition is disposed in the accommodating cavity to divide it into a plurality of sub-accommodating cavities. The data processing device includes an x86 motherboard and a first arithmetic circuit board connected to the x86 motherboard; the x86 motherboard and the first arithmetic circuit board are respectively accommodated in different sub-accommodating cavities.
  • the box includes a bottom wall, a top wall disposed opposite to the bottom wall, and a side wall fixed between the bottom wall and the top wall; the bottom wall, the top wall, and the side wall together enclose the accommodating cavity; the partition is connected to the side wall.
  • a slot is provided on the side wall, and the partition is inserted into the slot.
  • the server as described above, wherein a CPU chip is provided on the x86 motherboard, the first arithmetic circuit board includes a plurality of first AI chips, the x86 motherboard is disposed in the first sub-accommodating cavity, and the first arithmetic circuit board is disposed in the second sub-accommodating cavity.
  • the data processing device further includes a control board and a routing board; the control board is connected to the x86 motherboard through the routing board, and the first arithmetic circuit board is connected to the routing board and the control board respectively; the control board and the routing board are disposed in the second sub-accommodating cavity.
  • the data processing device further includes a plurality of second arithmetic circuit boards, each of which includes a plurality of second AI chips; the second arithmetic circuit boards are connected to the control board and the routing board respectively, and are disposed in the third sub-accommodating cavity.
  • each of the sub-accommodating cavities is provided with a fan.
  • the server provided by the embodiment of the present disclosure includes a box, a partition, and a data processing device; an accommodating cavity is formed inside the box, and the partition is disposed in the accommodating cavity to divide it into a plurality of sub-accommodating cavities; the data processing device includes an x86 motherboard and a first arithmetic circuit board connected to the x86 motherboard.
  • the x86 motherboard and the first arithmetic circuit board are accommodated in different sub-accommodating cavities.
  • compared with existing servers, a first arithmetic circuit board is added on the basis of the x86 motherboard, thereby improving the computing power of the server while also improving its space utilization.
  • FIG. 1 is a schematic structural diagram of a server in an embodiment of the present disclosure
  • FIG. 2 is a topology diagram of a server in an embodiment of the present disclosure
  • FIG. 3 is a top view of the second sub-accommodating cavity in the embodiment of the present disclosure.
  • FIG. 4 is a top view of a third sub-accommodating cavity in an embodiment of the present disclosure.
  • FIG. 1 is a schematic structural diagram of a server in an embodiment of the present disclosure
  • FIG. 2 is a topology diagram of the server in an embodiment of the present disclosure.
  • this embodiment provides a server, including: a box 100, a partition 200, and a data processing device 300; an accommodating cavity is formed inside the box 100, and the partition 200 is disposed in the accommodating cavity to divide it into a plurality of sub-accommodating cavities.
  • the data processing device 300 includes an x86 motherboard 310 and a first arithmetic circuit board 350 connected to the x86 motherboard 310; the x86 motherboard 310 and the first arithmetic circuit board 350 are accommodated in different sub-accommodating cavities.
  • the server can be used to process various data.
  • the server may include a box 100, a partition 200, and a data processing device 300.
  • the shape of the box 100 may be various.
  • the box 100 may have a rectangular parallelepiped shape or a cylindrical shape.
  • the box 100 may have a receiving cavity.
  • the shape of the receiving cavity may be the same as or different from the shape of the box 100.
  • the box 100 may have a thin-walled structure, so that the volume of the receiving cavity is large, so as to accommodate more data processing devices 300.
  • the box 100 may also have an opening, so as to facilitate the placement of the data processing device 300.
  • the material of the box 100 may also be of various types; for example, it may be made of common materials (such as metal or plastic) using common processing methods (such as welding or injection molding). Preferably, the box 100 may be welded from an aluminum alloy, so as to improve its strength and reduce its weight.
  • the partition 200 may be disposed in the box 100, and its structure may vary: it may be a grid-like structure or a solid plate-shaped structure, and its shape may be planar or curved, set according to the shape of the accommodating cavity; this is not specifically limited herein.
  • the material of the partition 200 may also vary; it may likewise be made of common materials (such as metal or plastic) using common processing methods (such as welding or injection molding).
  • the material of the partition 200 may be the same as the box 100 or different from the box 100.
  • the partition 200 may be made of the same material as the box 100, thereby reducing production costs.
  • the partition 200 may be welded to the box 100; alternatively, a connection hole may be formed in the partition 200 and a threaded hole in the box 100, so that a screw passed through the connection hole can be screwed into the box 100.
  • the partition 200 can divide the accommodating cavity into a plurality of sub-accommodating cavities.
  • for example, when the partition 200 is a planar structure, it can divide the accommodating cavity into two sub-accommodating cavities; when it is a curved structure, it may have multiple connections with the box 100 and thus divide the accommodating cavity into at least two sub-accommodating cavities.
  • the data processing device 300 may include an x86 motherboard 310 and a first arithmetic circuit board 350.
  • the x86 motherboard 310 may be a data processing structure common in the prior art.
  • the first arithmetic circuit board 350 may be any circuit structure capable of realizing data processing functions; it may be provided with multiple data processing chips, for example one or more of CPU chips, artificial intelligence (AI) chips, etc., which are not specifically limited herein.
  • the x86 motherboard may also be replaced with other data processing structures, such as an ARM chip, PLC, etc.
  • the x86 motherboard 310 and the first arithmetic circuit board 350 can be placed in different sub-accommodating cavities, thereby improving both the computing capability of the server and the space utilization of the box 100.
  • when assembling the server, the partition 200 may first be installed in the accommodating cavity, and the data processing device 300 may then be distributed among the sub-accommodating cavities to complete the assembly.
  • the server provided by the embodiment of the present disclosure includes a box, a partition, and a data processing device; an accommodating cavity is formed inside the box, and the partition is disposed in the accommodating cavity to divide it into a plurality of sub-accommodating cavities; the data processing device includes an x86 motherboard and a first arithmetic circuit board connected to the x86 motherboard.
  • the x86 motherboard and the first arithmetic circuit board are accommodated in different sub-accommodating cavities.
  • compared with existing servers, a first arithmetic circuit board is added on the basis of the x86 motherboard, thereby improving the computing power of the server while also improving its space utilization.
  • the box 100 includes a bottom wall 110, a top wall 120 disposed opposite to the bottom wall 110, and a side wall 130 fixed between the bottom wall 110 and the top wall 120.
  • the bottom wall 110, the top wall 120, and the side wall 130 together enclose the accommodating cavity; the partition 200 is connected to the side wall 130.
  • the box 100 may include a top wall 120, a bottom wall 110, and a side wall 130; the top wall 120 and the bottom wall 110 may be planar plate structures, the side wall 130 may be a ring-shaped structure, and together they may enclose the accommodating cavity.
  • An opening for installing the data processing device 300 may also be formed on the side wall 130.
  • the partition 200 can be put into the receiving cavity through the opening and installed on the side wall 130.
  • since the data processing device 300 is usually mounted on the bottom wall 110 and is generally low in height, the space above the bottom wall 110 tends to be wasted; fixing the partition 200 on the side wall 130 makes use of the space above the bottom wall 110 and further improves space utilization.
  • the partition 200 can be detachably connected to the side wall 130 in various ways; for example, the partition 200 can be bolted to the side wall 130, or snapped onto the side wall 130 through the engagement of a groove and a protrusion, so that the number of partitions 200 can be reasonably allocated according to the height of the data processing device 300.
  • a slot 131 is provided on the side wall 130, and the partition 200 is inserted into the slot 131.
  • the number of slots 131 may be one or more. When there is one slot 131, its length may be equal to the width of the side wall 130; when there are multiple slots 131, they may be arranged on the side wall 130 at intervals along the same straight line, thereby improving the support of the partition 200.
  • the partition 200 is disposed parallel to the bottom wall 110, which makes the installation of the partition 200 more convenient, and the space inside the box 100 is more regular and orderly.
  • the number of the partitions 200 is multiple.
  • the plurality of partitions 200 are arranged in parallel and spaced in the accommodating cavity.
  • the provision of the plurality of partitions 200 can further improve the space utilization of the accommodating cavity.
  • the number of partitions 200 is two; the two partitions 200 divide the accommodating cavity into a first sub-accommodating cavity 140, a second sub-accommodating cavity 150, and a third sub-accommodating cavity 160.
  • the two partitions 200 may both be parallel to the bottom wall 110, and both sides of each partition 200 may be connected to the side wall 130, dividing the accommodating cavity into three parts. The first sub-accommodating cavity 140, the third sub-accommodating cavity 160, and the second sub-accommodating cavity 150 may be arranged in order from top to bottom or from bottom to top; this is not specifically limited herein.
  • the volumes of the first sub-accommodating cavity 140, the second sub-accommodating cavity 150, and the third sub-accommodating cavity 160 may be the same or different, and may be specifically planned according to the data processing device 300 to be accommodated.
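The planning idea in the bullets above, sizing each sub-accommodating cavity around the devices it holds, can be sketched as a small helper that turns per-cavity device heights into partition positions above the bottom wall. The function name, the clearance parameter, and all dimensions are illustrative assumptions rather than values from the patent.

```python
def partition_positions(device_heights, clearance=10):
    """Given the height (mm) that each sub-cavity's tallest device needs,
    return the heights above the bottom wall at which each partition sits,
    leaving a fixed clearance above every device."""
    positions, level = [], 0
    for h in device_heights[:-1]:  # no partition above the topmost cavity
        level += h + clearance
        positions.append(level)
    return positions

# Three sub-cavities sized for hypothetical device heights of
# 40 mm, 60 mm, and 60 mm -> two partitions, matching the
# two-partition embodiment described above.
print(partition_positions([40, 60, 60]))  # [50, 120]
```

A cavity list of length n always yields n - 1 partitions, so the helper also covers the single-partition and many-partition variants mentioned earlier.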
  • the CPU chip 320 is provided on the x86 motherboard 310
  • the first arithmetic circuit board 350 includes a plurality of first AI chips 351
  • the x86 motherboard 310 is disposed in the first sub-accommodating cavity 140, and the first arithmetic circuit board 350 is disposed in the second sub-accommodating cavity 150.
  • the CPU chip 320 may be a structure commonly used in the prior art to implement data processing and control functions. Both the x86 motherboard 310 and the CPU chip 320 may be disposed in the first sub-accommodating cavity 140, serving as the main computing component of the server to improve its computing performance.
  • the first arithmetic circuit board 350 may be provided with a plurality of first AI chips 351, and the plurality of first AI chips 351 may communicate with the outside through the first arithmetic circuit board 350.
  • the number of the first AI chip 351 can be set according to the required computing requirements.
  • the first arithmetic circuit board 350 may be provided with six first AI chips 351, and the first AI chip 351 may be an artificial intelligence chip common in the prior art, which may have a data processing function.
  • the number of first arithmetic circuit boards 350 may also be more than one, set according to the actual space available in the second sub-accommodating cavity 150.
  • the data processing device 300 further includes a control board 330 and a routing board 340; the control board 330 is connected to the x86 motherboard 310 through the routing board 340, and the first arithmetic circuit board 350 is connected to the routing board 340 and the control board 330 respectively; the control board 330 and the routing board 340 are disposed in the second sub-accommodating cavity 150.
  • control board 330 may be a structure capable of implementing a control function in the prior art
  • routing board 340 may be a structure such as a switch capable of implementing network communication in the prior art.
  • the control board 330 and the plurality of first arithmetic circuit boards 350 can communicate through a 24-pin board-to-wire connector, and the routing board 340, the control board 330, and the first arithmetic circuit board 350 can also communicate through Gigabit Ethernet (RJ45/1G); in addition, the routing board 340 can communicate with the x86 motherboard 310, thereby communicatively connecting the data processing devices 300 in the first sub-accommodating cavity 140 and the second sub-accommodating cavity 150 and further improving the computing power of the server.
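The interconnect just described (the routing board bridging the x86 motherboard, the control board, and the first arithmetic circuit board over Gigabit Ethernet, with a 24-pin board-to-wire link between the control board and the arithmetic circuit boards) can be sketched as a link graph with a reachability check. The node names and the graph representation are illustrative assumptions; the patent only specifies which boards communicate.

```python
# Hypothetical sketch of the board interconnect described in the embodiment.
from collections import defaultdict

links = [
    ("x86_motherboard", "routing_board", "gigabit_ethernet"),
    ("control_board", "routing_board", "gigabit_ethernet"),
    ("first_arithmetic_board", "routing_board", "gigabit_ethernet"),
    ("first_arithmetic_board", "control_board", "24pin_board_to_wire"),
]

adjacency = defaultdict(set)
for a, b, _label in links:
    adjacency[a].add(b)
    adjacency[b].add(a)

def reachable(src, dst):
    """Iterative depth-first search over the undirected link graph."""
    seen, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for nxt in adjacency[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False

# The arithmetic board can exchange data with the x86 motherboard via
# the routing board, linking the first and second sub-cavities.
assert reachable("first_arithmetic_board", "x86_motherboard")
```

With the embodiment's links in place, every listed board reaches the x86 motherboard through the routing board, which is the communicative connection between the first and second sub-accommodating cavities described above.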
  • FIG. 3 is a top view of the second sub-accommodating cavity in the embodiment of the present disclosure. As one layout of the second sub-accommodating cavity 150, the routing board 340 may be placed at the left front, the power supply 370 at the left rear, the first arithmetic circuit board 350 at the right front, and the control board 330 at the right rear.
  • the data processing device 300 further includes a plurality of second arithmetic circuit boards 360; each second arithmetic circuit board 360 includes a plurality of second AI chips 361 and is connected to the control board 330 and the routing board 340 respectively; the second arithmetic circuit boards 360 are disposed in the third sub-accommodating cavity 160.
  • the second arithmetic circuit board 360 may be provided with a plurality of second AI chips 361, and the plurality of second AI chips 361 may communicate with the outside through the second arithmetic circuit board 360.
  • the second AI chip 361 may be an artificial intelligence chip common in the prior art, which may have a data processing function.
  • the second arithmetic circuit board 360 can communicate with the routing board 340 through Gigabit Ethernet, and it can also communicate with the control board 330 through a 24-pin board-to-wire connector.
  • the structure and function of the second arithmetic circuit board 360 may be the same as the first arithmetic circuit board 350, and the second AI chip 361 may also be the same as the first AI chip 351, so as to reduce the cost.
  • FIG. 4 is a top view of a third sub-accommodating cavity in an embodiment of the present disclosure.
  • the number of second arithmetic circuit boards 360 may be two, placed side by side, and each second arithmetic circuit board 360 may also be provided with six second AI chips 361 to improve the computing capability of the server.
  • the numbers of second arithmetic circuit boards 360 and second AI chips 361 can also be set according to the actually required computing capability; for example, when less computing capability is required, the second arithmetic circuit board 360 may be omitted, and when more is required, a plurality of second arithmetic circuit boards 360 may be provided, with the number of second AI chips 361 on each board being the same or different.
  • the first sub-accommodating cavity 140, the second sub-accommodating cavity 150, and the third sub-accommodating cavity 160 are arranged in sequence along the direction from the bottom wall 110 toward the top wall 120, i.e., from top to bottom: the third sub-accommodating cavity 160, the second sub-accommodating cavity 150, and the first sub-accommodating cavity 140. This facilitates retrofitting existing servers without changing too much of the original structure, thereby reducing costs.
  • a fan 400 for dissipating heat from the data processing device 300 is provided in the accommodating cavity, thereby lowering the temperature of the data processing device 300 and ensuring normal operation.
  • each sub-accommodating cavity may be provided with fans 400, and the number of fans 400 may be more than one, so that the sub-accommodating cavities are cooled separately and heat dissipation is more efficient.
  • at least two sub-accommodating cavities may share fans 400; two sub-accommodating cavities that generate little heat may be arranged adjacently and share fans 400, thereby improving fan utilization and reducing costs.
  • for example, the first sub-accommodating cavity 140 and the third sub-accommodating cavity 160 may share four fans 400, the second sub-accommodating cavity 150 may be provided with eight fans 400 of its own, and the fans 400 may all be disposed at the rear side of each sub-accommodating cavity.
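The fan-sharing example above, where the first and third sub-accommodating cavities share four fans while the second has eight of its own, can be tallied with a short sketch; the identifiers and the bank layout are assumptions made for illustration.

```python
# Hypothetical tally of the fan arrangement in the example: the low-heat
# sub-cavities 1 and 3 share one bank of fans, while sub-cavity 2,
# which holds the hotter boards, gets its own bank at the rear side.
fan_banks = [
    {"serves": {"first_sub_cavity", "third_sub_cavity"}, "fans": 4},
    {"serves": {"second_sub_cavity"}, "fans": 8},
]

def fans_for(cavity):
    """Total fans cooling a given sub-cavity (shared banks count fully)."""
    return sum(bank["fans"] for bank in fan_banks if cavity in bank["serves"])

total_fans = sum(bank["fans"] for bank in fan_banks)
print(total_fans)  # 12, versus 16 if every cavity had its own bank
```

Sharing a bank between the two cooler cavities is what raises fan utilization and lowers cost in the passage above.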
  • the terms "installation", "connected", "connection", "fixed", and the like should be understood in a broad sense: for example, a connection may be fixed, detachable, or integral; it may be a mechanical connection, an electrical connection, or mutual communication; it may be direct, or indirect through an intermediate medium; and it may be an internal communication between two components or an interaction between two components, unless otherwise clearly defined.

Abstract

The present invention provides a server, wherein the server includes a box, a partition, and a data processing device; an accommodating cavity is formed inside the box, and the partition is disposed in the accommodating cavity to divide it into a plurality of sub-accommodating cavities; the data processing device includes an x86 motherboard and a first arithmetic circuit board connected to the x86 motherboard, and the x86 motherboard and the first arithmetic circuit board are accommodated in different sub-accommodating cavities. Compared with servers in the prior art, a first arithmetic circuit board is added on the basis of the x86 motherboard, thereby improving the computing power of the server while also improving its space utilization.

Description

Server
Technical Field
This application relates to the field of data processing technology, and in particular to a server.
Background
As a commonly used data processing device, a server must be capable of processing large amounts of data. Therefore, how to properly plan the internal structure of a server to improve its data processing capability is receiving increasing attention.
At present, a server includes a chassis and an x86 motherboard; the x86 motherboard is provided with a central processing unit (CPU) chip for processing data and is installed at the bottom of the chassis.
However, as requirements on server computing power gradually increase, the computing power of the x86 motherboard alone increasingly struggles to meet demand.
The above background is provided only to aid understanding of this application, and does not constitute an admission or acknowledgment that any of it forms part of the common general knowledge relative to this application.
Summary
An embodiment of the present disclosure provides a server to solve the problem of low server computing power.
An embodiment of the present disclosure provides a server, including: a box, a partition, and a data processing device; an accommodating cavity is formed inside the box, and the partition is disposed in the accommodating cavity to divide the accommodating cavity into a plurality of sub-accommodating cavities; the data processing device includes an x86 motherboard and a first arithmetic circuit board connected to the x86 motherboard, and the x86 motherboard and the first arithmetic circuit board are respectively accommodated in different sub-accommodating cavities.
In the server as described above, the box includes a bottom wall, a top wall disposed opposite to the bottom wall, and a side wall fixed between the bottom wall and the top wall; the bottom wall, the top wall, and the side wall together enclose the accommodating cavity; and the partition is connected to the side wall.
In the server as described above, the partition is detachably connected to the side wall.
In the server as described above, a slot is provided on the side wall, and the partition is inserted into the slot.
In the server as described above, the partition is disposed parallel to the bottom wall.
In the server as described above, the number of partitions is more than one, and the plurality of partitions are arranged in parallel and at intervals in the accommodating cavity.
In the server as described above, the number of partitions is two; the two partitions divide the accommodating cavity into a first sub-accommodating cavity, a second sub-accommodating cavity, and a third sub-accommodating cavity.
In the server as described above, a CPU chip is provided on the x86 motherboard, the first arithmetic circuit board includes a plurality of first AI chips, the x86 motherboard is disposed in the first sub-accommodating cavity, and the first arithmetic circuit board is disposed in the second sub-accommodating cavity.
In the server as described above, the data processing device further includes a control board and a routing board; the control board is connected to the x86 motherboard through the routing board, the first arithmetic circuit board is connected to the routing board and the control board respectively, and the control board and the routing board are disposed in the second sub-accommodating cavity.
In the server as described above, the data processing device further includes a plurality of second arithmetic circuit boards; each second arithmetic circuit board includes a plurality of second AI chips and is connected to the control board and the routing board respectively; the second arithmetic circuit boards are disposed in the third sub-accommodating cavity.
In the server as described above, the first sub-accommodating cavity, the third sub-accommodating cavity, and the second sub-accommodating cavity are arranged in sequence along the direction from the bottom wall toward the top wall.
In the server as described above, a fan for dissipating heat from the data processing device is provided in the accommodating cavity.
In the server as described above, each sub-accommodating cavity is provided with the fan.
In the server as described above, at least two sub-accommodating cavities share the fan.
In the server provided by the embodiment of the present disclosure, a box, a partition, and a data processing device are provided; an accommodating cavity is formed inside the box, and the partition is disposed in the accommodating cavity to divide it into a plurality of sub-accommodating cavities; the data processing device includes an x86 motherboard and a first arithmetic circuit board connected to the x86 motherboard, and the x86 motherboard and the first arithmetic circuit board are accommodated in different sub-accommodating cavities. Compared with current servers, a first arithmetic circuit board is added on the basis of the x86 motherboard, thereby improving the computing power of the server while also improving its space utilization.
Brief Description of the Drawings
One or more embodiments are illustrated by the corresponding accompanying drawings; these illustrations and drawings do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and the drawings are not drawn to scale. In the drawings:
FIG. 1 is a schematic structural diagram of a server in an embodiment of the present disclosure;
FIG. 2 is a topology diagram of a server in an embodiment of the present disclosure;
FIG. 3 is a top view of a second sub-accommodating cavity in an embodiment of the present disclosure;
FIG. 4 is a top view of a third sub-accommodating cavity in an embodiment of the present disclosure.
Description of reference numerals:
100: box;
110: bottom wall;
120: top wall;
130: side wall;
131: slot;
140: first sub-accommodating cavity;
150: second sub-accommodating cavity;
160: third sub-accommodating cavity;
200: partition;
300: data processing device;
310: x86 motherboard;
320: CPU chip;
330: control board;
340: routing board;
350: first arithmetic circuit board;
351: first AI chip;
360: second arithmetic circuit board;
361: second AI chip;
370: power supply;
400: fan.
Detailed Description
To provide a more detailed understanding of the features and technical content of the embodiments of the present disclosure, their implementation is described in detail below with reference to the accompanying drawings, which are for reference and illustration only and are not intended to limit the embodiments. In the following technical description, for convenience of explanation, numerous details are provided to give a full understanding of the disclosed embodiments. However, one or more embodiments may still be implemented without these details. In other cases, well-known structures and devices may be shown in simplified form to simplify the drawings.
FIG. 1 is a schematic structural diagram of a server in an embodiment of the present disclosure; FIG. 2 is a topology diagram of the server in an embodiment of the present disclosure.
Referring to FIG. 1 and FIG. 2, this embodiment provides a server, including: a box 100, a partition 200, and a data processing device 300. An accommodating cavity is formed inside the box 100, and the partition 200 is disposed in the accommodating cavity to divide it into a plurality of sub-accommodating cavities. The data processing device 300 includes an x86 motherboard 310 and a first arithmetic circuit board 350 connected to the x86 motherboard 310; the x86 motherboard 310 and the first arithmetic circuit board 350 are accommodated in different sub-accommodating cavities. Specifically, the server can be used to process various kinds of data, and may include the box 100, the partition 200, and the data processing device 300.
The box 100 may take various shapes; for example, it may be a rectangular parallelepiped or a cylinder. The box 100 may have one accommodating cavity, whose shape may be the same as or different from that of the box 100. Preferably, the box 100 may be a thin-walled structure, so that the accommodating cavity is large and can hold more data processing devices 300. In addition, the box 100 may have an opening to facilitate placing the data processing device 300.
The material of the box 100 may also vary; for example, it may be made of materials common in the prior art (such as metal or plastic) using common processing methods (such as welding or injection molding). Preferably, the box 100 may be welded from an aluminum alloy, so as to improve its strength and reduce its weight.
The partition 200 may be disposed in the box 100, and its structure may vary: for example, it may be a grid-like structure or a solid plate-shaped structure, and its shape may be planar or curved, set according to the shape of the accommodating cavity; this is not specifically limited herein.
The material of the partition 200 may also vary; it may likewise be made of materials common in the prior art (such as metal or plastic) using common processing methods (such as welding or injection molding). The material of the partition 200 may be the same as or different from that of the box 100. Preferably, the partition 200 may be made of the same material as the box 100, thereby reducing production costs.
In addition, the partition 200 may be connected to the box 100 in various ways; for example, it may be welded to the box 100, or a connection hole may be formed in the partition 200 and a threaded hole in the box 100, so that a screw passed through the connection hole can be screwed into the box 100.
The partition 200 can divide the accommodating cavity into a plurality of sub-accommodating cavities. For example, when the partition 200 is a planar structure, it can divide the accommodating cavity into two sub-accommodating cavities; when it is a curved structure, it may have multiple connections with the box 100 and thus divide the accommodating cavity into at least two sub-accommodating cavities.
The data processing device 300 may include the x86 motherboard 310 and the first arithmetic circuit board 350. The x86 motherboard 310 may be a data processing structure common in the prior art, and the first arithmetic circuit board 350 may be any circuit structure capable of realizing data processing functions; it may be provided with multiple data processing chips, for example one or more of CPU chips, artificial intelligence (AI) chips, etc., which are not specifically limited herein.
It will be understood that, in some embodiments, the x86 motherboard may also be replaced with other data processing structures, such as an ARM chip, a PLC, etc.
The x86 motherboard 310 and the first arithmetic circuit board 350 can be placed in different sub-accommodating cavities, thereby improving both the computing capability of the server and the space utilization of the box 100.
When assembling the server, the partition 200 may first be installed in the accommodating cavity, and the data processing device 300 may then be distributed among the sub-accommodating cavities to complete the assembly.
In the server provided by the embodiment of the present disclosure, a box, a partition, and a data processing device are provided; an accommodating cavity is formed inside the box, and the partition is disposed in the accommodating cavity to divide it into a plurality of sub-accommodating cavities; the data processing device includes an x86 motherboard and a first arithmetic circuit board connected to the x86 motherboard, and the x86 motherboard and the first arithmetic circuit board are accommodated in different sub-accommodating cavities. Compared with current servers, a first arithmetic circuit board is added on the basis of the x86 motherboard, thereby improving the computing power of the server while also improving its space utilization.
Further, the box 100 includes a bottom wall 110, a top wall 120 disposed opposite to the bottom wall 110, and a side wall 130 fixed between the bottom wall 110 and the top wall 120; the bottom wall 110, the top wall 120, and the side wall 130 together enclose the accommodating cavity, and the partition 200 is connected to the side wall 130.
Specifically, as a preferred structure of the box 100, the box 100 may include the top wall 120, the bottom wall 110, and the side wall 130; the top wall 120 and the bottom wall 110 may be planar plate structures, the side wall 130 may be a ring-shaped structure, and together they may enclose the accommodating cavity. An opening for installing the data processing device 300 may also be formed in the side wall 130, and the partition 200 can be put into the accommodating cavity through the opening and installed on the side wall 130. Since the data processing device 300 in the prior art is mostly mounted on the bottom wall 110 and is generally low in height, the space above the bottom wall 110 tends to be wasted; fixing the partition 200 on the side wall 130 makes use of the space above the bottom wall 110 and further improves space utilization.
Further preferably, the partition 200 is detachably connected to the side wall 130, and this detachable connection can take various forms; for example, the partition 200 may be bolted to the side wall 130, or snapped onto the side wall 130 through the engagement of a groove and a protrusion, so that the number of partitions 200 can be reasonably allocated according to the height of the data processing device 300.
Further, a slot 131 is provided on the side wall 130, and the partition 200 is inserted into the slot 131. The number of slots 131 may be one or more; when there is one slot 131, its length may be equal to the width of the side wall 130, and when there are multiple slots 131, they may be arranged on the side wall 130 at intervals along the same straight line, thereby improving the support of the partition 200.
As a preferred embodiment, the partition 200 is disposed parallel to the bottom wall 110, which makes installation of the partition 200 more convenient and the space inside the box 100 more regular and orderly.
On the basis of the above embodiments, the number of partitions 200 is more than one, and the plurality of partitions 200 are arranged in parallel and at intervals in the accommodating cavity; providing multiple partitions 200 can further improve the space utilization of the accommodating cavity.
As a preferred implementation, the number of partitions 200 is two; the two partitions 200 divide the accommodating cavity into a first sub-accommodating cavity 140, a second sub-accommodating cavity 150, and a third sub-accommodating cavity 160.
Specifically, the two partitions 200 may both be parallel to the bottom wall 110, and both sides of each partition 200 may be connected to the side wall 130, dividing the accommodating cavity into three parts. The first sub-accommodating cavity 140, the third sub-accommodating cavity 160, and the second sub-accommodating cavity 150 may be arranged in order from top to bottom or from bottom to top, which is not specifically limited herein. In addition, the volumes of the first sub-accommodating cavity 140, the second sub-accommodating cavity 150, and the third sub-accommodating cavity 160 may be the same or different, and may be planned according to the data processing devices 300 to be accommodated.
On the basis of the above embodiments, a CPU chip 320 is provided on the x86 motherboard 310, the first arithmetic circuit board 350 includes a plurality of first AI chips 351, the x86 motherboard 310 is disposed in the first sub-accommodating cavity 140, and the first arithmetic circuit board 350 is disposed in the second sub-accommodating cavity 150.
Specifically, the CPU chip 320 may be a structure commonly used in the prior art to implement data processing and control functions. Both the x86 motherboard 310 and the CPU chip 320 may be disposed in the first sub-accommodating cavity 140, serving as the main computing component of the server to improve its computing performance.
The first arithmetic circuit board 350 may be provided with a plurality of first AI chips 351, which may communicate with the outside through the first arithmetic circuit board 350. The number of first AI chips 351 can be set according to the required computing demand; for example, the first arithmetic circuit board 350 may be provided with six first AI chips 351, and the first AI chip 351 may be an artificial intelligence chip common in the prior art with data processing functions.
In addition, the number of first arithmetic circuit boards 350 may also be more than one, set according to the actual space available in the second sub-accommodating cavity 150.
Further, the data processing device 300 also includes a control board 330 and a routing board 340; the control board 330 is connected to the x86 motherboard 310 through the routing board 340, the first arithmetic circuit board 350 is connected to the routing board 340 and the control board 330 respectively, and the control board 330 and the routing board 340 are disposed in the second sub-accommodating cavity 150.
Specifically, the control board 330 may be a structure in the prior art capable of implementing control functions, and the routing board 340 may be a structure in the prior art, such as a switch, capable of implementing network communication. The control board 330 and the plurality of first arithmetic circuit boards 350 can communicate through a 24-pin board-to-wire connector, and the routing board 340, the control board 330, and the first arithmetic circuit board 350 can also communicate through Gigabit Ethernet (RJ45/1G); in addition, the routing board 340 can also communicate with the x86 motherboard 310, thereby communicatively connecting the data processing devices 300 in the first sub-accommodating cavity 140 and the second sub-accommodating cavity 150 and further improving the computing power of the server.
FIG. 3 is a top view of the second sub-accommodating cavity in an embodiment of the present disclosure. Referring to FIG. 3, as one layout of the second sub-accommodating cavity 150, the routing board 340 may be placed at the left front, the power supply 370 at the left rear, the first arithmetic circuit board 350 at the right front, and the control board 330 at the right rear.
Furthermore, the data processing device 300 also includes a plurality of second arithmetic circuit boards 360; each second arithmetic circuit board 360 includes a plurality of second AI chips 361 and is connected to the control board 330 and the routing board 340 respectively; the second arithmetic circuit boards 360 are disposed in the third sub-accommodating cavity 160.
Specifically, the second arithmetic circuit board 360 may be provided with a plurality of second AI chips 361, which may communicate with the outside through the second arithmetic circuit board 360. The second AI chip 361 may be an artificial intelligence chip common in the prior art with data processing functions. The second arithmetic circuit board 360 can communicate with the routing board 340 through Gigabit Ethernet, and with the control board 330 through a 24-pin board-to-wire connector. The structure and function of the second arithmetic circuit board 360 may be the same as those of the first arithmetic circuit board 350, and the second AI chip 361 may be the same as the first AI chip 351, so as to reduce costs.
FIG. 4 is a top view of the third sub-accommodating cavity in an embodiment of the present disclosure. Referring to FIG. 4, preferably, the number of second arithmetic circuit boards 360 may be two, placed side by side, and each second arithmetic circuit board 360 may also be provided with six second AI chips 361 to improve the computing capability of the server. Of course, the numbers of second arithmetic circuit boards 360 and second AI chips 361 can also be set according to the actually required computing capability; for example, when less computing capability is required, the second arithmetic circuit board 360 may be omitted, and when more is required, a plurality of second arithmetic circuit boards 360 may be provided, with the number of second AI chips 361 on each board being the same or different.
Further, the first sub-accommodating cavity 140, the second sub-accommodating cavity 150, and the third sub-accommodating cavity 160 are arranged in sequence along the direction from the bottom wall 110 toward the top wall 120, i.e., from top to bottom: the third sub-accommodating cavity 160, the second sub-accommodating cavity 150, and the first sub-accommodating cavity 140. This facilitates retrofitting existing servers without changing too much of the original structure, thereby reducing costs.
On the basis of the above embodiments, a fan 400 for dissipating heat from the data processing device 300 is provided in the accommodating cavity, thereby lowering the temperature of the data processing device 300 and ensuring normal operation.
Further, each sub-accommodating cavity is provided with fans 400, and the number of fans 400 may be more than one, so that the sub-accommodating cavities are cooled separately and heat dissipation is more efficient.
Preferably, at least two sub-accommodating cavities share fans 400; two sub-accommodating cavities that generate little heat may be arranged adjacently and share fans 400, thereby improving fan utilization and reducing costs. For example, the first sub-accommodating cavity 140 and the third sub-accommodating cavity 160 may share four fans 400, the second sub-accommodating cavity 150 may be provided with eight fans 400 of its own, and the fans 400 may all be disposed at the rear side of each sub-accommodating cavity.
In the description of this application, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial", and "circumferential", are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of this application, and do not indicate or imply that the referenced devices or elements must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be construed as limiting this application.
In this application, unless otherwise expressly specified and defined, terms such as "installation", "connected", "connection", and "fixed" should be understood in a broad sense: for example, a connection may be fixed, detachable, or integral; it may be a mechanical connection, an electrical connection, or mutual communication; it may be direct, or indirect through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific circumstances.
In the above description, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of this application. In this specification, schematic statements of these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine different embodiments or examples, and features of different embodiments or examples, described in this specification.
The terms used in this application are for describing embodiments only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, when used in this application, the term "comprise" and its variants "comprises" and/or "comprising" specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The above technical description may refer to the accompanying drawings, which form a part of this application and show, by way of description, implementations in accordance with the described embodiments. Although these embodiments are described in sufficient detail to enable those skilled in the art to implement them, they are non-limiting; other embodiments may be used, and changes may be made without departing from the scope of the described embodiments. All such changes are considered to be included in the disclosed embodiments and the claims.
In addition, terms are used in the above technical description to provide a thorough understanding of the described embodiments. However, excessive detail is not required to implement the described embodiments, and the above description of the embodiments is therefore presented for explanation and description. The embodiments presented above, and the examples disclosed according to them, are provided separately to add context and aid understanding of the described embodiments. The above specification is not intended to be exhaustive or to limit the described embodiments to the precise form of the present disclosure. Several modifications, adaptations, and variations are possible in light of the above teachings. In some cases, well-known processing steps have not been described in detail to avoid unnecessarily affecting the described embodiments.

Claims (14)

  1. A server, characterized by comprising: a box, a partition, and a data processing device;
    wherein an accommodating cavity is formed inside the box, and the partition is disposed in the accommodating cavity to divide the accommodating cavity into a plurality of sub-accommodating cavities;
    the data processing device comprises an x86 motherboard and a first arithmetic circuit board connected to the x86 motherboard, and the x86 motherboard and the first arithmetic circuit board are respectively accommodated in different ones of the sub-accommodating cavities.
  2. The server according to claim 1, characterized in that the box comprises a bottom wall, a top wall disposed opposite to the bottom wall, and a side wall fixed between the bottom wall and the top wall; the bottom wall, the top wall, and the side wall together enclose the accommodating cavity; and the partition is connected to the side wall.
  3. The server according to claim 2, characterized in that the partition is detachably connected to the side wall.
  4. The server according to claim 3, characterized in that a slot is provided on the side wall, and the partition is inserted into the slot.
  5. The server according to claim 4, characterized in that the partition is disposed parallel to the bottom wall.
  6. The server according to any one of claims 2-5, characterized in that the number of partitions is more than one, and the plurality of partitions are arranged in parallel and at intervals in the accommodating cavity.
  7. The server according to claim 6, characterized in that the number of partitions is two; the two partitions divide the accommodating cavity into a first sub-accommodating cavity, a second sub-accommodating cavity, and a third sub-accommodating cavity.
  8. The server according to claim 7, characterized in that a CPU chip is provided on the x86 motherboard, the first arithmetic circuit board includes a plurality of first AI chips, the x86 motherboard is disposed in the first sub-accommodating cavity, and the first arithmetic circuit board is disposed in the second sub-accommodating cavity.
  9. The server according to claim 8, characterized in that the data processing device further comprises a control board and a routing board; the control board is connected to the x86 motherboard through the routing board, the first arithmetic circuit board is connected to the routing board and the control board respectively, and the control board and the routing board are disposed in the second sub-accommodating cavity.
  10. The server according to claim 9, characterized in that the data processing device further comprises a plurality of second arithmetic circuit boards; each second arithmetic circuit board includes a plurality of second AI chips and is connected to the control board and the routing board respectively; and the second arithmetic circuit boards are disposed in the third sub-accommodating cavity.
  11. The server according to claim 10, characterized in that the first sub-accommodating cavity, the third sub-accommodating cavity, and the second sub-accommodating cavity are arranged in sequence along the direction from the bottom wall toward the top wall.
  12. The server according to any one of claims 1-5, characterized in that a fan for dissipating heat from the data processing device is provided in the accommodating cavity.
  13. The server according to claim 12, characterized in that each of the sub-accommodating cavities is provided with the fan.
  14. The server according to claim 12, characterized in that at least two of the sub-accommodating cavities share the fan.
PCT/CN2018/117184 2018-11-23 2018-11-23 Server WO2020103129A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/117184 WO2020103129A1 (zh) 2018-11-23 2018-11-23 服务器

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/117184 WO2020103129A1 (zh) 2018-11-23 2018-11-23 服务器

Publications (1)

Publication Number Publication Date
WO2020103129A1 true WO2020103129A1 (zh) 2020-05-28

Family

ID=70774288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/117184 WO2020103129A1 (zh) 2018-11-23 2018-11-23 服务器

Country Status (1)

Country Link
WO (1) WO2020103129A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639712A (zh) * 2008-07-31 2010-02-03 英业达股份有限公司 服务器
CN203149471U (zh) * 2012-12-18 2013-08-21 南京烽火星空通信发展有限公司 一种优化散热的机箱结构
CN103576743A (zh) * 2012-07-25 2014-02-12 成都亿友科技有限公司 集成式云计算服务器的集中控制装置
CN208077071U (zh) * 2018-02-28 2018-11-09 苏州丹卡精密机械有限公司 一种便于维修的服务器机箱
CN209132686U (zh) * 2018-11-23 2019-07-19 北京比特大陆科技有限公司 服务器

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639712A (zh) * 2008-07-31 2010-02-03 英业达股份有限公司 服务器
CN103576743A (zh) * 2012-07-25 2014-02-12 成都亿友科技有限公司 集成式云计算服务器的集中控制装置
CN203149471U (zh) * 2012-12-18 2013-08-21 南京烽火星空通信发展有限公司 一种优化散热的机箱结构
CN208077071U (zh) * 2018-02-28 2018-11-09 苏州丹卡精密机械有限公司 一种便于维修的服务器机箱
CN209132686U (zh) * 2018-11-23 2019-07-19 北京比特大陆科技有限公司 服务器

Similar Documents

Publication Publication Date Title
JP5313380B2 (ja) Server chassis
JP4184408B2 (ja) Modular platform system and apparatus
US20180027700A1 (en) Technologies for rack architecture
US20130135811A1 (en) Architecture For A Robust Computing System
CN205139812U (zh) Chassis and redundant power supply thereof
US10916818B2 (en) Self-activating thermal management system for battery pack
JP5715719B2 (ja) Heat conduction structure
US8441788B2 (en) Server
US8614890B2 (en) Chassis extension module
US20200140058A1 (en) Electronic speed controller assembly, power system, and unmanned aerial vehicle
WO2020103129A1 (zh) Server
CN216795610U (zh) Heat-dissipating housing and smart device having the same
US20210083340A1 (en) Shelf design for battery modules
US11528826B2 (en) Internal channel design for liquid cooled device
CN209132686U (zh) Server
WO2023273275A1 (zh) Liquid-cooled cabinet
JP7311647B2 (ja) Localized fluid acceleration in an immersion environment
US11324109B2 (en) Electronic load device and heat-dissipating load module
WO2017154077A1 (ja) Battery device and battery system
CN219181957U (zh) Hardware structure of a central computing platform, and vehicle
CN219590742U (zh) Chassis
CN220114570U (zh) Vehicle-mounted host and automobile
CN217608164U (zh) Heat dissipation structure and electrical device
WO2020103127A1 (zh) Server
CN214411312U (zh) Heat dissipation module and energy storage module

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18940739

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.09.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18940739

Country of ref document: EP

Kind code of ref document: A1