CN107239346A - Whole-rack computing resource pool node and computing resource pool architecture - Google Patents

Whole-rack computing resource pool node and computing resource pool architecture

Info

Publication number
CN107239346A
Authority
CN
China
Prior art keywords
gpu
node
whole rack
computing resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710433600.0A
Other languages
Chinese (zh)
Inventor
郭猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd filed Critical Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201710433600.0A priority Critical patent/CN107239346A/en
Publication of CN107239346A publication Critical patent/CN107239346A/en
Pending legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F 9/00
    • G06F 2209/50 - Indexing scheme relating to G06F 9/50
    • G06F 2209/5011 - Pool
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a whole-rack computing resource pool node and a computing resource pool architecture. The node is applied, in the form of a 1U node, to a whole-rack server configured with a management module and a compute node. Its structure includes a power board, a GPU node module and GPUs. The GPU node module is connected to the management module through the power board, implementing status monitoring of the GPU node module and management of the computing resources. A data exchange chip is configured in the GPU node module; the data exchange chip connects the compute node and the GPUs and implements the exchange of computing data between the GPUs and the compute node. Compared with the prior art, the whole-rack computing resource pool node and computing resource pool architecture of the present invention realize dynamic pooling of resources and automatic task distribution through upper-layer software pooling management, maximize the use of node resources, improve resource pool flexibility and utilization, reduce system energy consumption, and are highly practical.

Description

Whole-rack computing resource pool node and computing resource pool architecture
Technical field
The present invention relates to the field of computer technology, and in particular to a whole-rack computing resource pool node and a computing resource pool architecture.
Background art
With the rapid development of the Internet economy, massive data is impacting the entire data center industry with an unprecedented growth trend, placing higher requirements on IT infrastructure. As one of the core components of the data center, the server must also have its architecture optimized and restructured to meet the demands of future large-scale business growth.
In server resource restructuring architectures, computing resource restructuring is one of the important applications. At the same time, modularization and high density are important trends in server development, reflected in the gradual evolution from general-purpose servers to whole-rack servers.
Current computing resource pooling designs have not been applied to the whole-rack server field: integration density is low, energy consumption is high, centralized management is not possible, resource allocation is rigid, resource utilization is low, and installation and maintenance workloads are large.
Based on this, the present invention provides a whole-rack computing resource pool node and a computing resource pool architecture. It addresses the design of a whole-rack computing resource pool architecture: the computing resource pool forms 1U node modules applied to whole-rack servers, and implements functions such as cascade expansion, dynamic pooling and centralized management.
Summary of the invention
The technical task of the present invention is to provide, in view of the above shortcomings, a whole-rack computing resource pool node and a computing resource pool architecture.
A whole-rack computing resource pool node is applied, in the form of a 1U node, to a whole-rack server configured with a management module and a compute node. Its structure includes a power board, a GPU node module and GPUs. The GPU node module is connected to the above management module through the power board, implementing status monitoring of the GPU node module and management of the computing resources. A data exchange chip is configured in the GPU node module; the data exchange chip connects the compute node and the GPUs and implements the exchange of computing data between the GPUs and the compute node.
Power is supplied between the power board and the GPU node module through a copper busbar, with a supply voltage of 12 V.
The data exchange chip is configured with 2 data upstream ports and 4 data downstream ports. The 4 downstream ports are connected to 4 GPUs respectively, and 1 upstream port can be connected to the compute node. Both the upstream and downstream ports are PCIe interfaces.
A BMC chip, an MCPU chip and a PCIe switch chip, interconnected in sequence, are also configured in the GPU node module. The PCIe switch chip connects the above data exchange chip and is also connected to an expandable external management interface, which is a PCIe interface.
The computing resource pool node can be used for cascading, i.e. interconnecting at least two GPU node modules. The specific cascade structure is as follows: one upstream port of the first GPU node module is connected to the compute node, and its other upstream port is connected to one upstream port of another GPU node module; the external management interfaces of the two GPU node modules are interconnected, implementing intercommunication of PCIe management signals; cascading between that GPU node module and further GPU node modules is implemented in the same way.
In the GPU node module connected to the compute node, management of the GPU node module is implemented by the MCPU. The MCPU is connected to the external management interface and the data exchange chip through one PCIe switch chip, and the BMC chip implements dynamic selection of upstream management channel 1 or 2, i.e. which data upstream port is used. When the compute node module has a cascaded module, the management link is switched to channel 1, so that one MCPU manages 2 or N GPU node modules, where N is the number of cascaded modules, thereby implementing the cascading of GPU node modules.
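As a non-limiting illustration of the channel selection and cascade management described above, the behaviour can be modelled with the following sketch. The class and method names (GpuNodeModule, bmc_select_uplink, cascade) are hypothetical and are not part of the disclosed design; they only mirror the described behaviour of a BMC choosing between upstream channels 1 and 2 and a single MCPU in the head module managing N cascaded modules.

```python
# Minimal sketch (not the patented implementation) of uplink-channel selection
# and cascade management. All names are hypothetical.

class GpuNodeModule:
    def __init__(self, name: str):
        self.name = name
        self.uplink_channel = 2      # channel 2: directly attached to the compute node
        self.managed_modules = []    # populated only on the head module's MCPU

    def bmc_select_uplink(self, cascaded: bool) -> int:
        """BMC dynamically selects the upstream management channel.

        Channel 2 is used when the module faces the compute node directly;
        channel 1 is used when the module is reached through a cascade link.
        """
        self.uplink_channel = 1 if cascaded else 2
        return self.uplink_channel

    def cascade(self, downstream: "GpuNodeModule") -> None:
        """Head-module MCPU takes over management of a cascaded module."""
        downstream.bmc_select_uplink(cascaded=True)
        self.managed_modules.append(downstream)


# Usage: one head module attached to the compute node manages N = 2 cascaded modules.
head = GpuNodeModule("gpu-node-0")
head.bmc_select_uplink(cascaded=False)          # channel 2 towards the compute node
for i in range(1, 3):
    head.cascade(GpuNodeModule(f"gpu-node-{i}"))
print(head.uplink_channel, [m.name for m in head.managed_modules])
```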
A whole-rack computing resource pool architecture includes a compute node, several GPU node modules, a whole-rack management module and a whole-rack power busbar (BUSBAR). The compute node and the GPU node modules draw power from the whole-rack power busbar through their respective power boards, implementing centralized power supply for the computing resource pool. The whole-rack management module implements centralized management of the whole-rack computing resource pool, and the compute node serves as the host end of the computing resource pool and is connected to each GPU node module by cables to transmit PCIe data signals.
The BMC chips in the compute node and the GPU nodes communicate with the whole-rack management module through their respective power boards, thereby implementing centralized management of the computing resource pool. The whole-rack management module collects the resource information and resource utilization of the compute node and the GPU node modules and reports them to the upper-layer application software above the whole-rack management module.
The resource information obtained by the whole-rack management module through communication with the monitoring chips (BMCs) includes CPU utilization, GPU utilization and network bandwidth, and the resource utilization of the resource pool is reported to the upper-layer application software in a timely manner.
The upper-layer application software of the system uniformly encodes and manages all acquired GPU resources to form a GPU resource pool, calculates the workload saturation of each GPU in the GPU resource pool according to the specific resource utilization, and effectively adjusts the service applications of the resource pool, implementing dynamic pooling of resources. New computing tasks can also be distributed automatically, maximizing the use of node resources.
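The pooling and automatic task distribution just described can be illustrated with a brief, non-limiting sketch. The data layout, the saturation estimate (taken here simply as the reported utilization) and the least-saturated dispatch policy are assumptions made for illustration only, not the disclosed algorithm.

```python
# Hedged sketch of the upper-layer pooling software described above.
# Field names, saturation formula and dispatch policy are assumptions.

from dataclasses import dataclass, field

@dataclass
class GpuResource:
    gpu_id: str            # unified encoding of the GPU within the pool
    utilization: float     # reported via the BMC path, 0.0 .. 1.0
    tasks: list = field(default_factory=list)

class GpuResourcePool:
    def __init__(self):
        self.gpus: dict[str, GpuResource] = {}

    def register(self, gpu_id: str) -> None:
        """Uniformly encode a discovered GPU into the pool."""
        self.gpus[gpu_id] = GpuResource(gpu_id, utilization=0.0)

    def update_utilization(self, gpu_id: str, utilization: float) -> None:
        """Apply a utilization report forwarded by the rack management module."""
        self.gpus[gpu_id].utilization = utilization

    def saturation(self, gpu_id: str) -> float:
        """Workload saturation estimate; here simply the reported utilization."""
        return self.gpus[gpu_id].utilization

    def dispatch(self, task: str) -> str:
        """Automatically assign a new computing task to the least-saturated GPU."""
        target = min(self.gpus.values(), key=lambda g: self.saturation(g.gpu_id))
        target.tasks.append(task)
        return target.gpu_id

# Usage: four GPUs behind one data exchange chip; tasks go to the idlest GPU.
pool = GpuResourcePool()
for i in range(4):
    pool.register(f"rack0-node0-gpu{i}")
pool.update_utilization("rack0-node0-gpu0", 0.9)
print(pool.dispatch("train-job-42"))
```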
Compared with the prior art, the whole-rack computing resource pool node and computing resource pool architecture of the present invention have the following beneficial effects:
1) The computing resource pool node module is applied to whole-rack servers in the form of 1U nodes, improving deployment density.
2) The whole-rack computing resource pool can implement centralized power supply and centralized management, improving efficiency and reducing system energy consumption.
3) The GPU node modules support data cascading, and dynamic switching of the management link is implemented through the BMC chip, achieving computing resource pool expansion, reducing compute node resource requirements and lowering cost.
4) Based on the design of the computing resource pool node module and in combination with the compute node, a resource pool architecture in whole-rack form is built, implementing rack-level centralized power supply, centralized management and dynamic pooling, and improving delivery efficiency and operation and maintenance efficiency.
5) The upper-layer application software of the system uniformly encodes and manages all GPU resources in the rack to form a GPU resource pool, calculates the workload saturation of each GPU in the GPU resource pool according to the specific resource utilization, and effectively adjusts the service applications of the resource pool, implementing dynamic pooling of resources. New computing tasks can also be distributed automatically, maximizing the use of node resources, thereby improving resource pool flexibility and utilization and reducing system energy consumption. The invention is highly practical and widely applicable, with good popularization and application value.
Brief description of the drawings
Figure 1 is a schematic diagram of the whole-rack computing resource pool node.
Figure 2 is a schematic diagram of GPU node module cascading.
Figure 3 is a schematic diagram of computing resource pool link-management cascading.
Figure 4 is a schematic diagram of the whole-rack computing resource pool architecture.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings and specific embodiments.
A whole-rack computing resource pool node: the computing resource pool node module is applied to a whole-rack server in the form of a 1U node, so that centralized management of the computing resource pool can be implemented, integration density improved and energy consumption reduced. The 4 GPUs of the computing resource pool node can directly exchange computing data with one another through a data exchange chip such as the PEX9797, and the data interface of the node module can be cascaded to one further computing resource pool node, implementing cascading of the data exchange units of the computing resource pool. During data cascading, dynamic switching of the management link is implemented through the BMC chip, so that one MCPU manages two computing resource data exchange units, implementing cascading of the data exchange management units.
The whole-rack management system collects the resource information and resource utilization of the compute node and the GPU nodes and reports them to the upper-layer application software. The upper-layer application software of the system uniformly encodes and manages all GPU resources in the rack to form a GPU resource pool, calculates the workload saturation of each GPU in the GPU resource pool according to the specific resource utilization, effectively adjusts the service applications of the resource pool, implements dynamic pooling of resources, and automatically distributes new computing tasks, maximizing the use of node resources.
As shown in Figure 1, the specific structure of the invention includes a power board, a GPU node module and GPUs. The GPU node module is connected to the above management module through the power board, implementing status monitoring of the GPU node module and management of the computing resources. A data exchange chip is configured in the GPU node module; the data exchange chip connects the compute node and the GPUs and implements the exchange of computing data between the GPUs and the compute node.
Power is supplied between the power board and the GPU node module through a copper busbar. The busbar is interconnected with the node module through the power board to implement a 12 V system power supply, and hot-plug, overcurrent and overvoltage circuit designs are applied to the power board, improving the reliability of the node module system.
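Purely as a non-limiting, software-side illustration of the protection behaviour mentioned above: the 12 V nominal rail comes from the text, while the tolerance window and current limit below are assumptions, not the patented circuit design.

```python
# Hedged software-side model of the power-board protections described above.
# The 12 V rail value is from the text; the tolerance and current limit are
# illustrative assumptions only.

from typing import Optional

NOMINAL_V = 12.0
V_TOLERANCE = 0.10          # assumed +/-10% overvoltage window
I_LIMIT_A = 60.0            # assumed per-module current limit

def rail_fault(voltage_v: float, current_a: float) -> Optional[str]:
    """Return the fault type that should trip the rail, or None if healthy."""
    if voltage_v > NOMINAL_V * (1 + V_TOLERANCE):
        return "overvoltage"
    if current_a > I_LIMIT_A:
        return "overcurrent"
    return None

# Usage: a healthy reading and an overcurrent reading.
print(rail_fault(12.1, 35.0))   # None
print(rail_fault(12.1, 75.0))   # "overcurrent"
```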
The data exchange chip is configured with 2 data upstream ports and 4 data downstream ports. The 4 downstream ports are connected to 4 GPUs respectively, and 1 upstream port can be connected to the compute node. Both the upstream and downstream ports are PCIe interfaces.
A BMC chip, an MCPU chip and a PCIe switch chip, interconnected in sequence, are also configured in the GPU node module. The PCIe switch chip connects the above data exchange chip and is also connected to an expandable external management interface, which is a PCIe interface.
The computing resource pool node can be used for cascading, i.e. interconnecting at least two GPU node modules. The specific cascade structure is as follows: one upstream port of the first GPU node module is connected to the compute node, and its other upstream port is connected to one upstream port of another GPU node module; the external management interfaces of the two GPU node modules are interconnected, implementing intercommunication of PCIe management signals; cascading between that GPU node module and further GPU node modules is implemented in the same way.
In the GPU node module connected to the compute node, management of the GPU node module is implemented by the MCPU. The MCPU is connected to the external management interface and the data exchange chip through one PCIe switch chip, and the BMC chip implements dynamic selection of upstream management channel 1 or 2, i.e. which data upstream port is used. When the compute node module has a cascaded module, the management link is switched to channel 1, so that one MCPU manages 2 or N GPU node modules, where N is the number of cascaded modules, thereby implementing the cascading of GPU node modules.
Taking the PEX9797 chip as an example of the data exchange chip, Figure 2 is a schematic diagram of the computing resource pool node module cascade architecture. The 4 GPUs of the computing resource pool node can directly exchange computing data with one another through the PEX9797 chip; one data upstream port of the node module is connected to the compute node, and the other data port is connected to one further computing resource pool node, implementing cascading of the data exchange units of the computing resource pool.
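The port fan-out just described (2 upstream ports and 4 downstream ports per data exchange chip) can be captured in a small, hedged topology model. The dictionary layout below is an assumption made for illustration; it is not the PEX9797 register interface or any disclosed data structure.

```python
# Hedged topology sketch of one computing resource pool node built around a
# PCIe switch with 2 upstream and 4 downstream ports (e.g. a PEX9797-class chip).
# The dictionary layout is illustrative only.

from typing import Optional

def build_node_topology(node_id: int, cascade_peer: Optional[int] = None) -> dict:
    """Return the port map of one GPU node module.

    upstream port 0 -> compute node (host)
    upstream port 1 -> cascade link to another node module, if any
    downstream ports 0..3 -> the 4 local GPUs
    """
    return {
        "node": node_id,
        "upstream": {
            0: "compute-node",
            1: f"gpu-node-{cascade_peer}" if cascade_peer is not None else "unused",
        },
        "downstream": {p: f"gpu{p}" for p in range(4)},
    }

# Usage: node 0 is attached to the compute node and cascaded to node 1.
topo = build_node_topology(0, cascade_peer=1)
assert len(topo["upstream"]) == 2 and len(topo["downstream"]) == 4
print(topo)
```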
As shown in Figure 3, a schematic diagram of computing resource pool link-management cascading, the computing resource pool node module implements management of the computing resource data exchange unit through the MCPU. The PCIe x1 management signal of the MCPU is connected to the external management interface and the data exchange chip PEX9797 through one PCIe switch chip, and the BMC chip implements dynamic selection of upstream management channel 1 or 2. When the compute node module has a cascaded module, the management link is switched to channel 1, so that one MCPU manages two computing resource data exchange units, implementing cascading of the data exchange management units.
A whole-rack computing resource pool architecture, as shown in Figure 4, includes in its structure a compute node, several GPU node modules, a whole-rack management module and a whole-rack power busbar (BUSBAR). The compute node and the GPU node modules draw power from the whole-rack power busbar through their respective power boards, implementing centralized power supply for the computing resource pool. The whole-rack management module implements centralized management of the whole-rack computing resource pool. The compute node, as the host end of the computing resource pool, strengthens the PCIe signal driving capability through PCIe redriver chips, and the PCIe data signals are connected to each GPU node module by cables, forming the GPU resource pool and implementing the whole-rack computing resource pool.
The BMC chips in the compute node and the GPU nodes communicate with the whole-rack management module through their respective power boards, thereby implementing centralized management of the computing resource pool. The whole-rack management module collects the resource information and resource utilization of the compute node and the GPU node modules and reports them to the upper-layer application software above the whole-rack management module.
The resource information obtained by the whole-rack management module through communication with the monitoring chips (BMCs) includes CPU utilization, GPU utilization and network bandwidth, and the resource utilization of the resource pool is reported to the upper-layer application software in a timely manner.
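As a hedged illustration of this reporting path, the rack management module's collection loop might resemble the sketch below. The record fields, the polling interval and the poll_bmc/report callables are assumptions made for illustration; they are not defined by the patent.

```python
# Hedged sketch of the rack management module's telemetry collection: poll each
# BMC for CPU utilization, GPU utilization and network bandwidth, then report
# the pool's utilization to the upper-layer application software.

import time
from typing import Callable

def collect_and_report(bmc_endpoints: list[str],
                       poll_bmc: Callable[[str], dict],
                       report: Callable[[list[dict]], None],
                       interval_s: float = 5.0,
                       cycles: int = 1) -> None:
    """Periodically gather per-node metrics and forward them upstream."""
    for _ in range(cycles):
        snapshot = []
        for endpoint in bmc_endpoints:
            metrics = poll_bmc(endpoint)        # e.g. over the power-board management path
            snapshot.append({
                "node": endpoint,
                "cpu_util": metrics.get("cpu_util"),
                "gpu_util": metrics.get("gpu_util"),
                "net_bandwidth_gbps": metrics.get("net_bandwidth_gbps"),
            })
        report(snapshot)                        # timely report to the upper-layer software
        time.sleep(interval_s)

# Usage with stand-in callables:
fake_poll = lambda ep: {"cpu_util": 0.4, "gpu_util": 0.7, "net_bandwidth_gbps": 12.0}
collect_and_report(["node0-bmc", "gpu-node1-bmc"], fake_poll, print, interval_s=0.0)
```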
The upper-layer application software of the system uniformly encodes and manages all acquired GPU resources to form a GPU resource pool, calculates the workload saturation of each GPU in the GPU resource pool according to the specific resource utilization, and effectively adjusts the service applications of the resource pool, implementing dynamic pooling of resources. New computing tasks can also be distributed automatically, maximizing the use of node resources.
In the present invention, a high-speed computing data exchange unit is built on the basis of the data exchange chip to form the computing resource pool node module, which is applied to whole-rack servers in the form of 1U nodes; centralized management of the computing resource pool can be implemented, integration density improved and energy consumption reduced.
Data cascading of the computing resource pool node modules is built according to Figure 2, achieving computing resource pool expansion, reducing compute node resource requirements and lowering cost.
Management-link cascading of the computing resource pool node modules is built according to Figure 3, meeting the expansion management requirements of the computing resource pool and lowering cost.
As shown in Figure 4, based on the design of the computing resource pool node module and in combination with the compute node, a resource pool architecture in whole-rack form is built, implementing rack-level centralized power supply, centralized management and dynamic pooling, and improving delivery efficiency and operation and maintenance efficiency.
Through the upper-layer software pooling management technology, dynamic pooling of resources and automatic task distribution are implemented, maximizing the use of node resources, improving resource pool flexibility and utilization, and reducing system energy consumption.
A dynamic whole-rack computing resource pooling architecture supporting cascading is thereby implemented.
The technical solution can also be used in the factory production and test stage of server and storage mainboards, for version verification of BIOS, BMC and CPLD.
Through the above embodiments, those skilled in the art can readily implement the present invention. It should be understood, however, that the present invention is not limited to the above embodiments. On the basis of the disclosed embodiments, those skilled in the art can combine different technical features to realize different technical solutions.
Apart from the technical features described in the specification, the remaining technology is known to those skilled in the art.

Claims (10)

1. A whole-rack computing resource pool node, characterized in that it is applied, in the form of a 1U node, to a whole-rack server configured with a management module and a compute node; its structure includes a power board, a GPU node module and GPUs; the GPU node module is connected to the above management module through the power board, implementing status monitoring of the GPU node module and management of the computing resources; a data exchange chip is configured in the GPU node module, and the data exchange chip connects the compute node and the GPUs and implements the exchange of computing data between the GPUs and the compute node.
2. The whole-rack computing resource pool node according to claim 1, characterized in that power is supplied between the power board and the GPU node module through a copper busbar, with a supply voltage of 12 V.
3. The whole-rack computing resource pool node according to claim 1, characterized in that the data exchange chip is configured with 2 data upstream ports and 4 data downstream ports; the 4 downstream ports are connected to 4 GPUs respectively, 1 upstream port can be connected to the compute node, and both the upstream and downstream ports are PCIe interfaces.
4. The whole-rack computing resource pool node according to claim 3, characterized in that a BMC chip, an MCPU chip and a PCIe switch chip, interconnected in sequence, are also configured in the GPU node module; the PCIe switch chip connects the above data exchange chip and is also connected to an expandable external management interface, which is a PCIe interface.
5. The whole-rack computing resource pool node according to claim 4, characterized in that the computing resource pool node can be used for cascading, i.e. interconnecting at least two GPU node modules, the specific cascade structure being: one upstream port of the first GPU node module is connected to the compute node, and its other upstream port is connected to one upstream port of another GPU node module; the external management interfaces of the two GPU node modules are interconnected, implementing intercommunication of PCIe management signals; cascading between that GPU node module and further GPU node modules is implemented in the same way.
6. The whole-rack computing resource pool node according to claim 4 or 5, characterized in that, in the GPU node module connected to the compute node, management of the GPU node module is implemented by the MCPU; the MCPU chip is connected to the external management interface and the data exchange chip through one PCIe switch chip, and the BMC chip implements dynamic selection of upstream management channel 1 or 2, i.e. which data upstream port is used; when the compute node module has a cascaded module, the management link is switched to channel 1, so that one MCPU manages 2 or N GPU node modules, where N is the number of cascaded modules, thereby implementing the cascading of GPU node modules.
7. A whole-rack computing resource pool architecture, characterized in that it includes a compute node, several GPU node modules, a whole-rack management module and a whole-rack power busbar (BUSBAR); the compute node and the GPU node modules draw power from the whole-rack power busbar through their respective power boards, implementing centralized power supply for the computing resource pool; the whole-rack management module implements centralized management of the whole-rack computing resource pool; the compute node serves as the host end of the computing resource pool and is connected to each GPU node module by cables to transmit PCIe data signals.
8. The whole-rack computing resource pool architecture according to claim 7, characterized in that the BMC chips in the compute node and the GPU nodes communicate with the whole-rack management module through their respective power boards, thereby implementing centralized management of the computing resource pool; the whole-rack management module collects the resource information and resource utilization of the compute node and the GPU node modules and reports them to the upper-layer application software above the whole-rack management module.
9. The whole-rack computing resource pool architecture according to claim 8, characterized in that the resource information obtained by the whole-rack management module through communication with the monitoring chips (BMCs) includes CPU utilization, GPU utilization and network bandwidth, and the resource utilization of the resource pool is reported to the upper-layer application software in a timely manner.
10. The whole-rack computing resource pool architecture according to claim 9, characterized in that the upper-layer application software of the system uniformly encodes and manages all acquired GPU resources to form a GPU resource pool, calculates the workload saturation of each GPU in the GPU resource pool according to the specific resource utilization, and effectively adjusts the service applications of the resource pool, implementing dynamic pooling of resources, while new computing tasks can be distributed automatically, maximizing the use of node resources.
CN201710433600.0A 2017-06-09 2017-06-09 Whole-rack computing resource pool node and computing resource pool architecture Pending CN107239346A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710433600.0A CN107239346A (en) 2017-06-09 2017-06-09 Whole-rack computing resource pool node and computing resource pool architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710433600.0A CN107239346A (en) 2017-06-09 2017-06-09 Whole-rack computing resource pool node and computing resource pool architecture

Publications (1)

Publication Number Publication Date
CN107239346A true CN107239346A (en) 2017-10-10

Family

ID=59986082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710433600.0A Pending CN107239346A (en) 2017-06-09 2017-06-09 Whole-rack computing resource pool node and computing resource pool architecture

Country Status (1)

Country Link
CN (1) CN107239346A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107748726A (en) * 2017-11-02 2018-03-02 郑州云海信息技术有限公司 A kind of GPU casees
CN108173735A (en) * 2018-01-17 2018-06-15 郑州云海信息技术有限公司 A kind of GPU Box server cascaded communication method, apparatus and system
CN108319539A (en) * 2018-02-28 2018-07-24 郑州云海信息技术有限公司 A kind of method and system generating GPU card slot position information
CN108710418A (en) * 2018-08-06 2018-10-26 郑州云海信息技术有限公司 A kind of GPU-Switch structures cabinet
CN108874726A (en) * 2018-05-25 2018-11-23 郑州云海信息技术有限公司 A kind of GPU whole machine cabinet PCIE link interacted system and method
CN108959165A (en) * 2018-06-28 2018-12-07 郑州云海信息技术有限公司 A kind of management system of GPU whole machine cabinet cluster
CN109189347A (en) * 2018-09-20 2019-01-11 郑州云海信息技术有限公司 A kind of sharing storage module, server and system
CN109408440A (en) * 2018-11-06 2019-03-01 郑州云海信息技术有限公司 A kind of PCIE expanding unit
CN110413557A (en) * 2019-06-29 2019-11-05 苏州浪潮智能科技有限公司 A kind of GPU accelerator
TWI690789B (en) * 2018-11-28 2020-04-11 英業達股份有限公司 Graphic processor system
CN111352494A (en) * 2020-02-22 2020-06-30 苏州浪潮智能科技有限公司 54V input PCIE (peripheral component interface express) switch board power supply framework and power supply wiring method
TWI704463B (en) * 2019-03-29 2020-09-11 英業達股份有限公司 Server system and management method thereto
CN111736915A (en) * 2020-06-05 2020-10-02 浪潮电子信息产业股份有限公司 Management method, device, equipment and medium for cloud host instance hardware acceleration equipment
CN114500413A (en) * 2021-12-17 2022-05-13 阿里巴巴(中国)有限公司 Equipment connection method and device and equipment connection chip

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090251476A1 (en) * 2008-04-04 2009-10-08 Via Technologies, Inc. Constant Buffering for a Computational Core of a Programmable Graphics Processing Unit
CN104202194A (en) * 2014-09-10 2014-12-10 华为技术有限公司 Configuration method and device of PCIe (peripheral component interface express) topology
CN104331130A (en) * 2014-08-13 2015-02-04 浪潮电子信息产业股份有限公司 A system based on a whole cabinet server ultra-large-scale deployment
CN104915917A (en) * 2015-06-01 2015-09-16 浪潮电子信息产业股份有限公司 GPU cabinet, PCIe exchange device and server system
CN105227666A (en) * 2015-10-12 2016-01-06 浪潮(北京)电子信息产业有限公司 The whole machine cabinet management framework that a kind of facing cloud calculates
CN105426286A (en) * 2015-11-05 2016-03-23 浪潮(北京)电子信息产业有限公司 System for monitoring whole rack server
CN106445045A (en) * 2016-08-31 2017-02-22 浪潮电子信息产业股份有限公司 Power supply copper bar and server
CN106685725A (en) * 2017-01-11 2017-05-17 郑州云海信息技术有限公司 Central management control panel, method and system
CN106774752A (en) * 2017-01-11 2017-05-31 郑州云海信息技术有限公司 A kind of Rack servers spare fans control method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090251476A1 (en) * 2008-04-04 2009-10-08 Via Technologies, Inc. Constant Buffering for a Computational Core of a Programmable Graphics Processing Unit
CN104331130A (en) * 2014-08-13 2015-02-04 浪潮电子信息产业股份有限公司 A system based on a whole cabinet server ultra-large-scale deployment
CN104202194A (en) * 2014-09-10 2014-12-10 华为技术有限公司 Configuration method and device of PCIe (peripheral component interface express) topology
CN104915917A (en) * 2015-06-01 2015-09-16 浪潮电子信息产业股份有限公司 GPU cabinet, PCIe exchange device and server system
CN105227666A (en) * 2015-10-12 2016-01-06 浪潮(北京)电子信息产业有限公司 The whole machine cabinet management framework that a kind of facing cloud calculates
CN105426286A (en) * 2015-11-05 2016-03-23 浪潮(北京)电子信息产业有限公司 System for monitoring whole rack server
CN106445045A (en) * 2016-08-31 2017-02-22 浪潮电子信息产业股份有限公司 Power supply copper bar and server
CN106685725A (en) * 2017-01-11 2017-05-17 郑州云海信息技术有限公司 Central management control panel, method and system
CN106774752A (en) * 2017-01-11 2017-05-31 郑州云海信息技术有限公司 A kind of Rack servers spare fans control method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Vendor: "Inspur Releases the Industry's Highest GPU-Density SR-AI Whole Rack", ZOL (Zhongguancun Online) *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107748726A (en) * 2017-11-02 2018-03-02 郑州云海信息技术有限公司 A kind of GPU casees
CN107748726B (en) * 2017-11-02 2020-03-24 郑州云海信息技术有限公司 GPU (graphics processing Unit) box
CN108173735A (en) * 2018-01-17 2018-06-15 郑州云海信息技术有限公司 A kind of GPU Box server cascaded communication method, apparatus and system
US11641405B2 (en) 2018-01-17 2023-05-02 Zhengzhou Yunhai Information Technology Co., Ltd. GPU box server cascade communication method, device, and system
CN108173735B (en) * 2018-01-17 2020-08-25 苏州浪潮智能科技有限公司 GPU Box server cascade communication method, device and system
WO2019140921A1 (en) * 2018-01-17 2019-07-25 郑州云海信息技术有限公司 Gpu box server cascade communication method, device, and system
WO2019165773A1 (en) * 2018-02-28 2019-09-06 郑州云海信息技术有限公司 Method and system for generating gpu card slot position information
CN108319539A (en) * 2018-02-28 2018-07-24 郑州云海信息技术有限公司 A kind of method and system generating GPU card slot position information
CN108319539B (en) * 2018-02-28 2022-03-22 郑州云海信息技术有限公司 Method and system for generating GPU card slot position information
CN108874726A (en) * 2018-05-25 2018-11-23 郑州云海信息技术有限公司 A kind of GPU whole machine cabinet PCIE link interacted system and method
CN108959165A (en) * 2018-06-28 2018-12-07 郑州云海信息技术有限公司 A kind of management system of GPU whole machine cabinet cluster
CN108710418B (en) * 2018-08-06 2023-09-22 郑州云海信息技术有限公司 GPU-Switch structure case
CN108710418A (en) * 2018-08-06 2018-10-26 郑州云海信息技术有限公司 A kind of GPU-Switch structures cabinet
CN109189347A (en) * 2018-09-20 2019-01-11 郑州云海信息技术有限公司 A kind of sharing storage module, server and system
CN109408440A (en) * 2018-11-06 2019-03-01 郑州云海信息技术有限公司 A kind of PCIE expanding unit
TWI690789B (en) * 2018-11-28 2020-04-11 英業達股份有限公司 Graphic processor system
TWI704463B (en) * 2019-03-29 2020-09-11 英業達股份有限公司 Server system and management method thereto
CN110413557B (en) * 2019-06-29 2020-11-10 苏州浪潮智能科技有限公司 GPU (graphics processing unit) accelerating device
CN110413557A (en) * 2019-06-29 2019-11-05 苏州浪潮智能科技有限公司 A kind of GPU accelerator
CN111352494A (en) * 2020-02-22 2020-06-30 苏州浪潮智能科技有限公司 54V input PCIE (peripheral component interface express) switch board power supply framework and power supply wiring method
CN111736915A (en) * 2020-06-05 2020-10-02 浪潮电子信息产业股份有限公司 Management method, device, equipment and medium for cloud host instance hardware acceleration equipment
CN111736915B (en) * 2020-06-05 2022-07-05 浪潮电子信息产业股份有限公司 Management method, device, equipment and medium for cloud host instance hardware acceleration equipment
CN114500413A (en) * 2021-12-17 2022-05-13 阿里巴巴(中国)有限公司 Equipment connection method and device and equipment connection chip
CN114500413B (en) * 2021-12-17 2024-04-16 阿里巴巴(中国)有限公司 Device connection method and device, and device connection chip

Similar Documents

Publication Publication Date Title
CN107239346A (en) Whole-rack computing resource pool node and computing resource pool architecture
CN105549460A (en) Satellite-borne electronic equipment comprehensive management and control system
CN101819556B (en) Signal-processing board
CN107659437A (en) A kind of whole machine cabinet computing resource Pooled resources automatic recognition system and method
CN104135514B (en) Fusion type virtual storage system
CN103336756B (en) A kind of generating apparatus of data computational node
CN107480094A (en) A kind of pond server system architecture of fusion architecture
CN105159617A (en) Pooled storage system framework
CN102402474B (en) Prototype verification device for programmable logic devices
CN104641593B (en) Web plate and communication equipment
CN104579786B (en) A kind of server design method based on 2D Torus network topology architectures
CN203178870U (en) Internet access switching card
CN104750581A (en) Redundant interconnection memory-shared server system
CN210983137U (en) Server hardware system architecture
CN102298418A (en) Advanced mezzanine card (AMC) board card based on MicroTCA standard and connection method thereof
CN209248518U (en) A kind of solid state hard disk expansion board clamping and server
CN207440541U (en) A kind of redundancy communication controller based on arm processor
CN102495819B (en) Method for realizing blade service high speed bus SI (System Information) optimization and redundancy through one-third orthogonal intersection
CN204928853U (en) Simple and easy serial communication equipment
CN206877374U (en) A kind of power marketing intelligent platform
CN106484656B (en) A kind of management board of collectable multinode management information
Miyoshi et al. New system architecture for next-generation green data centers: mangrove
CN109739560A (en) A kind of GPU card cluster configuration control system and method
CN105468104A (en) Converged server and backboard
CN209401013U (en) A kind of server and its memory Riser plate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20171010)