WO2022028061A1 - GPU management apparatus and method based on detection adjustment module, and GPU server - Google Patents
GPU management apparatus and method based on detection adjustment module, and GPU server
- Publication number
- WO2022028061A1 (application PCT/CN2021/096546)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gpu
- module
- task
- management
- cpu
- Prior art date: 2020-08-03
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/20—Cooling means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3058—Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
- G06F11/3062—Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations where the monitored property is the power consumption
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/508—Monitor
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present invention relates to the field of GPU management design, in particular to a GPU management device, method and GPU server based on a detection and adjustment module.
- GPU: Graphics Processing Unit
- AI: Artificial Intelligence
- a server often includes GPU processors and CPU (Central Processing Unit) processors. CPU processors are better at integer operations, while GPU processors are better at floating-point operations.
- CPU: Central Processing Unit
- the present invention innovatively proposes a GPU management device, method, and GPU server based on a detection and adjustment module, which effectively solve the prior-art problem that the interconnection topology between the CPU and the GPU cannot be adjusted for different application scenarios to achieve a reasonable allocation of floating-point and integer operations.
- the utilization of the CPU and GPU and the efficiency of task processing are effectively improved.
- a first aspect of the present invention provides a GPU management device based on a detection and adjustment module, including: a CPU module, a CPU management module, a conversion module, a GPU module, a GPU management module, and a detection and adjustment module.
- the adjustment control terminal of the detection and adjustment module is communicatively connected to the control terminals of the GPU management module and the CPU management module respectively, and is used to detect the type of data to be processed and to select the corresponding GPU module and/or CPU module for processing according to that type.
- the CPU management module is communicatively connected with the CPU module and manages the CPU module.
- the GPU management module is communicatively connected with the GPU module and is responsible for managing the GPU module and for the balanced distribution of pending tasks.
- the CPU module is communicatively connected with the GPU module through the conversion module.
- the GPU module includes a plurality of GPU sub-modules connected in parallel; each GPU sub-module includes several GPUs and an accelerator card arranged in parallel. The GPU sub-modules, and the GPUs within them, all communicate through the GPU management module and jointly complete the data-processing tasks issued by the GPU management module.
- the GPU management module includes a plurality of GPU management sub-modules connected in parallel, and each GPU management sub-module is communicatively connected with a plurality of parallel-connected GPU sub-modules.
- the apparatus also includes a power consumption monitoring module and a fan control module; the monitoring end of the power consumption monitoring module is connected to the GPU module to monitor the power consumption of the GPU module in real time, and the output end of the power consumption monitoring module is connected to the input end of the fan control module. Once the monitored power consumption of the GPU module exceeds the set threshold, the fan control module increases the fan speed.
- a second aspect of the present invention provides a GPU management method based on a detection and adjustment module, which is implemented on the basis of the GPU management device based on the detection and adjustment module described in the first aspect of the present invention, including:
- the detection and adjustment module detects the task type: if it is a floating-point task, the GPU module is preferentially called through the GPU management module to process the data; if it is an integer task, the CPU module is preferentially called through the CPU management module to process the data; if the pending task includes both an integer-operation part and a floating-point-operation part, the floating-point part is preferentially dispatched to the GPU module through the GPU management module and the integer part is preferentially dispatched to the CPU module through the CPU management module.
- when the GPU management module receives a task assigned by the detection and adjustment module, it acquires the highest-priority task in the task queue and schedules the GPU cluster resources in the GPU module according to the priority of the pending task.
- scheduling the GPU cluster resources in the GPU module according to the priority of the tasks to be processed specifically includes:
- the GPU management module traverses the GPU cluster resources; if the idle computing power of a current GPU cluster meets the minimum computing-power requirement of the user corresponding to the pending task, the pending task is allocated to the GPU cluster that meets that requirement while needing the fewest GPUs; if the idle computing power of no current GPU cluster meets the minimum requirement, the currently executing tasks are traversed in ascending order of priority, and the pending task is scheduled according to the priorities of the currently executing tasks and the pending task.
- scheduling tasks to be processed according to the priorities of currently executing tasks and tasks to be processed specifically includes:
- if the priorities of all currently executing tasks are greater than or equal to the priority of the pending task, the pending task waits for the next scheduling round; if a currently executing task has a lower priority than the pending task, the sum of the idle computing power and the releasable computing power of the GPU cluster executing that task is computed in turn. If that sum does not meet the minimum computing-power requirement of the user corresponding to the pending task, the pending task waits for the next scheduling round; if it does, the pending task is allocated to the GPU cluster that meets the minimum requirement while needing the fewest GPUs, and the currently executing task whose computing power is to be released is saved and then suspended.
- the power consumption monitoring module obtains the power consumption of the GPU module in real time, compares the current power-consumption value of the GPU module with a set value, and, if the current value is greater than the set value, directs the fan control module to increase the fan speed.
- a third aspect of the present invention provides a GPU server, including the GPU management device based on the detection and adjustment module as described in the first aspect.
- the present invention effectively solves the prior-art problem that the interconnection topology between the CPU and the GPU cannot be adjusted for different application scenarios to achieve a reasonable allocation of floating-point and integer operations, and effectively improves CPU and GPU utilization and task-processing efficiency.
- the GPU sub-modules, and the GPUs within them, communicate through the GPU management module to jointly complete the data-processing tasks it issues, which avoids the low communication efficiency caused when GPU-to-GPU communication must be converted through the CPU module and improves the communication efficiency between GPUs.
- each GPU management sub-module is communicatively connected with a plurality of GPU sub-modules connected in parallel, which can improve the bandwidth of parallel processing and make the interconnection bandwidth between GPUs achieve the best performance.
- with the fan control module and the separately provided power consumption monitoring module, the power consumption of the GPU module is monitored in real time; once it exceeds the set threshold, the fan speed is promptly increased through the fan control module, avoiding the overheating that occurs when cooling cannot keep pace with sharp changes in the power consumption of the GPU module and that would otherwise reduce GPU efficiency.
- Fig. 1 is a schematic structural diagram of the device of Embodiment 1 of the present invention.
- Fig. 2 is a schematic flow chart of the method of Embodiment 2 of the present invention.
- Fig. 3 is a schematic flow chart of the method of Embodiment 3 of the present invention.
- Fig. 4 is a schematic flow chart of step S6 in the method of Embodiment 3 of the present invention.
- Fig. 5 is a schematic flow chart of step S64 in the method of Embodiment 3 of the present invention.
- Fig. 6 is a schematic flow chart of the method of Embodiment 4 of the present invention.
- Fig. 7 is a schematic structural diagram of the GPU server of Embodiment 5 of the present invention.
- the present invention provides a GPU management device based on a detection and adjustment module, including: a CPU module 1, a CPU management module 2, a conversion module 3, a GPU module 4, a GPU management module 5, and a detection and adjustment module 6. The adjustment control terminal of the detection and adjustment module 6 is communicatively connected to the control terminals of the GPU management module 5 and the CPU management module 2 respectively, and is used to detect the type of data to be processed and to select the corresponding GPU module 4 and/or CPU module 1 for processing according to that type.
- the CPU management module 2 is communicatively connected with the CPU module 1 and manages the CPU module 1.
- the GPU management module 5 is communicatively connected with the GPU module 4 and is responsible for managing the GPU module 4 and for the balanced distribution of pending tasks.
- the CPU module 1 is communicatively connected to the GPU module 4 through the conversion module 3.
- the GPU module 4 includes a plurality of GPU sub-modules 41 connected in parallel; each GPU sub-module 41 includes several GPUs 411 and an accelerator card 412 arranged in parallel. The GPU sub-modules 41, and the GPUs 411 within them, all communicate through the GPU management module 5 and jointly complete the data-processing tasks issued by the GPU management module 5.
- CPUs execute efficiently in computation-intensive application fields such as digital media processing and scientific computing, while GPUs execute efficiently in large-scale data-parallel computing.
- efficient GPU-based parallel computing mainly relies on the cooperative CPU-GPU computing mode of a hybrid architecture, which improves the execution performance of programs.
- on a hybrid multi-CPU, multi-GPU system platform, data cannot be transmitted directly from one GPU to another: a GPU must first transmit the data to a CPU through the conversion module, and the CPU then forwards the data to the GPU that receives it, a communication path that brings enormous communication overhead.
- the GPU sub-modules, and the GPUs within them, communicate through the GPU management module; the GPU management module 5 (playing the dual roles of switching and management) distributes tasks to the GPUs in a balanced manner, preventing high GPU-to-GPU communication overhead from affecting the overall performance of data-flow programs. Jointly completing the data-processing tasks issued by the GPU management module in this way avoids the low communication efficiency caused when GPU-to-GPU communication must be converted through the CPU module, and improves the communication efficiency between GPUs.
- the CPU module 1 includes at least two CPUs 11, namely CPU0 and CPU1, and the conversion module 3 includes a Retimer chip and a PCIe Switch chip. The Retimer chip is connected in series between the CPU and the PCIe Switch chip, with one end connected to the CPU and the other to the PCIe Switch chip, and is mainly used for signal relay to guarantee lossless signal transmission; the main function of the PCIe Switch chip is lane conversion. Each CPU 11 is connected to two conversion modules 3, and each conversion module 3 is connected to a corresponding GPU sub-module 41. Correspondingly, there are four GPU sub-modules 41, each including two GPUs 411 and one accelerator card 412, namely GPU0 through GPU7 and accelerator card 0 through accelerator card 3. One PCIe x16 lane from CPU0 is expanded by the Retimer chip and PCIe Switch chip into three PCIe x16 lanes connected to GPU0, GPU1, and accelerator card 0; the other PCIe x16 lane from CPU0 is expanded into three PCIe x16 lanes connected to GPU2, GPU3, and accelerator card 1; one PCIe x16 lane from CPU1 is expanded into three PCIe x16 lanes connected to GPU4, GPU5, and accelerator card 2; the other PCIe x16 lane from CPU1 is expanded into three PCIe x16 lanes connected to GPU6, GPU7, and accelerator card 3.
- the GPU management module 5 includes a plurality of GPU management sub-modules 51 connected in parallel, and each GPU management sub-module 51 is communicatively connected with a plurality of parallel-connected GPU sub-modules 41.
- to match the GPU sub-modules 41, the number of GPU management sub-modules 51 may be more than one (one is also possible, but the bandwidth performance would not be optimal); specifically, there may be six. Each GPU management sub-module 51 is communicatively connected to multiple parallel-connected GPU sub-modules, which raises the parallel-processing bandwidth and brings the interconnect bandwidth between GPUs to its best performance.
- the device further includes a power consumption monitoring module 7 and a fan control module 8; the monitoring end of the power consumption monitoring module 7 is connected to the GPU module 4 for real-time monitoring of the power consumption of the GPU module 4, and the output end of the power consumption monitoring module 7 is connected to the input end of the fan control module 8. Once the monitored power consumption of the GPU module 4 exceeds the set threshold, the fan control module 8 increases the fan speed.
- the fan control module 8 may include a BMC 81 (Baseboard Management Controller), a CPLD 82 (Complex Programmable Logic Device), and a fan 83. The control output of the BMC 81 is connected to the control input of the fan 83, the control output of the CPLD 82 is connected to the control input of the fan, and the monitoring end of the CPLD 82 is connected to the fault output of the BMC 81.
- under normal conditions, the BMC 81 controls fan operation; once the CPLD 82 detects a BMC fault, the CPLD 82 takes over from the BMC 81 to control the fan.
- the purpose of separately providing the power consumption monitoring module 7 in the present invention is to shorten the power-consumption monitoring and alarm time for the GPU module. When the BMC monitors the power consumption of the GPU module 4, it generally obtains the value by polling, with a polling period of roughly 1 s, whereas the power consumption of the GPU module 4 often changes on the microsecond scale; if the BMC monitored GPU power directly, alarms would easily be delayed and the GPU module 4 would overheat. In the technical solution of the present invention, the separately provided power consumption monitoring module 7 can notify the BMC in time when the power consumption of the GPU module 4 changes sharply, so that the fan speed is adjusted, the GPU module 4 is cooled promptly, and heat-dissipation problems are prevented from affecting the operation of the GPU module.
- the invention effectively solves the prior-art problem that the interconnection topology between the CPU and the GPU cannot be adjusted for different application scenarios to achieve a reasonable allocation of floating-point and integer operations, and effectively improves CPU and GPU utilization and task-processing efficiency.
- the technical solution of the present invention also provides a GPU management method based on a detection and adjustment module, implemented on the basis of Embodiment 1, including: the detection and adjustment module detects the task type; for a floating-point task, the GPU module is preferentially called through the GPU management module to process the data; for an integer task, the CPU module is preferentially called through the CPU management module; when a pending task includes both integer-operation and floating-point-operation parts, the floating-point part is preferentially dispatched to the GPU module through the GPU management module and the integer part is preferentially dispatched to the CPU module through the CPU management module.
- the method may further include step S6: when the GPU management module receives a task assigned by the detection and adjustment module, it acquires the highest-priority task in the task queue and schedules the GPU cluster resources in the GPU module according to the priority of the pending task.
- step S6 specifically includes:
- the GPU management module traverses the GPU cluster resources
- S64: traverse the currently executing tasks in ascending order of priority, and schedule the pending task according to the priorities of the currently executing tasks and the pending task.
- in step S63, if at least four GPUs can meet the minimum computing-capability requirement, the pending task is allocated to the corresponding four GPUs for processing.
- S64 specifically includes:
- step S644: determine whether the sum of the idle computing power and the releasable computing power of the GPU cluster currently executing a task meets the minimum computing-power requirement of the user corresponding to the pending task; if yes, execute step S645; if no, execute step S646.
- the GPU sub-modules and the GPUs communicate through the GPU management module to jointly complete the data-processing tasks it issues, avoiding the low communication efficiency caused when GPU-to-GPU communication must be converted through the CPU module and improving the communication efficiency between GPUs.
- each GPU management sub-module is communicatively connected with a plurality of GPU sub-modules connected in parallel, which can improve the bandwidth of parallel processing and make the interconnection bandwidth between GPUs achieve the best performance.
- the GPU management module is used to distribute tasks evenly to each GPU, preventing high GPU-to-GPU communication overhead from affecting the overall performance of data-flow programs, achieving load balancing between GPUs, and ensuring that the GPUs run efficiently.
- the technical solution of the present invention also provides a GPU management method based on a detection and adjustment module that proceeds as in the method above and further includes real-time power-consumption monitoring.
- the power consumption monitoring module obtains the power consumption of the GPU module in real time and compares the current power-consumption value of the GPU module with the set value; if the current value is greater than the set value, it controls the fan control module to increase the fan speed.
- with the fan control module and the separately provided power consumption monitoring module, the power consumption of the GPU module is monitored in real time; once it exceeds the set threshold, the fan speed is increased in time through the fan control module, avoiding the overheating caused when cooling cannot keep pace with sharp changes in the power consumption of the GPU module, which would otherwise reduce GPU efficiency.
- the technical solution of the present invention further provides a GPU server, including the GPU management device based on the detection and adjustment module according to Embodiment 1 of the present invention.
- the height of the GPU server may be 4U.
- in addition to the GPU management device based on the detection and adjustment module of Embodiment 1, the GPU server may also include a CPU Board (which can integrate 2 CPUs), a GPU Board (which can integrate 8 GPUs), a Bridge Board (the interconnection connector between the CPU board and the GPU board), a Riser Board (expansion board), a PDB Board (power backplane), redundant power supplies (4+4 or 3+3 PSUs), and so on; other GPU server structures are also possible, which the present invention does not limit.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Quality & Reliability (AREA)
- Human Computer Interaction (AREA)
- Power Sources (AREA)
- Multi Processors (AREA)
- Hardware Redundancy (AREA)
Abstract
The present invention provides a GPU management apparatus based on a detection adjustment module, comprising: a CPU module, a CPU management module, a conversion module, a GPU module, a GPU management module, and the detection adjustment module. An adjustment control end of the detection adjustment module is communicatively connected to the control ends of the GPU management module and the CPU management module, respectively, and the detection adjustment module is used for detecting a data type to be processed and selecting the corresponding GPU module and/or CPU module for processing according to said data type; the GPU management module is communicatively connected to the GPU module and is used for realizing management of the GPU module and balanced allocation of tasks to be processed. The present invention further provides a GPU management method based on the detection adjustment module, and a GPU server, and effectively improves the utilization rate and the task-processing efficiency of a CPU and a GPU.
Description
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on August 3, 2020, with application number CN202010767363.3 and invention title "GPU management apparatus and method based on a detection and adjustment module, and GPU server", the entire contents of which are incorporated herein by reference.
The present invention relates to the field of GPU management design, and in particular to a GPU management apparatus and method based on a detection and adjustment module, and a GPU server.
With the rapid development of GPU (Graphics Processing Unit) server technology, more and more machine learning and AI (Artificial Intelligence) applications have come into widespread use, and GPU servers have been deployed at large scale in services such as deep-learning training.
In the prior art, applications in fields such as graphic design, artificial intelligence, and scientific research require very large numbers of GPU processors, and a server typically includes both GPU processors and CPU (Central Processing Unit) processors. CPU processors are better at integer operations, while GPU processors are better at floating-point operations.
However, existing task processing cannot adjust the interconnection topology between the CPU and the GPU for different application scenarios to achieve a reasonable allocation of floating-point operations (a GPU strength) and integer operations (a CPU strength), which hinders improvement of CPU and GPU utilization and of task-processing efficiency.
SUMMARY OF THE INVENTION
To solve the problems in the prior art, the present invention innovatively proposes a GPU management apparatus and method based on a detection and adjustment module, and a GPU server, which effectively solve the prior-art problem that the interconnection topology between the CPU and the GPU cannot be adjusted for different application scenarios to achieve a reasonable allocation of floating-point and integer operations, and effectively improve CPU and GPU utilization and task-processing efficiency.
A first aspect of the present invention provides a GPU management apparatus based on a detection and adjustment module, including: a CPU module, a CPU management module, a conversion module, a GPU module, a GPU management module, and the detection and adjustment module. The adjustment control terminal of the detection and adjustment module is communicatively connected to the control terminals of the GPU management module and the CPU management module respectively, and is used to detect the type of data to be processed and to select the corresponding GPU module and/or CPU module for processing according to that type. The CPU management module is communicatively connected with the CPU module and manages the CPU module; the GPU management module is communicatively connected with the GPU module and is responsible for managing the GPU module and for the balanced distribution of pending tasks; the CPU module is communicatively connected with the GPU module through the conversion module.
Optionally, the GPU module includes a plurality of GPU sub-modules connected in parallel; each GPU sub-module includes several GPUs and an accelerator card arranged in parallel. The GPU sub-modules, and the GPUs within them, all communicate through the GPU management module and jointly complete the data-processing tasks issued by the GPU management module.
Further, the GPU management module includes a plurality of GPU management sub-modules connected in parallel, and each GPU management sub-module is communicatively connected with a plurality of parallel-connected GPU sub-modules.
Optionally, the apparatus further includes a power consumption monitoring module and a fan control module. The monitoring end of the power consumption monitoring module is connected to the GPU module to monitor the power consumption of the GPU module in real time, and the output end of the power consumption monitoring module is connected to the input end of the fan control module; once the monitored power consumption of the GPU module exceeds a set threshold, the fan control module increases the fan speed.
A second aspect of the present invention provides a GPU management method based on a detection and adjustment module, implemented on the basis of the GPU management apparatus based on the detection and adjustment module described in the first aspect, including:
dividing the pending task into integer operations and floating-point operations;
the detection and adjustment module detects the task type: if it is a floating-point task, the GPU module is preferentially called through the GPU management module to process the data; if it is an integer task, the CPU module is preferentially called through the CPU management module to process the data; if the pending task includes both an integer-operation part and a floating-point-operation part, the floating-point part is preferentially dispatched to the GPU module through the GPU management module, and the integer part is preferentially dispatched to the CPU module through the CPU management module.
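As a rough illustration of this dispatch rule, the following Python sketch shows one way the routing could look; the Task structure and the submit callbacks are hypothetical stand-ins, not elements of the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TaskType(Enum):
    INTEGER = auto()
    FLOAT = auto()
    MIXED = auto()

@dataclass
class Task:
    name: str
    type: TaskType
    float_part: "Task | None" = None   # populated only for MIXED tasks
    int_part: "Task | None" = None

def dispatch(task: Task, submit_to_gpu, submit_to_cpu) -> None:
    """Route a task to the GPU or CPU management module by operation type."""
    if task.type is TaskType.FLOAT:
        submit_to_gpu(task)            # floating-point work prefers the GPU module
    elif task.type is TaskType.INTEGER:
        submit_to_cpu(task)            # integer work prefers the CPU module
    else:                              # mixed task: dispatch each part separately
        submit_to_gpu(task.float_part)
        submit_to_cpu(task.int_part)
```

Each submit callback would hand the task to the corresponding management module's queue.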
Optionally, when the GPU management module receives a task assigned by the detection and adjustment module, it acquires the highest-priority task in the task queue and schedules the GPU cluster resources in the GPU module according to the priority of the pending task.
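The patent does not specify a data structure for this queue; a minimal sketch of "always take the highest-priority pending task first", assuming a binary heap, could be:

```python
import heapq

class TaskQueue:
    """Max-priority queue of pending tasks (heapq is a min-heap, so negate)."""
    def __init__(self):
        self._heap = []
        self._seq = 0                  # tie-breaker preserves insertion order

    def push(self, priority: int, task) -> None:
        heapq.heappush(self._heap, (-priority, self._seq, task))
        self._seq += 1

    def pop_highest(self):
        """Remove and return the pending task with the highest priority."""
        _, _, task = heapq.heappop(self._heap)
        return task
```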
Further, scheduling the GPU cluster resources in the GPU module according to the priority of the pending task specifically includes:
the GPU management module traverses the GPU cluster resources; if the idle computing power of a current GPU cluster meets the minimum computing-power requirement of the user corresponding to the pending task, the pending task is allocated to the GPU cluster that meets that requirement while needing the fewest GPUs. If the idle computing power of no current GPU cluster meets the minimum computing-power requirement of the user corresponding to the pending task, the currently executing tasks are traversed in ascending order of priority, and the pending task is scheduled according to the priorities of the currently executing tasks and the pending task.
Further, scheduling the pending task according to the priorities of the currently executing tasks and the pending task specifically includes:
if the priorities of all currently executing tasks are greater than or equal to the priority of the pending task, the pending task waits for the next scheduling round. If a currently executing task has a lower priority than the pending task, the sum of the idle computing power and the releasable computing power of the GPU cluster executing that task is computed in turn; if that sum does not meet the minimum computing-power requirement of the user corresponding to the pending task, the pending task waits for the next scheduling round, and if it does meet the requirement, the pending task is allocated to the GPU cluster that meets the minimum requirement while needing the fewest GPUs, and the currently executing task whose computing power is to be released is saved and then suspended.
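Putting the two passes together, a condensed sketch of the scheduling policy might read as follows; the class layout, the field names, and returning the task to preempt are illustrative assumptions rather than the patent's implementation (the caller would save and suspend the returned task before reusing its computing power).

```python
from dataclasses import dataclass, field

@dataclass
class RunningTask:
    priority: int
    releasable_power: float   # computing power freed if this task is suspended

@dataclass
class GpuCluster:
    name: str
    idle_power: float         # currently idle computing power
    gpus_needed: int          # GPUs this cluster would use for the pending task
    tasks: list = field(default_factory=list)

def schedule(pending_priority: int, min_power: float, clusters):
    """Return (cluster, task_to_suspend) for the pending task, or None to wait."""
    # Pass 1: a cluster whose idle power already meets the user's minimum
    # requirement, preferring the one that needs the fewest GPUs.
    fits = [c for c in clusters if c.idle_power >= min_power]
    if fits:
        return min(fits, key=lambda c: c.gpus_needed), None
    # Pass 2: traverse executing tasks in ascending priority order; only a
    # task with lower priority than the pending one may be preempted.
    candidates = []
    for c in clusters:
        for t in sorted(c.tasks, key=lambda t: t.priority):
            if (t.priority < pending_priority
                    and c.idle_power + t.releasable_power >= min_power):
                candidates.append((c, t))
                break
    if not candidates:
        return None               # nothing preemptible: wait for next round
    return min(candidates, key=lambda ct: ct[0].gpus_needed)
```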
Optionally, the method further includes:
the power consumption monitoring module obtains the power consumption of the GPU module in real time and compares the current power-consumption value of the GPU module with the set value; if the current power-consumption value of the GPU module is greater than the set value, it controls the fan control module to increase the fan speed.
A third aspect of the present invention provides a GPU server, including the GPU management apparatus based on the detection and adjustment module as described in the first aspect.
The technical solution adopted by the present invention includes the following technical effects:
1. The present invention effectively solves the prior-art problem that the interconnection topology between the CPU and the GPU cannot be adjusted for different application scenarios to achieve a reasonable allocation of floating-point and integer operations, and effectively improves CPU and GPU utilization and task-processing efficiency.
2. In the technical solution of the present invention, the GPU sub-modules, and the GPUs within them, communicate through the GPU management module and jointly complete the data-processing tasks issued by the GPU management module, avoiding the low communication efficiency caused when GPU-to-GPU communication must be converted through the CPU module and improving the communication efficiency between GPUs.
3. In the technical solution of the present invention, each GPU management sub-module is communicatively connected with a plurality of parallel-connected GPU sub-modules, which raises the parallel-processing bandwidth and brings the interconnect bandwidth between GPUs to its best performance.
4. In the technical solution of the present invention, the fan control module works together with a separately provided power consumption monitoring module that monitors the power consumption of the GPU module in real time; once the monitored power consumption of the GPU module exceeds the set threshold, the fan speed is increased in time through the fan control module, avoiding the overheating caused when cooling cannot keep pace with sharp changes in the power consumption of the GPU module, which would otherwise reduce GPU efficiency.
It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and do not limit the present invention.
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of the apparatus of Embodiment 1 of the present invention;
Fig. 2 is a schematic flow chart of the method of Embodiment 2 of the present invention;
Fig. 3 is a schematic flow chart of the method of Embodiment 3 of the present invention;
Fig. 4 is a schematic flow chart of step S6 in the method of Embodiment 3 of the present invention;
Fig. 5 is a schematic flow chart of step S64 in the method of Embodiment 3 of the present invention;
Fig. 6 is a schematic flow chart of the method of Embodiment 4 of the present invention;
Fig. 7 is a schematic structural diagram of the GPU server of Embodiment 5 of the present invention.
To explain the technical features of this solution clearly, the present invention is described in detail below through specific embodiments and in conjunction with the accompanying drawings. The following disclosure provides many different embodiments or examples for implementing different structures of the invention. To simplify the disclosure, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in different examples; this repetition is for the purposes of simplicity and clarity and does not in itself indicate a relationship between the various embodiments and/or arrangements discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and of processing techniques and processes are omitted to avoid unnecessarily limiting the present invention.
Embodiment 1
As shown in Fig. 1, the present invention provides a GPU management apparatus based on a detection and adjustment module, including: a CPU module 1, a CPU management module 2, a conversion module 3, a GPU module 4, a GPU management module 5, and a detection and adjustment module 6. The adjustment control terminal of the detection and adjustment module 6 is communicatively connected to the control terminals of the GPU management module 5 and the CPU management module 2 respectively, and is used to detect the type of data to be processed and to select the corresponding GPU module 4 and/or CPU module 1 for processing according to that type. The CPU management module 2 is communicatively connected with the CPU module 1 and manages the CPU module 1; the GPU management module 5 is communicatively connected with the GPU module 4 and is responsible for managing the GPU module 4 and for the balanced distribution of pending tasks; the CPU module 1 is communicatively connected to the GPU module 4 through the conversion module 3.
Specifically, the GPU module 4 includes a plurality of GPU sub-modules 41 connected in parallel; each GPU sub-module 41 includes several GPUs 411 and an accelerator card 412 arranged in parallel. The GPU sub-modules 41, and the GPUs 411 within them, all communicate through the GPU management module 5 and jointly complete the data-processing tasks issued by the GPU management module 5.
CPUs execute efficiently in computation-intensive application fields such as digital media processing and scientific computing, while GPUs execute efficiently in large-scale data-parallel computing. Efficient GPU-based parallel computing mainly relies on the cooperative CPU-GPU computing mode of a hybrid architecture, which improves program execution performance. On a hybrid multi-CPU, multi-GPU system platform, data cannot be transmitted directly from one GPU to another: a GPU must first transmit the data to a CPU through the conversion module, and the CPU then forwards the data to the GPU that receives it, a communication path that brings enormous communication overhead. Here, the GPU sub-modules, and the GPUs within them, communicate through the GPU management module; the GPU management module 5 (playing the dual roles of switching and management) distributes tasks to the GPUs in a balanced manner, preventing high GPU-to-GPU communication overhead from affecting the overall performance of data-flow programs. Jointly completing the data-processing tasks issued by the GPU management module in this way avoids the low communication efficiency caused when GPU-to-GPU communication must be converted through the CPU module, and improves the communication efficiency between GPUs.
The CPU module 1 includes at least two CPUs 11, namely CPU0 and CPU1, and the conversion module 3 includes a Retimer chip and a PCIe Switch chip. The Retimer chip is connected in series between the CPU and the PCIe Switch chip, with one end connected to the CPU and the other to the PCIe Switch chip, and is mainly used for signal relay to guarantee lossless signal transmission; the main function of the PCIe Switch chip is lane conversion. Each CPU 11 is connected to two conversion modules 3, and each conversion module 3 is connected to a corresponding GPU sub-module 41. Correspondingly, there are four GPU sub-modules 41, each including two GPUs 411 and one accelerator card 412, namely GPU0 through GPU7 and accelerator card 0 through accelerator card 3. One PCIe x16 lane from CPU0 is expanded by the Retimer chip and PCIe Switch chip into three PCIe x16 lanes connected to GPU0, GPU1, and accelerator card 0; the other PCIe x16 lane from CPU0 is expanded into three PCIe x16 lanes connected to GPU2, GPU3, and accelerator card 1; one PCIe x16 lane from CPU1 is expanded into three PCIe x16 lanes connected to GPU4, GPU5, and accelerator card 2; the other PCIe x16 lane from CPU1 is expanded into three PCIe x16 lanes connected to GPU6, GPU7, and accelerator card 3. GPU0 through GPU7 and accelerator card 0 through accelerator card 3 are each also connected to the GPU management module 5.
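For readability, the fanout just described can be condensed into a mapping. The sketch below is purely illustrative; the lane and device labels are shorthand invented here, not signal names from the patent.

```python
# Each root PCIe x16 lane is expanded by a Retimer + PCIe Switch pair into
# three downstream x16 lanes: two GPUs and one accelerator card.
PCIE_TOPOLOGY = {
    ("CPU0", "lane0"): ("GPU0", "GPU1", "ACC0"),
    ("CPU0", "lane1"): ("GPU2", "GPU3", "ACC1"),
    ("CPU1", "lane0"): ("GPU4", "GPU5", "ACC2"),
    ("CPU1", "lane1"): ("GPU6", "GPU7", "ACC3"),
}

def devices_behind(cpu: str) -> list:
    """All GPUs and accelerator cards reachable from a given CPU's root lanes."""
    return [dev
            for (c, _lane), devs in PCIE_TOPOLOGY.items() if c == cpu
            for dev in devs]
```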
The GPU management module 5 includes a plurality of GPU management sub-modules 51 connected in parallel, and each GPU management sub-module 51 is communicatively connected with a plurality of parallel-connected GPU sub-modules 41.
To match the GPU sub-modules 41 of the present invention, the number of GPU management sub-modules 51 may be more than one (one is also possible, but the bandwidth performance would not be optimal); specifically, there may be six. Each GPU management sub-module 51 is communicatively connected to multiple parallel-connected GPU sub-modules, which raises the parallel-processing bandwidth and brings the interconnect bandwidth between GPUs to its best performance.
Further, the apparatus also includes a power consumption monitoring module 7 and a fan control module 8. The monitoring end of the power consumption monitoring module 7 is connected to the GPU module 4 for real-time monitoring of the power consumption of the GPU module 4, and the output end of the power consumption monitoring module 7 is connected to the input end of the fan control module 8; once the monitored power consumption of the GPU module 4 exceeds the set threshold, the fan control module 8 increases the fan speed.
Specifically, the fan control module 8 may include a BMC 81 (Baseboard Management Controller), a CPLD 82 (Complex Programmable Logic Device), and a fan 83. The control output of the BMC 81 is connected to the control input of the fan 83, the control output of the CPLD 82 is connected to the control input of the fan, and the monitoring end of the CPLD 82 is connected to the fault output of the BMC 81. Under normal conditions, the BMC 81 controls fan operation; once the CPLD 82 detects a BMC fault, the CPLD 82 takes over from the BMC 81 to control the fan.
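The BMC-to-CPLD handover can be pictured as a small watchdog; the sketch below assumes the CPLD reads the BMC's fault line and pins a safe high fan speed after taking over, which is an added assumption (the patent only states that the CPLD takes over fan control).

```python
from dataclasses import dataclass

@dataclass
class FanControlPath:
    fan_duty: float = 0.3      # current fan duty cycle, 0.0 to 1.0
    bmc_healthy: bool = True

    def bmc_set_duty(self, duty: float) -> None:
        # In normal operation the BMC drives the fan control input.
        if self.bmc_healthy:
            self.fan_duty = duty

    def cpld_watchdog(self, bmc_fault_line: bool) -> None:
        # The CPLD monitors the BMC's fault output; on a fault it takes over
        # the fan and (by assumption) forces a safe high speed.
        if bmc_fault_line:
            self.bmc_healthy = False
            self.fan_duty = 1.0
```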
In the technical solution of the present invention, the fan control module 8 works with the separately provided power consumption monitoring module 7, which monitors the power consumption of the GPU module 4 in real time. Once the monitored power consumption of the GPU module 4 exceeds the set threshold, the fan speed is promptly increased through the fan control module 8, avoiding the overheating that occurs when the fan control module 8 cannot dissipate heat in time during sharp changes in the power consumption of the GPU module 4, which would otherwise reduce GPU efficiency. The purpose of providing the power consumption monitoring module 7 separately is to shorten the power-consumption monitoring and alarm time for the GPU module: when a BMC monitors the power consumption of the GPU module 4, it generally obtains the value by polling, with a polling period of roughly 1 s, whereas the power consumption of the GPU module 4 often changes on the microsecond scale. If the BMC monitored GPU power directly, alarms would easily be delayed and the GPU module 4 would overheat. With the separately provided power consumption monitoring module 7, the BMC can be notified promptly when the power consumption of the GPU module 4 changes sharply, so the fan speed is adjusted, the GPU module 4 is cooled in time, and heat-dissipation problems are prevented from affecting the operation of the GPU module.
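A minimal sketch of why the separate monitor helps, assuming a fast sampling loop and a hypothetical notification hook toward the BMC (all names invented for illustration):

```python
import time

POWER_THRESHOLD_W = 300.0   # hypothetical set threshold for the GPU module
SAMPLE_PERIOD_S = 1e-4      # sampling far below the BMC's ~1 s polling period

def read_gpu_power_w() -> float:
    """Placeholder for the monitoring module's hardware power sense."""
    raise NotImplementedError

def notify_bmc_raise_fan_speed() -> None:
    """Placeholder for the signal that tells the BMC to raise the fan speed."""
    raise NotImplementedError

def monitor_loop() -> None:
    # Sampling on the microsecond-to-millisecond scale lets a sharp power
    # excursion be reported long before a ~1 s BMC polling cycle would see it.
    while True:
        if read_gpu_power_w() > POWER_THRESHOLD_W:
            notify_bmc_raise_fan_speed()
        time.sleep(SAMPLE_PERIOD_S)
```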
The present invention effectively solves the prior-art problem that the interconnection topology between the CPU and the GPU cannot be adjusted for different application scenarios to achieve a reasonable allocation of floating-point and integer operations, and effectively improves CPU and GPU utilization and task-processing efficiency.
Embodiment 2
As shown in Fig. 2, the technical solution of the present invention also provides a GPU management method based on a detection and adjustment module, implemented on the basis of Embodiment 1 of the present invention, including:
S1: divide the pending task into integer operations and floating-point operations;
S2: the detection and adjustment module detects the task type;
S3: if it is a floating-point task, preferentially call the GPU module through the GPU management module to process the data;
S4: if it is an integer task, preferentially call the CPU module through the CPU management module to process the data;
S5: if the pending task includes both an integer-operation part and a floating-point-operation part, preferentially dispatch the floating-point part to the GPU module through the GPU management module and the integer part to the CPU module through the CPU management module.
The present invention effectively solves the prior-art problem that the interconnection topology between the CPU and the GPU cannot be adjusted for different application scenarios so as to reach a reasonable allocation between floating-point and integer operations, and it effectively improves CPU and GPU utilization as well as task-processing efficiency.
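For illustration only, the dispatch flow of steps S1-S5 can be sketched as follows. The Task model and the submit interfaces of the two management modules are assumptions introduced here.

```python
# Minimal sketch of steps S1-S5: split a pending task into its integer
# and floating-point parts, then route each part to the preferred
# module. The Task model and the manager interfaces are illustrative
# assumptions.

from dataclasses import dataclass, field


@dataclass
class Task:
    integer_ops: list = field(default_factory=list)
    float_ops: list = field(default_factory=list)


def dispatch(task: Task, gpu_manager, cpu_manager) -> None:
    # S3/S5: floating-point work preferentially goes to the GPU module
    if task.float_ops:
        gpu_manager.submit(task.float_ops)   # assumed interface
    # S4/S5: integer work preferentially goes to the CPU module
    if task.integer_ops:
        cpu_manager.submit(task.integer_ops)  # assumed interface
```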
Embodiment 3
As shown in FIG. 3, the technical solution of the present invention further provides a GPU management method based on a detection and adjustment module, implemented on the basis of Embodiment 1 of the present invention and comprising:
S1, dividing the tasks to be processed into integer operations and floating-point operations;
S2, detecting the task type by the detection and adjustment module;
S3, if the task is a floating-point operation task, preferentially invoking the GPU module through the GPU management module to perform the operation on the data;
S4, if the task is an integer operation task, preferentially invoking the CPU module through the CPU management module to perform the operation on the data;
S5, if the task to be processed includes both an integer-operation part and a floating-point-operation part, preferentially invoking the GPU module through the GPU management module to process the floating-point part, and preferentially invoking the CPU module through the CPU management module to process the integer part;
S6, when the GPU management module receives a task assigned by the detection and adjustment module, obtaining the task with the highest priority from the task queue and scheduling the GPU cluster resources in the GPU module according to the priority of the task to be processed.
As shown in FIG. 4, step S6 specifically includes:
S61, the GPU management module traverses the GPU cluster resources;
S62, judging whether the idle computing capability of the current GPU cluster meets the minimum computing-capability requirement of the user corresponding to the task to be processed; if so, executing step S63; otherwise, executing step S64;
S63, assigning the task to be processed to the GPU cluster that meets the minimum computing-capability requirement while requiring the fewest GPUs;
S64, traversing the currently executing tasks in ascending order of task priority, and scheduling the task to be processed according to the priorities of the currently executing tasks and of the task to be processed.
In step S63, if a minimum of four GPUs suffices to meet the minimum computing-capability requirement, the task to be processed is assigned to the corresponding four GPUs for processing.
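For illustration only, the cluster-selection rule of steps S61-S63 can be sketched as follows. The Cluster model and its fields are assumptions introduced here.

```python
# Minimal sketch of steps S61-S63: among clusters whose idle computing
# capability meets the user's minimum requirement, pick the one that
# needs the fewest GPUs. The Cluster model is an illustrative assumption.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Cluster:
    name: str
    idle_capacity: float   # idle computing capability
    gpus_needed: int       # GPUs the pending task would occupy here


def pick_cluster(clusters: list, min_capacity: float) -> Optional[Cluster]:
    candidates = [c for c in clusters if c.idle_capacity >= min_capacity]
    if not candidates:
        return None  # fall through to priority-based scheduling (S64)
    return min(candidates, key=lambda c: c.gpus_needed)  # fewest GPUs wins
```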
Further, as shown in FIG. 5, step S64 specifically includes:
S641, judging whether the priorities of all currently executing tasks are greater than or equal to the priority of the task to be processed; if so, executing step S642; otherwise, executing step S643;
S642, the task to be processed waits for the next scheduling round;
S643, computing in turn, for each GPU cluster processing a currently executing task, the sum of its idle computing capability and the computing capability to be released;
S644, judging whether the sum of the idle computing capability of the GPU cluster currently executing the task and the computing capability to be released meets the minimum computing-capability requirement of the user corresponding to the task to be processed; if so, executing step S645; otherwise, executing step S646;
S645, assigning the task to be processed to the GPU cluster that meets the minimum computing-capability requirement while requiring the fewest GPUs, and saving and then suspending the currently executing tasks in that GPU cluster whose computing capability is to be released;
S646, waiting for the next scheduling round.
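For illustration only, the preemption logic of steps S641-S646 can be sketched as follows. For brevity the sketch pools capacity into a single figure rather than computing it per GPU cluster as in S643, and the task attributes and methods are assumptions introduced here.

```python
# Minimal sketch of steps S641-S646. If every running task has priority
# greater than or equal to the pending task, wait (S642); otherwise walk
# the running tasks in ascending priority, accumulating releasable
# capacity until idle plus releasable capacity meets the requirement
# (S644), then save and suspend the preempted tasks (S645). The task
# attributes (priority, capacity, min_capacity) and the save_state /
# suspend methods are illustrative assumptions.


def schedule_with_preemption(pending, running_tasks, idle_capacity: float):
    if all(t.priority >= pending.priority for t in running_tasks):
        return None  # S642: wait for the next scheduling round

    to_suspend = []
    total = idle_capacity
    # lowest-priority tasks are considered for preemption first
    for t in sorted(running_tasks, key=lambda t: t.priority):
        if t.priority >= pending.priority:
            break  # only lower-priority tasks may be preempted
        to_suspend.append(t)
        total += t.capacity
        if total >= pending.min_capacity:      # S644 satisfied
            for victim in to_suspend:          # S645: save, then suspend
                victim.save_state()
                victim.suspend()
            return to_suspend
    return None  # S646: still insufficient; wait for the next round
```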
In the technical solution of the present invention, the multiple GPU sub-modules, as well as the individual GPUs, communicate through the GPU management module and jointly complete the data-processing tasks issued by the GPU management module. This avoids the low communication efficiency caused by routing inter-GPU communication through the CPU module and thus improves the communication efficiency between GPUs.
In the technical solution of the present invention, each GPU management sub-module is communicatively connected to a plurality of GPU sub-modules connected in parallel, which increases the parallel-processing bandwidth and brings the interconnection bandwidth between GPUs to its best performance.
In the embodiment of the present invention, the GPU management module distributes tasks evenly among the GPUs, preventing high inter-GPU communication overhead from degrading the overall performance of data-flow programs, achieving load balancing between GPUs, and ensuring efficient GPU operation.
Embodiment 4
As shown in FIG. 6, the technical solution of the present invention further provides a GPU management method based on a detection and adjustment module, implemented on the basis of Embodiment 1 of the present invention and comprising:
S1, dividing the tasks to be processed into integer operations and floating-point operations;
S2, detecting the task type by the detection and adjustment module;
S3, if the task is a floating-point operation task, preferentially invoking the GPU module through the GPU management module to perform the operation on the data;
S4, if the task is an integer operation task, preferentially invoking the CPU module through the CPU management module to perform the operation on the data;
S5, if the task to be processed includes both an integer-operation part and a floating-point-operation part, preferentially invoking the GPU module through the GPU management module to process the floating-point part, and preferentially invoking the CPU module through the CPU management module to process the integer part;
S6, when the GPU management module receives a task assigned by the detection and adjustment module, obtaining the task with the highest priority from the task queue and scheduling the GPU cluster resources in the GPU module according to the priority of the task to be processed;
S7, the power consumption monitoring module obtains the power consumption of the GPU module in real time and compares the current power-consumption value of the GPU module with the set value; if the current power-consumption value of the GPU module is greater than the set value, the fan control module is directed to increase the fan speed.
In the technical solution of the present invention, the fan control module works together with a separately provided power consumption monitoring module. The power consumption monitoring module monitors the power consumption of the GPU module in real time; once the monitored power consumption exceeds the set threshold, the fan speed is promptly increased through the fan control module. This avoids the heating problem caused by untimely heat dissipation when the power consumption of the GPU module changes drastically, which would otherwise degrade GPU efficiency.
Embodiment 5
As shown in FIG. 7, the technical solution of the present invention further provides a GPU server that includes the GPU management apparatus based on a detection and adjustment module of Embodiment 1 of the present invention. The GPU server may be 4U high and, in addition to the GPU management apparatus of Embodiment 1, may further include a CPU Board (which may integrate 2 CPUs), a GPU Board (which may integrate 8 GPUs), a Bridge Board (the interconnection connector between the CPU board and the GPU board), a Riser Board (expansion board), a PDB Board (power backplane), and redundant power supplies (4+4 or 3+3 PSUs); other GPU server structures are also possible, and the present invention is not limited in this respect.
Although the specific embodiments of the present invention have been described above with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those skilled in the art should understand that, on the basis of the technical solutions of the present invention, various modifications or variations that can be made without creative effort still fall within the protection scope of the present invention.
Claims (10)
- A GPU management apparatus based on a detection and adjustment module, characterized by comprising: a CPU module, a CPU management module, a conversion module, a GPU module, a GPU management module, and a detection and adjustment module, wherein the adjustment control terminal of the detection and adjustment module is communicatively connected to the control terminals of the GPU management module and of the CPU management module, respectively, and is used to detect the type of data to be processed and to select the corresponding GPU module and/or CPU module for processing according to that type; the CPU management module is communicatively connected to the CPU module to manage the CPU module; the GPU management module is communicatively connected to the GPU module to manage the GPU module and to distribute the tasks to be processed evenly; and the CPU module is communicatively connected to the GPU module through the conversion module.
- The GPU management apparatus based on a detection and adjustment module according to claim 1, characterized in that the GPU module comprises a plurality of GPU sub-modules connected in parallel, each GPU sub-module comprises several GPUs and an accelerator card, the GPUs and the accelerator card are arranged in parallel, and the GPU sub-modules, as well as the GPUs, communicate through the GPU management module and jointly complete the data-processing tasks issued by the GPU management module.
- The GPU management apparatus based on a detection and adjustment module according to claim 2, characterized in that the GPU management module comprises a plurality of GPU management sub-modules connected in parallel, and each GPU management sub-module is communicatively connected to a plurality of GPU sub-modules connected in parallel.
- The GPU management apparatus based on a detection and adjustment module according to any one of claims 1-3, characterized by further comprising a power consumption monitoring module and a fan control module, wherein the monitoring terminal of the power consumption monitoring module is connected to the GPU module to monitor the power consumption of the GPU module in real time, the output of the power consumption monitoring module is connected to the input of the fan control module, and once the monitored power consumption of the GPU module exceeds a set threshold, the fan speed is increased through the fan control module.
- A GPU management method based on a detection and adjustment module, characterized by being implemented on the basis of the GPU management apparatus based on a detection and adjustment module according to any one of claims 1-4 and comprising: dividing the tasks to be processed into integer operations and floating-point operations; and detecting the task type by the detection and adjustment module, wherein if the task is a floating-point operation task, the GPU module is invoked through the GPU management module to perform the operation on the data; if the task is an integer operation task, the CPU module is invoked through the CPU management module to perform the operation on the data; and if the task to be processed includes both an integer-operation part and a floating-point-operation part, the GPU module is invoked through the GPU management module to process the floating-point part and the CPU module is invoked through the CPU management module to process the integer part.
- The GPU management method based on a detection and adjustment module according to claim 5, characterized in that, when the GPU management module receives a task assigned by the detection and adjustment module, it obtains the task with the highest priority from the task queue and schedules the GPU cluster resources in the GPU module according to the priority of the task to be processed.
- The GPU management method based on a detection and adjustment module according to claim 6, characterized in that scheduling the GPU cluster resources in the GPU module according to the priority of the task to be processed specifically comprises: the GPU management module traverses the GPU cluster resources; if the idle computing capability of the current GPU cluster meets the minimum computing-capability requirement of the user corresponding to the task to be processed, the task to be processed is assigned to the GPU cluster that meets the minimum computing-capability requirement while requiring the fewest GPUs; if the idle computing capability of the current GPU cluster cannot meet the minimum computing-capability requirement of the user corresponding to the task to be processed, the currently executing tasks are traversed in ascending order of task priority, and the task to be processed is scheduled according to the priorities of the currently executing tasks and of the task to be processed.
- The GPU management method based on a detection and adjustment module according to claim 7, characterized in that scheduling the task to be processed according to the priorities of the currently executing tasks and of the task to be processed specifically comprises: if the priorities of all currently executing tasks are greater than or equal to the priority of the task to be processed, the task to be processed waits for the next scheduling round; if the priority of a currently executing task is lower than the priority of the task to be processed, the sum of the idle computing capability and the computing capability to be released is computed in turn for each GPU cluster processing a currently executing task; if the sum of the idle computing capability of the GPU cluster currently executing the task and the computing capability to be released does not meet the minimum computing-capability requirement of the user corresponding to the task to be processed, the task waits for the next scheduling round; if that sum meets the minimum computing-capability requirement of the user corresponding to the task to be processed, the task to be processed is assigned to the GPU cluster that meets the minimum computing-capability requirement while requiring the fewest GPUs, and the currently executing tasks in that GPU cluster whose computing capability is to be released are saved and then suspended.
- The GPU management method based on a detection and adjustment module according to any one of claims 5-8, characterized by further comprising: the power consumption monitoring module obtains the power consumption of the GPU module in real time and compares the current power-consumption value of the GPU module with the set value; if the current power-consumption value of the GPU module is greater than the set value, the fan control module is directed to increase the fan speed.
- A GPU server, characterized by comprising the GPU management apparatus based on a detection and adjustment module according to any one of claims 1-4.