EP3686738A1 - Device and method for accelerating graphics processor units, and computer readable storage medium - Google Patents
Device and method for accelerating graphics processor units, and computer readable storage medium
- Publication number
- EP3686738A1 (application EP19170410.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- gpus
- usage
- accelerating
- switches
- threshold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5044—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
- G06F15/17356—Indirect interconnection networks
- G06F15/17368—Indirect interconnection networks non hierarchical topologies
- G06F15/17375—One dimensional, e.g. linear array, ring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/38—Concurrent instruction execution, e.g. pipeline, look ahead
- G06F9/3877—Concurrent instruction execution, e.g. pipeline, look ahead using a slave processor, e.g. coprocessor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
A method for accelerating graphics processing units (GPUs) receives a request for usage of GPU resource sent by a user, calculates the quantity of GPUs which are necessary, and arranges the GPUs in several ways to maximize data transmission from and between the GPUs, and between the GPUs and one or more central processing units (CPUs), the GPUs and the CPUs being connected by switches. A device for accelerating GPUs is also provided.
Description
- The disclosure generally relates to computer applications.
- A graphics processing unit (GPU) works with a central processing unit (CPU) to accelerate deep learning, analysis, and engineering applications on a computer. To maximize utilization of the GPUs, tasks are allocated to resources. A job scheduler, such as SLURM/LSF/BPS, is used to schedule incoming tasks. However, such scheduling can create a bandwidth bottleneck in the PCIe (peripheral component interconnect express) bus, which has certain limitations of its own, and acceleration by the GPUs is thus limited.
- In view of the above situation, it is necessary to provide a device and a method for accelerating graphics processor units (GPUs), and a computer readable storage medium, to reasonably arrange the GPUs and optimize GPU computing performance, thereby solving the above problem.
- A first aspect of the present disclosure provides a method for accelerating GPUs, a plurality of GPUs exchanging data with central processing units (CPUs) through switches, the number of GPUs being greater than or equal to the number of switches, which is greater than or equal to the number of CPUs. The method includes: receiving a request for usage of GPU resource sent by a user; calculating a quantity of the GPUs necessary for the usage; arranging the GPUs according to the usage to maximize data transmission of the GPUs; and processing the request for GPU resource by the arranged GPUs.
- A second aspect of the present disclosure provides a device for accelerating GPUs, a plurality of GPUs exchanging data with CPUs through switches, the number of GPUs being greater than or equal to the number of switches, which is greater than or equal to the number of CPUs. The device includes: a communication unit configured to establish a communication connection between the GPUs and the switches, and between the switches and the CPUs; a processor; and a storage device storing one or more programs that, when executed by the processor, cause the processor to: receive a request for usage of GPU resource sent by a user; calculate a quantity of the GPUs necessary for the usage; arrange the GPUs according to the usage to maximize data transmission of the GPUs; and process the request for GPU resource by the arranged GPUs.
- A third aspect of the present disclosure provides a computer readable storage medium having stored thereon instructions that, when executed by at least one processor of a computing device, cause the processor to perform the method for accelerating GPUs described above.
- The method for accelerating GPUs of the present disclosure calculates the quantity of GPUs necessary for the usage, and arranges the GPUs according to the usage to maximize data transmission of the GPUs. The present disclosure further provides the device for accelerating GPUs and the computer readable storage medium. Applying the method makes it possible to reasonably arrange the GPUs and to optimize GPU computing performance according to the request for usage of GPU resource sent by a user.
- Implementations of the present technology will now be described, by way of embodiments, with reference to the attached figures.
- FIG. 1 is a schematic diagram of a GPU accelerating device in accordance with an embodiment of the present disclosure.
- FIG. 2 is a schematic diagram of an embodiment of a data processing system.
- FIG. 3 is a flow chart of an embodiment of an accelerating method for GPUs.
- FIG. 4 is a schematic diagram of an embodiment of multiple GPUs arranged in a first case.
- FIG. 5 is a schematic diagram of an embodiment of multiple GPUs arranged in a second case.
- FIG. 6 is a schematic diagram of an embodiment of multiple GPUs arranged in a third case.
- It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant feature being described. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features. The description is not to be considered as limiting the scope of the embodiments described herein.
- The term "comprising" means "including, but not necessarily limited to"; it specifically indicates open-ended inclusion or membership in the so-described combination, group, series, and the like.
- FIG. 1 illustrates a GPU accelerating device 10 of an embodiment of the present disclosure. A plurality of GPUs can exchange data with CPUs through switches, and two GPUs can exchange data with each other. An interactive connection can be formed between a GPU and a switch, and an interactive connection can be formed between a switch and a CPU. A Quick Path Interconnect (QPI, also known as CSI, a system for a common interface) can be formed between two CPUs. The number of GPUs can be greater than or equal to the number of switches, and the number of switches can be greater than or equal to the number of CPUs. In at least one embodiment, the switch can be, but is not limited to, a PCIe switch.
- The GPU accelerating device 10 can include a communication unit 100, a processor 200, and a storage device 300. The processor 200 is electrically connected to the communication unit 100 and the storage device 300.
- The communication unit 100 can establish a communication connection between the GPUs and the switches, and between the switches and the CPUs. In at least one embodiment, the communication unit 100 can establish communication with other mobile terminals through a wireless network. The wireless network can be, but is not limited to, WIFI, BLUETOOTH, cellular mobile network, satellite network, and the like.
- In at least one embodiment, the communication unit 100 can include independent connection ports, including but not limited to, D-Sub port, DVI-I terminal, Video-In & Video-Out port, composite video terminal, S terminal, enhanced S terminal, DVI port, and HDMI port.
- The storage device 300 can store data and program code for the GPUs.
- The storage device 300 can further store a formula for calculating the usage of the GPUs under a user resource request. The storage device 300 can further store the principles of arrangement of the GPUs and the GPU index rule.
- The storage device 300 may be, but is not limited to, read-only memory (ROM), random-access memory (RAM), programmable read-only memory (PROM), erasable programmable ROM (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable ROM (EEPROM), compact disc read-only memory (CD-ROM), hard disk, solid state drive, or other forms of electronic, electromagnetic, or optical recording medium.
- The processor 200 can be a digital signal processor (DSP), a microcontroller unit (MCU), a field-programmable gate array (FPGA), a CPU, a single-chip microcomputer, a system on chip (SoC), or other equivalent dedicated chip.
- FIG. 2 illustrates a data processing system 400 executed in the GPU accelerating device 10. The data processing system 400 may include several modules, which are a collection of software instructions stored in the storage device 300 and executed by the processor 200. In the embodiment as disclosed, the data processing system 400 may include a receiving module 410, a calculation module 420, an arranging module 430, and a data processing module 440.
- The receiving module 410 receives a resource usage request sent by the user.
- The calculating module 420 calculates the usage relevant to the resource request according to preset calculation rules, and further obtains the quantity of GPUs required for processing the request.
- The calculation rules are based on factors such as the requested resource usage, the completion time, and the cost. For example, if the requested usage is relatively simple, the amount of data is relatively small, and the computational requirement on the GPUs is low, then fewer GPUs are needed; in the converse situation, more GPUs are needed. If there is a time limit and the calculation needs to be completed as soon as possible, more GPUs are needed. In theory, the more GPUs are used, the faster the calculation of the resource usage can be completed, but completing such computing costs more. The user determines the number of GPUs needed to process the requirement according to the above-mentioned factors.
- The arranging module 430 arranges the relationships between the GPUs and the switches, and in relation to the CPUs, according to the usage of GPUs and the preset arrangement principle, so as to arrange the GPU resources for optimal acceleration.
- In at least one embodiment, there are three possible cases. In a first case, when the usage of GPUs calculated by the calculating module 420 is less than or equal to a first threshold, the arranging module 430 arranges each GPU to communicate with one switch. In a second case, when the usage of GPUs calculated by the calculating module 420 is greater than the first threshold but less than a second threshold, the arranging module 430 arranges the GPUs so as to maximize the bandwidth of the switches. In a third case, when the usage of GPUs is calculated to be greater than or equal to the second threshold, the arranging module 430 arranges the plurality of GPUs to create a ring index. The specific arranging method is provided in the method below; two sketches follow these module descriptions.
- The data processing module 440 utilizes the GPUs to process the request.
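- The calculation rules themselves are left open by the disclosure. Purely as an illustration of the trade-off among data volume, completion time, and cost described above, a minimal sketch of such a rule might look as follows; the function name, parameters, and arithmetic are assumptions for illustration, not the patented rule:

```python
import math

def gpus_needed(data_size_gb: float, deadline_hours: float, budget: float,
                cost_per_gpu_hour: float, gb_per_gpu_hour: float = 50.0) -> int:
    """Estimate a GPU count from workload size, a time limit, and a cost cap."""
    total_gpu_hours = data_size_gb / gb_per_gpu_hour               # total work to be done
    by_deadline = math.ceil(total_gpu_hours / deadline_hours)      # enough GPUs to finish on time
    by_budget = max(1, int(budget // (cost_per_gpu_hour * deadline_hours)))  # affordable count
    return max(1, min(by_deadline, by_budget))

# A small request resolves to few GPUs; a tighter deadline raises the count.
print(gpus_needed(data_size_gb=100, deadline_hours=1, budget=50, cost_per_gpu_hour=2))  # -> 2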
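- The three-case dispatch performed by the arranging module 430 can be sketched as follows; the thresholds of two and eight are the values given in the embodiment, and the helper itself is hypothetical:

```python
FIRST_THRESHOLD = 2   # embodiment value: up to two GPUs, one GPU per switch
SECOND_THRESHOLD = 8  # embodiment value: eight or more GPUs form a ring index

def choose_case(gpu_count: int) -> str:
    """Map the calculated GPU usage onto one of the three arrangement cases."""
    if gpu_count <= FIRST_THRESHOLD:
        return "first case: each GPU communicates with its own switch"
    if gpu_count < SECOND_THRESHOLD:
        return "second case: group the GPUs per switch to maximize switch bandwidth"
    return "third case: arrange the GPUs into a ring index"
```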
- Referring to FIG. 3, the present disclosure provides a method for accelerating one or more GPUs. The method can begin at block S301.
- At block S301, a request for usage of GPU resource sent by the user is received by the receiving module 410.
- At block S302, a quantity of GPUs necessary for the usage is calculated.
- The calculating module 420 may calculate the resource usage according to the preset calculation rules, and obtain the quantity of GPUs required to process the request.
- In detail, the calculation rules are determined based on factors such as the requested usage of resources, the completion time, and the cost. For example, if the desired usage is relatively simple, the amount of data is relatively small, and the computational requirement on the GPUs is low, then fewer GPUs are needed; conversely, more GPUs are needed. If there is a time limit and the calculation needs to be completed as soon as possible, more GPUs are needed. In theory, the more GPUs are used, the faster the calculation, but completing the computing task then costs more. The user determines the number of GPUs to use according to the above-mentioned factors.
- At block S303, the GPUs are arranged according to the usage to maximize data transmission of the GPUs.
- The arranging module 430 arranges the relationship between the GPUs and the switches, and the arrangement of the CPUs, according to the usage of GPUs and the preset arrangement principle. The GPU resources are arranged within reason to achieve optimal GPU acceleration, the arrangement principle being stored in the storage device 300.
- The arrangement principle is as follows.
- FIG. 4 illustrates the arrangement of the GPUs of the first case. When the quantity of GPUs calculated as necessary is less than or equal to a first numerical threshold, the arranging module 430 arranges each GPU to communicate with one switch, and each switch performs data interaction with one GPU. For example, in the present embodiment, the first threshold is two. When the usage of GPUs is two, GPU 510 and GPU 520 are selected. The GPU 510 communicates with a switch 610, the GPU 520 communicates with a switch 620, and the switches 610, 620 perform data interaction with a CPU 710.
- FIG. 5 illustrates the arrangement of the GPUs of the second case. When the usage of GPUs calculated as required is greater than the first threshold but less than a second threshold, the arranging module 430 distributes the GPUs into groups. Each group of GPUs communicates with one switch to form a joint body, and the joint bodies perform data interaction with at least two CPUs. For example, in at least one embodiment, the second threshold is eight. When the usage quantity of GPUs is five, four switches (610, 620, 630, and 640) and two CPUs (710 and 720) are used. The GPUs are in four groups: the GPU 510 and the GPU 550 are arranged in one group, while GPU 520, GPU 530, and GPU 540 are each in a group of their own. Each group of GPUs communicates with a switch to form a joint body. The GPU 510, the GPU 550, and the switch 610 form one joint body; the GPU 520 and the switch 620 form one joint body; the GPU 530 and the switch 630 form one joint body; and the GPU 540 and the switch 640 form one joint body. Each joint body can exchange data with one CPU. In detail, the switch 610 and the switch 620 are connected to the CPU 710, and the switch 630 and the switch 640 are connected to the CPU 720.
- If the usage quantity of GPUs is four, such as GPU 510, GPU 520, GPU 530, and GPU 540, each GPU is arranged in a group of its own.
- In the second case, each group of GPUs communicates with one switch to form a joint body, and the joint bodies are themselves distributed into groups. Each group of joint bodies can exchange data with at least two CPUs, so the bandwidth of the switches can be maximized. A grouping sketch is given below.
- With the above two arrangements of GPUs, gradients need to be exchanged, and the exchange can be done in a centralized manner: each GPU transmits its own gradient to the CPU, the CPU combines the gradients, and the CPU transmits the result to the other GPUs, as in the second sketch below.
- FIG. 6 illustrates the arrangement of the GPUs of the third case. When the usage of GPUs calculated by the calculating module 420 is greater than or equal to the second threshold, the arranging module 430 arranges the plurality of GPUs to form a ring index according to a preset index rule, and the GPUs in the ring index perform data interaction with the CPUs through at least one switch. The index rule uses the NVlink connection of the prior art and is not detailed here. For example, when eight GPUs are to be used, four switches (610, 620, 630, and 640) and two CPUs (710 and 720) are used. The eight GPUs are GPU 510, GPU 520, GPU 530, GPU 540, GPU 550, GPU 560, GPU 570, and GPU 580. The index relationships of the GPUs are changed to create a loop of the eight GPUs. Specifically, according to the preset index rule, the index numbers of the GPUs are changed by NVlink to form a loop structure in which GPU 510, GPU 520, GPU 530, GPU 540, GPU 550, GPU 560, GPU 570, and GPU 580 are connected end to end. The GPU 510 and the GPU 580 are connected to the switch 610, the GPU 520 and the GPU 570 are connected to the switch 620, the GPU 530 and the GPU 560 are connected to the switch 630, and the GPU 540 and the GPU 550 are connected to the switch 640. The switch 610 and the switch 620 are connected to the CPU 710, and the switch 630 and the switch 640 are connected to the CPU 720.
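- A sketch of this third-case ring arrangement follows, assuming the loop pairs GPUs onto the switches from the outside in, as in the FIG. 6 example; the actual NVlink index rule is not detailed by the disclosure:

```python
def ring_arrangement(gpus, switches):
    """Connect the GPUs end to end and pair them onto switches outside-in."""
    ring = [(gpus[i], gpus[(i + 1) % len(gpus)]) for i in range(len(gpus))]
    switch_pairs = {switches[i]: (gpus[i], gpus[-1 - i]) for i in range(len(switches))}
    return ring, switch_pairs

ring, pairs = ring_arrangement(
    ["510", "520", "530", "540", "550", "560", "570", "580"],
    ["610", "620", "630", "640"])
# pairs -> {'610': ('510', '580'), '620': ('520', '570'),
#           '630': ('530', '560'), '640': ('540', '550')}
```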
- In other embodiments, the relationship of connections can be changed according to the user's request.
- The index relationship between the GPUs can be changed to form the ring index, and the GPUs being the ring index reduces the data movements between the GPU and the CPU when processing requests for resources. The weight values between the GPUs are not limited by the bandwidth between GPU and GPU. NVlink accelerates communications between the GPUs and the GPUs, thereby reducing processing time, making data transfer between GPUs more efficient, and achieving optimal acceleration.
- At block S304, the request for GPU resources is processed and satisfied by the arranged GPUs.
- The
- The data processing module 440 can process the request for resource usage.
- The method disclosed can be used in the fields of image calculation, deep learning training, and the like.
- A person skilled in the art knows that all or part of the processes in the above embodiments can be implemented by a computer program to instruct related hardware, and that the program can be stored in a computer readable storage medium. When the program is executed, a flow of an embodiment of the methods as described above may be included.
- Each function in each embodiment of the present invention may be integrated in one processor, or separate physical units may exist, or two or more units may be integrated in one physical unit. The above integrated unit can be implemented in the form of hardware or in the form of hardware plus software function modules.
- It is believed that the present embodiments and their advantages will be understood from the foregoing description, and it will be apparent that various changes may be made thereto without departing from the spirit and scope of the disclosure or sacrificing all of its material advantages, the examples hereinbefore described merely being exemplary embodiments of the present disclosure.
Claims (15)
- A method for accelerating graphics processor units (GPUs), a plurality of GPUs exchanging data with central processing units (CPUs) through switches, the number of GPUs being greater than or equal to the number of switches, which is greater than or equal to the number of CPUs, wherein the method comprises: receiving a request for usage of GPU resource sent by a user; calculating a quantity of the GPUs (510, 520, 530, 540, 550, 560, 570, 580) necessary for the usage; arranging the GPUs (510, 520, 530, 540, 550, 560, 570, 580) according to the usage to maximize data transmission of the GPUs (510, 520, 530, 540, 550, 560, 570, 580); and processing the request for GPU resource by the arranged GPUs (510, 520, 530, 540, 550, 560, 570, 580).
- The method for accelerating GPUs of claim 1, wherein the process of arranging the GPUs (510, 520, 530, 540, 550, 560, 570, 580) according to the usage comprises:
when the usage of the GPUs (510, 520, 530, 540, 550, 560, 570, 580) is less than or equal to a first threshold, each of the plurality of GPUs (510, 520, 530, 540, 550, 560, 570, 580) communicates with one of the switches (610, 620, 630, 640), and the switches (610, 620, 630, 640) perform data interaction with one of the CPUs (710, 720). - The method for accelerating GPUs of claim 2, wherein the process of arranging the GPUs according to the usage further comprises:
when the usage of the GPUs (510, 520, 530, 540, 550, 560, 570, 580) is greater than the first threshold and less than or equal to a second threshold, the GPUs (510, 520, 530, 540, 550, 560, 570, 580) are distributed into a plurality of groups, each group of GPUs communicates with one of the switches (610, 620, 630, 640) to form a joint body, and joint bodies perform data interaction with at least two CPUs (710, 720). - The method for accelerating GPUs of claim 3, wherein each group of GPUs comprises one or more GPUs (510, 520, 530, 540, 550, 560, 570, 580).
- The method for accelerating GPUs of claim 3, wherein the process of arranging the GPUs according to the usage further comprises:
when the usage of the GPUs (510, 520, 530, 540, 550, 560, 570, 580) is greater than or equal to the second threshold, the GPUs (510, 520, 530, 540, 550, 560, 570, 580) are arranged to form a ring index according to a preset index rule, and the GPUs (510, 520, 530, 540, 550, 560, 570, 580) in the ring index perform data interaction with the CPUs (710, 720) through at least one switch (610, 620, 630, 640). - The method for accelerating GPUs of claim 4, wherein the preset index rule is to use the NVlink connection to change index numbers of the GPUs (510, 520, 530, 540, 550, 560, 570, 580).
- The method for accelerating GPUs of claim 5, wherein the first threshold is two, and the second threshold is eight.
- The method for accelerating GPUs of claim 5, wherein the switches (610, 620, 630, 640) are PCIe switches.
- A device for accelerating graphics processor units (GPUs), a plurality of GPUs exchanging data with central processing units (CPUs) through switches, the number of GPUs being greater than or equal to the number of switches, which is greater than or equal to the number of CPUs, wherein the device comprises: a communication unit (100) configured to establish a communication connection between the GPUs (510, 520, 530, 540, 550, 560, 570, 580) and the switches (610, 620, 630, 640), and between the switches (610, 620, 630, 640) and the CPUs (710, 720); a processor (200); and a storage device (300) storing one or more programs that, when executed by the processor (200), cause the processor (200) to: receive a request for usage of GPU resource sent by a user; calculate a quantity of the GPUs (510, 520, 530, 540, 550, 560, 570, 580) necessary for the usage; arrange the GPUs (510, 520, 530, 540, 550, 560, 570, 580) according to the usage to maximize data transmission of the GPUs (510, 520, 530, 540, 550, 560, 570, 580); and process the request for GPU resource by the arranged GPUs (510, 520, 530, 540, 550, 560, 570, 580).
- The device for accelerating GPUs of claim 6, wherein the process of arranging the GPUs (510, 520, 530, 540, 550, 560, 570, 580) according to the usage comprises:
when the usage of the GPUs (510, 520, 530, 540, 550, 560, 570, 580) is less than or equal to a first threshold, each of the plurality of GPUs (510, 520, 530, 540, 550, 560, 570, 580) communicates with one of the switches (610, 620, 630, 640), and the switches (610, 620, 630, 640) perform data interaction with one of the CPUs (710, 720). - The device for accelerating GPUs of claim 7, wherein the process of arranging the GPUs according to the usage further comprises:
when the usage of the GPUs (510, 520, 530, 540, 550, 560, 570, 580) is greater than the first threshold and less than or equal to a second threshold, the GPUs (510, 520, 530, 540, 550, 560, 570, 580) are distributed into a plurality of groups, each group of GPUs communicates with one of the switches (610, 620, 630, 640) to form a joint body, and joint bodies perform data interaction with at least two CPUs (710, 720). - The device for accelerating GPUs of claim 11, wherein each group of GPUs comprises one or more GPUs (510, 520, 530, 540, 550, 560, 570, 580).
- The device for accelerating GPUs of claim 8, wherein the process of arranging the GPUs (510, 520, 530, 540, 550, 560, 570, 580) according to the usage further comprises: when the usage of the GPUs (510, 520, 530, 540, 550, 560, 570, 580) is greater than or equal to the second threshold, the GPUs (510, 520, 530, 540, 550, 560, 570, 580) are arranged to form a ring index according to a preset index rule; the preset index rule is to use the NVlink connection to change index numbers of the GPUs (510, 520, 530, 540, 550, 560, 570, 580); the first threshold is two, and the second threshold is eight.
- The device for accelerating GPUs of claim 9, wherein the switches (610, 620, 630, 640) are PCIe switches.
- A computer readable storage medium having stored thereon instructions that, when executed by at least one processor of a computing device, cause the processor to perform the method for accelerating GPUs claimed in any one of claims 1-5.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910072335.7A CN111489279B (en) | 2019-01-25 | 2019-01-25 | GPU acceleration optimization method and device and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3686738A1 true EP3686738A1 (en) | 2020-07-29 |
Family
ID=66239985
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19170410.5A Withdrawn EP3686738A1 (en) | 2019-01-25 | 2019-04-19 | Device and method for accelerating graphics processor units, and computer readable storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US10867363B2 (en) |
EP (1) | EP3686738A1 (en) |
CN (1) | CN111489279B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113791908B (en) * | 2021-09-16 | 2024-03-29 | 脸萌有限公司 | Service running method and device and electronic equipment |
CN116483587B (en) * | 2023-06-21 | 2023-09-08 | 湖南马栏山视频先进技术研究院有限公司 | Video super-division parallel method, server and medium based on image segmentation |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180027044A1 (en) * | 2016-07-25 | 2018-01-25 | Peraso Technologies Inc. | Wireless multimedia communications system and method |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8938723B1 (en) * | 2009-08-03 | 2015-01-20 | Parallels IP Holdings GmbH | Use of GPU for support and acceleration of virtual machines and virtual environments |
CN104035751B (en) * | 2014-06-20 | 2016-10-12 | 深圳市腾讯计算机系统有限公司 | Data parallel processing method based on multi-graphics processor and device |
US10361907B2 (en) * | 2015-04-27 | 2019-07-23 | Northeastern University | System for networking and analyzing geospatial data, human infrastructure, and natural elements |
CN105227669A (en) * | 2015-10-15 | 2016-01-06 | 浪潮(北京)电子信息产业有限公司 | A kind of aggregated structure system of CPU and the GPU mixing towards degree of depth study |
US9916636B2 (en) | 2016-04-08 | 2018-03-13 | International Business Machines Corporation | Dynamically provisioning and scaling graphic processing units for data analytic workloads in a hardware cloud |
US10896064B2 (en) * | 2017-03-27 | 2021-01-19 | International Business Machines Corporation | Coordinated, topology-aware CPU-GPU-memory scheduling for containerized workloads |
CN107632953A (en) | 2017-09-14 | 2018-01-26 | 郑州云海信息技术有限公司 | A kind of GPU casees PCIE extends interconnection topology device |
US10728091B2 (en) * | 2018-04-04 | 2020-07-28 | EMC IP Holding Company LLC | Topology-aware provisioning of hardware accelerator resources in a distributed environment |
- 2019
- 2019-01-25 CN CN201910072335.7A patent/CN111489279B/en active Active
- 2019-03-29 US US16/369,028 patent/US10867363B2/en active Active
- 2019-04-19 EP EP19170410.5A patent/EP3686738A1/en not_active Withdrawn
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180027044A1 (en) * | 2016-07-25 | 2018-01-25 | Peraso Technologies Inc. | Wireless multimedia communications system and method |
Non-Patent Citations (2)
Title |
---|
ANONYMOUS: "White Paper NVIDIA DGX-1 With Tesla V100 System Architecture The Fastest Platform for Deep Learning", 16 February 2018 (2018-02-16), XP055638580, Retrieved from the Internet <URL:http://images.nvidia.com/content/pdf/dgx1-v100-system-architecture-whitepaper.pdf> [retrieved on 20191104] * |
SUNGHO SHIN ET AL: "Workload-aware Automatic Parallelization for Multi-GPU DNN Training", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 5 November 2018 (2018-11-05), XP080942141 * |
Also Published As
Publication number | Publication date |
---|---|
CN111489279A (en) | 2020-08-04 |
US20200242724A1 (en) | 2020-07-30 |
US10867363B2 (en) | 2020-12-15 |
CN111489279B (en) | 2023-10-31 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
| 17P | Request for examination filed | Effective date: 20200330
| AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| AX | Request for extension of the european patent | Extension state: BA ME
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
| 18D | Application deemed to be withdrawn | Effective date: 20211103