CN113722085A - Method and system for distributing graphic resources - Google Patents

Method and system for distributing graphic resources

Info

Publication number
CN113722085A
Authority
CN
China
Prior art keywords
gpu
graphics
processing unit
graphics processing
topology
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010457620.3A
Other languages
Chinese (zh)
Inventor
吕理钰
周思婷
王志能
郭立辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Artus Technologies Co ltd
Original Assignee
Artus Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Artus Technologies Co ltd filed Critical Artus Technologies Co ltd
Priority to CN202010457620.3A
Publication of CN113722085A
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Abstract

The invention provides a method and a system for allocating graphics resources. The method comprises the following steps: obtaining first association information between a first graphics processing unit among a plurality of graphics processing units and at least one central processing unit; obtaining second association information between the first graphics processing unit and at least one second graphics processing unit among the plurality of graphics processing units; establishing a topology structure of the plurality of graphics processing units according to the first association information and the second association information; and performing graphics resource allocation of the plurality of graphics processing units according to the established topology structure. Thereby, graphics processing performance can be effectively improved.

Description

Method and system for distributing graphic resources
Technical Field
The present invention relates to a system resource allocation technique, and in particular, to a method and a system for allocating graphics resources.
Background
With the advance of technology, the demand for computing speed in computer systems grows ever higher. Therefore, some computer systems are configured with multiple Graphics Processing Units (GPUs) to assist a Central Processing Unit (CPU) in performing graphics-related operations. However, under conventional operating mechanisms, a graphics processing task associated with a certain CPU is often randomly allocated to one or more GPUs for processing, without considering whether the communication path (or signaling path) between the GPU receiving the graphics processing task and that CPU is overly complicated, thereby reducing system performance.
Disclosure of Invention
The invention provides a method and a system for allocating graphics resources, which can effectively improve graphics processing performance.
The embodiment of the invention provides a method for allocating graphics resources, which comprises the following steps: obtaining first association information between a first graphics processing unit among a plurality of graphics processing units and at least one central processing unit; obtaining second association information between the first graphics processing unit and at least one second graphics processing unit among the plurality of graphics processing units; establishing a topology structure of the plurality of graphics processing units according to the first association information and the second association information; and performing graphics resource allocation of the plurality of graphics processing units according to the established topology structure.
The present invention further provides a system for allocating graphics resources, which includes at least one central processing unit, a plurality of graphics processing units, a storage device, and a graphics resource allocator. The plurality of graphics processing units are connected to the at least one central processing unit. The storage device is connected to the at least one central processing unit and the plurality of graphics processing units. The graphics resource allocator is coupled to the storage device. The graphics resource allocator is configured to obtain, from the storage device, first association information between a first graphics processing unit of the plurality of graphics processing units and the at least one central processing unit. The graphics resource allocator is further configured to obtain, from the storage device, second association information between the first graphics processing unit and at least one second graphics processing unit of the plurality of graphics processing units. The graphics resource allocator is further configured to establish a topology structure of the plurality of graphics processing units according to the first association information and the second association information. The graphics resource allocator is further configured to perform graphics resource allocation of the plurality of graphics processing units according to the established topology structure.
Based on the above, a topology structure of the plurality of graphics processing units can be established according to the first association information between the first graphics processing unit and the central processing unit and the second association information between the first graphics processing unit and the at least one second graphics processing unit. Then, according to the established topology structure, the graphics resources of the plurality of graphics processing units can be properly allocated, thereby effectively improving graphics processing performance.
Drawings
FIG. 1 is a schematic diagram of a system for allocating graphics resources, according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a physical connection between a CPU and a GPU according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating first association information according to an embodiment of the invention;
FIG. 4 is a diagram illustrating second association information according to an embodiment of the invention;
FIG. 5 is a schematic diagram of topology structures according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for allocating graphics resources according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.
FIG. 1 is a schematic diagram of a system for allocating graphics resources according to an embodiment of the present invention. Referring to FIG. 1, a system (also referred to as a graphics resource allocation system) 10 can be implemented in various electronic devices (or computer devices) with data processing and graphics processing functions, such as a notebook computer, a desktop computer, an industrial computer, or a server.
The system 10 includes central processing units (CPUs) 11(1)-11(n), graphics processing units (GPUs) 12(1)-12(m), a storage device 13, and a graphics resource allocator 14. Each of the CPUs 11(1)-11(n) may refer to one central processing unit chip. Each of the GPUs 12(1)-12(m) may refer to one graphics processing unit chip. In operation, the CPUs 11(1)-11(n) and the GPUs 12(1)-12(m) can cooperate to complete data processing and graphics processing tasks together. The total number of the CPUs 11(1)-11(n) and the total number of the GPUs 12(1)-12(m) may be determined by practical requirements, and the invention is not limited thereto.
The storage device 13 is used for storing data. The storage device 13 may include a volatile memory module and a non-volatile memory module. The volatile memory module may include a memory medium, such as a random access memory (RAM), that stores data in a volatile manner. The non-volatile memory module may include a read-only memory (ROM) and/or a flash memory, or other memory media capable of storing data in a non-volatile manner.
The graphics resource allocator 14 is connected to the central processing units 11(1)-11(n), the graphics processing units 12(1)-12(m), and the storage device 13. In one embodiment, the graphics resource allocator 14 may be implemented as a programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or other similar device or a combination thereof. In one embodiment, the graphics resource allocator 14 may also be implemented as program code and stored in the storage device 13.
The storage device 13 stores table information 131 and 132. The table information 131 and 132 is, for example, automatically generated by the operating system of the system 10 (e.g., a Windows operating system) or by system tools such as the BIOS, which automatically scan related hardware information after the CPUs 11(1)-11(n) and the GPUs 12(1)-12(m) are mounted on a motherboard (not shown) of the system 10.
The table information 131 describes the association information between the central processing units 11(1)-11(n) and the graphics processing units 12(1)-12(m). In particular, the table information 131 may describe association information (also referred to as first association information) between a certain graphics processing unit (also referred to as a first graphics processing unit) among the graphics processing units 12(1)-12(m) and the central processing units 11(1)-11(n). For example, the first association information may include core number information of a central processing unit (also referred to as a first central processing unit) among the central processing units 11(1)-11(n) that the first graphics processing unit is preset to use preferentially.
The table information 132 describes the association information among the graphics processing units 12(1)-12(m). In particular, the table information 132 may describe association information (also referred to as second association information) between the first graphics processing unit and the remaining graphics processing units (also referred to as second graphics processing units) among the graphics processing units 12(1)-12(m). For example, the second association information may include proximity level information between the first graphics processing unit and a second graphics processing unit.
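The two tables can be modeled as simple in-memory mappings. The following is a minimal sketch; the dictionary layout, key names, and the `proximity` helper are illustrative assumptions, not the patent's actual storage format:

```python
# Hypothetical in-memory model of table information 131 and 132.

# Table information 131 (first association information): each GPU's preset,
# preferentially used CPU core number.
table_131 = {
    "GPU#1": "CPU core #1",
    "GPU#2": "CPU core #1",
    "GPU#3": "CPU core #2",
}

# Table information 132 (second association information): proximity level
# between GPU pairs (Rank 1 = closer, Rank 2 = farther).
table_132 = {
    ("GPU#1", "GPU#2"): 1,
    ("GPU#1", "GPU#3"): 1,  # reported as close, although GPU#3 hangs off another CPU
    ("GPU#1", "GPU#6"): 2,
}

def proximity(table, a, b):
    """Look up the proximity rank of a GPU pair, in either order."""
    return table.get((a, b), table.get((b, a)))

print(proximity(table_132, "GPU#3", "GPU#1"))  # -> 1
```

A symmetric lookup is used because a proximity relation between two GPUs does not depend on the order of the pair.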
The graphics resource allocator 14 may read the table information 131 and 132 from the storage device 13 to obtain the first association information and the second association information. Based on the first association information and the second association information, the graphics resource allocator 14 can establish a topology structure of the graphics processing units 12(1)-12(m). Then, the graphics resource allocator 14 can perform the graphics resource allocation of the graphics processing units 12(1)-12(m) according to the established topology structure. For example, when a graphics processing task related to at least one of the central processing units 11(1)-11(n) needs to be executed, the graphics resource allocator 14 may select an appropriate graphics processing unit from the graphics processing units 12(1)-12(m) according to the established topology structure, and allocate the graphics processing task to the selected graphics processing unit for processing.
FIG. 2 is a diagram illustrating a physical connection between a CPU and a GPU according to an embodiment of the present invention. Referring to FIG. 1 and FIG. 2, it is assumed that the CPUs 11(1)-11(n) include the CPUs 11(1) and 11(2), and the GPUs 12(1)-12(m) include the GPUs 12(1)-12(8).
The CPUs 11(1) and 11(2) are connected to the GPUs 12(1)-12(8) via interfaces 21(1)-21(4). For example, the CPU 11(1) is connected to the GPUs 12(1) and 12(2) via the interface 21(1), and is connected to the GPUs 12(3) and 12(4) via the interface 21(2). The CPU 11(2) is connected to the GPUs 12(5) and 12(6) via the interface 21(3), and to the GPUs 12(7) and 12(8) via the interface 21(4). The interfaces 21(1)-21(4) may each comprise a PCIe switch or another type of physical connection interface.
It should be noted that, in the embodiment of FIG. 2, when the upper-layer CPUs 11(1) and 11(2) allocate graphics processing tasks to the lower-layer GPUs 12(1)-12(8), or when the upper-layer CPUs 11(1) and 11(2) receive processing results of graphics processing tasks from the lower-layer GPUs 12(1)-12(8), the CPU 11(1) can directly communicate with the GPUs 12(1)-12(4) through the interfaces 21(1) and 21(2), and the CPU 11(2) can directly communicate with the GPUs 12(5)-12(8) through the interfaces 21(3) and 21(4). However, if the exchange of tasks and/or processing results across CPUs is involved, the CPU 11(1) must communicate with the GPUs 12(5)-12(8) through the CPU 11(2), and the CPU 11(2) must communicate with the GPUs 12(1)-12(4) through the CPU 11(1).
Based on the above communication limitations, if a GPU is randomly selected to assist a specific CPU in performing graphics processing, a poorly placed GPU may be selected, which may cause delays in signal transmission and thus reduce overall graphics processing performance. In one embodiment, the established topology structure at least partially reflects the physical connections between the upper-layer CPUs and the lower-layer GPUs. Therefore, when the graphics resource allocator 14 needs to allocate graphics processing resources, it can select one or more suitable graphics processing units according to the topology structure to assist the specific CPU in performing graphics operations, thereby improving overall graphics processing performance.
For convenience of description, it is assumed that the numbers of the CPU 11(1) and the CPU 11(2) are CPU #1 and CPU #2, respectively. The numbers of the graphics processing units 12(1)-12(8) are GPU #1, GPU #2, GPU #4, GPU #5, GPU #3, GPU #6, GPU #7, and GPU #8, respectively.
FIG. 3 is a schematic diagram of first association information according to an embodiment of the present invention. FIG. 4 is a schematic diagram of second association information according to an embodiment of the present invention. Referring to FIG. 1 to FIG. 3, it is assumed that the first association information includes association information 31. The association information 31 may reflect the association between the CPUs 11(1)-11(2) and the GPUs 12(1)-12(8). For example, the association information 31 may reflect that the core numbers of the CPUs preset to be used preferentially by GPU #1 to GPU #8 are CPU core #1, CPU core #1, CPU core #2, CPU core #1, CPU core #1, CPU core #2, CPU core #2, and CPU core #2, respectively. The CPU core #1 is, for example, the core number of the CPU 11(1), and the CPU core #2 is, for example, the core number of the CPU 11(2).
Based on the association information 31, the graphics resource allocator 14 may obtain the group to which each graphics processing unit belongs. For example, based on the association information 31, the graphics resource allocator 14 can determine that GPU #1 to GPU #8 belong to group 1, group 1, group 2, group 1, group 1, group 2, group 2, and group 2, respectively. The GPUs belonging to group 1 include the GPUs preset to use the CPU core #1 preferentially (i.e., GPU #1, GPU #2, GPU #4, and GPU #5). The GPUs belonging to group 2 include the GPUs preset to use the CPU core #2 preferentially (i.e., GPU #3, GPU #6, GPU #7, and GPU #8).
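The grouping step above can be sketched as follows. The numbers mirror the example in the text (GPU #1 to GPU #8, CPU cores #1 and #2); the dictionary layout is an assumption for illustration:

```python
# GPU number -> preset, preferentially used CPU core number (from the
# association information 31 example in the text).
preferred_core = {
    1: 1, 2: 1, 3: 2, 4: 1,
    5: 1, 6: 2, 7: 2, 8: 2,
}

def group_gpus(preferred_core):
    """Group GPUs by the CPU core they are preset to use preferentially."""
    groups = {}
    for gpu, core in preferred_core.items():
        groups.setdefault(core, set()).add(gpu)
    return groups

groups = group_gpus(preferred_core)
print(groups[1])  # group 1: GPUs preset to CPU core #1 -> {1, 2, 4, 5}
print(groups[2])  # group 2: GPUs preset to CPU core #2 -> {3, 6, 7, 8}
```

Each group thus corresponds to one CPU core, matching the group 1 / group 2 split described above.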
Referring to FIG. 1 to FIG. 4, it is assumed that the second association information includes association information 41. The association information 41 may reflect the associations of the GPUs 12(1)-12(8) with one another. For example, the association information 41 may reflect proximity level information between a particular graphics processing unit (i.e., the first graphics processing unit) and each of the remaining graphics processing units (i.e., the second graphics processing units).
In one embodiment, the proximity level information may be represented as a level (Rank) 1, indicating that the distance between graphics processing units is closer, and a level (Rank) 2, indicating that the distance between graphics processing units is farther. For example, the association information 41 may state that GPU #1 is closer to GPU #2, GPU #3, GPU #4, and GPU #5, and is farther from GPU #6, GPU #7, and GPU #8; GPU #2 is closer to GPU #1, GPU #4, and GPU #5, and is farther from GPU #3, GPU #6, GPU #7, and GPU #8; GPU #3 is closer to GPU #1, GPU #6, GPU #7, and GPU #8, and is farther from GPU #2, GPU #4, and GPU #5; and so on.
It should be noted that the information described in the association information 41 may not completely reflect the real physical connection relationships between the upper-layer CPUs 11(1) and 11(2) and the lower-layer GPUs 12(1)-12(8). For example, for GPU #1, the association information 41 reflects that GPU #3 is a closer GPU; but in fact, GPU #3 is connected to the other CPU 11(2) instead of the CPU 11(1). Therefore, if the topology structure were established based on the association information 41 alone, erroneous determinations could occur, which may affect the performance of subsequent graphics processing.
Thus, in one embodiment, when establishing the topology structures, the graphics resource allocator 14 screens, for each graphics processing unit, the graphics processing units (also referred to as third graphics processing units) that are nearest to it. For example, according to the association information 41, the graphics resource allocator 14 may select GPU #2, GPU #3, GPU #4, and GPU #5 as the graphics processing units nearest to GPU #1; GPU #1, GPU #4, and GPU #5 as the graphics processing units nearest to GPU #2; and GPU #1, GPU #6, GPU #7, and GPU #8 as the graphics processing units nearest to GPU #3. Then, the graphics resource allocator 14 can determine the finally established topology structure related to each first graphics processing unit (also referred to as a first topology structure) according to whether each screened third graphics processing unit and the corresponding first graphics processing unit belong to the same group.
FIG. 5 is a schematic diagram of topology structures according to an embodiment of the present invention. Referring to FIG. 1 to FIG. 5, according to the association information 31 and 41, when the topology structure 51 of GPU #1 (belonging to group 1) is established, GPU #2, GPU #4, and GPU #5, which belong to the same group 1 and are nearest to GPU #1, are retained in the topology structure 51, while GPU #3, which belongs to the different group 2 but was mistakenly listed as nearest to GPU #1, is removed from the topology structure 51.
Similarly, when the topology structure 52 of GPU #2 (belonging to group 1) is established, GPU #1, GPU #4, and GPU #5, which belong to the same group 1 and are nearest to GPU #2, are retained in the topology structure 52. When the topology structure 53 of GPU #3 (belonging to group 2) is established, GPU #6, GPU #7, and GPU #8, which belong to the same group 2 and are nearest to GPU #3, are retained in the topology structure 53, while GPU #1, which belongs to the different group 1 but was mistakenly listed as nearest to GPU #3, is removed from the topology structure 53. By analogy, the topology structures corresponding to the remaining GPU #4 to GPU #8 can also be established.
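The screening step can be sketched as a one-line filter: keep only the nearest (Rank 1) neighbours that share the GPU's group, removing the false cross-CPU neighbours. The neighbour sets below mirror the GPU #1 to GPU #3 example; the data layout is an assumption for illustration:

```python
# GPU number -> group number, derived from association information 31.
group_of = {1: 1, 2: 1, 3: 2, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2}

# Rank-1 (nearest) neighbours reported by association information 41.
nearest = {
    1: {2, 3, 4, 5},  # GPU #3 is misreported as near GPU #1
    2: {1, 4, 5},
    3: {1, 6, 7, 8},  # GPU #1 is misreported as near GPU #3
}

def build_topology(gpu, nearest, group_of):
    """Retain nearest neighbours in the same group; drop the rest."""
    return {n for n in nearest[gpu] if group_of[n] == group_of[gpu]}

print(build_topology(1, nearest, group_of))  # topology structure 51 -> {2, 4, 5}
print(build_topology(3, nearest, group_of))  # topology structure 53 -> {6, 7, 8}
```

The filter reproduces FIG. 5: GPU #3 is dropped from topology structure 51, and GPU #1 is dropped from topology structure 53.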
Thereafter, when graphics resources required for a graphics operation are to be allocated, the graphics resource allocator 14 can, for example, select GPU #1 together with at least one of GPU #2, GPU #4, and GPU #5 belonging to the topology structure 51 to perform the graphics operation cooperatively, or select GPU #3 together with at least one of GPU #6, GPU #7, and GPU #8 belonging to the topology structure 53 to perform the graphics operation cooperatively. Compared with randomly selecting available graphics processing units for graphics processing, selecting graphics processing units according to the topology structures established in the foregoing embodiments can effectively improve overall graphics operation performance.
It should be noted that, although FIG. 2 is used in the foregoing embodiments as an example of the physical connection relationship between the upper-layer CPUs and the lower-layer GPUs, the invention is not limited thereto. In another embodiment, the physical connections between the upper-layer CPUs and the lower-layer GPUs may take other forms, such as more layers or more or fewer interfaces, and the invention is not limited thereto. In addition, regardless of the physical connection relationship between the upper-layer CPUs and the lower-layer GPUs, the table information 131 and 132 of FIG. 1 can be automatically generated once the CPUs and the GPUs are mounted on the motherboard. Thereafter, the corresponding topology structures can be established based on the table information 131 and 132, which will not be repeated herein.
FIG. 6 is a flowchart illustrating a method for allocating graphics resources according to an embodiment of the present invention. Referring to FIG. 6, in step S601, first association information between a first graphics processing unit among a plurality of graphics processing units and at least one central processing unit is obtained. In step S602, second association information between the first graphics processing unit and at least one second graphics processing unit among the plurality of graphics processing units is obtained. In step S603, a topology structure of the plurality of graphics processing units is established according to the first association information and the second association information. In step S604, the graphics resource allocation of the plurality of graphics processing units is performed according to the established topology structure.
The steps in FIG. 6 have been described in detail above and are not repeated here. It should be noted that the steps in FIG. 6 can be implemented as a plurality of program codes or as circuits, and the invention is not limited thereto. In addition, the method of FIG. 6 may be used together with the above exemplary embodiments or may be used alone, and the invention is not limited thereto.
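The whole flow of steps S601-S604 can be sketched end to end. The Rank-1 neighbour sets for GPU #4 to GPU #8 are extrapolated by symmetry from the GPU #1 to GPU #3 example and, like the data layout and the anchor-then-fill allocation policy, are assumptions for illustration only:

```python
# GPU number -> preset CPU core number (association information 31 example).
preferred_core = {1: 1, 2: 1, 3: 2, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2}

# Rank-1 neighbour sets; entries for GPUs 4-8 are extrapolated assumptions.
rank1 = {
    1: {2, 3, 4, 5}, 2: {1, 4, 5}, 3: {1, 6, 7, 8},
    4: {1, 2, 5},    5: {1, 2, 4}, 6: {3, 7, 8},
    7: {3, 6, 8},    8: {3, 6, 7},
}

def allocate(preferred_core, rank1, task_core, n_gpus):
    # S601/S602: the association information is read (here, passed in).
    # S603: establish each GPU's topology structure by same-group screening.
    topo = {g: {n for n in rank1[g] if preferred_core[n] == preferred_core[g]}
            for g in preferred_core}
    # S604: pick an anchor GPU preset to the task's CPU core, then fill the
    # request from that anchor's topology structure.
    anchors = [g for g in sorted(preferred_core)
               if preferred_core[g] == task_core]
    if not anchors:
        return []
    anchor = anchors[0]
    return ([anchor] + sorted(topo[anchor]))[:n_gpus]

# A task bound to CPU core #2 gets GPU #3 plus co-members of its topology.
print(allocate(preferred_core, rank1, task_core=2, n_gpus=2))  # -> [3, 6]
```

All selected GPUs share the anchor's group, so no task or result exchange has to cross the CPU-to-CPU link described for FIG. 2.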
In summary, the embodiments of the present invention can group the graphics processing units according to the first association information. Then, topology structures of the graphics processing units are established according to the grouping result and the second association information. According to the established topology structures, the graphics resources of the plurality of graphics processing units can be properly allocated and used, thereby effectively improving graphics processing performance.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for allocating graphics resources, comprising:
obtaining first association information between a first graphics processing unit and at least one central processing unit in a plurality of graphics processing units;
obtaining second association information between the first graphics processing unit and at least one second graphics processing unit in the plurality of graphics processing units;
establishing a topology structure of the plurality of graphics processing units according to the first association information and the second association information; and
performing graphics resource allocation of the plurality of graphics processing units according to the established topology structure.
2. The method for allocating graphics resources of claim 1, wherein the step of establishing the topology structure of the plurality of graphics processing units according to the first association information and the second association information comprises:
obtaining a first group to which the first graphics processing unit belongs according to the first association information;
selecting at least one of the at least one second graphics processing unit as a third graphics processing unit closest to the first graphics processing unit according to the second association information; and
determining a first topology structure related to the first graphics processing unit in the topology structure according to whether the third graphics processing unit belongs to the first group.
3. The method for allocating graphics resources of claim 2, wherein the step of determining the first topology structure related to the first graphics processing unit according to whether the third graphics processing unit belongs to the first group comprises:
retaining the third graphics processing unit in the first topology structure if the third graphics processing unit belongs to the first group; and
removing the third graphics processing unit from the first topology structure if the third graphics processing unit does not belong to the first group.
4. The method for allocating graphics resources of claim 1, wherein the first association information records core number information of a first central processing unit among the at least one central processing unit that the first graphics processing unit is preset to use preferentially.
5. The method for allocating graphics resources of claim 1, wherein the second association information records proximity level information between the first graphics processing unit and the at least one second graphics processing unit.
6. A system for allocating graphics resources, comprising:
at least one central processing unit;
a plurality of graphic processing units connected to the at least one central processing unit;
a storage device; and
a graphics resource allocator coupled to the at least one central processing unit, the storage device, and the plurality of graphics processing units,
wherein the graphics resource allocator is configured to obtain first association information between a first graphics processing unit of the plurality of graphics processing units and the at least one central processing unit from the storage device,
the graphics resource allocator is further configured to obtain second association information between the first graphics processing unit and at least a second graphics processing unit of the plurality of graphics processing units from the storage device,
the graphics resource allocator is further configured to establish a topology structure of the plurality of graphics processing units according to the first association information and the second association information, and
the graphics resource allocator is further configured to perform graphics resource allocation of the plurality of graphics processing units according to the established topology structure.
7. The system for allocating graphics resources of claim 6, wherein the operation of establishing the topology structure of the plurality of graphics processing units according to the first association information and the second association information comprises:
obtaining a first group to which the first graphics processing unit belongs according to the first association information;
selecting at least one of the at least one second graphics processing unit as a third graphics processing unit closest to the first graphics processing unit according to the second association information; and
determining a first topology structure related to the first graphics processing unit in the topology structure according to whether the third graphics processing unit belongs to the first group.
8. The system for allocating graphics resources of claim 7, wherein the operation of determining the first topology structure related to the first graphics processing unit according to whether the third graphics processing unit belongs to the first group comprises:
retaining the third graphics processing unit in the first topology structure if the third graphics processing unit belongs to the first group; and
removing the third graphics processing unit from the first topology structure if the third graphics processing unit does not belong to the first group.
9. The system for allocating graphics resources of claim 6, wherein the first association information records core number information of a first central processing unit among the at least one central processing unit that the first graphics processing unit is preset to use preferentially.
10. The system for allocating graphics resources of claim 6, wherein the second association information records proximity level information between the first graphics processing unit and the at least one second graphics processing unit.
CN202010457620.3A 2020-05-26 2020-05-26 Method and system for distributing graphic resources Pending CN113722085A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010457620.3A CN113722085A (en) 2020-05-26 2020-05-26 Method and system for distributing graphic resources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010457620.3A CN113722085A (en) 2020-05-26 2020-05-26 Method and system for distributing graphic resources

Publications (1)

Publication Number Publication Date
CN113722085A 2021-11-30

Family

ID=78672150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010457620.3A Pending CN113722085A (en) 2020-05-26 2020-05-26 Method and system for distributing graphic resources

Country Status (1)

Country Link
CN (1) CN113722085A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7561163B1 (en) * 2005-12-16 2009-07-14 Nvidia Corporation Detecting connection topology in a multi-processor graphics system
US20120162234A1 (en) * 2010-12-15 2012-06-28 Advanced Micro Devices, Inc. Device Discovery and Topology Reporting in a Combined CPU/GPU Architecture System
CN103262035A (en) * 2010-12-15 2013-08-21 超威半导体公司 Device discovery and topology reporting in a combined CPU/GPU architecture system
US20140109105A1 (en) * 2012-10-17 2014-04-17 Electronics And Telecommunications Research Institute Intrusion detection apparatus and method using load balancer responsive to traffic conditions between central processing unit and graphics processing unit
WO2016091164A1 (en) * 2014-12-12 2016-06-16 Shanghai Xinhao Microelectronics Co., Ltd. Multilane/multicore system and method
US20180276044A1 (en) * 2017-03-27 2018-09-27 International Business Machines Corporation Coordinated, topology-aware cpu-gpu-memory scheduling for containerized workloads
US20180365019A1 (en) * 2017-06-20 2018-12-20 Palo Alto Research Center Incorporated System and method for hybrid task management across cpu and gpu for efficient data mining
US20190149365A1 (en) * 2018-01-12 2019-05-16 Intel Corporation Time domain resource allocation for mobile communication
US10325343B1 (en) * 2017-08-04 2019-06-18 EMC IP Holding Company LLC Topology aware grouping and provisioning of GPU resources in GPU-as-a-Service platform
US20190312772A1 (en) * 2018-04-04 2019-10-10 EMC IP Holding Company LLC Topology-aware provisioning of hardware accelerator resources in a distributed environment
CN110389843A (en) * 2019-07-29 2019-10-29 Guangdong Inspur Big Data Research Co., Ltd. Service scheduling method, apparatus and device, and readable storage medium

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
FABRIZIO ANGIULLI; STEFANO BASTA; STEFANO LODI; CLAUDIO SARTORI: "GPU Strategies for Distance-Based Outlier Detection", no. 11, 31 December 2016 (2016-12-31) *
YE TIAN; YONG HU; XUKUN SHEN: "A multi-GPU finite element computation and hybrid collision handling process framework for brain deformation simulation", vol. 30, no. 1, 31 December 2019 (2019-12-31) *
YU WENGUANG; WANG WEIPING; HOU HONGTAO; LI QUN: "Parallel Agent Simulation Based on a Multi-core CPU-GPU Heterogeneous Platform", Systems Engineering and Electronics, no. 08, 15 August 2012 (2012-08-15) *
FANG JUAN; ZHANG JIAXING: "Task Scheduling Strategy for CPU-GPU Heterogeneous Computing Platforms Based on Load Balancing", Journal of Beijing University of Technology, no. 007, 31 December 2020 (2020-12-31) *
QIU ZHIYONG; ZHOU YUEDE; LIU ZHONGPING: "Node Impedance Matrix Generation and Node Numbering Optimization under a CPU+GPU Architecture", Automation of Electric Power Systems, no. 02 *
HUO HONGPENG; HU XINMING; SHENG CHONGCHONG; WU BAIFENG: "Energy-Efficient Scheduling Scheme for Node-Heterogeneous GPU Clusters", Computer Applications and Software, no. 03, 15 March 2013 (2013-03-15) *

Similar Documents

Publication Publication Date Title
CN1274123A Peripheral component interconnect slot controller for a partitioned system with dynamic arrangement
CN111258496B (en) Apparatus and method for dynamically allocating data paths
US10084860B2 (en) Distributed file system using torus network and method for configuring and operating distributed file system using torus network
CN111309644B (en) Memory allocation method and device and computer readable storage medium
US9600187B2 (en) Virtual grouping of memory
US20200348871A1 (en) Memory system, operating method thereof and computing system for classifying data according to read and write counts and storing the classified data in a plurality of types of memory devices
US11275516B2 (en) Host system configured to manage assignment of free block, data processing system including the host system, and method of operating the host system
CN106354428B (en) Storage sharing system of multi-physical layer partition computer system structure
US11461024B2 (en) Computing system and operating method thereof
CN110795234A (en) Resource scheduling method and device
CN103530253A (en) Clustering multi-overall-situation buffer pool system, center node, computational node and management method
CN112596669A (en) Data processing method and device based on distributed storage
CN113722085A (en) Method and system for distributing graphic resources
CN115039091A (en) Multi-key-value command processing method and device, electronic equipment and storage medium
CN115185874B (en) PCIE resource allocation method and related device
CN115729470A (en) Data writing method and related equipment
CN115098022A (en) Path selection method, device, equipment and storage medium
TWI825315B (en) Assigning method and assigning system for graphic resource
CN112099728B (en) Method and device for executing write operation and read operation
CN114816322A (en) External sorting method and device of SSD and SSD memory
CN103914401A (en) Storage device with multiple processors
CN113535087A (en) Data processing method, server and storage system in data migration process
CN115374024A (en) Memory data sorting method and related equipment
CN117215793A (en) Bus resource allocation method, device, equipment and storage medium
CN117393013B (en) Efficient DDR control method and related device in statistical application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination