CN110781129B - Resource scheduling method, device and medium in FPGA heterogeneous accelerator card cluster - Google Patents
Resource scheduling method, device and medium in FPGA heterogeneous accelerator card cluster
- Publication number: CN110781129B (application CN201910864468.8A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F15/00—Digital computers in general; G06F15/76—Architectures of general purpose stored program computers; G06F15/78—Architectures comprising a single central processing unit
- G06F15/781—On-chip cache; Off-chip memory (under G06F15/7807—System on chip; System in package)
- G06F15/7867—Architectures comprising a single central processing unit with reconfigurable architecture
Abstract
The invention discloses a resource scheduling method in an FPGA heterogeneous accelerator card cluster, comprising the following steps: receiving a request to store information and judging whether the storage resources of the present card are used up; in response to the storage resources being used up, sending a storage resource request to a main card in the FPGA heterogeneous accelerator card cluster; and receiving storage resources of other FPGA heterogeneous accelerator cards allocated by the main card, and storing the information in the allocated storage resources. The invention also discloses a computer device and a readable storage medium. The resource scheduling method, device and medium in the FPGA heterogeneous accelerator card cluster can fully utilize the storage resources of the entire cluster, preventing the storage resources of a single FPGA heterogeneous accelerator card from becoming strained or sitting idle.
Description
Technical Field
The present invention relates to the field of FPGA, and more particularly, to a method, an apparatus, and a readable medium for scheduling resources in an FPGA heterogeneous accelerator card cluster.
Background
In recent years, large numbers of FPGA heterogeneous accelerator cards have been deployed in server data centers. These accelerator cards are usually equipped with storage resources (including but not limited to DDR). When an FPGA heterogeneous accelerator card is connected to a system, how to schedule and use these storage resources becomes a key problem. Most prior art restricts the FPGA logic of a card to calling only the storage resources of that card. The problem with this is that when those storage resources run short, the only remedy is to replace hardware, which increases cost and prolongs the development cycle; conversely, when the resources called by the card's FPGA logic are abundant, the card's storage resources sit idle and are wasted from the perspective of the whole system.
Disclosure of Invention
In view of this, an object of the embodiments of the present invention is to provide a resource scheduling method, device and medium in an FPGA heterogeneous accelerator card cluster that, by sending a storage resource request to a main card, can fully utilize the storage resources of the entire FPGA heterogeneous accelerator card cluster, preventing the storage resources of a single FPGA heterogeneous accelerator card from becoming strained or idle.
Based on the above object, an aspect of the embodiments of the present invention provides a resource scheduling method in an FPGA heterogeneous accelerator card cluster, including the following steps: receiving a request to store information and judging whether the storage resources of the present card are used up; in response to the storage resources being used up, sending a storage resource request to a main card in the FPGA heterogeneous accelerator card cluster; and receiving storage resources of other FPGA heterogeneous accelerator cards allocated by the main card, and storing the information in the allocated storage resources.
In some embodiments, the storage resources of the present card include: the storage resources internal to the FPGA heterogeneous accelerator card, the storage resources located on the same board as the FPGA heterogeneous accelerator card, and the storage resources reserved for the FPGA heterogeneous accelerator card on the server side.
In some embodiments, judging whether the storage resources of the present card are used up comprises: judging whether the delay requirement of the information to be stored is smaller than a threshold; and in response to the delay requirement being smaller than the threshold, judging whether the storage resources internal to the FPGA heterogeneous accelerator card are used up.
In some embodiments, the method further comprises: in response to the storage resources internal to the FPGA heterogeneous accelerator card being used up, judging whether the storage resources on the same board as the FPGA heterogeneous accelerator card are used up.
In some embodiments, the method further comprises: in response to the storage resources on the same board as the FPGA heterogeneous accelerator card being used up, judging whether the storage resources reserved for the FPGA heterogeneous accelerator card on the server side are used up.
In some embodiments, the method further comprises: in response to the delay requirement of the information to be stored being not lower than the threshold, judging whether the storage resources on the same board as the FPGA heterogeneous accelerator card are used up.
In some embodiments, the method further comprises: in response to the storage resources on the same board as the FPGA heterogeneous accelerator card being used up, judging whether the storage resources reserved for the FPGA heterogeneous accelerator card on the server side are used up.
In some embodiments, the method further comprises: judging whether the usage of the storage resources has changed; and in response to a change in the usage of the storage resources, updating the storage resource record information of the present card and sending it to the main card.
In another aspect of the embodiments of the present invention, there is also provided a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions, when executed by the processor, performing the following steps: receiving a request to store information and judging whether the storage resources of the present card are used up; in response to the storage resources being used up, sending a storage resource request to a main card in the FPGA heterogeneous accelerator card cluster; and receiving storage resources of other FPGA heterogeneous accelerator cards allocated by the main card, and storing the information in the allocated storage resources.
In a further aspect of the embodiments of the present invention, a computer-readable storage medium is also provided, which stores a computer program that, when executed by a processor, implements the above method steps.
The invention has the following beneficial technical effects: by sending a storage resource request to the main card, the storage resources of the entire FPGA heterogeneous accelerator card cluster are fully utilized, preventing the storage resources of a single FPGA heterogeneous accelerator card from becoming strained or sitting idle.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below depict only some embodiments of the present invention; those skilled in the art can obtain other embodiments from these drawings without creative effort.
Fig. 1 is a schematic diagram of an embodiment of a resource scheduling method in an FPGA heterogeneous accelerator card cluster according to the present invention;
FIG. 2 is a schematic structural diagram of an FPGA heterogeneous accelerator card provided by the present invention;
fig. 3 is a flowchart of an embodiment of a resource scheduling method in an FPGA heterogeneous accelerator card cluster according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used to distinguish two entities with the same name or two non-identical parameters. "First" and "second" are merely for convenience of description and should not be construed as limiting the embodiments of the present invention; subsequent embodiments do not describe them again.
Based on the above purpose, a first aspect of the embodiments of the present invention provides an embodiment of a resource scheduling method in an FPGA heterogeneous accelerator card cluster. Fig. 1 is a schematic diagram illustrating an embodiment of a resource scheduling method in an FPGA heterogeneous accelerator card cluster according to the present invention. As shown in fig. 1, the embodiment of the present invention includes the following steps:
S1, receiving a request to store information and judging whether the storage resources of the present card are used up;
S2, in response to the storage resources being used up, sending a storage resource request to a main card in the FPGA heterogeneous accelerator card cluster; and
S3, receiving storage resources of other FPGA heterogeneous accelerator cards allocated by the main card, and storing the information in the allocated storage resources.
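Steps S1–S3, viewed from the side of the card handling the store request, can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; `request_from_main_card` is a hypothetical callable standing in for the storage resource request sent to the main card.

```python
def handle_store_request(local_free, size, request_from_main_card):
    """Auxiliary-card-side sketch of steps S1-S3.

    local_free            -- free bytes in the present card's storage resources
    size                  -- bytes needed for the information to be stored
    request_from_main_card -- hypothetical callable: asks the main card for
                              storage; returns the id of a donor card, or
                              None when the whole cluster is exhausted
    """
    if local_free >= size:                  # S1: present card still has room
        return "local"
    donor = request_from_main_card(size)    # S2: send request to the main card
    if donor is not None:                   # S3: store into allocated resources
        return donor
    return None                             # cluster-wide exhaustion
```

A stub main card illustrates the three outcomes: storing locally, storing on a donor card allocated by the main card, or failing when the cluster has no free storage.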
In some embodiments, the storage resources of the FPGA heterogeneous accelerator card include: the storage resources internal to the FPGA heterogeneous accelerator card, the storage resources located on the same board as the FPGA heterogeneous accelerator card, and the storage resources reserved for the FPGA heterogeneous accelerator card on the server side. First, the storage resources internal to the FPGA heterogeneous accelerator card may be, for example, high-performance storage units inside the FPGA, including but not limited to High Bandwidth Memory (HBM); second, the storage resources on the same board as the FPGA heterogeneous accelerator card may be, for example, storage resources outside the FPGA, including but not limited to DDR (Double Data Rate synchronous dynamic random access memory); third, the server-side HOST MEMORY reserves storage resources for use by the FPGA heterogeneous accelerator card.
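The three kinds of storage resources above form a natural latency-ordered hierarchy. The sketch below models that ordering; the tier names and the assumption that internal memory is fastest and host memory slowest are illustrative, since the patent specifies no numbers.

```python
from enum import IntEnum

class StorageTier(IntEnum):
    """Storage tiers of an FPGA heterogeneous accelerator card, ordered
    from lowest to highest access latency (illustrative ordering)."""
    FPGA_INTERNAL = 0   # e.g. HBM inside the FPGA
    ON_BOARD = 1        # e.g. DDR on the same board as the FPGA
    HOST_RESERVED = 2   # host memory reserved on the server side

# Lower enum value = lower latency, so the preferred search order for
# latency-sensitive information is simply ascending tier value.
preferred_order = sorted(StorageTier)
```

Ordering the tiers this way lets the placement logic described later reduce to iterating over the tiers in ascending order.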
Fig. 2 is a schematic structural diagram of an FPGA heterogeneous accelerator card provided by the present invention. As shown in fig. 2, a plurality of FPGA heterogeneous accelerator cards are inserted on the server side and communicate through interconnection mechanisms such as PCIE DMA (Peripheral Component Interconnect Express Direct Memory Access) or OpenCAPI (an open standard interface for high-performance acceleration). The FPGA heterogeneous accelerator cards are connected to one another through a network and communicate through interconnection mechanisms such as MAC (Media Access Control) or RDMA (Remote Direct Memory Access).
One board card in the system is designated as the main card, by means including but not limited to a hardware DIP switch; for example, the board card whose DIP switch is set to 1 serves as the main card. The main card is responsible for scheduling and using the storage resources of the whole system, and the remaining board cards are called auxiliary cards. Each board card uses a fixed physical MAC address as its identity, referred to as the board card identifier. The specific process can be as follows:
First, upon system power-on, hardware reset, or reconfiguration of the FPGA heterogeneous system, each board card updates its own storage resource record information. Each auxiliary card sends its storage resource record information to the main card, and after receiving it, the main card updates the storage resource record information for the whole system. When the heterogeneous acceleration process generates information to be stored, the individual FPGA heterogeneous accelerator card receives the request to store the information and judges whether its corresponding storage resources are used up.
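The record-update handshake at power-on can be sketched as below. The dictionary-based record format and field names (`hbm_free`, `ddr_free`) are assumptions for illustration, not the patent's actual record layout; cards are keyed by their fixed MAC-address board card identifier as the text describes.

```python
def update_system_record(system_record, card_id, card_record):
    """Main-card side: merge one auxiliary card's storage resource record
    (keyed by the card's fixed MAC-address identifier) into the record
    for the whole system, returning the updated system record."""
    merged = {c: dict(r) for c, r in system_record.items()}
    merged[card_id] = dict(card_record)
    return merged

# At power-on / reset, each auxiliary card reports its record to the main card:
system = {}
system = update_system_record(
    system, "aa:bb:cc:00:00:01", {"hbm_free": 4096, "ddr_free": 65536})
system = update_system_record(
    system, "aa:bb:cc:00:00:02", {"hbm_free": 0, "ddr_free": 1024})
```

The same merge runs again whenever a card later reports a change in its storage resource usage, keeping the main card's system-wide view current.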
In some embodiments, judging whether the storage resources of the present card are used up comprises: judging whether the delay requirement of the information to be stored is smaller than a threshold; and in response to the delay requirement being smaller than the threshold, judging whether the storage resources internal to the FPGA heterogeneous accelerator card are used up. In response to the internal storage resources being used up, judging whether the storage resources on the same board as the FPGA heterogeneous accelerator card are used up. In response to the on-board storage resources being used up, judging whether the storage resources reserved for the FPGA heterogeneous accelerator card on the server side are used up. A delay requirement smaller than the threshold indicates that the information requires low latency; since the storage resources internal to the FPGA heterogeneous accelerator card have the lowest latency, such information is preferentially stored there.
In some embodiments, the method further comprises: in response to the delay requirement of the information to be stored being not lower than the threshold, judging whether the storage resources on the same board as the FPGA heterogeneous accelerator card are used up; and in response to those being used up, judging whether the storage resources reserved for the FPGA heterogeneous accelerator card on the server side are used up. A delay requirement not lower than the threshold indicates that the information is not latency-sensitive, so it can preferentially be stored in the storage resources on the same board as the FPGA heterogeneous accelerator card, reserving the internal storage resources for information with stricter latency requirements.
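Taken together, the two branches above amount to a tier-selection cascade driven by the latency requirement. The sketch below is an assumed rendering of that cascade: the tier names and the free-capacity dictionary are hypothetical, and a `None` result corresponds to the case where the present card is exhausted and the main card must be asked.

```python
def choose_tier(latency_req, threshold, free):
    """Return the first storage tier on the present card with free capacity,
    trying tiers in the order implied by the latency requirement; return
    None when the present card is exhausted (the caller then sends a
    storage resource request to the main card).

    free -- maps tier name -> free bytes on the present card
    """
    if latency_req < threshold:
        # Low-latency information: FPGA-internal memory first, then
        # on-board memory, then server-side reserved host memory.
        order = ["fpga_internal", "on_board", "host_reserved"]
    else:
        # Latency-tolerant information: skip internal memory so it stays
        # available for stricter requests.
        order = ["on_board", "host_reserved"]
    for tier in order:
        if free.get(tier, 0) > 0:
            return tier
    return None
```

Note the deliberate asymmetry: latency-tolerant information never consumes the scarce FPGA-internal memory, which is the reservation behavior the embodiment describes.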
In some embodiments, the method further comprises: judging whether the usage of the storage resources has changed; and in response to a change in the usage of the storage resources, updating the storage resource record information of the present card and sending it to the main card.
If all storage resources of the present card are used up, the card applies to the main card for idle storage resources elsewhere in the system. The main card allocates idle storage resources to the card according to the storage record information; the allocated resources can be used with the same priority levels as the card's own resources. The auxiliary card then confirms use of the idle storage resources allocated by the main card and updates the system storage resource record.
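The main card's side of this exchange — choosing a donor card from the system storage record and deducting the grant — can be sketched as below. The record structure and the first-fit donor choice are assumptions for illustration; the patent does not prescribe a particular allocation policy.

```python
def allocate_idle(system_record, requester, size):
    """Main-card side: pick a donor card with enough idle storage from the
    system storage resource record, deduct the grant, and return
    (donor_id, updated_record); returns (None, system_record) unchanged
    when no card can satisfy the request.

    system_record -- maps card id -> {"idle": free bytes} (assumed format)
    """
    for card, info in system_record.items():
        if card != requester and info["idle"] >= size:
            # Copy-on-write update so the caller's record is not mutated.
            updated = {c: dict(i) for c, i in system_record.items()}
            updated[card]["idle"] -= size
            return card, updated
    return None, system_record

record = {"cardA": {"idle": 0}, "cardB": {"idle": 2048}}
donor, record = allocate_idle(record, "cardA", 1024)
# cardB is chosen; the system record now shows 1024 idle bytes on it.
```

Returning the updated record mirrors the text's final step, in which the system storage resource record is refreshed once the auxiliary card confirms the allocation.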
Fig. 3 is a flowchart illustrating an embodiment of a resource scheduling method in an FPGA heterogeneous accelerator card cluster according to the present invention. As shown in fig. 3, the process begins at block 101 and proceeds to block 102, where a request to store information is received. It then proceeds to block 103, where it is judged whether the storage resources of the present card are used up; if not, the process proceeds to block 107; if so, it proceeds to block 104, where a storage resource request is sent to the main card in the FPGA heterogeneous accelerator card cluster. After the request is sent, the process proceeds to block 105, where the main card allocates storage resources to the card according to the current usage of storage resources; then to block 106, where the card receives the storage resources of other FPGA heterogeneous accelerator cards allocated by the main card and stores the information in them; and finally to block 107, where the process ends.
It should be particularly noted that the steps in the foregoing embodiments of the resource scheduling method in the FPGA heterogeneous accelerator card cluster may be interleaved, replaced, added, or deleted; resource scheduling methods transformed by such reasonable permutations and combinations also fall within the scope of the present invention, and the scope of the present invention should not be limited to the described embodiments.
In view of the above object, a second aspect of the embodiments of the present invention provides a computer device, including: at least one processor; and a memory storing computer instructions executable on the processor, the instructions, when executed by the processor, performing the following steps: S1, receiving a request to store information and judging whether the storage resources of the present card are used up; S2, in response to the storage resources being used up, sending a storage resource request to a main card in the FPGA heterogeneous accelerator card cluster; and S3, receiving storage resources of other FPGA heterogeneous accelerator cards allocated by the main card, and storing the information in the allocated storage resources.
In some embodiments, the storage resources of the present card include: the storage resources internal to the FPGA heterogeneous accelerator card, the storage resources located on the same board as the FPGA heterogeneous accelerator card, and the storage resources reserved for the FPGA heterogeneous accelerator card on the server side.
In some embodiments, judging whether the storage resources of the present card are used up comprises: judging whether the delay requirement of the information to be stored is smaller than a threshold; and in response to the delay requirement being smaller than the threshold, judging whether the storage resources internal to the FPGA heterogeneous accelerator card are used up.
In some embodiments, the steps further comprise: in response to the storage resources internal to the FPGA heterogeneous accelerator card being used up, judging whether the storage resources on the same board as the FPGA heterogeneous accelerator card are used up.
In some embodiments, the steps further comprise: in response to the storage resources on the same board as the FPGA heterogeneous accelerator card being used up, judging whether the storage resources reserved for the FPGA heterogeneous accelerator card on the server side are used up.
In some embodiments, the steps further comprise: in response to the delay requirement of the information to be stored being not lower than the threshold, judging whether the storage resources on the same board as the FPGA heterogeneous accelerator card are used up.
In some embodiments, the steps further comprise: in response to the storage resources on the same board as the FPGA heterogeneous accelerator card being used up, judging whether the storage resources reserved for the FPGA heterogeneous accelerator card on the server side are used up.
In some embodiments, the steps further comprise: judging whether the usage of the storage resources has changed; and in response to a change in the usage of the storage resources, updating the storage resource record information of the present card and sending it to the main card.
The invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the method as above.
Finally, it should be noted that, as one of ordinary skill in the art can appreciate, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program instructing related hardware. The program of the resource scheduling method in the FPGA heterogeneous accelerator card cluster can be stored in a computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. The storage medium of the program may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like. The embodiments of the computer program may achieve the same or similar effects as any of the above-described method embodiments.
Furthermore, the methods disclosed according to embodiments of the present invention may also be implemented as a computer program executed by a processor, which may be stored in a computer-readable storage medium. Which when executed by a processor performs the above-described functions defined in the methods disclosed in embodiments of the invention.
Further, the above method steps and system elements may also be implemented using a controller and a computer readable storage medium for storing a computer program for causing the controller to implement the functions of the above steps or elements.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which can act as external cache memory. By way of example and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform the functions herein: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP, and/or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary designs, the functions may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, of embodiments of the invention is limited to these examples; within the idea of an embodiment of the invention, also technical features in the above embodiment or in different embodiments may be combined and there are many other variations of the different aspects of the embodiments of the invention as described above, which are not provided in detail for the sake of brevity. Therefore, any omissions, modifications, substitutions, improvements, and the like that may be made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.
Claims (9)
1. A resource scheduling method in an FPGA heterogeneous accelerator card cluster, characterized by comprising:
receiving a request to store information, and determining whether the storage resources of the present card are exhausted;
in response to the storage resources of the present card being exhausted, sending a storage resource request to a main card in the FPGA heterogeneous accelerator card cluster; and
receiving storage resources of other FPGA heterogeneous accelerator cards allocated by the main card, and storing the information into the storage resources allocated by the main card,
wherein the storage resources of the present card comprise:
storage resources inside the FPGA heterogeneous accelerator card, storage resources located on the same board as the FPGA heterogeneous accelerator card, and storage resources reserved for the FPGA heterogeneous accelerator card by a server.
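The allocation flow of claim 1 can be sketched as follows. This is an illustrative assumption, not the patented implementation: the class names (`StorageTier`, `Card`, `MainCard`), the capacity numbers, and the `allocate` placeholder are all hypothetical; only the tier list and the fall-through to the main card come from the claim text.

```python
# Hypothetical sketch of claim 1: a card tries its three local storage
# tiers in order and, when all are exhausted, asks the cluster main card
# for storage on another accelerator card. All names and capacities are
# illustrative assumptions.

class StorageTier:
    """One pool of storage visible to an accelerator card."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.used = 0

    def exhausted(self, size):
        return self.used + size > self.capacity

    def store(self, size):
        self.used += size
        return self.name


class MainCard:
    """Cluster main card that hands out other cards' spare storage."""
    def allocate(self, size):
        return "remote-card"  # placeholder for a real remote allocation


class Card:
    """An FPGA heterogeneous accelerator card with three local tiers."""
    def __init__(self, main_card):
        self.main_card = main_card
        # The three tiers named in claim 1, in lookup order.
        self.tiers = [
            StorageTier("on-card", 64),        # inside the FPGA card
            StorageTier("same-board", 128),    # on the same board
            StorageTier("server-reserved", 256),
        ]

    def store(self, size):
        for tier in self.tiers:
            if not tier.exhausted(size):
                return tier.store(size)
        # All local tiers exhausted: request resources via the main card.
        return self.main_card.allocate(size)


card = Card(MainCard())
print(card.store(32))    # fits in the on-card tier
print(card.store(200))   # too big for on-card and same-board tiers
```

Under these assumed capacities, the second request falls through to the server-reserved tier, and any request that no local tier can hold is forwarded to the main card.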
2. The resource scheduling method according to claim 1, wherein determining whether the storage resources of the present card are exhausted comprises:
determining whether the latency requirement of the information to be stored is below a threshold; and
in response to the latency requirement of the information being below the threshold, determining whether the storage resources inside the FPGA heterogeneous accelerator card are exhausted.
3. The resource scheduling method according to claim 2, further comprising:
in response to the storage resources inside the FPGA heterogeneous accelerator card being exhausted, determining whether the storage resources on the same board as the FPGA heterogeneous accelerator card are exhausted.
4. The resource scheduling method according to claim 3, further comprising:
in response to the storage resources on the same board as the FPGA heterogeneous accelerator card being exhausted, determining whether the storage resources reserved for the FPGA heterogeneous accelerator card by the server are exhausted.
5. The resource scheduling method according to claim 2, further comprising:
in response to the latency requirement of the information not being below the threshold, determining whether the storage resources on the same board as the FPGA heterogeneous accelerator card are exhausted.
6. The resource scheduling method according to claim 5, further comprising:
in response to the storage resources on the same board as the FPGA heterogeneous accelerator card being exhausted, determining whether the storage resources reserved for the FPGA heterogeneous accelerator card by the server are exhausted.
7. The resource scheduling method according to claim 1, further comprising:
determining whether the usage of the storage resources has changed; and
in response to a change in the usage of the storage resources, updating the storage resource record information of the present card and sending it to the main card.
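Claim 7's change-driven reporting can be sketched as follows. The record fields and the `report` callback are illustrative assumptions; the claim specifies only that the record is updated and sent to the main card when usage changes.

```python
# Minimal sketch of claim 7: a card tracks its last reported usage and
# pushes an updated record to the main card only when the usage changes.
# Field names and the report() callback are illustrative assumptions.

def make_reporter(report):
    """Return an update function that notifies the main card on change."""
    last = {}

    def update(card_id, used, capacity):
        if last.get(card_id) != (used, capacity):      # usage changed?
            last[card_id] = (used, capacity)
            report({"card": card_id, "used": used, "capacity": capacity})
            return True                                # record sent
        return False                                   # no change, no send

    return update

sent = []                       # stands in for the link to the main card
update = make_reporter(sent.append)
update("fpga-0", 10, 64)        # first report: sent
update("fpga-0", 10, 64)        # unchanged: suppressed
update("fpga-0", 20, 64)        # changed: sent
print(len(sent))                # 2 records reached the main card
```

Suppressing unchanged reports keeps the main card's view of cluster storage current without flooding it with redundant record updates.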
8. A computer device, comprising:
at least one processor; and
a memory storing computer instructions executable on the processor, the instructions, when executed by the processor, implementing the steps of:
receiving a request to store information, and determining whether the storage resources of the present card are exhausted;
in response to the storage resources being exhausted, sending a storage resource request to a main card in the FPGA heterogeneous accelerator card cluster; and
receiving storage resources of other FPGA heterogeneous accelerator cards allocated by the main card, and storing the information into the storage resources allocated by the main card,
wherein the storage resources of the present card comprise:
storage resources inside the FPGA heterogeneous accelerator card, storage resources located on the same board as the FPGA heterogeneous accelerator card, and storage resources reserved for the FPGA heterogeneous accelerator card by a server.
9. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910864468.8A CN110781129B (en) | 2019-09-12 | 2019-09-12 | Resource scheduling method, device and medium in FPGA heterogeneous accelerator card cluster |
PCT/CN2019/130032 WO2021047120A1 (en) | 2019-09-12 | 2019-12-30 | Resource allocation method in fpga heterogeneous accelerator card cluster, device, and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910864468.8A CN110781129B (en) | 2019-09-12 | 2019-09-12 | Resource scheduling method, device and medium in FPGA heterogeneous accelerator card cluster |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110781129A CN110781129A (en) | 2020-02-11 |
CN110781129B true CN110781129B (en) | 2022-02-22 |
Family
ID=69383422
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910864468.8A Active CN110781129B (en) | 2019-09-12 | 2019-09-12 | Resource scheduling method, device and medium in FPGA heterogeneous accelerator card cluster |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110781129B (en) |
WO (1) | WO2021047120A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112087471A (en) * | 2020-09-27 | 2020-12-15 | 山东云海国创云计算装备产业创新中心有限公司 | Data transmission method and FPGA cloud platform |
CN112598565A (en) * | 2020-12-09 | 2021-04-02 | 第四范式(北京)技术有限公司 | Service operation method and device based on accelerator card, electronic equipment and storage medium |
CN113534888B (en) * | 2021-07-23 | 2024-02-06 | 中国兵器装备集团自动化研究所有限公司 | FPGA-based time synchronization method and device for multiple VPX boards |
CN113900982B (en) * | 2021-12-09 | 2022-03-08 | 苏州浪潮智能科技有限公司 | Distributed heterogeneous acceleration platform communication method, system, device and medium |
CN114443616B (en) * | 2021-12-30 | 2024-01-16 | 苏州浪潮智能科技有限公司 | FPGA-based parallel heterogeneous database acceleration method and device |
CN114936043B (en) * | 2022-05-20 | 2024-02-09 | 浪潮电子信息产业股份有限公司 | Method, device, equipment and storage medium for starting pooled heterogeneous resources |
CN114880269B (en) * | 2022-05-26 | 2024-02-02 | 无锡华普微电子有限公司 | Board ID configuration and identification method, microcontroller and control system |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0384875A2 (en) * | 1989-02-21 | 1990-08-29 | International Business Machines Corporation | Asynchronous staging of objects between computer systems in cooperative processing systems |
CN102841931A (en) * | 2012-08-03 | 2012-12-26 | 中兴通讯股份有限公司 | Storage method and storage device for a distributed file system |
CN103577266A (en) * | 2012-07-31 | 2014-02-12 | 国际商业机器公司 | Method and system for distributing field programmable gate array (FPGA) resources |
CN103902225A (en) * | 2012-12-26 | 2014-07-02 | 中国电信股份有限公司 | Method and system for centralized management of storage resources |
CN105302738A (en) * | 2015-12-09 | 2016-02-03 | 北京东土科技股份有限公司 | Method and device for distributing memory |
CN107193500A (en) * | 2017-05-26 | 2017-09-22 | 郑州云海信息技术有限公司 | Tiered storage method and system for a distributed file system |
CN107729151A (en) * | 2017-10-19 | 2018-02-23 | 济南浪潮高新科技投资发展有限公司 | A kind of method of cluster management FPGA resource |
WO2018236260A1 (en) * | 2017-06-22 | 2018-12-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Apparatuses and methods for allocating memory in a data center |
US10282136B1 (en) * | 2017-11-30 | 2019-05-07 | Hitachi, Ltd. | Storage system and control method thereof |
CN110209490A (en) * | 2018-04-27 | 2019-09-06 | 腾讯科技(深圳)有限公司 | Memory management method and related device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040064678A1 (en) * | 2002-09-30 | 2004-04-01 | Black Bryan P. | Hierarchical scheduling windows |
CN105389199B (en) * | 2015-10-21 | 2019-09-27 | 同济大学 | A kind of FPGA accelerator virtual platform and application based on Xen |
CN105868388B (en) * | 2016-04-14 | 2019-03-19 | 中国人民大学 | A kind of memory OLAP enquiring and optimizing method based on FPGA |
US10007561B1 (en) * | 2016-08-08 | 2018-06-26 | Bitmicro Networks, Inc. | Multi-mode device for flexible acceleration and storage provisioning |
CN109542625A (en) * | 2018-11-29 | 2019-03-29 | 郑州云海信息技术有限公司 | A kind of storage resource control method, device and electronic equipment |
CN109783032A (en) * | 2019-01-24 | 2019-05-21 | 山东超越数控电子股份有限公司 | A kind of distributed storage accelerating method and device based on Heterogeneous Computing |
- 2019-09-12: CN application CN201910864468.8A filed; granted as patent CN110781129B (status: Active)
- 2019-12-30: WO application PCT/CN2019/130032 filed (WO2021047120A1, Application Filing)
Non-Patent Citations (2)
Title |
---|
Analysis and Research on Hardware Resource Allocation in FPGA System Design; Zhang Jingya; Informatization Research; 2009-03-30; Vol. 35, No. 3; full text *
A Performance/Power Joint-Optimization Resource Scheduling Method for Reconfigurable Clusters; Yang Jin, Pang Jianmin, Zhang Junping; Journal of Information Engineering University; 2018-04-30; Vol. 19, No. 2; full text *
Also Published As
Publication number | Publication date |
---|---|
CN110781129A (en) | 2020-02-11 |
WO2021047120A1 (en) | 2021-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110781129B (en) | Resource scheduling method, device and medium in FPGA heterogeneous accelerator card cluster | |
CN107577533B (en) | Resource allocation method and related product | |
US10783086B2 (en) | Method and apparatus for increasing a speed of accessing a storage device | |
CN111367659B (en) | Resource management method, equipment and medium for nodes in Kubernetes | |
US9489328B2 (en) | System on chip and method for accessing device on bus | |
US20190007082A1 (en) | Embedded subscriber identity module including communication profiles | |
CN110633110A (en) | Server starting method, equipment and storage medium | |
US11044729B2 (en) | Function scheduling method, device, and system | |
CN110995616B (en) | Management method and device for large-flow server and readable medium | |
CN111625320B (en) | Mirror image management method, system, device and medium | |
WO2023056797A1 (en) | Blockchain-based data processing method, apparatus, and device, and storage medium | |
CN113794764A (en) | Request processing method and medium for server cluster and electronic device | |
CN110430112B (en) | Method and device for realizing IO priority of virtual machine network | |
CN112395087B (en) | Dynamic memory area of embedded equipment without memory management unit and management method | |
CN102917036A (en) | Memcached-based distributed cache data synchronization realization method | |
CN112231106A (en) | Access data processing method and device for Redis cluster | |
CN115794396A (en) | Resource allocation method, system and electronic equipment | |
CN112600765B (en) | Method and device for scheduling configuration resources | |
CN110851411B (en) | DNS dynamic change system and method based on file synchronization | |
CN110990313B (en) | Method, equipment and storage medium for processing clock stretching of I3C bus | |
CN109189339B (en) | Automatic configuration cache acceleration method under storage system | |
CN105072047A (en) | Message transmitting and processing method | |
CN111722959B (en) | Method, system, equipment and medium for expanding storage pool | |
CN112468499A (en) | Authority control method and device for function call service | |
CN110505273B (en) | Service capability limitation using method, device and readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |