CN114756379B - Method and system for task training based on hybrid accelerator card - Google Patents

Method and system for task training based on hybrid accelerator card

Info

Publication number: CN114756379B (application CN202210550037.6A)
Authority: CN (China)
Prior art keywords: acceleration, card, cards, small, training
Legal status: Active
Application number: CN202210550037.6A
Other languages: Chinese (zh)
Other versions: CN114756379A (en)
Inventor: 李琪龙
Current assignee: Suzhou Inspur Intelligent Technology Co Ltd
Original assignee: Suzhou Inspur Intelligent Technology Co Ltd
Application filed by: Suzhou Inspur Intelligent Technology Co Ltd
Priority to: CN202210550037.6A
Publication of CN114756379A (application publication)
Application granted; publication of CN114756379B (granted publication)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5016 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals, the resource being the memory


Abstract

The application discloses a method and a system for task training based on a hybrid accelerator card. The method comprises the following steps: identifying all accelerator cards in the current cluster through an AI platform and reading key information of all the accelerator cards; multiplexing and splitting the memory of all the accelerator cards into preset sizes to generate small accelerator cards of the corresponding types; building a hybrid accelerator card resource pool from the small accelerator cards; calling small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task; and executing the training task with the called small accelerator cards. The system comprises an identification module, a splitting module, a hybrid accelerator card building module, a calling module and a task execution module. The application breaks the barriers between different accelerator card types from different products, splits and recombines resources, and allocates them more precisely, thereby improving both accelerator card utilization and overall resource utilization.

Description

Method and system for task training based on hybrid accelerator card
Technical Field
The application relates to the technical field of accelerator card resource allocation, in particular to a method and a system for task training based on a hybrid accelerator card.
Background
With the development of AI technology, user demand for accelerator cards keeps growing, and so do the performance requirements placed on them. To make full use of accelerator card performance, how to run training tasks on the different accelerator cards within the same cluster has become an important technical problem.
At present, task training is generally organized by accelerator card type. Specifically, within the same cluster the accelerator cards are classified according to the AI direction they are matched to, mainly into picture, audio and algorithm classes, and the different card types are then assigned to different training scripts according to user requirements, thereby realizing task training on the different accelerator cards.
However, under this approach the accelerator cards are partitioned by class: a card of a given class performs task training only when some user needs that AI research direction, and otherwise sits idle in the cluster. As a result, across the whole cluster the accelerator card utilization is low, many resources stay idle, equipment resources are wasted, and the overall resource utilization is poor.
Disclosure of Invention
The application provides a method and a system for task training based on a hybrid accelerator card, to solve the problems of low accelerator card utilization and low resource utilization in prior-art task training methods.
To solve the above technical problems, the embodiments of the application disclose the following technical solutions:
A method for task training based on a hybrid accelerator card, the method comprising:
identifying all accelerator cards in the current cluster through an AI platform, and reading key information of all the accelerator cards, wherein the key information comprises the memory, type and node of each accelerator card;
multiplexing and splitting the memory of all the accelerator cards into preset sizes according to the key information, to generate small accelerator cards of the corresponding types;
building a hybrid accelerator card resource pool from the small accelerator cards;
calling small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task;
and executing the training task with the called small accelerator cards.
Optionally, building the hybrid accelerator card resource pool from the small accelerator cards specifically comprises:
setting up hybrid accelerator card resource groups from small accelerator cards of different types and memory sizes, according to the nodes the small accelerator cards belong to, in a page-based preset configuration mode.
Optionally, building the hybrid accelerator card resource pool from the small accelerator cards specifically comprises:
establishing a mapping relationship between call instructions and the small accelerator cards on each node in the cluster, according to rules agreed by the AI platform.
Optionally, calling small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task comprises:
determining the accelerator card type and memory required by the training script according to the current training task;
determining the node, type and number of the small accelerator cards according to the required accelerator card type and memory;
and calling the corresponding small accelerator cards from the hybrid accelerator card resource group according to their node, type and number.
Optionally, calling small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task comprises:
calling, according to the received instruction and by means of the mapping relationship, the small accelerator cards of the corresponding type and memory on the corresponding node.
Optionally, the method further comprises:
monitoring, through the AI platform, the usage information of all accelerator cards in the current cluster while the training task is being executed on the small accelerator cards;
and displaying the usage information.
A system for task training based on a hybrid accelerator card, the system comprising:
an identification module, configured to identify all accelerator cards in the current cluster and read key information of all the accelerator cards, wherein the key information comprises the memory, type and node of each accelerator card;
a splitting module, configured to multiplex and split the memory of all the accelerator cards into preset sizes according to the key information, to generate small accelerator cards of the corresponding types;
a hybrid accelerator card building module, configured to build a hybrid accelerator card resource pool from the small accelerator cards;
a calling module, configured to call small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task;
and a task execution module, configured to execute the training task with the called small accelerator cards.
Optionally, the hybrid accelerator card building module comprises:
a preset configuration unit, configured to set up hybrid accelerator card resource groups from small accelerator cards of different types and memory sizes, according to the nodes the small accelerator cards belong to, in a page-based preset configuration mode;
and a mapping relationship establishing unit, configured to establish a mapping relationship between call instructions and the small accelerator cards on each node in the cluster, according to rules agreed by the AI platform.
Optionally, the system further comprises:
a monitoring module, configured to monitor the usage information of all accelerator cards in the current cluster while the training task is being executed on the small accelerator cards;
and a display module, configured to display the usage information.
The technical solutions provided by the embodiments of the application can bring the following beneficial effects:
The application provides a method for task training based on a hybrid accelerator card. All accelerator cards in the current cluster are first identified through an AI platform and their key information is read; the accelerator card memory is then multiplexed and split into preset sizes to generate small accelerator cards, and a hybrid accelerator card resource pool is built from them; when a user has a demand, small accelerator cards of the required type and memory size are called from the hybrid accelerator card resource pool according to the current training task, and the training task is finally executed on the called small accelerator cards. Because each physical accelerator card is multiplexed and split into preset slices and the hybrid accelerator card resource pool is then rebuilt from those slices, the method effectively splits and recombines the accelerator cards; it breaks the barriers between different accelerator cards from different products and allocates resources more precisely. The overall card combination is highly flexible and adapts to different user demands: a card no longer runs only when its class is needed and sits idle otherwise. By combining small accelerator cards of various types, resources are fully utilized, the accelerator card idle rate is reduced, and usage efficiency is improved.
The application also provides a system for task training based on a hybrid accelerator card, which mainly comprises an identification module, a splitting module, a hybrid accelerator card building module, a calling module and a task execution module. The identification module reads the key information of all accelerator cards in the current cluster for subsequent processing. With the splitting module and the hybrid accelerator card building module, the memory of every accelerator card can be multiplexed and split into preset small accelerator cards, and a hybrid accelerator card resource pool is built from them, which guarantees that small accelerator cards can be called flexibly according to user requirements. In this embodiment, the resource pool built by the hybrid accelerator card building module consists of small accelerator cards rather than whole cards of a certain type, so the whole cluster consumes accelerator resources in units of small accelerator cards of different types and memory sizes, the small accelerator card being the minimum unit of use. After a single accelerator card is multiplexed and split into preset slices, several small accelerator cards are produced; small accelerator cards of different types can then be combined by the hybrid accelerator card building module in different ways into one hybrid accelerator card resource pool, satisfying different user demands. The structure of this embodiment therefore uses accelerator cards more flexibly and at a finer granularity, which helps improve both accelerator card utilization and resource utilization.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flow chart of a method for performing task training based on a hybrid accelerator card according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of a system for performing task training based on a hybrid accelerator card according to an embodiment of the present application.
Detailed Description
In order to make the technical solution of the present application better understood by those skilled in the art, the technical solution of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
For a better understanding of the present application, embodiments of the present application are explained in detail below with reference to the drawings.
Example 1
Referring to Fig. 1, Fig. 1 is a flow chart of a method for task training based on a hybrid accelerator card according to an embodiment of the present application. As can be seen from Fig. 1, the method in this embodiment mainly includes the following steps:
S1: identify all accelerator cards in the current cluster through the AI platform, and read the key information of all the accelerator cards.
An accelerator card is a specially designed processor product, such as a GPU card or an MLU card, used to accelerate the execution of specific algorithms, for example physics simulation.
The key information of an accelerator card in this embodiment includes its memory, type and node. Different types of accelerator cards behave differently in the training scenarios of different AI research directions; common accelerator card types include the picture, audio and algorithm classes. Every accelerator card in the cluster belongs to a node, and recording the node of each card makes the card easy to locate when it is later split and recombined, so that the corresponding small accelerator card can be pinned down quickly, which improves the efficiency and accuracy of task training.
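As a concrete picture of the key information read in S1, the sketch below models one accelerator card record with the three fields named here (memory, type and node). The class name, field names and the sample cluster snapshot are illustrative assumptions, not identifiers defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class AcceleratorCard:
    """Key information read for one accelerator card in the cluster (illustrative names)."""
    card_id: str      # cluster-wide identifier of the physical card
    card_type: str    # e.g. "picture", "audio" or "algorithm" class
    memory_gb: int    # total memory of the card, in GB
    node: str         # cluster node the card belongs to

# A hypothetical snapshot of what the AI platform might read from a cluster:
cluster_cards = [
    AcceleratorCard("t4-0",  "picture",   64, "node-1"),
    AcceleratorCard("mlu-0", "algorithm", 32, "node-2"),
]
```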
S2: and multiplexing, splitting and presetting the memories of all the acceleration cards according to the key information to generate the acceleration small cards of the corresponding types.
Specifically, the AI platform multiplexes and splits the accelerator cards according to the corresponding nodes, types, memory sizes and the like of each accelerator card and preset values to generate corresponding accelerator cards. For example: the T4 card of the 64G memory may be configured as a 4 x 16G small card, a 16 x 4G small card, or an 8 x 8G small card.
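A minimal sketch of this multiplex-and-split step, under the assumption that the split is an even partition of a card's memory into equally sized small cards that inherit the parent card's type and node. The `SmallCard` record and `split_card` helper are hypothetical names chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class SmallCard:
    parent_id: str   # physical card this slice comes from
    card_type: str   # inherited from the parent card
    memory_gb: int   # memory preset for this small card
    node: str        # node the parent card belongs to

def split_card(card_id: str, card_type: str, memory_gb: int, node: str,
               slice_gb: int) -> list[SmallCard]:
    """Split one physical card into equally sized small cards (even partition assumed)."""
    if memory_gb % slice_gb != 0:
        raise ValueError("slice size must evenly divide the card memory")
    return [SmallCard(card_id, card_type, slice_gb, node)
            for _ in range(memory_gb // slice_gb)]

# The 64 GB T4 example from the text: 4 x 16 GB, 8 x 8 GB or 16 x 4 GB small cards.
assert len(split_card("t4-0", "picture", 64, "node-1", 16)) == 4
assert len(split_card("t4-0", "picture", 64, "node-1", 8)) == 8
assert len(split_card("t4-0", "picture", 64, "node-1", 4)) == 16
```

The asserts reproduce the 64 GB example above: slice sizes of 16 GB, 8 GB and 4 GB yield 4, 8 and 16 small cards respectively.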
Referring again to Fig. 1, after the memory of all the accelerator cards has been multiplexed and split into preset sizes and the small accelerator cards of the corresponding types have been generated, step S3 is executed: build a hybrid accelerator card resource pool from the small accelerator cards.
Specifically, this embodiment provides two methods for building the hybrid accelerator card resource pool from the small accelerator cards.
The first method is to set up hybrid accelerator card resource groups from small accelerator cards of different types and memory sizes, according to the nodes the small cards belong to, in a page-based preset configuration mode.
The AI platform integrates all the accelerator cards in the current cluster into default hybrid accelerator card resource groups by type, and marks the node of every small accelerator card in each resource group.
A hybrid accelerator card resource group can usually match the commonly used small accelerator cards as its default mode. The commonly used small cards can be selected by usage frequency, for example by matching the small accelerator cards whose usage frequency exceeds a set frequency threshold into a default hybrid accelerator card resource group.
A user of the hybrid accelerator card resources can call any available small accelerator cards on the AI platform and flexibly mix small cards of different types for training as required. For example, 2 x 4 GB T4 small cards and 4 x 4 GB MLU small cards may be selected and used together. Setting up the hybrid accelerator card resource groups through page-based preset configuration, with the commonly used small cards already matched, makes it convenient for users to call the groups through the page and improves the efficiency of calling accelerator resources.
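The frequency-threshold matching described above could be realized as in the following sketch, where small cards whose recorded usage count exceeds a configurable threshold are gathered into a default group keyed by the node they belong to. The function name, the dict-based card schema, the usage-count input and the threshold value are all assumptions for illustration.

```python
from collections import defaultdict

def build_default_resource_group(small_cards, usage_counts, freq_threshold=10):
    """Group frequently used small cards into a default hybrid resource group, keyed by node.

    small_cards  : list of dicts with "id", "type", "memory_gb" and "node" keys (illustrative schema)
    usage_counts : dict mapping small-card id -> observed usage count
    """
    group = defaultdict(list)
    for card in small_cards:
        if usage_counts.get(card["id"], 0) > freq_threshold:
            group[card["node"]].append(card)   # keep the node tag, as described in the text
    return dict(group)

# Hypothetical usage: only the frequently used T4 slice makes it into the default group.
cards = [
    {"id": "t4-0/0",  "type": "picture",   "memory_gb": 16, "node": "node-1"},
    {"id": "mlu-0/0", "type": "algorithm", "memory_gb": 4,  "node": "node-2"},
]
print(build_default_resource_group(cards, {"t4-0/0": 25, "mlu-0/0": 3}))
```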
The second method is to establish a mapping relationship between call instructions and the small accelerator cards on each node in the cluster, according to rules agreed by the AI platform.
Instructions are generated according to the rules agreed by the AI platform; when a user makes a call, different instructions are matched to the small accelerator cards on the corresponding nodes, enabling fast calling of small accelerator cards. Usually different nodes correspond to different instructions. This method lets the user directly specify, through an instruction, the accelerator card type and the amount of resources used by the current training script, so that unneeded card types never appear, which further saves cluster resources.
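One simple way to realize such an agreed instruction-to-card mapping is a lookup table keyed by instruction string, each instruction bound to small cards on one node. The instruction names and the fields of the mapped entries below are invented for illustration; the patent only requires that the rules be agreed on the AI platform.

```python
# Hypothetical agreed rule: each instruction names a node, a card type and a slice size.
INSTRUCTION_MAP = {
    "use-node1-t4-16g": {"node": "node-1", "type": "picture",   "memory_gb": 16, "count": 2},
    "use-node2-mlu-4g": {"node": "node-2", "type": "algorithm", "memory_gb": 4,  "count": 4},
}

def resolve_instruction(instruction: str) -> dict:
    """Map a call instruction to the small cards it is bound to under the agreed rule."""
    try:
        return INSTRUCTION_MAP[instruction]
    except KeyError:
        raise ValueError(f"no small cards are mapped to instruction {instruction!r}") from None

print(resolve_instruction("use-node1-t4-16g"))
```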
S4: and calling the acceleration small card with corresponding type and memory size from the mixed acceleration card resource library according to the current training task.
Corresponding to the two methods of using the acceleration small card to build the hybrid acceleration card resource library in the step S3, there are two implementation methods in the step S4. Specifically, the first implementation method of step S4 includes the following steps:
S41: and determining the type and the memory of the acceleration card required by the training script according to the current training task.
S42: and determining the nodes, types and quantity of the small acceleration cards according to the types and the memories of the acceleration cards.
S43: and calling the corresponding acceleration small card from the mixed acceleration card resource group according to the belonging node, type and number of the acceleration small card.
As can be seen from the above steps S41-S43, when the hybrid acceleration card resource library is built in a page preset configuration mode, firstly, page preset configuration resources are selected according to the current training task, then, the designated nodes, the number of acceleration small cards and the types of the acceleration small cards are selected in the page, and then, training script tasks are submitted.
The second implementation method of the step S4 specifically includes: and calling the acceleration small card of the corresponding type and the memory under the corresponding node by using the mapping relation according to the acquired instruction.
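A sketch of the first call path (steps S41-S43), assuming the resource group is the node-keyed structure used in the earlier grouping sketch: the training task declares the card type, per-card memory and number of small cards it needs, and matching small cards are taken out of the group. All names and the dict schema are illustrative.

```python
def call_small_cards(resource_group, card_type, memory_gb, count):
    """Take `count` small cards of the requested type and memory out of the resource group.

    resource_group maps node -> list of small-card dicts (with "type" and "memory_gb" keys).
    Returns the node that was chosen together with the called small cards.
    """
    for node, cards in resource_group.items():
        matching = [c for c in cards if c["type"] == card_type and c["memory_gb"] == memory_gb]
        if len(matching) >= count:
            called = matching[:count]
            called_ids = {id(c) for c in called}
            # Remove the called slices from the group so they are not handed out twice.
            resource_group[node] = [c for c in cards if id(c) not in called_ids]
            return node, called
    raise RuntimeError("no node holds enough matching small cards for this training task")

# Hypothetical training task: two 16 GB picture-class small cards.
group = {"node-1": [{"type": "picture", "memory_gb": 16} for _ in range(4)]}
node, called = call_small_cards(group, "picture", 16, 2)
print(node, len(called), len(group["node-1"]))   # node-1 2 2
```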
Continuing with Fig. 1, after the small accelerator cards of the required type and memory size have been called from the hybrid accelerator card resource pool, step S5 is executed: execute the training task on the called small accelerator cards.
Further, the method in this embodiment also includes:
S6: monitor, through the AI platform, the usage information of all accelerator cards in the current cluster while the training task is being executed on the small accelerator cards.
S7: display the usage information.
Monitoring the usage information and usage state of all accelerator cards lets users and the system administrator grasp the state of the cards immediately, which facilitates timely adjustment of the accelerator cards, improves the efficiency and accuracy of adjustment, and allows prompt handling when a card fails. Displaying this information promptly is intuitive and improves the user experience.
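A minimal monitoring loop consistent with S6 and S7 might periodically collect a usage snapshot for every card and hand it to a display callback, as sketched below. The snapshot fields, the polling interval and the stubbed callbacks are assumptions for illustration.

```python
import time

def monitor_usage(collect_snapshot, display, interval_s=5.0, rounds=3):
    """Poll accelerator-card usage and display it, as in steps S6 and S7 (sketch only)."""
    for _ in range(rounds):
        snapshot = collect_snapshot()   # e.g. {"t4-0": {"memory_used_gb": 32, "busy": True}}
        display(snapshot)
        time.sleep(interval_s)

# Hypothetical usage with a stubbed collector and console display:
monitor_usage(
    collect_snapshot=lambda: {"t4-0": {"memory_used_gb": 32, "busy": True}},
    display=print,
    interval_s=0.1,
    rounds=2,
)
```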
Example 2
On the basis of the embodiment shown in Fig. 1, referring to Fig. 2, Fig. 2 is a schematic structural diagram of a system for task training based on a hybrid accelerator card according to an embodiment of the present application. As can be seen from Fig. 2, the system in this embodiment mainly comprises an identification module, a splitting module, a hybrid accelerator card building module, a calling module and a task execution module.
The identification module identifies all accelerator cards in the current cluster and reads their key information, which comprises the memory, type and node of each accelerator card. The splitting module multiplexes and splits the memory of all the accelerator cards into preset sizes according to the key information, generating small accelerator cards of the corresponding types. The hybrid accelerator card building module builds a hybrid accelerator card resource pool from the small accelerator cards. The calling module calls small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task. The task execution module executes the training task on the called small accelerator cards.
The hybrid accelerator card building module comprises a preset configuration unit and a mapping relationship establishing unit. The preset configuration unit sets up hybrid accelerator card resource groups from small accelerator cards of different types and memory sizes, according to the nodes the small cards belong to, in a page-based preset configuration mode. The mapping relationship establishing unit establishes a mapping relationship between call instructions and the small accelerator cards on each node in the cluster, according to rules agreed by the AI platform.
Further, the system also comprises a monitoring module and a display module. The monitoring module monitors the usage information of all accelerator cards in the current cluster while the training task is being executed on the small accelerator cards, and the display module displays this usage information.
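For readers who prefer code, the module layout of this embodiment can be pictured as the following skeleton, in which each module of Fig. 2 becomes one small class with the responsibility described above. All class and method names are invented for illustration; the patent does not prescribe an implementation.

```python
class IdentificationModule:
    def read_key_info(self, cluster):
        """Identify all accelerator cards in the cluster and return their memory, type and node."""
        raise NotImplementedError

class SplittingModule:
    def split(self, key_info):
        """Multiplex and split card memory into typed small cards according to preset sizes."""
        raise NotImplementedError

class HybridCardBuildingModule:
    def build_resource_pool(self, small_cards):
        """Build the hybrid accelerator card resource pool (resource groups or instruction mapping)."""
        raise NotImplementedError

class CallingModule:
    def call(self, resource_pool, training_task):
        """Call small cards of the required type and memory size for the current training task."""
        raise NotImplementedError

class TaskExecutionModule:
    def run(self, training_task, small_cards):
        """Execute the training task on the called small cards."""
        raise NotImplementedError

class MonitoringModule:
    def usage_info(self, cluster):
        """Collect usage information for all accelerator cards while tasks are running."""
        raise NotImplementedError

class DisplayModule:
    def show(self, usage_info):
        """Display the collected usage information."""
        raise NotImplementedError
```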
The working principle and operation of the system for task training based on a hybrid accelerator card in this embodiment have already been described in detail in the embodiment shown in Fig. 1 and are not repeated here.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method for task training based on a hybrid accelerator card, the method comprising:
identifying all accelerator cards in the current cluster through an AI platform, and reading key information of all the accelerator cards, wherein the key information comprises the memory, type and node of each accelerator card;
multiplexing and splitting the memory of all the accelerator cards into preset sizes according to the key information, to generate small accelerator cards of the corresponding types;
building a hybrid accelerator card resource pool from the small accelerator cards;
calling small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task;
and executing the training task with the called small accelerator cards.
2. The method for task training based on a hybrid accelerator card according to claim 1, wherein building the hybrid accelerator card resource pool from the small accelerator cards specifically comprises:
setting up hybrid accelerator card resource groups from small accelerator cards of different types and memory sizes, according to the nodes the small accelerator cards belong to, in a page-based preset configuration mode.
3. The method for task training based on a hybrid accelerator card according to claim 1, wherein building the hybrid accelerator card resource pool from the small accelerator cards specifically comprises:
establishing a mapping relationship between call instructions and the small accelerator cards on each node in the cluster, according to rules agreed by the AI platform.
4. The method for task training based on a hybrid accelerator card according to claim 2, wherein calling small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task comprises:
determining the accelerator card type and memory required by the training script according to the current training task;
determining the node, type and number of the small accelerator cards according to the required accelerator card type and memory;
and calling the corresponding small accelerator cards from the hybrid accelerator card resource group according to their node, type and number.
5. The method for task training based on a hybrid accelerator card according to claim 3, wherein calling small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task comprises:
calling, according to the received instruction and by means of the mapping relationship, the small accelerator cards of the corresponding type and memory on the corresponding node.
6. The method for task training based on a hybrid accelerator card according to any one of claims 1-5, further comprising:
monitoring, through the AI platform, the usage information of all accelerator cards in the current cluster while the training task is being executed on the small accelerator cards;
and displaying the usage information.
7. A system for task training based on a hybrid accelerator card, the system comprising:
an identification module, configured to identify all accelerator cards in the current cluster and read key information of all the accelerator cards, wherein the key information comprises the memory, type and node of each accelerator card;
a splitting module, configured to multiplex and split the memory of all the accelerator cards into preset sizes according to the key information, to generate small accelerator cards of the corresponding types;
a hybrid accelerator card building module, configured to build a hybrid accelerator card resource pool from the small accelerator cards;
a calling module, configured to call small accelerator cards of the required type and memory size from the hybrid accelerator card resource pool according to the current training task;
and a task execution module, configured to execute the training task with the called small accelerator cards.
8. The system for task training based on a hybrid accelerator card according to claim 7, wherein the hybrid accelerator card building module comprises:
a preset configuration unit, configured to set up hybrid accelerator card resource groups from small accelerator cards of different types and memory sizes, according to the nodes the small accelerator cards belong to, in a page-based preset configuration mode;
and a mapping relationship establishing unit, configured to establish a mapping relationship between call instructions and the small accelerator cards on each node in the cluster, according to rules agreed by the AI platform.
9. The system for task training based on a hybrid accelerator card according to claim 7, further comprising:
a monitoring module, configured to monitor the usage information of all accelerator cards in the current cluster while the training task is being executed on the small accelerator cards;
and a display module, configured to display the usage information.
CN202210550037.6A (filed 2022-05-20, priority 2022-05-20): Method and system for task training based on hybrid accelerator card. Status: Active. Granted as CN114756379B (en).

Priority Applications (1)

CN202210550037.6A (granted as CN114756379B), priority date 2022-05-20, filing date 2022-05-20: Method and system for task training based on hybrid accelerator card

Applications Claiming Priority (1)

CN202210550037.6A (granted as CN114756379B), priority date 2022-05-20, filing date 2022-05-20: Method and system for task training based on hybrid accelerator card

Publications (2)

CN114756379A (en), published 2022-07-15
CN114756379B (en), published 2024-06-11

Family

ID=82336040

Family Applications (1)

CN202210550037.6A (granted as CN114756379B, status Active), priority date 2022-05-20, filing date 2022-05-20: Method and system for task training based on hybrid accelerator card

Country Status (1)

Country Link
CN (1) CN114756379B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116541338B (en) * 2023-06-27 2023-11-03 苏州浪潮智能科技有限公司 Computing system, model training method, device and product

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021143135A1 (en) * 2020-01-13 2021-07-22 苏州浪潮智能科技有限公司 Far-end data migration device and method based on fpga cloud platform
CN112241321A (en) * 2020-09-24 2021-01-19 北京影谱科技股份有限公司 Computing power scheduling method and device based on Kubernetes
CN113760538A (en) * 2021-07-16 2021-12-07 苏州浪潮智能科技有限公司 AI platform-based accelerator card type pipe control method, system and device

Also Published As

CN114756379A (en), published 2022-07-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant