CN114217976B - Task processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114217976B
CN114217976B
Authority
CN
China
Prior art keywords
resource
task
video memory
gpu
memory resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111589019.0A
Other languages
Chinese (zh)
Other versions
CN114217976A (en)
Inventor
李勇
李志�
黎世勇
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111589019.0A
Publication of CN114217976A
Application granted
Publication of CN114217976B
Status: Active

Classifications

    • G06F 9/5022: Mechanisms to release resources
    • G06F 9/5016: Allocation of resources where the resource being allocated is the memory
    Both fall under G (Physics) > G06 (Computing; calculating or counting) > G06F (Electric digital data processing) > G06F 9/00 (Arrangements for program control) > G06F 9/46 (Multiprogramming arrangements) > G06F 9/50 (Allocation of resources, e.g. of the central processing unit [CPU]) > G06F 9/5005 (to service a request) > G06F 9/5011 (the resources being hardware resources other than CPUs, servers and terminals).

Abstract

The present disclosure provides a task processing method and relates to the field of data processing technology, in particular to big data processing. The scheme is as follows: when a first task uses a first video memory resource of a graphics processing unit (GPU) and does not use a first computational resource of the GPU, first video memory resource information of the first video memory resource used by the first task is saved; once the save succeeds, the first video memory resource used by the first task is released. The utilization of GPU resources is thereby improved.

Description

Task processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a big data processing technology.
Background
With the development of cloud computing, big data and deep learning, the demand for computing power grows by the day. Graphics processing units (GPUs) are widely relied upon by cloud services, edge computing and terminal devices because of their strengths in floating-point and parallel computation. How to raise the utilization of GPU resources has therefore become a pressing problem.
Disclosure of Invention
The disclosure provides a task processing method, a device, equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a task processing method including:
when a first task uses a first video memory resource of the GPU resources and does not use a first computational resource of the GPU resources, saving first video memory resource information of the first video memory resource used by the first task;
and, once the save succeeds, releasing the first video memory resource used by the first task.
According to another aspect of the present disclosure, there is provided a task processing apparatus including:
a first information processing unit configured to save first video memory resource information of a first video memory resource used by a first task, when the first task uses the first video memory resource of the GPU resources and does not use a first computational resource of the GPU resources;
and a resource processing unit configured to release the first video memory resource used by the first task once the save succeeds.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method described above.
With this scheme, when a task holds a video memory resource of the GPU but is not using the GPU's computational resources, the task actively releases the video memory resource it occupies, thereby actively giving up its right to use the GPU. This effectively prevents resources from being held but unused, avoids waste, and raises the utilization of GPU resources.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow chart diagram of a task processing method according to an embodiment of the present disclosure;
FIG. 2 is a flow diagram of a task processing method in a specific example according to an embodiment of the disclosure;
FIG. 3 is a flow diagram of a task processing method in another specific example according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a task processing device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing a task processing method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In practice, however, an application occupies the video memory of the GPU resources the whole time, while it uses the GPU's computational resources only part of the time; the rest of the time it may be running CPU work or waiting for data input. This clearly wastes a great deal of computational resources.
In view of this, the present disclosure proposes a GPU time-division multiplexing scheme, applicable to the field of GPU virtualization, that lets multiple GPU tasks share a device in time slices and thereby raises GPU utilization. Concretely: when a task's computational resources are idle, the task's video memory site (that is, the information describing its video memory resources) is saved and the right to use the GPU resources is handed over (released) for other GPU tasks to use; when the task needs to resume, it reacquires the right to use the GPU resources and restores its video memory site, all without the task exiting. This enables hot switching between GPU tasks and time-division multiplexing of multiple GPU tasks.
As shown in fig. 1, the present disclosure provides a task processing method, including:
step S101: and under the condition that a first task uses a first video memory resource in GPU resources and does not use a first computing resource in the GPU resources, saving first video memory resource information of the first video memory resource used by the first task.
Here the first task may also be called a first GPU task, that is, a task that must be executed using GPU resources, where the GPU resources comprise computational resources and video memory resources.
It can be understood that the first task using the first video memory resource while not using the first computational resource means the first task is in a computational-resource-idle state. Saving the first video memory resource information amounts to saving the first task's video memory site, i.e. keeping the first task from exiting.
In a specific example, the first task starts running on a CPU and obtains the right to use the GPU, that is, it obtains the computational and video memory resources of the GPU required for its normal operation, so that the first task can be processed using those GPU resources.
Step S102: once the save succeeds, release the first video memory resource used by the first task.
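Steps S101 and S102 can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: `GPUTask`, the `bytearray` stand-in for device memory, and the host-side backup are all assumptions made for the example.

```python
# Sketch of steps S101-S102: save the video-memory state of an idle task,
# and release the GPU memory only if the save succeeded.

class GPUTask:
    def __init__(self, name, vram_bytes):
        self.name = name
        self.vram = bytearray(vram_bytes)   # stands in for device memory
        self.backup = None                  # host-side copy of the state

    def save_vram_state(self):
        """Step S101: copy video-memory contents/metadata off the GPU."""
        try:
            self.backup = bytes(self.vram)  # snapshot into host memory
            return True
        except MemoryError:
            return False

    def release_vram(self):
        """Step S102: free the device memory only after a successful save."""
        if self.backup is None:
            raise RuntimeError("refusing to release: state not saved")
        self.vram = None                    # hand back the GPU allocation

task = GPUTask("first-task", 1024)
if task.save_vram_state():                  # S101
    task.release_vram()                     # S102
```

The guard in `release_vram` mirrors the "once the save succeeds" condition: the video memory is only given up when the site has been protected.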
It should be noted that once the first task has released the first video memory resource it was using, it has released its right to use the GPU resources, making those resources available to other tasks.
With this scheme, when the first task is in the computational-resource-idle state, the information about the video memory it uses (the first video memory resource information) is saved, and the first video memory resource is then released. Releasing the video memory releases the task's right to use the GPU, so other tasks can use the GPU resources; this effectively prevents resources from being occupied but unused and raises GPU utilization.
In a specific example of the present disclosure, the information about the video memory resource used by the first task can be saved as follows: store the first video memory resource information of the first video memory resource in a storage medium outside the GPU, for example in main memory or on disk. This guarantees that the first task does not exit, and lays the groundwork for later restoring its site (reusing the first video memory resource and continuing to process the first task with GPU resources such as the first computational resource) and for hot switching between GPU tasks.
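As a hedged sketch of "save to a storage medium outside the GPU": the video-memory information is serialized to disk so the task can stay resident and restore later. The file name and record structure here are illustrative assumptions, not anything specified by the patent.

```python
# Illustrative only: persist the video-memory site to disk and read it back.
import json
import os
import tempfile

vram_info = {"task": "first-task", "buffers": [4096, 8192]}  # stand-in site

path = os.path.join(tempfile.mkdtemp(), "vram_site.json")
with open(path, "w") as f:
    json.dump(vram_info, f)          # protect the site on a medium off the GPU

with open(path) as f:
    restored = json.load(f)          # later: restore the site from disk
```

Main memory would work the same way (e.g. keeping `vram_info` in a dict); disk has the advantage of surviving even if host memory is scarce.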
In a specific example of the disclosure, after the information about the first video memory resource used by the first task has been saved and the first video memory resource has been released, a first resource release signal indicating that the first video memory resource has been released may be sent. For example, after releasing the first video memory resource, the first task sends a first resource release signal to announce that it has released its right to use the GPU resources.
It can be understood that once a task has released the video memory of the GPU resources, it loses the right to use the GPU's computational resources; in other words, releasing the video memory releases the right to use the GPU resources. A release signal can then be sent to announce that the GPU usage right is free, so other tasks can take over the GPU resources in relay, laying the groundwork for time-division multiplexing of multiple GPU tasks.
In a specific example of the disclosure, after the first resource release signal is sent, if a second task is waiting to acquire the GPU resources, the second task is triggered to acquire them. That is, the second task has started running on a CPU and wants the GPU resources but does not yet hold the right to use them; upon capturing the first resource release signal, it learns that the GPU resources are idle and acquires the usage right, i.e. the computational and video memory resources it needs to run normally. The GPU resources are thus used in relay, achieving time-division multiplexing of multiple GPU tasks. Because the second task takes over only when the first task's computational resources are idle and the first task has released its GPU usage right, waste of computational resources is effectively avoided and GPU utilization improves.
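The release-signal handoff above can be modeled with a simple synchronization sketch. This is an assumption-laden illustration: `threading.Event` stands in for the first resource release signal and a lock stands in for the GPU usage right; the patent does not prescribe these mechanisms.

```python
# Sketch: a waiting task blocks until it observes the release signal,
# then takes over the GPU usage right in relay.
import threading

gpu_lock = threading.Lock()          # stands in for the GPU usage right
release_signal = threading.Event()   # stands in for the release signal
order = []

def first_task():
    with gpu_lock:
        order.append("A holds GPU")
    release_signal.set()             # announce the GPU is now idle

def second_task():
    release_signal.wait()            # capture the first release signal
    with gpu_lock:                   # then acquire the usage right
        order.append("B holds GPU")

t2 = threading.Thread(target=second_task)
t2.start()
first_task()
t2.join()
```

The second task never touches the GPU before the signal arrives, which is exactly the "waiting to acquire" behavior described above.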
In a specific example of the disclosed solution, the second task is a task that cannot use the GPU resources at the same time as the first task. That is, the computational and video memory resources the two tasks require cannot run on the same graphics card simultaneously, but can run on it in different time slices; the first and second tasks (or the GPU resources they require) are mutually exclusive.
This solves the problem that, when the first task holds the first video memory resource but does not use the first computational resource (i.e. its computational resources are idle), a second task that is mutually exclusive with it could not acquire the GPU resources, wasting GPU resources and computational resources in particular.
In a specific example of the disclosure, after the first task has released the first video memory resource and then needs to resume using the GPU resources, it may send a first resource recovery signal indicating that it needs to resume using the GPU resources. For example, the first task sends the signal and resumes using the GPU resources, which guarantees that it can continue to be processed on the GPU and run normally; this also lays the groundwork for hot switching between tasks.
In a specific example of the disclosed solution, while the second task is being processed on the GPU resources, in response to the first resource recovery signal for the first task, the second task is triggered to stop using a second computational resource of the GPU resources; once it has stopped, second video memory resource information of the second video memory resource used by the second task is saved; and once the save succeeds, the second video memory resource used by the second task is released.
For example, after the first task has released its GPU usage right and the second task has acquired it, the second task may capture the first resource recovery signal while it is being processed on the GPU. The second task then stops using the second computational resource, which in one example involves three operations: first, stop dispatching the second task's unprocessed subtasks to the GPU resources, i.e. no new subtasks are sent to the GPU for processing; second, wait for the subtasks already dispatched to the GPU to finish processing; and third, once all subtasks being processed on the GPU have finished, protect the site, i.e. save the second video memory resource information of the second video memory resource used by the second task, and then release that video memory resource.
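The three drain operations above can be sketched as follows. This is not the patent's implementation; `RunningTask`, the deque of pending subtasks, and the callable in-flight subtasks are all illustrative assumptions.

```python
# Sketch of the drain protocol: (1) stop dispatching new subtasks,
# (2) wait for the in-flight ones, (3) save the site, then release.
import collections

class RunningTask:
    def __init__(self, pending, in_flight):
        self.pending = collections.deque(pending)  # not yet sent to the GPU
        self.in_flight = list(in_flight)           # already on the GPU
        self.vram_info = {"allocations": 3}        # stand-in vram state
        self.saved_site = None
        self.holds_gpu = True

    def stop_using_compute(self):
        held_back = list(self.pending)       # 1) hold back new subtasks
        self.pending.clear()
        while self.in_flight:                # 2) wait for dispatched subtasks
            self.in_flight.pop(0)()          #    (run each to completion here)
        self.saved_site = dict(self.vram_info)  # 3) protect the site
        self.holds_gpu = False                  #    then release the GPU
        return held_back

finished = []
task_b = RunningTask(pending=["s3", "s4"],
                     in_flight=[lambda: finished.append("s1"),
                                lambda: finished.append("s2")])
held = task_b.stop_using_compute()
```

Note the ordering: the site is saved only after every in-flight subtask has finished, so the saved state is consistent.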
In other words, while the second task holds the GPU resources, if the first task needs to resume using them, the second task must stop using the GPU and protect its own video memory site (the information about the second video memory resource), i.e. stay resident without exiting, while handing over the GPU usage right so the first task can resume. This achieves hot switching of GPU tasks and, on the basis of letting the first task continue to be processed, time-division multiplexing of multiple GPU tasks.
In a specific example, the second task here may likewise be a task that cannot use the GPU resources at the same time as the first task. The example then applies to scenarios with multiple mutually exclusive GPU tasks: without the current task (i.e. the first task) exiting, its site is protected and its GPU usage right handed over so that other GPU tasks (e.g. the second task) can use the GPU's video memory and computational resources. When the interrupted task needs the GPU resources again, it reacquires the usage right, restores its site, and continues from the state at which it was interrupted. GPU utilization is thereby improved.
In a specific example of the disclosed aspect, the priority of the second task is lower than that of the first task. That is, only when the second task's priority is lower than the first task's will the second task hand over the GPU usage right to the higher-priority task, i.e. let the first task resume using the GPU resources. Conversely, if the second task's priority is higher than the first task's, the second task does not hand over the GPU usage right even if the first task needs to restore its site.
Specifically, in the case that the second task is processed based on the GPU resources and the priority of the second task is lower than the priority of the first task, in response to the first resource recovery signal for the first task, triggering the second task to stop using a second computational resource of the GPU resources; under the condition that the use of a second computing power resource of the GPU resource is stopped, second video memory resource information of a second video memory resource used by the second task is saved; and under the condition that the storage is successful, releasing the second video memory resource used by the second task.
In this way, high-priority tasks are guaranteed preferential use of the GPU resources and preferential processing.
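The priority rule can be reduced to a single predicate. A minimal sketch, assuming a convention the patent does not state (a larger number means higher priority):

```python
# Sketch of the preemption decision: the running task hands over the GPU
# only to a strictly higher-priority requester.

def should_yield_gpu(running_priority: int, resuming_priority: int) -> bool:
    """True when the running task must save its site and release the GPU."""
    return resuming_priority > running_priority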
In a specific example of the present disclosure, after the second task has acquired the GPU resources, if the second task uses a second video memory resource of the GPU resources and does not use a second computational resource, the second video memory resource information of the second video memory resource used by the second task is saved; once the save succeeds, the second video memory resource is released.
It can be understood that the second task using the second video memory resource while not using the second computational resource means the second task is in a computational-resource-idle state. Saving the second video memory resource information amounts to saving the second task's video memory site, i.e. keeping the second task from exiting.
In a specific example, the second task starts running on a CPU and, after the first task hands over the GPU usage right, obtains the right to use the GPU, that is, the computational and video memory resources it needs to run normally, so that it can be processed using those resources.
It should be noted that, as with the first task, once the second task releases the second video memory resource it has released its right to use the GPU resources, making them available to other tasks or allowing the first task to recover the GPU usage right.
Thus, under the present scheme, when the second task is in the computational-resource-idle state, the information about the video memory it uses (the second video memory resource information) is saved and the second video memory resource is then released, i.e. the GPU usage right is released. Other tasks can then use the GPU resources, or a task such as the first task that needs to resume can reuse them; resources are not occupied while unused, and GPU utilization improves.
It can be understood that the second task may hand over the GPU usage right in two cases:
Case one: a task with a higher priority than the second task, such as the first task, needs to resume using the GPU resources. The second task then stops using the GPU, protects its site, i.e. saves the second video memory resource information of the second video memory resource it uses, and hands over (releases) the GPU usage right.
Case two: the second task itself enters a computational-resource-idle state. It then spontaneously protects its site, saving the second video memory resource information, and hands over (releases) the GPU usage right.
In a specific example of the disclosure, similar to the first task, after the information about the second video memory resource has been saved and the second video memory resource released, a second resource release signal indicating the release may be sent. For example, after releasing the second video memory resource, the second task sends the signal to announce that it has released its right to use the GPU resources.
As before, once the video memory of the GPU resources is released, the right to use the GPU's computational resources is lost; in other words, releasing the video memory releases the GPU usage right. A release signal can then announce that the GPU is free, so other tasks can take over in relay, laying the groundwork for time-division multiplexing of multiple GPU tasks.
In a specific example of the present disclosure, suppose the second task has released the second video memory resource, whether under case one above or spontaneously under case two (this example does not restrict which). If the first task then needs to recover the GPU resources, the saved first video memory resource information is copied back into the GPU resources, and the first task continues to be processed using the first video memory resource and the first computational resource. That is, when the first task needs to resume using the GPU, it is restored, from the saved first video memory resource information, to the state it was in when it released the first video memory resource, and processing continues from there. This guarantees hot switching of tasks while letting the first task continue to be processed on the GPU and run normally.
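Site restoration pairs naturally with the save step from S101/S102. A hedged sketch, with a dict standing in for device memory and all names assumed for illustration:

```python
# Sketch of site restoration: copy the saved video-memory information back
# into (stand-in) GPU state and continue from the interrupted state.

class ResumableTask:
    def __init__(self):
        self.device_state = {"step": 7}   # stands in for GPU video memory
        self.host_backup = None           # site saved off the GPU

    def save_and_release(self):
        self.host_backup = dict(self.device_state)  # protect the site
        self.device_state = None                    # release video memory

    def restore_and_continue(self):
        # Reacquire the GPU, copy the backup back into device memory,
        # then keep running from the interrupted state.
        self.device_state = dict(self.host_backup)
        self.device_state["step"] += 1              # continue processing

task = ResumableTask()
task.save_and_release()
task.restore_and_continue()
```

The key property is that the task object itself never exits between `save_and_release` and `restore_and_continue`, which is what makes the switch "hot".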
It may be understood that the first and second tasks described in the present disclosure may be any tasks in an application that must be executed with GPU resources; the present disclosure does not restrict this. Further, the first and second tasks may even be tasks able to run on the same graphics card in the same time slice: even then, when one of them is in a computational-resource-idle state, it can protect its site and hand over the GPU usage right, maximizing effective use of the GPU resources.
With this scheme, whenever a task's computational resources are idle, the information about the video memory it uses is saved and the video memory is then released, so that the GPU usage right is freed for other tasks; resources are never occupied but unused, and GPU utilization improves.
The following examples consider two tasks, task A and task B, whose required computational and video memory resources cannot run on the same graphics card at the same time but can run on it in different time slices; that is, task A and task B are mutually exclusive, or essentially their GPU tasks are mutually exclusive.
On this premise, two examples illustrate the specific flow:
as an example one, as shown in fig. 2, the steps include:
step 1: and starting the task A to run in the CPU, and acquiring the use permission of the GPU, namely acquiring the computing power resource and the video memory resource of the GPU resource required by the normal running of the task A.
Step 2: task A enters a waiting state (a computational-resource-idle state), meaning it occupies the video memory of the GPU resources but does not use the computational resources; the computational resources assigned to task A are therefore idle. Task A then protects its site: it stores the video memory resource information (the first video memory resource information) of the video memory it occupies (the first video memory resource) to main memory or disk, and then releases the video memory within the GPU resources. Note that after releasing the video memory, task A no longer has the right to use the GPU's computational resources, i.e. it has released its GPU usage right. Once the video memory resource information has been saved, the protection of the site is considered complete, and task A emits a resource release signal announcing that the GPU usage right is free.
It can be understood that protecting the site may specifically refer to both saving the video memory resource information and releasing the video memory resource; alternatively, protecting the site may refer only to saving the video memory resource information, with the video memory resource released after the site is protected. The scheme of the present disclosure is not limited in this regard. It should also be noted that the resource release signal may be sent after the site has been protected, that is, after the video memory resource information has been successfully saved, or after the video memory resource of the GPU resource has been released; the present disclosure does not limit this either.
It can be understood that the order of sending the resource release signal and releasing the video memory resource may vary, and the scheme of the present disclosure is not limited in this respect. For example, the resource release signal may be sent after the video memory resource is released. Even if the resource release signal is sent before the video memory resource is released, task B only obtains the usage right of the GPU resource after task A has released it, which ensures that task B can successfully obtain the usage right of the GPU resource.
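The protect-site sequence described above (save the occupied video memory resource information to main memory or disk, release the video memory resource, and send the resource release signal) can be sketched as follows. This is an illustrative simulation only: a Python dict stands in for real GPU video memory, a second dict stands in for main memory or disk, and a `threading.Event` models the resource release signal; none of these names come from the disclosed implementation.

```python
import threading

# Simulated GPU video memory: buffer name -> contents.
gpu_video_memory = {}

# Host-side store standing in for main memory or disk.
host_store = {}

# Resource release signal, modeled as an event other tasks can wait on.
resource_release_signal = threading.Event()

def protect_site(task_id):
    """Save the task's video memory resource information to the host,
    release the video memory, then signal that the GPU is free."""
    # 1. Save the video memory resource information (protect the site).
    host_store[task_id] = dict(gpu_video_memory)
    # 2. Release the video memory resource in the GPU resource.
    gpu_video_memory.clear()
    # 3. Send the resource release signal; as noted above, its ordering
    #    relative to the release itself may differ.
    resource_release_signal.set()

gpu_video_memory["task_a_buffer"] = [1.0, 2.0, 3.0]
protect_site("task_a")
```

A task waiting to acquire the GPU would block on `resource_release_signal.wait()` and proceed once the signal is set.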
Step 3: task B is started and runs on the CPU and needs to obtain the usage right of the GPU resource, which task B does not yet hold at this point. After capturing the resource release signal sent by task A, task B learns that the GPU resource is idle and obtains the usage right of the GPU resource, that is, acquires the computing power resources and video memory resources of the GPU resource required for task B to run normally.
Step 4: task B runs normally. While task B is running, it learns that task A needs to return to the running state, that is, task B receives the resource recovery signal sent by task A, and task B actively enters a waiting state; for example, task B stops using the computing power resource in the GPU resource while still occupying the video memory resource. Here, task A can be understood as having a higher priority than task B, so task B must release the GPU resource so that task A can run normally.
Step 5: task B protects the site; that is, task B saves the video memory resource information (i.e., the second video memory resource information) of the video memory resource it occupies (i.e., the second video memory resource) to main memory or to disk, and then releases the video memory resource in the GPU resource. Once the video memory resource information has been saved, task B is considered to have finished protecting the site, and at this point task B sends out a resource release signal to indicate that the usage right of the GPU resource has been released.
Step 6: after capturing the resource release signal sent by task B, task A acquires the usage right of the GPU resources (the video memory resource and the computing power resource) and restores the site. Here, restoring the site means copying the video memory resource information for task A, previously saved in main memory or on disk, back into the video memory of the GPU resource (the video memory used when task A runs). Once the site has been restored, task A can run normally. At this point, task A continues to run from the aforementioned state, i.e., the waiting state of step 2.
In this way, from task A's perspective, the sub-tasks already running on the GPU resource do not perceive any interruption caused by the protect-site and restore-site operations, and therefore continue to run from the previous state (i.e., the waiting state of step 2).
Further, after task A enters the waiting state again, step 2 is executed once more; then, after task B captures the resource release signal sent by task A, task B learns that the GPU resource is idle and restores its own site. Once the site has been restored, task B runs normally. This process repeats until the tasks finish executing.
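The six steps of example one can be traced end to end with a small simulation. The sketch below assumes nothing beyond the flow above: a toy `GPU` object holds the usage right and a dict standing in for video memory, `protect_site`/`restore_site` save and restore that dict through host-side storage, and clearing `holder` plays the role of the resource release signal. All class and variable names are illustrative, not part of the disclosed implementation.

```python
import copy

class GPU:
    """Toy single-GPU model: one holder of the usage right at a time."""
    def __init__(self):
        self.video_memory = {}   # buffer name -> data
        self.holder = None       # task currently holding the usage right

class Task:
    def __init__(self, name):
        self.name = name
        self.saved_site = None   # video memory info saved to host/disk

    def acquire(self, gpu):
        # The usage right of the GPU resource is mutually exclusive.
        assert gpu.holder is None, "GPU resource is still in use"
        gpu.holder = self.name

    def protect_site(self, gpu):
        # Save the video memory resource information to the host, then
        # release the video memory and the usage right of the GPU.
        self.saved_site = copy.deepcopy(gpu.video_memory)
        gpu.video_memory.clear()
        gpu.holder = None        # stands in for the resource release signal

    def restore_site(self, gpu):
        # Re-acquire the usage right and copy the saved information back.
        self.acquire(gpu)
        gpu.video_memory.update(self.saved_site)

gpu = GPU()
a, b = Task("A"), Task("B")

# Step 1: task A runs and acquires the GPU usage right.
a.acquire(gpu)
gpu.video_memory["A_buf"] = [1, 2, 3]

# Step 2: task A enters the waiting state, protects the site, releases the GPU.
a.protect_site(gpu)

# Steps 3-4: task B captures the release and acquires the GPU.
b.acquire(gpu)
gpu.video_memory["B_buf"] = [4, 5]

# Steps 5-6: task B yields for A; A restores its site and continues running.
b.protect_site(gpu)
a.restore_site(gpu)
```

After the final step, task A again holds the usage right with its original buffers restored, while task B's site remains saved on the host, ready for task B's own later restore.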
Example two, as shown in fig. 3, includes the following steps:
Step 1: task A is started and runs on the CPU, and acquires the usage right of the GPU, that is, acquires the computing power resources and video memory resources of the GPU resource required for task A to run normally.
Step 2: task A enters a waiting state (i.e., a computing power resource idle state). Here, the waiting state refers to a state in which task A occupies the video memory resource of the GPU resource but does not use its computing power resource; at this time, the computing power resource allocated to task A in the GPU resource is idle, so task A may be said to be in a computing power resource idle state. Further, task A protects the site, where protecting the site means that task A saves the video memory resource information (i.e., the first video memory resource information) of the video memory resource it occupies (i.e., the first video memory resource) to main memory or to disk; task A then releases the video memory resource in the GPU resource. It should be noted that after task A releases the video memory resource, it no longer has the right to use the computing power resource of the GPU resource; in other words, task A has released the usage right of the GPU resource. Once the video memory resource information has been saved, task A is considered to have finished protecting the site, and at this point task A sends out a resource release signal to indicate that the usage right of the GPU resource has been released.
It can be understood that protecting the site may specifically refer to both saving the video memory resource information and releasing the video memory resource; alternatively, protecting the site may refer only to saving the video memory resource information, with the video memory resource released after the site is protected. The scheme of the present disclosure is not limited in this regard. It should also be noted that the resource release signal may be sent after the site has been protected, that is, after the video memory resource information has been successfully saved, or after the video memory resource of the GPU resource has been released; the present disclosure does not limit this either.
It can be understood that the order of sending the resource release signal and releasing the video memory resource may vary, and the scheme of the present disclosure is not limited in this respect. For example, the resource release signal may be sent after the video memory resource is released. Even if the resource release signal is sent before the video memory resource is released, task B only obtains the usage right of the GPU resource after task A has released it, which ensures that task B can successfully obtain the usage right of the GPU resource.
And step 3: the task B is started to run in the CPU, and the use permission of the GPU resource is required to be obtained, wherein the task B does not represent the use permission of the GPU resource; further, after capturing the resource release signal sent by the task A, the task B learns that the GPU resource is in an idle state, and acquires the use permission of the GPU resource, namely acquires the computing power resource and the video memory resource of the GPU resource required by the normal operation of the task B.
And 4, step 4: the task B runs normally, and then, the task B enters a waiting state (i.e., an computing resource idle state), where the waiting state refers to a state in which the task B occupies a video memory resource of the GPU resource but does not use the computing resource thereof, and at this time, the computing resource for the task B in the GPU resource is in an idle state, which may be referred to as the task B being in the computing resource idle state. Further, the task B protects a site, where the protecting site refers to that the task B stores video memory resource information (i.e., second video memory resource information) of a video memory resource (i.e., second video memory resource) occupied by the task B to a memory or a disk; and then, the task B releases the video memory resources in the GPU resources. It should be noted that, after the task B releases the video memory resource targeted in the GPU resource, the task B does not have the right to use the computational resource of the GPU resource, that is, the task B releases the right to use the GPU resource. After the video memory resource information is stored, the task B is considered to finish the protection field execution, and at the moment, the task B sends a resource release signal to indicate that the use authority of the GPU resource is released.
And 5: when other tasks need to use GPU resources, or the task a needs to recover the running state, here, the example that the task a needs to recover the running state is described; specifically, after capturing a resource release signal sent by the task B, the task a acquires the usage right of the GPU resource (video memory resource and computational resource), and restores the field. Here, the restoring the site refers to copying the video memory resource information for the task a, which has been previously saved in the memory or the disk, to the video memory of the GPU resource (this video memory is the video memory used when the task a runs). And after the recovery site is finished, the task A can normally run. At this time, the task a continues to run based on the aforementioned state, i.e., the waiting state in step 2.
In this way, from task A's perspective, the sub-tasks already running on the GPU resource do not perceive any interruption caused by the protect-site and restore-site operations, and therefore continue to run from the previous state (i.e., the waiting state of step 2).
Further, after task A enters the waiting state again, step 2 is executed once more; then, if task B needs to restore its site and captures the resource release signal sent by task A, task B learns that the GPU resource is idle and restores its site. Once the site has been restored, task B runs normally. This process repeats until the tasks finish executing.
It can be understood that the above examples use only task A and task B for illustration; in practical applications there may be more tasks, which use the GPU resource in succession based on the scheme of the present disclosure. For example, when a given task enters a waiting state (which can also be understood as an interruption of its operation), it protects the site; when that task needs to obtain the usage right of the GPU resource again after the interruption, it restores the site and continues running from the state at which it was interrupted. In addition, in this example, high-priority tasks may also be given precedence: for example, when a high-priority task needs to use the GPU resource, a running low-priority task can be interrupted by a signal, and the low-priority task thus forced to stop protects its site.
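The priority-based interruption just mentioned (a high-priority task signals a running low-priority task, which then protects its site and releases the GPU) can be sketched with two threads. This is a hypothetical simulation: `interrupt` models the resource recovery signal, `released` models the resource release signal, and plain dicts stand in for GPU video memory and host storage; the names are not taken from the disclosed implementation.

```python
import threading

class PreemptableTask(threading.Thread):
    """Low-priority task that, when signaled by a higher-priority task,
    protects its site and releases the GPU video memory it occupies."""
    def __init__(self, task_id, gpu_memory, host_store):
        super().__init__()
        self.task_id = task_id
        self.gpu_memory = gpu_memory
        self.host_store = host_store
        self.interrupt = threading.Event()   # resource recovery signal
        self.released = threading.Event()    # resource release signal

    def run(self):
        # Occupy video memory while running.
        self.gpu_memory[self.task_id] = "intermediate results"
        # Block until a high-priority task signals it needs the GPU.
        self.interrupt.wait()
        # Protect the site: save the video memory info to the host,
        # then release the video memory and signal the release.
        self.host_store[self.task_id] = self.gpu_memory.pop(self.task_id)
        self.released.set()

gpu_memory, host_store = {}, {}
low = PreemptableTask("low_prio", gpu_memory, host_store)
low.start()

# The high-priority task needs the GPU: send the recovery signal, then
# wait for the low-priority task's release signal before using the GPU.
low.interrupt.set()
low.released.wait()
gpu_memory["high_prio"] = "running"
low.join()
```

Once the high-priority task finishes, the low-priority task could restore its site from `host_store` and continue from the interrupted state, mirroring the restore-site step of the examples above.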
According to the scheme of the present disclosure, time-division multiplexing of the GPU by multiple tasks is achieved, enabling time-shared co-location of GPU tasks. At the same time, every GPU task can remain resident rather than exiting when interrupted, and can continue running from its state at the previous interruption, which maximizes the utilization of GPU resources.
The present disclosure also provides a task processing apparatus, as shown in fig. 4, including:
a first information processing unit 401, configured to, when a first task uses a first video memory resource in a GPU resource and does not use a first computational resource in the GPU resource, save first video memory resource information of the first video memory resource used by the first task;
a resource processing unit 402, configured to release the first video memory resource used by the first task when the storage is successful.
In a specific example of the scheme of the present disclosure,
the first information processing unit is specifically configured to store first video memory resource information of the first video memory resource used by the first task in a storage medium other than the GPU.
In a specific example of the present disclosure, the apparatus further includes:
and the first sending unit is used for sending a first resource release signal for releasing the first video memory resource.
In a specific example of the scheme of the present disclosure,
the resource processing unit is further configured to trigger the second task to acquire the GPU resource when a second task waiting for acquisition of the GPU resource exists after the first resource release signal is sent.
In a specific example of the disclosed solution, the second task is a task that cannot use the GPU resources simultaneously with the first task.
In a specific example of the present disclosure, the apparatus further includes:
a second sending unit, configured to send a first resource recovery signal that the first task needs to recover using the GPU resource.
In a specific example of the present disclosure, the apparatus further includes a second information processing unit, wherein:
the second information processing unit is used for triggering the second task to stop using a second computational power resource of the GPU resource in response to the first resource recovery signal aiming at the first task under the condition that the second task is processed based on the GPU resource; under the condition that the use of a second computing power resource of the GPU resource is stopped, second video memory resource information of a second video memory resource used by the second task is saved;
and the resource processing unit is further configured to release the second video memory resource used by the second task when the storage is successful.
In a specific example of the disclosed solution, wherein the priority of the second task is lower than the priority of the first task.
In a specific example of the present disclosure, the apparatus further includes a third information processing unit, wherein:
a third information processing unit, configured to store second video memory resource information of a second video memory resource used by the second task when the second task uses the second video memory resource in the GPU resource and does not use a second computational power resource in the GPU resource;
and the resource processing unit is further configured to release the second video memory resource used by the second task when the storage is successful.
In a specific example of the present disclosure, the apparatus further includes:
and the third sending unit is used for sending a second resource release signal for releasing the second video memory resource.
In a specific example of the scheme of the present disclosure,
the resource processing unit is further configured to copy the first video memory resource information to the GPU resource when the second task releases the second video memory resource; and continuing to process the first task based on the first video memory resource and the first computing power resource.
The specific functions of the units in the above device can be described with reference to the above method, and are not described again here.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 includes a computing unit 501, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. The RAM 503 can also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to one another by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 501 executes the respective methods and processes described above, such as the task processing method. For example, in some embodiments, the task processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 500 via ROM 502 and/or communications unit 509. When the computer program is loaded into the RAM503 and executed by the computing unit 501, one or more steps of the task processing method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the task processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (30)

1. A method of task processing, comprising:
under the condition that a first task is in a waiting state of using a first video memory resource in a GPU resource and not using a first computational power resource in the GPU resource, saving first video memory resource information of the first video memory resource used by the first task;
under the condition that the storage is successful, releasing the first video memory resource used by the first task, and interrupting the operation of the first task;
and under the condition that the running state needs to be restored again after the first task is interrupted from running, restoring the state that the first task releases the first video memory resource based on the saved first video memory resource information, and further continuing to process the first task based on the first video memory resource and the first computing resource.
2. The method of claim 1, wherein the saving first video memory resource information of the first video memory resource used by the first task comprises:
and storing the first video memory resource information of the first video memory resource used by the first task into a storage medium outside the GPU.
3. The method of claim 1 or 2, further comprising:
and sending a first resource release signal for releasing the first video memory resource.
4. The method of claim 3, further comprising:
and triggering the second task to acquire a second video memory resource and a second computing power resource of the GPU resource required by the operation of the second task under the condition that the second task waiting for acquiring the GPU resource exists after the first resource release signal is sent.
5. The method of claim 4, wherein the second task is a task that cannot use the GPU resources simultaneously with the first task.
6. The method of claim 4 or 5, further comprising:
sending a first resource recovery signal that the first task needs to recover using the GPU resources.
7. The method of claim 6, further comprising:
in response to the first resource resume signal for the first task, triggering the second task to cease using a second computing power resource of the GPU resources while processing the second task based on the second video memory resource and the second computing power resource of the GPU resources;
under the condition that the use of a second computing power resource of the GPU resource is stopped, second video memory resource information of the second video memory resource used by the second task is saved;
and under the condition that the storage is successful, releasing the second video memory resource used by the second task, and interrupting the operation of the second task.
8. The method of claim 7, wherein the second task has a lower priority than the first task.
9. The method of claim 4 or 5, further comprising:
under the condition that the second task is in a waiting state of using a second video memory resource in the GPU resource and not using a second computing power resource in the GPU resource, saving second video memory resource information of the second video memory resource used by the second task;
and under the condition of successful storage, releasing the second video memory resource used by the second task, and interrupting the operation of the second task.
10. The method of claim 6, further comprising:
under the condition that the second task is in a waiting state of using a second video memory resource in the GPU resource and not using a second computing power resource in the GPU resource, saving second video memory resource information of the second video memory resource used by the second task;
and under the condition of successful storage, releasing the second video memory resource used by the second task, and interrupting the operation of the second task.
11. The method of claim 9, further comprising:
and sending a second resource release signal for releasing the second video memory resource.
12. The method of claim 10, further comprising:
and sending a second resource release signal for releasing the second video memory resource.
13. The method of any one of claims 7, 8, or 10 to 12, further comprising:
under the condition that the second task releases the second video memory resource, copying the first video memory resource information into the GPU resource;
and continuing to process the first task based on the first video memory resource and the first computing resource.
14. The method of claim 9, further comprising:
under the condition that the second task releases the second video memory resource, copying the first video memory resource information into the GPU resource;
and continuing to process the first task based on the first video memory resource and the first computing resource.
15. A task processing device comprising:
the first information processing unit is used for saving first video memory resource information of a first video memory resource used by a first task under the condition that the first task is in a waiting state of using the first video memory resource in GPU resources and not using the first computing power resource in the GPU resources;
the resource processing unit is used for releasing the first video memory resource used by the first task under the condition of successful storage, and the first task is interrupted to run;
the resource processing unit is further configured to, when the running state needs to be restored again after the first task is interrupted from running, restore the state where the first task releases the first video memory resource based on the saved first video memory resource information, and then continue to process the first task based on the first video memory resource and the first computing power resource.
16. The apparatus of claim 15, wherein,
the first information processing unit is specifically configured to store first video memory resource information of the first video memory resource used by the first task in a storage medium other than the GPU.
17. The apparatus of claim 15 or 16, further comprising:
and the first sending unit is used for sending a first resource release signal for releasing the first video memory resource.
18. The apparatus of claim 17, wherein,
the resource processing unit is further configured to, after the first resource release signal is sent, trigger the second task to acquire a second video memory resource and a second computational power resource of the GPU resource, where the second task waits to acquire the GPU resource, when the second task exists.
19. The apparatus of claim 18, wherein the second task is a task that cannot use the GPU resources simultaneously with the first task.
20. The apparatus of claim 18 or 19, further comprising:
a second sending unit, configured to send a first resource recovery signal that the first task needs to recover using the GPU resource.
21. The apparatus of claim 20, further comprising: a second information processing unit; wherein:
the second information processing unit is configured to, while the second task is being processed based on the second video memory resource and the second computing power resource of the GPU resources, trigger the second task to stop using the second computing power resource in response to the first resource recovery signal for the first task, and, once the second task has stopped using the second computing power resource, save second video memory resource information of the second video memory resource used by the second task;
the resource processing unit is further configured to release the second video memory resource used by the second task when the saving succeeds, whereupon the running of the second task is interrupted.
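The preemption ordering in claims 20–22 can be sketched as below. This is a self-contained illustration under assumed names (`handle_recovery_signal` and the dict keys are hypothetical, not from the patent): on a recovery signal for a higher-priority first task, the lower-priority second task first stops using the compute resource, then saves its video memory information, then releases its video memory.

```python
# Hypothetical sketch of claims 20-22: preempt a lower-priority task
# when a resource recovery signal arrives for a higher-priority one.
def handle_recovery_signal(running: dict, recovering: dict) -> str:
    """Preempt `running` only if `recovering` has strictly higher priority."""
    if running["priority"] >= recovering["priority"]:
        return "ignored"                      # claim 22: only a lower-priority task yields
    running["state"] = "compute-stopped"      # step 1: stop using the compute resource
    running["checkpoint"] = {"vram_mb": running["vram_mb"]}  # step 2: save VRAM info
    if running["checkpoint"] is not None:     # step 3: release only after a good save
        running["vram_mb"] = 0                # video memory released
        running["state"] = "suspended"        # the second task is interrupted
    return "preempted"

second = {"priority": 1, "vram_mb": 256, "state": "running", "checkpoint": None}
first = {"priority": 9, "vram_mb": 0, "state": "waiting", "checkpoint": None}
assert handle_recovery_signal(second, first) == "preempted"
assert second["state"] == "suspended" and second["vram_mb"] == 0
```

Stopping compute before saving matters: it guarantees the video memory contents are quiescent when they are checkpointed.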
22. The apparatus of claim 21, wherein the second task has a lower priority than the first task.
23. The apparatus of claim 18 or 19, further comprising: a third information processing unit; wherein,
the third information processing unit is configured to save second video memory resource information of a second video memory resource used by the second task when the second task is in a waiting state in which it uses the second video memory resource of the GPU resources but does not use a second computing power resource of the GPU resources;
the resource processing unit is further configured to release the second video memory resource used by the second task when the saving succeeds, whereupon the running of the second task is interrupted.
24. The apparatus of claim 20, further comprising: a third information processing unit; wherein,
the third information processing unit is configured to save second video memory resource information of a second video memory resource used by the second task when the second task is in a waiting state in which it uses the second video memory resource of the GPU resources but does not use a second computing power resource of the GPU resources;
the resource processing unit is further configured to release the second video memory resource used by the second task when the saving succeeds, whereupon the running of the second task is interrupted.
25. The apparatus of claim 23, further comprising:
a third sending unit, configured to send a second resource release signal indicating release of the second video memory resource.
26. The apparatus of claim 24, further comprising:
a third sending unit, configured to send a second resource release signal indicating release of the second video memory resource.
27. The apparatus of any one of claims 21, 22, and 24 to 26, wherein,
the resource processing unit is further configured to, when the second task releases the second video memory resource, copy the first video memory resource information back to the GPU resources, and continue to process the first task based on the first video memory resource and the first computing power resource.
28. The apparatus of claim 25, wherein,
the resource processing unit is further configured to, when the second task releases the second video memory resource, copy the first video memory resource information back to the GPU resources, and continue to process the first task based on the first video memory resource and the first computing power resource.
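The hand-back step of claims 27–28 can be sketched as a single transition: the second task's released video memory makes room, the first task's saved information is copied back into GPU memory, and the first task continues. The function name and parameters below are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of claims 27-28: restore the first task's video
# memory once the second task has released its own.
def hand_back(gpu_free_mb: int, second_released_mb: int,
              first_checkpoint: dict) -> tuple[int, dict]:
    """Return (remaining free VRAM in MB, first task's restored device state)."""
    gpu_free_mb += second_released_mb      # second task releases its video memory
    needed = sum(first_checkpoint.values())
    assert gpu_free_mb >= needed, "not enough free video memory to restore"
    restored = dict(first_checkpoint)      # copy saved information back to the GPU
    return gpu_free_mb - needed, restored

free, state = hand_back(gpu_free_mb=384, second_released_mb=256,
                        first_checkpoint={"weights": 512, "activations": 128})
assert free == 0 and state["weights"] == 512
```

Ordering the release before the copy-back is what lets two tasks that cannot fit on the GPU simultaneously (claim 19) share it over time.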
29. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-14.
30. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-14.
CN202111589019.0A 2021-12-23 2021-12-23 Task processing method, device, equipment and storage medium Active CN114217976B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111589019.0A CN114217976B (en) 2021-12-23 2021-12-23 Task processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114217976A CN114217976A (en) 2022-03-22
CN114217976B true CN114217976B (en) 2023-02-28

Family

ID=80705310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111589019.0A Active CN114217976B (en) 2021-12-23 2021-12-23 Task processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114217976B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208103A (en) * 2013-04-15 2013-07-17 中国科学院苏州纳米技术与纳米仿生研究所 Graphic processing unit (GPU)-based low-luminance image enhancement method
CN109766183A (en) * 2018-12-28 2019-05-17 郑州云海信息技术有限公司 A kind of method and system of cluster GPU multiplexing and intelligent load
CN109918233A (en) * 2019-03-06 2019-06-21 珠海金山网络游戏科技有限公司 A kind of data processing method, calculates equipment and storage medium at device
CN109961404A (en) * 2017-12-25 2019-07-02 沈阳灵景智能科技有限公司 A kind of high clear video image Enhancement Method based on GPU parallel computation
CN111124691A (en) * 2020-01-02 2020-05-08 上海交通大学 Multi-process shared GPU (graphics processing Unit) scheduling method and system and electronic equipment
CN112506666A (en) * 2020-12-22 2021-03-16 鹏城实验室 GPU time-sharing method and system based on drive packaging
CN113450770A (en) * 2021-06-25 2021-09-28 平安科技(深圳)有限公司 Voice feature extraction method, device, equipment and medium based on display card resources

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457135A (en) * 2019-08-09 2019-11-15 重庆紫光华山智安科技有限公司 A kind of method of resource regulating method, device and shared GPU video memory

Also Published As

Publication number Publication date
CN114217976A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
EP2492810A1 (en) Method and device for managing operating systems in embedded system
US10261874B2 (en) Enabling a cloud controller to communicate with power systems
CN113032152B (en) Scheduling method, scheduling apparatus, electronic device, storage medium, and program product for deep learning framework
CN113867916B (en) Task processing method and device and electronic equipment
CN114328098B (en) Slow node detection method and device, electronic equipment and storage medium
CN112650575A (en) Resource scheduling method and device and cloud service system
CN111858040A (en) Resource scheduling method and device
CN115904761A (en) System on chip, vehicle and video processing unit virtualization method
CN113360266B (en) Task processing method and device
CN114217976B (en) Task processing method, device, equipment and storage medium
US9436505B2 (en) Power management for host with devices assigned to virtual machines
CN113051055A (en) Task processing method and device
CN112965799A (en) Task state prompting method and device, electronic equipment and medium
CN116431313A (en) Scheduling method, device, equipment and medium for polling task
US10402234B2 (en) Fine-grain synchronization in data-parallel jobs
US20160378536A1 (en) Control method and information processing device
CN114374657A (en) Data processing method and device
CN113595887A (en) Flow control method and device in mail system
JP2018538632A (en) Method and device for processing data after node restart
CN113867920A (en) Task processing method and device, electronic equipment and medium
EP4030735A1 (en) Method of data interaction, data interaction apparatus, electronic device and non-transitory computer readable storage medium
CN114006902B (en) Cloud mobile phone restarting method, device, equipment and storage medium
CN113138881B (en) Distributed file system backup method, device and system
CN113760319B (en) Method and system for updating application
CN116707786A (en) Elastic telescopic load method, device and equipment of server cipher machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant