CN117453486A - Method, device, equipment and medium for determining GPU utilization rate of process - Google Patents

Method, device, equipment and medium for determining GPU utilization rate of process

Info

Publication number
CN117453486A
CN117453486A (application number CN202311507693.9A)
Authority
CN
China
Prior art keywords
gpu
time
task
target
sampling period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311507693.9A
Other languages
Chinese (zh)
Inventor
Name withheld at the applicant's request
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Moore Threads Technology Co Ltd
Original Assignee
Moore Threads Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Moore Threads Technology Co Ltd filed Critical Moore Threads Technology Co Ltd
Priority to CN202311507693.9A priority Critical patent/CN117453486A/en
Publication of CN117453486A publication Critical patent/CN117453486A/en
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/30: Monitoring
    • G06F 11/3003: Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3024: Monitoring arrangements where the computing system component is a central processing unit [CPU]
    • G06F 11/3051: Monitoring arrangements for monitoring the configuration of the computing system or of the computing system component, e.g. monitoring the presence of processing resources, peripherals, I/O links, software programs
    • G06F 11/3055: Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The disclosure relates to a method, an apparatus, a device, and a medium for determining process-oriented GPU utilization. The method includes: for any process, obtaining a target GPU use start time and a target GPU use end time of each task in the process within a target sampling period of GPU utilization, where the target GPU use start time of any task represents the time at which the task starts using GPU resources within the target sampling period, and the target GPU use end time of the task represents the time at which the task stops using GPU resources within the target sampling period; determining the GPU total utilization time of the tasks within the target sampling period according to the target GPU use start time and the target GPU use end time of each task; and determining the GPU utilization of the process according to the GPU total utilization time and the length of the target sampling period.

Description

Method, device, equipment and medium for determining GPU utilization rate of process
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method for determining process-oriented GPU utilization, a device for determining process-oriented GPU utilization, an electronic device, and a storage medium.
Background
Determining GPU (Graphics Processing Unit) utilization at the process level enables graphics card users to view and monitor how individual processes use the GPU, so determining process-level GPU utilization is of practical significance. How to determine process-level GPU utilization is therefore a technical problem to be solved.
Disclosure of Invention
The disclosure provides a technical scheme for determining the utilization rate of a process-oriented GPU.
According to an aspect of the present disclosure, there is provided a method for determining a process-oriented GPU utilization, including:
for any process, obtaining a target GPU use start time and a target GPU use end time of each task in the process within a target sampling period of GPU utilization, wherein the target GPU use start time of any task represents the time at which the task starts using GPU resources within the target sampling period, and the target GPU use end time of the task represents the time at which the task stops using GPU resources within the target sampling period;
determining the GPU total utilization time of the tasks within the target sampling period according to the target GPU use start time and the target GPU use end time of each task;
and determining the GPU utilization of the process according to the GPU total utilization time and the length of the target sampling period.
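The three steps above can be sketched in code. The following is a minimal illustration, not the patented implementation: the `(start, end)` task tuples, the interval merging, and the function name are all assumptions made for the example, since the claims do not prescribe a particular data structure.

```python
from typing import List, Tuple

def gpu_utilization(tasks: List[Tuple[float, float]],
                    period_start: float, period_end: float) -> float:
    """Fraction of the sampling period during which at least one task
    of the process was using the GPU.

    Each task is a (start, end) pair already clamped to the sampling
    period, i.e. the "target GPU use start/end time" described above.
    """
    if not tasks:
        return 0.0
    # Merge overlapping usage intervals so that concurrently running
    # tasks are not double-counted, then sum the covered time.
    intervals = sorted(tasks)
    covered = 0.0
    cur_start, cur_end = intervals[0]
    for start, end in intervals[1:]:
        if start <= cur_end:           # overlaps or touches the current run
            cur_end = max(cur_end, end)
        else:                          # a GPU-idle gap between tasks
            covered += cur_end - cur_start
            cur_start, cur_end = start, end
    covered += cur_end - cur_start
    return covered / (period_end - period_start)
```

With the four tasks of Fig. 2 mapped to numeric times, `gpu_utilization([(1, 4), (2, 5), (3, 4), (6, 7)], 0, 10)` covers the merged intervals [1, 5] and [6, 7], giving 5 busy units out of 10.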
In one possible implementation manner, the determining the GPU total utilization time of each task in the target sampling period according to the target GPU use start time and the target GPU use end time of each task includes:
determining the total idle time of the GPU corresponding to the process in the target sampling period according to the starting time and the ending time of the target sampling period and the target GPU use starting time and the target GPU use ending time of each task;
and determining the difference between the length of the target sampling period and the GPU total idle time as the GPU total utilization time of each task in the target sampling period.
In one possible implementation manner, the determining the GPU total idle time corresponding to the process in the target sampling period according to the starting time and the ending time of the target sampling period, and the target GPU use starting time and the target GPU use ending time of each task includes:
determining a first GPU idle time between the starting time of the target sampling period and the task in the process according to the starting time of the target sampling period and the earliest target GPU use starting time in the target GPU use starting time of each task;
determining a second GPU idle time between the tasks according to the target GPU use start time and the target GPU use end time of each task;
determining a third GPU idle time between the task in the process and the ending time of the target sampling period according to the latest target GPU use ending time in the target GPU use ending time of each task and the ending time of the target sampling period;
and determining the total idle time of the GPU corresponding to the process in the target sampling period according to the sum of the first idle time of the GPU, the second idle time of the GPU and the third idle time of the GPU.
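The decomposition into a first, second, and third GPU idle time can be sketched as follows. This is a hypothetical illustration, assuming each task is a `(start, end)` pair already clamped to the sampling period; subtracting the returned value from the period length then yields the GPU total utilization time, as in the claim above.

```python
def gpu_total_idle_time(tasks, period_start, period_end):
    """Sum of the first, second and third GPU idle times described above.

    tasks: (start, end) pairs clamped to [period_start, period_end].
    """
    if not tasks:
        return period_end - period_start
    intervals = sorted(tasks)
    # First idle time: from the period start to the earliest task start.
    first_idle = intervals[0][0] - period_start
    # Second idle time: gaps between tasks, i.e. after an "idle trigger"
    # task ends and before the next task starts.
    second_idle = 0.0
    cur_end = intervals[0][1]
    for start, end in intervals[1:]:
        if start > cur_end:
            second_idle += start - cur_end
        cur_end = max(cur_end, end)
    # Third idle time: from the latest task end to the period end.
    third_idle = period_end - cur_end
    return first_idle + second_idle + third_idle
```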
In one possible implementation manner, the determining the second GPU idle time between the tasks according to the target GPU usage start time and the target GPU usage end time of the tasks includes:
determining idle trigger tasks among the tasks according to the target GPU use start time and the target GPU use end time of each task, wherein the idle trigger tasks do not include the task with the latest target GPU use end time among the tasks, and for any idle trigger task, the process suspends use of GPU resources in response to the end of the idle trigger task;
and for each idle trigger task, respectively determining a second GPU idle time between the idle trigger task and a next task of the idle trigger task, wherein for any idle trigger task, the next task of the idle trigger task represents the task in the process that uses GPU resources earliest after the target GPU use end time of the idle trigger task.
In one possible implementation, the method further includes:
for any task in the process, determining the task as an idle trigger task in response to the target GPU use end time of the task not falling between the target GPU use start time and the target GPU use end time of any other task in the process, and the target GPU use end time of the task not being the same as the target GPU use end time of any other task whose target GPU use start time is earlier than that of the task.
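The predicate can be sketched as follows (a hypothetical illustration; the tuple representation and the identity-based self-exclusion are assumptions). Note that, per the preceding claim, the task with the latest target GPU use end time is additionally excluded from the idle trigger tasks even when this predicate holds for it.

```python
def is_idle_trigger(task, tasks):
    """Whether `task` triggers a GPU idle gap when it ends.

    task and the elements of tasks are (start, end) pairs clamped to
    the sampling period. A task triggers idle if no other task is still
    running at its end time: its end does not fall inside another
    task's interval, and no earlier-starting task ends at exactly the
    same time (ties are attributed to the earliest-starting task).
    """
    start, end = task
    for other in tasks:
        if other is task:              # skip the task itself
            continue
        o_start, o_end = other
        if o_start < end < o_end:      # another task is still running
            return False
        if o_end == end and o_start < start:   # tie, earlier start wins
            return False
    return True
```

For the Fig. 2 tasks, TA1 and TA3 end while TA2 is still running, so only TA2 is an idle trigger task for the gap before TA4.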
In one possible implementation manner, the obtaining, in a target sampling period of GPU utilization, the target GPU use start time of each task in the process includes:
for any task in the process, responding to the fact that the GPU use starting time of the task is later than or equal to the starting time of the target sampling period, determining the GPU use starting time of the task as the target GPU use starting time of the task, wherein the GPU use starting time of the task represents the time when the task starts to use GPU resources;
Or,
for any task in the process, responding to the fact that the GPU use starting time of the task is earlier than the starting time of the target sampling period, and determining the starting time of the target sampling period as the target GPU use starting time of the task.
In one possible implementation manner, the obtaining, in a target sampling period of GPU utilization, the target GPU use end time of each task in the process includes:
for any task in the process, responding to the fact that the GPU use ending time of the task is earlier than or equal to the ending time of the target sampling period, determining the GPU use ending time of the task as the target GPU use ending time of the task, wherein the GPU use ending time of the task represents the time when the task finishes using GPU resources;
or,
and for any task in the process, responding to the fact that the GPU use end time of the task is later than the end time of the target sampling period, and determining the end time of the target sampling period as the target GPU use end time of the task.
In one possible implementation, the method further includes:
in response to the end of any task in the process, storing the target GPU use start time and the target GPU use end time of the task.
In one possible implementation, the tasks in the process are stored in sequence according to the starting time of the tasks.
In one possible implementation manner, the determining the GPU utilization of the process according to the GPU total utilization time and the length of the target sampling period includes:
and determining the ratio of the total utilization time of the GPU to the length of the target sampling period as the GPU utilization rate of the process.
According to an aspect of the present disclosure, there is provided a process-oriented GPU utilization determining apparatus, including:
an obtaining module, configured to obtain, for any process, a target GPU use start time and a target GPU use end time of each task in the process within a target sampling period of GPU utilization, wherein the target GPU use start time of any task represents the time at which the task starts using GPU resources within the target sampling period, and the target GPU use end time of the task represents the time at which the task stops using GPU resources within the target sampling period;
a first determining module, configured to determine the GPU total utilization time of each task in the target sampling period according to the target GPU use start time and the target GPU use end time of each task;
and the second determining module is used for determining the GPU utilization rate of the process according to the total GPU utilization time and the length of the target sampling period.
In one possible implementation manner, the first determining module is configured to:
determining the total idle time of the GPU corresponding to the process in the target sampling period according to the starting time and the ending time of the target sampling period and the target GPU use starting time and the target GPU use ending time of each task;
and determining the difference value between the length of the target sampling period and the total idle time of the GPU as the total utilization time of the GPU of each task in the target sampling period.
In one possible implementation manner, the first determining module is configured to:
determining a first GPU idle time between the starting time of the target sampling period and the task in the process according to the starting time of the target sampling period and the earliest target GPU use starting time in the target GPU use starting time of each task;
determining a second GPU idle time between the tasks according to the target GPU use start time and the target GPU use end time of each task;
determining a third GPU idle time between the task in the process and the ending time of the target sampling period according to the latest target GPU use ending time in the target GPU use ending time of each task and the ending time of the target sampling period;
and determining the total idle time of the GPU corresponding to the process in the target sampling period according to the sum of the first idle time of the GPU, the second idle time of the GPU and the third idle time of the GPU.
In one possible implementation manner, the first determining module is configured to:
determining idle trigger tasks among the tasks according to the target GPU use start time and the target GPU use end time of each task, wherein the idle trigger tasks do not include the task with the latest target GPU use end time among the tasks, and for any idle trigger task, the process suspends use of GPU resources in response to the end of the idle trigger task;
and for each idle trigger task, respectively determining a second GPU idle time between the idle trigger task and a next task of the idle trigger task, wherein for any idle trigger task, the next task of the idle trigger task represents a task which uses GPU resources earliest after the use end time of the target GPU of the idle trigger task in the process.
In one possible implementation, the apparatus further includes:
and a third determining module, configured to, for any task in the process, determine, as an idle trigger task, the task in response to the target GPU usage end time of the task not being between the target GPU usage start time and the target GPU usage end time of other tasks in the process, and the target GPU usage end time of the task not being the same as the target GPU usage end time of other tasks whose target GPU usage start time is earlier than the task.
In one possible implementation manner, the obtaining module is configured to:
for any task in the process, responding to the fact that the GPU use starting time of the task is later than or equal to the starting time of the target sampling period, determining the GPU use starting time of the task as the target GPU use starting time of the task, wherein the GPU use starting time of the task represents the time when the task starts to use GPU resources;
or,
for any task in the process, responding to the fact that the GPU use starting time of the task is earlier than the starting time of the target sampling period, and determining the starting time of the target sampling period as the target GPU use starting time of the task.
In one possible implementation manner, the obtaining module is configured to:
for any task in the process, responding to the fact that the GPU use ending time of the task is earlier than or equal to the ending time of the target sampling period, determining the GPU use ending time of the task as the target GPU use ending time of the task, wherein the GPU use ending time of the task represents the time when the task finishes using GPU resources;
or,
and for any task in the process, responding to the fact that the GPU use end time of the task is later than the end time of the target sampling period, and determining the end time of the target sampling period as the target GPU use end time of the task.
In one possible implementation, the apparatus further includes:
and the storage module is used for responding to the end of any task in the process and storing the use starting time of the target GPU and the use ending time of the target GPU of the task.
In one possible implementation, the tasks in the process are stored in sequence according to the starting time of the tasks.
In one possible implementation manner, the second determining module is configured to:
and determining the ratio of the total utilization time of the GPU to the length of the target sampling period as the GPU utilization rate of the process.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to an aspect of the present disclosure, there is provided a computer program product comprising a computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, a processor in the electronic device performs the above method.
In the embodiments of the disclosure, for any process, a target GPU use start time and a target GPU use end time of each task in the process are obtained within a target sampling period of GPU utilization, where the target GPU use start time of any task represents the time at which the task starts using GPU resources within the target sampling period, and the target GPU use end time of the task represents the time at which the task stops using GPU resources within the target sampling period. The GPU total utilization time of the tasks in the target sampling period is then determined according to the target GPU use start time and the target GPU use end time of each task, and the GPU utilization of the process is determined according to the GPU total utilization time and the length of the target sampling period.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of a method for determining process-oriented GPU utilization provided by an embodiment of the present disclosure.
Fig. 2 is a schematic diagram illustrating a target GPU usage start time and a target GPU usage end time of all tasks of a process in the method for determining the process-oriented GPU utilization provided in the embodiments of the present disclosure.
FIG. 3 shows a schematic diagram of a graphics pipeline flow in OpenGL technology.
Fig. 4 is a schematic diagram illustrating a method for determining process-oriented GPU utilization according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a process-oriented GPU utilization determination apparatus provided by an embodiment of the present disclosure.
Fig. 6 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
The embodiments of the disclosure provide a method for determining the GPU utilization of a process. For any process, a target GPU use start time and a target GPU use end time of each task in the process are obtained within a target sampling period of GPU utilization, where the target GPU use start time of any task represents the time at which the task starts using GPU resources within the target sampling period, and the target GPU use end time of the task represents the time at which the task stops using GPU resources within the target sampling period. The GPU total utilization time of the tasks in the target sampling period is determined according to the target GPU use start time and the target GPU use end time of each task, and the GPU utilization of the process is determined according to the GPU total utilization time and the length of the target sampling period.
For example, when a developer of artificial intelligence (AI) applications wishes to know whether their algorithm fully uses the computing power provided by the GPU, the process-oriented GPU utilization determination method provided by the embodiments of the present disclosure may be employed to view the GPU utilization of the current AI process.
For another example, when developers of open-source 2D/3D graphics libraries (such as OpenGL and OpenGL ES) pay attention to GPU utilization at the process level, the method for determining process-oriented GPU utilization provided by the embodiments of the present disclosure may be used to view the GPU utilization of each process.
For another example, when an ordinary graphics card user wants to know whether their game process is actually being accelerated by the discrete graphics card, the method for determining process-oriented GPU utilization provided by the embodiments of the present disclosure may be adopted to check the GPU utilization of the game process.
The method for determining the process-oriented GPU utilization provided by the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a method for determining process-oriented GPU utilization provided by an embodiment of the present disclosure. In one possible implementation manner, the execution subject of the method for determining the process-oriented GPU utilization may be a device for determining the process-oriented GPU utilization, for example, the method for determining the process-oriented GPU utilization may be performed by a terminal device or a server or other electronic devices. The terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or the like. In some possible implementations, the method for determining the process-oriented GPU utilization may be implemented by a processor invoking computer readable instructions stored in a memory. As shown in fig. 1, the method for determining the GPU utilization rate of the process includes steps S11 to S13.
In step S11, for any process, a target GPU usage start time and a target GPU usage end time of each task in the process are obtained in a target sampling period of GPU utilization, where the target GPU usage start time of any task represents a time when the task starts using GPU resources in the target sampling period, and the target GPU usage end time of the task represents a time when the task ends using GPU resources in the target sampling period.
In step S12, the GPU total utilization time of each task in the target sampling period is determined according to the target GPU utilization start time and the target GPU utilization end time of each task.
In step S13, the GPU utilization of the process is determined according to the GPU total utilization time and the length of the target sampling period.
The method for determining the process-oriented GPU utilization rate provided by the embodiments of the disclosure can be applied to the free and open-source Unix-like operating system Linux™, the Microsoft server operating system Windows Server™, the graphical-user-interface-based operating system Mac OS X™ developed by Apple Inc., and the like, which is not limited herein.
The method for determining the process-oriented GPU utilization rate provided by the embodiments of the present disclosure may be applied to the graphics cards of the graphics card manufacturers, that is, the hardware platform of the method for determining the process-oriented GPU utilization rate provided by the embodiments of the present disclosure may be the graphics card of the graphics card manufacturer, which is not limited herein.
In the embodiments of the disclosure, the target sampling period may represent a period for which the GPU utilization of the process is to be determined. The length of a single sampling period for GPU utilization may be 100 ms, 200 ms, 500 ms, etc., without limitation herein. The length of a single sampling period of the GPU utilization can be flexibly set according to the requirements of the actual application scenario.
The method for determining the process-oriented GPU utilization rate provided by the embodiments of the present disclosure can obtain the GPU utilization of each process in the target sampling period, or the GPU utilization of at least one designated process in the target sampling period, or the GPU utilization of each process of a designated type in the target sampling period, and the like.
In the embodiments of the disclosure, for any process, a target GPU use start time and a target GPU use end time of each task in the process may be obtained in the target sampling period. The target GPU use start time of any task represents the time at which the task starts using GPU resources in the target sampling period, and the target GPU use end time of the task represents the time at which the task stops using GPU resources in the target sampling period. The target GPU use start time of any task is later than or equal to the start time of the target sampling period, and the target GPU use end time of any task is earlier than or equal to the end time of the target sampling period.
Fig. 2 is a schematic diagram illustrating a target GPU usage start time and a target GPU usage end time of all tasks of a process in the method for determining the process-oriented GPU utilization provided in the embodiments of the present disclosure. In fig. 2, the start time indicates the start time of the target sampling period, and the end time indicates the end time of the target sampling period. In the example shown in fig. 2, the process includes 4 tasks, TA1, TA2, TA3, and TA4, respectively. The use start time of the target GPU of TA1 is time 1, and the use end time of the target GPU is time 4; the use start time of the target GPU of TA2 is time 2, and the use end time of the target GPU is time 5; the use start time of the target GPU of TA3 is time 3, and the use end time of the target GPU is time 4; the target GPU use start time of TA4 is time 6, and the target GPU use end time is time 7.
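Taking the Fig. 2 example, and mapping the abstract instants "time 1" through "time 7" to the numbers 1 to 7 with a hypothetical sampling period of [0, 8] (these numeric values are assumptions for illustration, not taken from the disclosure), the merged usage intervals and the resulting utilization can be computed as follows:

```python
# Fig. 2 example with assumed numeric times and sampling period [0, 8].
tasks = {
    "TA1": (1, 4),
    "TA2": (2, 5),
    "TA3": (3, 4),
    "TA4": (6, 7),
}
period_start, period_end = 0, 8

# Merge the usage intervals: TA1/TA2/TA3 overlap into [1, 5]; TA4 is [6, 7].
intervals = sorted(tasks.values())
merged = [list(intervals[0])]
for start, end in intervals[1:]:
    if start <= merged[-1][1]:
        merged[-1][1] = max(merged[-1][1], end)
    else:
        merged.append([start, end])

busy = sum(end - start for start, end in merged)
idle = (period_end - period_start) - busy
utilization = busy / (period_end - period_start)
print(merged, busy, idle, utilization)   # [[1, 5], [6, 7]] 5 3 0.625
```

The idle time of 3 splits into the first idle time [0, 1], the second idle time [5, 6] triggered by the end of TA2, and the third idle time [7, 8].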
In one possible implementation manner, the obtaining, in a target sampling period of GPU utilization, the target GPU use start time of each task in the process includes: for any task in the process, in response to the GPU use start time of the task being later than or equal to the start time of the target sampling period, determining the GPU use start time of the task as the target GPU use start time of the task, where the GPU use start time of the task represents the time at which the task starts using GPU resources; or, for any task in the process, in response to the GPU use start time of the task being earlier than the start time of the target sampling period, determining the start time of the target sampling period as the target GPU use start time of the task.
In this implementation, for any task, the GPU usage start time for that task represents the time at which that task begins to use GPU resources. The target GPU usage start time of any task is equal to or later than the GPU usage start time of the task.
In this implementation, for any task in the process, in a case where the GPU usage start time of the task is later than or equal to the start time of the target sampling period, that is, in a case where the GPU usage start time of the task is not before the target sampling period, the GPU usage start time of the task is determined as the target GPU usage start time of the task.
In this implementation, for any task in the process, in a case where the GPU usage start time of the task is later than or equal to the start time of the target sampling period, the target GPU usage start time of the task is equal to or later than the GPU usage start time of the task.
In this implementation, for any task in the process, in a case where the GPU usage start time of the task is earlier than the start time of the target sampling period, that is, in a case where the GPU usage start time of the task is earlier than the target sampling period, the start time of the target sampling period is determined as the target GPU usage start time of the task.
In this implementation, for any task in the process, in a case where a GPU usage start time of the task is earlier than a start time of the target sampling period, the target GPU usage start time of the task is later than the GPU usage start time of the task.
In this implementation manner, the GPU usage start time of the task is determined to be the target GPU usage start time of the task in response to the GPU usage start time of the task being later than or equal to the start time of the target sampling period for any task in the process, where the GPU usage start time of the task represents the time when the task starts using GPU resources, or the start time of the target sampling period is determined to be the target GPU usage start time of the task in response to the GPU usage start time of the task being earlier than the start time of the target sampling period for any task in the process, so that the target GPU usage start time of each task in the process can be accurately determined.
In one possible implementation, obtaining, in the target sampling period for obtaining the GPU utilization, the target GPU usage end time of each task in the process includes: for any task in the process, in response to the GPU usage end time of the task being earlier than or equal to the end time of the target sampling period, determining the GPU usage end time of the task as the target GPU usage end time of the task, where the GPU usage end time of the task represents the time at which the task ends using GPU resources; or, for any task in the process, in response to the GPU usage end time of the task being later than the end time of the target sampling period, determining the end time of the target sampling period as the target GPU usage end time of the task.
In this implementation, for any task, the GPU usage end time of the task represents the time when the task ends using GPU resources. The end of use time of the target GPU of any task is equal to or earlier than the end of use time of the GPU of the task.
In this implementation, for any task in the process, when the GPU usage end time of the task is earlier than or equal to the end time of the target sampling period, that is, when the GPU usage end time of the task is not after the target sampling period, the GPU usage end time of the task is determined as the target GPU usage end time of the task.
In this implementation, for any task in the process, in the case where the GPU usage end time of the task is earlier than or equal to the end time of the target sampling period, the target GPU usage end time of the task is equal to the GPU usage end time of the task.
In this implementation, for any task in the process, when the GPU usage end time of the task is later than the end time of the target sampling period, that is, when the GPU usage end time of the task is later than the target sampling period, the end time of the target sampling period is determined as the target GPU usage end time of the task.
In this implementation, for any task in the process, in a case where a GPU usage end time of the task is later than an end time of the target sampling period, the target GPU usage end time of the task is earlier than the GPU usage end time of the task.
In this implementation, for any task in the process, the GPU usage end time of the task is determined as the target GPU usage end time of the task in response to the GPU usage end time of the task being earlier than or equal to the end time of the target sampling period, where the GPU usage end time of the task represents the time at which the task ends using GPU resources; or the end time of the target sampling period is determined as the target GPU usage end time of the task in response to the GPU usage end time of the task being later than the end time of the target sampling period. In this way, the target GPU usage end time of each task in the process can be accurately determined.
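The two clamping rules above, for the target GPU usage start time and the target GPU usage end time, amount to limiting each task's GPU usage interval to the target sampling period. A minimal sketch follows (illustrative Python; the function name is an assumption, not part of the disclosed embodiments):

```python
# Illustrative sketch: clamp a task's GPU usage interval to the target
# sampling period, per the two rules described above.
def target_gpu_times(task_start, task_end, period_start, period_end):
    # A start later than or equal to the period start is kept as-is;
    # an earlier start is replaced by the period start.
    target_start = max(task_start, period_start)
    # An end earlier than or equal to the period end is kept as-is;
    # a later end is replaced by the period end.
    target_end = min(task_end, period_end)
    return target_start, target_end
```

For example, a task that starts before the period and ends after it is credited with the whole period, while a task fully inside the period keeps its own times.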
In one possible implementation, the method further includes: and for any task in the process, obtaining the GPU use starting time of the task, wherein the GPU use starting time of the task represents the time when the task starts to use GPU resources.
As an example of this implementation, for any task in the process, obtaining the GPU usage start time of the task includes: and for any task in the process, responding to the task as a rendering task, and determining the moment when the task starts to carry out primitive assembly as the GPU use starting moment of the task.
Of course, the manner of determining the GPU usage start time of the task may be flexibly set according to different task types, which is not limited herein.
In one possible implementation, the method further includes: and for any task in the process, obtaining the GPU use end time of the task, wherein the GPU use end time of the task represents the time when the task finishes using GPU resources.
As an example of this implementation, for any task in the process, obtaining the GPU usage end time of the task includes: and for any task in the process, determining the time for destroying the video memory resources occupied by the task as the GPU use end time of the task.
As another example of this implementation, for any task in the process, obtaining the GPU usage end time of the task includes: and for any task in the process, determining the time when the task ends as the GPU use end time of the task.
Of course, the determination manner of the GPU usage end time of the task may be flexibly set according to different task types, which is not limited herein.
FIG. 3 shows a schematic diagram of a graphics pipeline flow in OpenGL technology. As shown in fig. 3, the flow of pipeline tasks for an OpenGL-based graphics display may include: step 1, inputting graphics vertex data on the CPU (Central Processing Unit) side; step 2, transforming the vertex coordinates on the CPU side; step 3, assembling triangle and line-drawing primitives on the GPU side (2D and 3D graphics are composed of triangles); step 4, shading and texturing on the GPU side; step 5, rasterization (i.e., pixelation) on the GPU side; step 6, display on the GPU side; and step 7, destroying the video memory resources occupied by the graphics on the GPU side.
For any task in the process, when the task is a rendering task, the start time of step 3 may be determined as the GPU usage start time of the task, and the end time of step 7 may be determined as the GPU usage end time of the task. For example, for a task in a game or other application process that uses OpenGL as its underlying 2D/3D engine, this method may be employed to determine the GPU usage start time and GPU usage end time of the task.
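As a sketch of how the pipeline steps above might map to recorded timestamps: the hooks below are hypothetical, only illustrating where a driver or runtime could sample the clock (at step 3 and at step 7); they are not a real OpenGL API.

```python
import time

# Hypothetical sketch: record a rendering task's GPU usage start time at
# primitive assembly (step 3) and its GPU usage end time when the video
# memory resources are destroyed (step 7). Hook names are illustrative.
class RenderTaskTimer:
    def __init__(self):
        self.gpu_usage_start = None
        self.gpu_usage_end = None

    def on_primitive_assembly_begin(self):   # step 3 begins
        self.gpu_usage_start = time.monotonic()

    def on_video_memory_destroyed(self):     # step 7 ends
        self.gpu_usage_end = time.monotonic()
```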
In one possible implementation, the method further includes: and responding to the end of any task in the process, and storing the target GPU use starting time and the target GPU use ending time of the task.
As an example of this implementation, the target GPU usage start time and the target GPU usage end time for each task may be saved in memory.
In this implementation, the target GPU use start time and target GPU use end time of a task that ends earlier may be saved before those of a task that ends later. For example, in the example shown in fig. 2, TA3 ends earlier than TA2, and therefore the target GPU use start time and target GPU use end time of TA3 may be saved first.
In this implementation, by storing the target GPU usage start time and the target GPU usage end time of any task in the process in response to the end of the task, the GPU utilization rate of the process can be calculated based on the stored target GPU usage start time and target GPU usage end time of each task.
As an example of this implementation, the tasks in the process are saved in order according to the starting time of the task.
In this example, saving the tasks in the process in order according to their start times means that the target GPU use start time and target GPU use end time of each task in the process are stored in positions ordered by the task's start time, rather than in the chronological order in which the records were written to memory.
According to this example, the target GPU use start time and target GPU use end time of each task in the process can be stored in positional order by task start time.
As another example of this implementation, the tasks in the process are saved in order according to GPU usage end time of the task.
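The positional ordering described above can be sketched as follows (illustrative Python; the class and method names are assumptions): each record is inserted at the position given by the task's start time, even when tasks end out of order.

```python
import bisect

# Illustrative sketch: store each finished task's (target GPU usage start
# time, target GPU usage end time) pair in positional order by start time,
# regardless of the order in which the tasks end.
class TaskRecordStore:
    def __init__(self):
        self._records = []  # kept sorted by target GPU usage start time

    def on_task_end(self, target_start, target_end):
        # insert at the sorted position, not at the tail
        bisect.insort(self._records, (target_start, target_end))

    def records(self):
        return list(self._records)
```

With the fig. 2 intervals, TA3 ends (and is written) before TA2, yet still lands after TA2 positionally because its start time is later.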
In an embodiment of the disclosure, for any process, after determining a target GPU usage start time and a target GPU usage end time of each task in the process, a GPU total utilization time of each task in the target sampling period may be determined.
In one possible implementation, for any process, the GPU utilization newly added time corresponding to each task may be determined in the order of the target GPU usage start times of the tasks in the process, and the sum of the GPU utilization newly added times corresponding to the tasks may be determined as the GPU total utilization time of the tasks in the target sampling period.
Fig. 4 is a schematic diagram illustrating a method for determining process-oriented GPU utilization according to an embodiment of the present disclosure. In fig. 4, the start time indicates the start time of the target sampling period, and the end time indicates the end time of the target sampling period. The following description takes process 1 in fig. 4 as an example. In the example shown in fig. 4, process 1 includes 4 tasks, TA1, TA2, TA3, and TA4, respectively. The target GPU usage start time of TA1 is earlier than that of TA2, the target GPU usage start time of TA2 is earlier than that of TA3, and the target GPU usage start time of TA3 is earlier than that of TA4. The time interval between the target GPU usage start time of TA1 and the target GPU usage end time of TA1 may be determined as the GPU utilization newly added time corresponding to TA1 (i.e., process1_cycles_1 in fig. 4). Since the target GPU usage start time of TA2 is earlier than the target GPU usage end time of TA1, and the target GPU usage end time of TA2 is later than the target GPU usage end time of TA1, the time interval between the target GPU usage end time of TA1 and the target GPU usage end time of TA2 may be determined as the GPU utilization newly added time corresponding to TA2 (i.e., process1_cycles_2 in fig. 4). Since the target GPU usage start time of TA3 is earlier than the target GPU usage end time of TA2, and the target GPU usage end time of TA3 is also earlier than the target GPU usage end time of TA2, the GPU utilization newly added time corresponding to TA3 is 0. Since, among TA1 to TA3, the target GPU usage end time of TA2 is the latest, and the target GPU usage start time of TA4 is later than the target GPU usage end time of TA2, the time interval between the target GPU usage start time of TA4 and the target GPU usage end time of TA4 may be determined as the GPU utilization newly added time corresponding to TA4 (i.e., process1_cycles_3 in fig. 4).
GPU total utilization time of the tasks in process 1 = GPU utilization newly added time corresponding to TA1 + GPU utilization newly added time corresponding to TA2 + GPU utilization newly added time corresponding to TA3 + GPU utilization newly added time corresponding to TA4; since the GPU utilization newly added time corresponding to TA3 is 0, GPU total utilization time of the tasks in process 1 = process1_cycles_1 + process1_cycles_2 + process1_cycles_3.
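The newly added time computation illustrated above can be sketched as a single pass over the tasks in ascending target GPU usage start time (illustrative Python; the function name is an assumption):

```python
# Illustrative sketch: sum each task's GPU utilization newly added time,
# i.e. the GPU time not already covered by earlier-starting tasks.
def gpu_total_utilization_time(tasks):
    # tasks: (target_start, target_end) pairs sorted by target_start
    total = 0
    latest_end = None
    for start, end in tasks:
        if latest_end is None or start > latest_end:
            total += end - start          # whole interval is newly added
            latest_end = end
        elif end > latest_end:
            total += end - latest_end     # only the part past latest_end
            latest_end = end
        # else: interval fully contained (like TA3), newly added time 0
    return total
```

With the fig. 4 intervals taken as (1, 4), (2, 5), (3, 4), (6, 7), the result is (4 - 1) + (5 - 4) + 0 + (7 - 6) = 5.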
In another possible implementation manner, the determining the GPU total utilization time of each task in the target sampling period according to the target GPU use start time and the target GPU use end time of each task includes: determining the total idle time of the GPU corresponding to the process in the target sampling period according to the starting time and the ending time of the target sampling period and the target GPU use starting time and the target GPU use ending time of each task; and determining the difference value between the length of the target sampling period and the total idle time of the GPU as the total utilization time of the GPU of each task in the target sampling period.
In this implementation manner, for any process, the GPU total idle time corresponding to the process in the target sampling period may be determined according to the start time and the end time of the target sampling period, and the target GPU use start time and the target GPU use end time of each task in the process. The GPU total idle time corresponding to the process may represent a total time when the process does not use GPU resources, that is, the GPU total idle time corresponding to the process may represent a total time when each task in the process does not use GPU resources.
In this implementation, after determining the GPU total idle time corresponding to the process in the target sampling period, the GPU total idle time may be subtracted from the length of the target sampling period to obtain the GPU total utilization time of the tasks in the target sampling period.
In this implementation manner, the total GPU idle time corresponding to the process in the target sampling period is determined according to the starting time and the ending time of the target sampling period, and the target GPU use starting time and the target GPU use ending time of each task, and the difference between the length of the target sampling period and the total GPU idle time is determined as the total GPU use time of each task in the target sampling period, so that the total GPU use time of each task in the target sampling period can be determined more efficiently, and the efficiency of determining the GPU use rate of the process can be improved.
In one possible implementation manner, the determining the GPU total idle time corresponding to the process in the target sampling period according to the starting time and the ending time of the target sampling period, and the target GPU use starting time and the target GPU use ending time of each task includes: determining a first GPU idle time between the starting time of the target sampling period and the task in the process according to the starting time of the target sampling period and the earliest target GPU use starting time in the target GPU use starting time of each task; determining a second GPU idle time between each task according to the target GPU use start time and the target GPU use end time of each task; determining a third GPU idle time between the task in the process and the ending time of the target sampling period according to the latest target GPU use ending time in the target GPU use ending time of each task and the ending time of the target sampling period; and determining the total idle time of the GPU corresponding to the process in the target sampling period according to the sum of the first idle time of the GPU, the second idle time of the GPU and the third idle time of the GPU.
In this implementation, for any process, the first GPU idle time may represent the GPU idle time between the start time of the target sampling period and the tasks in the process; that is, for any process, the first GPU idle time may represent the time interval between the start time of the target sampling period and the target GPU use start time of the task in the process that starts using GPU resources earliest. For any process, the first GPU idle time may be greater than or equal to 0.
For example, in the example shown in fig. 4, process 1 includes TA1, TA2, TA3, and TA4, where the target GPU use start time of TA1 is time 1, that of TA2 is time 2, that of TA3 is time 3, and that of TA4 is time 6. TA1 has the earliest target GPU use start time, so the time interval between the start time of the target sampling period and time 1 may be determined as the first GPU idle time.
In this implementation, for any process, the second GPU idle time may represent GPU idle time between various tasks in the process.
In this implementation, for any process, the third GPU idle time may represent the GPU idle time between the tasks in the process and the end time of the target sampling period; that is, for any process, the third GPU idle time may represent the time interval between the target GPU use end time of the task in the process that finishes using GPU resources latest and the end time of the target sampling period. For any process, the third GPU idle time may be greater than or equal to 0.
For example, in the example shown in fig. 4, process 1 includes TA1, TA2, TA3, and TA4, where the target GPU usage end time of TA1 is time 4, that of TA2 is time 5, that of TA3 is time 4, and that of TA4 is time 7. TA4 has the latest target GPU usage end time, so the time interval between time 7 and the end time of the target sampling period may be determined as the third GPU idle time.
In this implementation manner, for any process, according to the starting time of the target sampling period and the earliest target GPU use starting time of the target GPU use starting times of the tasks in the process, a first GPU idle time between the starting time of the target sampling period and the tasks in the process is determined, according to the target GPU use starting time and the target GPU use ending time of the tasks, a second GPU idle time between the tasks is determined, and according to the latest target GPU use ending time of the tasks and the ending time of the target sampling period, a third GPU idle time between the tasks in the process and the ending time of the target sampling period is determined, and according to the sum of the first GPU idle time, the second GPU idle time and the third GPU idle time, a total GPU idle time corresponding to the process in the target sampling period is determined, thereby being able to accurately determine the total GPU idle time corresponding to the process in the target sampling period.
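The decomposition above, first GPU idle time, second GPU idle time, and third GPU idle time, can be sketched as follows (illustrative Python; the function name is an assumption):

```python
# Illustrative sketch: GPU total idle time of a process in the target
# sampling period = idle time before the earliest task + gaps between
# tasks + idle time after the latest task.
def gpu_total_idle_time(tasks, period_start, period_end):
    # tasks: (target_start, target_end) pairs sorted by target_start
    first_idle = min(s for s, _ in tasks) - period_start
    third_idle = period_end - max(e for _, e in tasks)
    second_idle = 0
    latest_end = None
    for start, end in tasks:
        if latest_end is not None and start > latest_end:
            second_idle += start - latest_end  # gap between two tasks
        latest_end = end if latest_end is None else max(latest_end, end)
    return first_idle + second_idle + third_idle
```

For the fig. 4 intervals (1, 4), (2, 5), (3, 4), (6, 7) on a period [0, 10], this gives 1 + 1 + 3 = 5, so the GPU total utilization time is 10 - 5 = 5, consistent with the newly-added-time computation described earlier.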
In one possible implementation manner, the determining the second GPU idle time between the tasks according to the target GPU usage start time and the target GPU usage end time of the tasks includes: determining idle trigger tasks in the tasks according to the target GPU use start time and the target GPU use end time of the tasks; the idle trigger tasks do not comprise the latest task in the use ending time of the target GPU in each task, and for any idle trigger task, responding to the end of the idle trigger task, the process pauses the use of GPU resources; and for each idle trigger task, respectively determining a second GPU idle time between the idle trigger task and a next task of the idle trigger task, wherein for any idle trigger task, the next task of the idle trigger task represents a task which uses GPU resources earliest after the use end time of the target GPU of the idle trigger task in the process.
In this implementation, for any task in any process, if the task is not the latest task in the end time of use of the target GPU in the process, and the process pauses use of the GPU resources in response to the end of the task, the task may be determined to be an idle trigger task; for any task in any process, if the process continues to use GPU resources after the task is finished, it can be determined that the task is not an idle trigger task; for any task in any process, if the task is the latest task in the end time of the target GPU use in the process, it can be determined that the task is not an idle trigger task.
In this implementation manner, for any process, in the case that no idle trigger task exists in the process, the second GPU idle time may be determined to be 0, and the sum of the first GPU idle time and the third GPU idle time may be determined to be the GPU total idle time corresponding to the process in the target sampling period; for any process, under the condition that the process has only one idle trigger task, determining a second GPU idle time between the idle trigger task and a next task of the idle trigger task, and determining the sum of a first GPU idle time, the second GPU idle time and a third GPU idle time as the GPU total idle time corresponding to the process in the target sampling period; for any process, under the condition that M idle trigger tasks exist in the process, the second GPU idle time between the M idle trigger tasks and the next task can be respectively determined, namely, M second GPU idle times can be determined, and the sum of the first GPU idle time, the M second GPU idle times and the third GPU idle time can be determined as the GPU total idle time corresponding to the process in the target sampling period.
In this implementation, for any idle trigger task, the next task of the idle trigger task may represent the task with the earliest target GPU usage start time among the tasks whose target GPU usage start time is after the target GPU usage end time of the idle trigger task.
In this implementation manner, by determining the idle trigger task in each task according to the target GPU use start time and the target GPU use end time of each task, where the idle trigger task does not include the latest task in the target GPU use end time of each task, and for any idle trigger task, in response to the idle trigger task ending, the process suspends using GPU resources, and for each idle trigger task, the second GPU idle time between the idle trigger task and the next task of the idle trigger task is determined separately, so that the GPU idle time between different tasks in the process can be determined efficiently.
In one example, the method further comprises: for any task in the process, determining the task as an idle trigger task in response to the target GPU use ending time of the task not being between the target GPU use starting time and the target GPU use ending time of other tasks in the process, and the target GPU use ending time of the task not being the same as the target GPU use ending time of other tasks whose target GPU use starting time is earlier than the task.
In this example, for any process, it may be determined sequentially whether each task in the process is an idle trigger task according to the order of the start time of use of the target GPU of the task from early to late.
For example, in the example shown in fig. 4, process 1 includes TA1, TA2, TA3, and TA4, where the target GPU usage start time for TA1 is earlier than TA2, the target GPU usage start time for TA2 is earlier than TA3, and the target GPU usage start time for TA3 is earlier than TA4.
Because the target GPU use end time of TA1 is between the target GPU use start time and the target GPU use end time of TA2, it can be determined that TA1 is not an idle trigger task; because the target GPU use end time of TA2 is not between the target GPU use start time and the target GPU use end time of any other task in process 1, and the target GPU use end time of TA2 is not the same as the target GPU use end time of any other task whose target GPU use start time is earlier than that of TA2, it can be determined that TA2 is an idle trigger task; because the target GPU use end time of TA3 is between the target GPU use start time and the target GPU use end time of TA2, it can be determined that TA3 is not an idle trigger task; since TA4 is the task with the latest target GPU use end time among the tasks of process 1, it can be determined that TA4 is not an idle trigger task.
It follows that among TA1 to TA4, only TA2 is the idle trigger task. Since in process 1, TA4 is the task that uses GPU resources earliest after the end of use of the target GPU of TA2, it can be determined that TA4 is the next task of TA 2. That is, in the process 1, TA4 is the task having the earliest target GPU use start time among the tasks having the target GPU use start time after the target GPU use end time of TA2, and therefore, it is possible to determine that TA4 is the next task of TA 2. The time interval between the end of use of the target GPU of TA2 and the start of use of the target GPU of TA4 may be determined as the second GPU idle time.
In this example, by determining, for any one of the tasks, the task as an idle trigger task in response to the target GPU usage end time of the task not being between the target GPU usage start time and the target GPU usage end time of other tasks in the process, and the target GPU usage end time of the task not being the same as the target GPU usage end time of other tasks whose target GPU usage start times are earlier than the task, the idle trigger task in each process can be accurately and efficiently determined.
In another example, the method further comprises: for any task in the process, determining the task as an idle trigger task in response to the target GPU use ending time of the task not being between the target GPU use starting time and the target GPU use ending time of other tasks in the process, and the target GPU use ending time of the task not being the same as the target GPU use ending time of other tasks whose target GPU use starting time is later than the task.
In another possible implementation, the determining the second GPU idle time between the tasks according to the target GPU use start time and the target GPU use end time of each task includes: determining non-idle trigger tasks among the tasks according to the target GPU use start time and the target GPU use end time of each task, where for any non-idle trigger task, in response to the non-idle trigger task starting to use GPU resources, the process resumes using GPU resources; and for each non-idle trigger task, respectively determining a second GPU idle time between the non-idle trigger task and the previous task of the non-idle trigger task, where for any non-idle trigger task, the previous task of the non-idle trigger task represents the task in the process that last finished using GPU resources before the target GPU use start time of the non-idle trigger task.
In one example, the method further includes: for any task in the process, determining the task as a non-idle trigger task in response to the target GPU use start time of the task not being between the target GPU use start time and the target GPU use end time of any other task in the process, the task not being the task with the earliest target GPU use start time among the tasks, and the target GPU use start time of the task not being the same as the target GPU use start time of any other task whose target GPU use start time is earlier than that of the task.
In another example, the method further includes: for any task in the process, determining the task as a non-idle trigger task in response to the target GPU use start time of the task not being between the target GPU use start time and the target GPU use end time of any other task in the process, the task not being the task with the earliest target GPU use start time among the tasks, and the target GPU use start time of the task not being the same as the target GPU use start time of any other task whose target GPU use start time is later than that of the task.
In one example, in the example shown in fig. 4, TA1 may first be used as the reference, and the tasks may be traversed in ascending order of task start time, finding TA2. Since the target GPU usage start time of TA2 is less than the target GPU usage end time of TA1, and the target GPU usage end time of TA2 is greater than the target GPU usage end time of TA1 (i.e., TA2 and TA1 cross), the reference becomes TA2; TA1 is discarded and marked as not needing to be examined again (e.g., a variable bRecorded corresponding to TA1 may be set to false), and TA1 is no longer used as the reference.
Taking TA2 as the reference, the traversal continues in ascending order of task start time and finds TA3. Because the target GPU usage start time of TA3 is less than the target GPU usage end time of TA2, and the target GPU usage end time of TA3 is also less than the target GPU usage end time of TA2 (i.e., TA3 is completely contained by TA2), TA3 is marked as not needing to be examined, and the traversal continues. Since the target GPU usage start time of TA4 is greater than the target GPU usage end time of TA2, it may be determined that there is a GPU idle time between TA2 and TA4, and the interval TAp1 is recorded as a second GPU idle time (TAp1 = target GPU usage start time of TA4 - target GPU usage end time of TA2). If, after TA4, there were a task whose target GPU usage start time is earlier than that of TA4 and whose target GPU usage end time is later than that of TA2, TAp1 would be updated to the difference between the target GPU usage start time of that task and the target GPU usage end time of TA2. And so on.
Since TA3 is marked as not needing to be examined, it is skipped directly, and the traversal proceeds to TA4. Continuing in this way, each second GPU idle time segment can be obtained.
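The reference-based walk described in this example can be sketched as follows (illustrative Python; names are assumptions, and simple skips stand in for the bRecorded marking):

```python
# Illustrative sketch of the reference-based traversal: tasks are visited
# in ascending target GPU usage start time; a crossing task replaces the
# reference, a fully contained task (like TA3) is skipped, and a task
# starting after the reference ends yields one second GPU idle time.
def second_gpu_idle_times(tasks):
    # tasks: (target_start, target_end) pairs sorted by target_start
    gaps = []
    _, ref_end = tasks[0]
    for start, end in tasks[1:]:
        if start > ref_end:
            gaps.append(start - ref_end)  # e.g. TAp1 = TA4 start - TA2 end
            ref_end = end
        elif end > ref_end:
            ref_end = end                 # crossing task becomes reference
        # else: fully contained, skip (bRecorded = false in the text)
    return gaps
```

For the fig. 4 intervals (1, 4), (2, 5), (3, 4), (6, 7), the only gap is between TA2 and TA4, namely [6 - 5] = [1].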
In the embodiment of the disclosure, the GPU utilization of the process may be in the form of a percentage or a fraction, which is not limited herein.
In one possible implementation manner, the determining the GPU utilization of the process according to the GPU total utilization time and the length of the target sampling period includes: and determining the ratio of the total utilization time of the GPU to the length of the target sampling period as the GPU utilization rate of the process. According to this implementation, the GPU utilization of the process can be accurately determined.
In another possible implementation manner, the determining the GPU utilization of the process according to the GPU total utilization time and the length of the target sampling period includes: determining a ratio of the GPU total utilization time to the length of the target sampling period; and multiplying the ratio by 100% to obtain the GPU utilization rate of the process.
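Both forms of the final ratio described above can be sketched in one helper (illustrative Python; the name and parameter are assumptions):

```python
# Illustrative sketch: GPU utilization of a process as a fraction or a
# percentage of the target sampling period length.
def gpu_utilization(total_gpu_time, period_length, as_percent=True):
    ratio = total_gpu_time / period_length
    return ratio * 100.0 if as_percent else ratio
```

For example, 5 time units of GPU total utilization time in a 10-unit target sampling period yields a GPU utilization of 50%.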
The method for determining the process-oriented GPU utilization provided by the embodiments of the present disclosure may be applied to the technical fields of GPU software algorithms, open-source 2D and 3D graphics libraries (e.g., OpenGL and OpenGL ES), parallel computing libraries (e.g., MUSA), and the like, which are not limited herein. MUSA may be used to provide GPU hardware-accelerated computing power, for example, for artificial intelligence (AI) computing.
The method for determining the process-oriented GPU utilization provided by the embodiments of the present disclosure is described below through a specific application scenario.
In the application scene, for any process, a target sampling period of the GPU utilization rate can be obtained, and a target GPU use start time and a target GPU use end time of each task in the process can be obtained. For example, in the example shown in fig. 4, process 1 includes 4 tasks, TA1, TA2, TA3, and TA4, respectively. The use start time of the target GPU of TA1 is time 1, and the use end time of the target GPU is time 4; the use start time of the target GPU of TA2 is time 2, and the use end time of the target GPU is time 5; the use start time of the target GPU of TA3 is time 3, and the use end time of the target GPU is time 4; the target GPU use start time of TA4 is time 6, and the target GPU use end time is time 7.
After the target GPU use start time and the target GPU use end time of each task in the process are determined, a first GPU idle time between the start time of the target sampling period and the tasks in the process may be determined according to the start time of the target sampling period and the earliest of the target GPU use start times of the tasks. A second GPU idle time between the tasks may be determined according to the target GPU use start time and the target GPU use end time of each task. A third GPU idle time between the tasks in the process and the end time of the target sampling period may be determined according to the latest of the target GPU use end times of the tasks and the end time of the target sampling period. The GPU total idle time corresponding to the process in the target sampling period may then be determined as the sum of the first GPU idle time, the second GPU idle time, and the third GPU idle time.
For example, in the example shown in fig. 4, the time interval between the start time of the target sampling period and the target GPU use start time of TA1 (i.e., time 1) may be determined as the first GPU idle time; the time interval between the target GPU use end time of TA2 (i.e., time 5) and the target GPU use start time of TA4 (i.e., time 6) may be determined as the second GPU idle time; and the time interval between the target GPU use end time of TA4 (i.e., time 7) and the end time of the target sampling period may be determined as the third GPU idle time. The sum of the first GPU idle time, the second GPU idle time, and the third GPU idle time is determined as the GPU total idle time corresponding to process 1.
Wherein, the first GPU idle time = time 1 - start time;
the second GPU idle time = time 6 - time 5;
the third GPU idle time = end time - time 7;
the GPU total idle time corresponding to process 1 = the first GPU idle time + the second GPU idle time + the third GPU idle time = (time 1 - start time) + (time 6 - time 5) + (end time - time 7);
the GPU total utilization time corresponding to process 1 = end time - start time - the GPU total idle time corresponding to process 1;
the GPU utilization rate of process 1 in the target sampling period = (the GPU total utilization time corresponding to process 1) / (end time - start time) × 100%.
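The worked example above may be sketched in code as follows. The concrete values are assumptions for illustration only: each "time k" is mapped to the integer k, and the target sampling period is taken as [0, 8]; the gap-detection loop generalizes the single second GPU idle time of fig. 4 to any number of gaps between tasks.

```python
# Hypothetical clamped (target) GPU use intervals for process 1's tasks,
# using time k = k and an assumed sampling period of [0, 8].
tasks = {"TA1": (1, 4), "TA2": (2, 5), "TA3": (3, 4), "TA4": (6, 7)}
period_start, period_end = 0, 8

starts = sorted(s for s, _ in tasks.values())
ends = sorted(e for _, e in tasks.values())

# First GPU idle time: from the period start to the earliest task start.
first_idle = starts[0] - period_start                     # time 1 - start time

# Second GPU idle time: gaps between merged busy intervals of the tasks.
intervals = sorted(tasks.values())
second_idle, busy_until = 0, intervals[0][1]
for s, e in intervals[1:]:
    if s > busy_until:            # a gap: the process paused its GPU use
        second_idle += s - busy_until
    busy_until = max(busy_until, e)

# Third GPU idle time: from the latest task end to the period end.
third_idle = period_end - ends[-1]                        # end time - time 7

total_idle = first_idle + second_idle + third_idle
total_util = (period_end - period_start) - total_idle
utilization = total_util / (period_end - period_start) * 100
```

With these assumed values, the first, second, and third GPU idle times are each 1, the GPU total utilization time is 5, and the utilization rate is 62.5%, matching the formulas above.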
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from their principles and logic; due to space limitations, such combinations are not described in detail in the present disclosure. It will also be appreciated by those skilled in the art that, in the above methods of the embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure further provides an apparatus for determining the process-oriented GPU utilization rate, an electronic device, a computer-readable storage medium, and a computer program product, all of which can be used to implement any method for determining the process-oriented GPU utilization rate provided in the present disclosure; for the corresponding technical solutions and technical effects, refer to the corresponding descriptions in the method section, which are not repeated here.
Fig. 5 shows a block diagram of a process-oriented GPU utilization determination apparatus provided by an embodiment of the present disclosure. As shown in fig. 5, the device for determining the process-oriented GPU utilization includes:
the obtaining module 51 is configured to obtain, for any process, a target GPU usage start time and a target GPU usage end time of each task in the process in a target sampling period of GPU utilization, where the target GPU usage start time of any task represents a time when the task starts using GPU resources in the target sampling period, and the target GPU usage end time of the task represents a time when the task ends using GPU resources in the target sampling period;
The first determining module 52 is configured to determine, according to the target GPU usage start time and the target GPU usage end time of each task, a GPU total utilization time of each task in the target sampling period;
a second determining module 53, configured to determine a GPU utilization of the process according to the GPU total utilization time and the length of the target sampling period.
In one possible implementation, the first determining module 52 is configured to:
determining the total idle time of the GPU corresponding to the process in the target sampling period according to the starting time and the ending time of the target sampling period and the target GPU use starting time and the target GPU use ending time of each task;
and determining the difference value between the length of the target sampling period and the total idle time of the GPU as the total utilization time of the GPU of each task in the target sampling period.
In one possible implementation, the first determining module 52 is configured to:
determining a first GPU idle time between the starting time of the target sampling period and the task in the process according to the starting time of the target sampling period and the earliest target GPU use starting time in the target GPU use starting time of each task;
Determining a second GPU idle time between each task according to the target GPU use start time and the target GPU use end time of each task;
determining a third GPU idle time between the task in the process and the ending time of the target sampling period according to the latest target GPU use ending time in the target GPU use ending time of each task and the ending time of the target sampling period;
and determining the total idle time of the GPU corresponding to the process in the target sampling period according to the sum of the first idle time of the GPU, the second idle time of the GPU and the third idle time of the GPU.
In one possible implementation, the first determining module 52 is configured to:
determining idle trigger tasks in the tasks according to the target GPU use start time and the target GPU use end time of the tasks; the idle trigger tasks do not comprise the latest task in the use ending time of the target GPU in each task, and for any idle trigger task, responding to the end of the idle trigger task, the process pauses the use of GPU resources;
and for each idle trigger task, respectively determining a second GPU idle time between the idle trigger task and a next task of the idle trigger task, wherein for any idle trigger task, the next task of the idle trigger task represents a task which uses GPU resources earliest after the use end time of the target GPU of the idle trigger task in the process.
In one possible implementation, the apparatus further includes:
and a third determining module, configured to, for any task in the process, determine, as an idle trigger task, the task in response to the target GPU usage end time of the task not being between the target GPU usage start time and the target GPU usage end time of other tasks in the process, and the target GPU usage end time of the task not being the same as the target GPU usage end time of other tasks whose target GPU usage start time is earlier than the task.
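Under one reading of the condition above, the idle trigger tasks can be selected with the following sketch. The dict-based task representation and the function name are hypothetical, and "between" is interpreted as strictly inside another task's interval, with the equal-end-time case handled by the second condition:

```python
def idle_trigger_tasks(tasks):
    """Select idle-trigger tasks per the condition described above (a sketch;
    the dict representation and this interpretation are assumptions).

    A task is an idle trigger when it is not the last task to finish, its
    end time is not strictly inside any other task's interval, and no
    earlier-starting task ends at exactly the same time.
    """
    latest_end = max(end for _, end in tasks.values())
    triggers = []
    for name, (start, end) in tasks.items():
        if end == latest_end:
            continue  # the last-finishing task cannot open an idle gap
        covered = any(
            (s2 < end < e2) or (e2 == end and s2 < start)
            for other, (s2, e2) in tasks.items()
            if other != name
        )
        if not covered:
            triggers.append(name)
    return triggers
```

For the fig. 4 example (TA1 = (1, 4), TA2 = (2, 5), TA3 = (3, 4), TA4 = (6, 7)), only TA2 qualifies: TA1 and TA3 end while TA2 still occupies the GPU, and TA4 is the last task to finish.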
In one possible implementation, the obtaining module 51 is configured to:
for any task in the process, responding to the fact that the GPU use starting time of the task is later than or equal to the starting time of the target sampling period, determining the GPU use starting time of the task as the target GPU use starting time of the task, wherein the GPU use starting time of the task represents the time when the task starts to use GPU resources;
or,
for any task in the process, responding to the fact that the GPU use starting time of the task is earlier than the starting time of the target sampling period, and determining the starting time of the target sampling period as the target GPU use starting time of the task.
In one possible implementation, the obtaining module 51 is configured to:
for any task in the process, responding to the fact that the GPU use ending time of the task is earlier than or equal to the ending time of the target sampling period, determining the GPU use ending time of the task as the target GPU use ending time of the task, wherein the GPU use ending time of the task represents the time when the task finishes using GPU resources;
or,
and for any task in the process, responding to the fact that the GPU use end time of the task is later than the end time of the target sampling period, and determining the end time of the target sampling period as the target GPU use end time of the task.
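The two clamping rules for the use start time and the use end time can be combined into one small helper. This is a sketch under the assumption that a task not overlapping the sampling period at all is simply skipped (a case the text does not address); all names are illustrative:

```python
def clamp_to_period(gpu_start, gpu_end, period_start, period_end):
    """Derive a task's target GPU use start/end times by clamping its
    actual GPU use interval to the sampling period, as described above.
    Argument names are illustrative; all times share one unit.
    Returns None when the task does not overlap the period at all
    (an assumed convention, not stated in the text).
    """
    target_start = max(gpu_start, period_start)
    target_end = min(gpu_end, period_end)
    if target_start >= target_end:
        return None  # no GPU use inside this sampling period
    return target_start, target_end
```

For a sampling period [0, 10], a task using the GPU from time 5 to time 12 is clamped to the target interval (5, 10), and a task using it from time -3 to time 4 is clamped to (0, 4).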
In one possible implementation, the apparatus further includes:
and the storage module is used for responding to the end of any task in the process and storing the use starting time of the target GPU and the use ending time of the target GPU of the task.
In one possible implementation, the tasks in the process are stored in sequence according to the starting time of the tasks.
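A minimal sketch of such ordered storage, assuming Python's bisect module and a hypothetical TaskLog container: on each task's end, its target interval is inserted so that the stored records remain sorted by target GPU use start time.

```python
import bisect

class TaskLog:
    """Keeps finished tasks' (target_start, target_end) records ordered by
    start time. A sketch; the class name and structure are assumptions."""

    def __init__(self):
        self._records = []

    def on_task_end(self, target_start, target_end):
        # Insert so the list stays sorted by target GPU use start time
        # (tuples compare by start time first, then end time).
        bisect.insort(self._records, (target_start, target_end))

    def records(self):
        return list(self._records)
```

Keeping the records sorted means the earliest start, the latest end, and the gaps between tasks can all be read off in a single pass when the sampling period closes.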
In one possible implementation, the second determining module 53 is configured to:
And determining the ratio of the total utilization time of the GPU to the length of the target sampling period as the GPU utilization rate of the process.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementation and technical effects of the functions or modules may refer to the descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. Wherein the computer readable storage medium may be a non-volatile computer readable storage medium or may be a volatile computer readable storage medium.
The disclosed embodiments also provide a computer program including computer-readable code which, when run in an electronic device, causes a processor in the electronic device to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, causes a processor in the electronic device to perform the above method.
The embodiment of the disclosure also provides an electronic device, including: one or more processors; a memory for storing executable instructions; wherein the one or more processors are configured to invoke the executable instructions stored by the memory to perform the above-described method.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 6 shows a block diagram of an electronic device 1900 provided by an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server or a terminal. Referring to FIG. 6, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system developed by Apple Inc. (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanical encoding device such as a punch card or an in-groove raised structure having instructions stored thereon, and any suitable combination of the foregoing. Computer readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions; the electronic circuitry can then execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the various embodiments tends to emphasize the differences between them; for their identical or similar parts, the embodiments may be referred to one another, and these parts are not repeated herein for brevity.
If the technical solutions of the embodiments of the present disclosure involve personal information, a product applying these technical solutions clearly informs the user of the personal information processing rules and obtains the individual's separate consent before processing the personal information. If the technical solutions involve sensitive personal information, a product applying them obtains the individual's separate consent before processing the sensitive personal information and, at the same time, satisfies the requirement of "explicit consent". For example, a clear and conspicuous sign may be set at a personal information collection device such as a camera to inform that the personal information collection range has been entered and that personal information will be collected; if an individual voluntarily enters the collection range, it is deemed that the individual consents to the collection of his or her personal information. Alternatively, on a device that processes personal information, with obvious signs or information informing of the personal information processing rules, personal authorization may be obtained by means of pop-up information, by requesting the individual to upload personal information, or the like. The personal information processing rules may include information such as the personal information processor, the purpose of the processing, the processing manner, and the types of personal information to be processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A method for determining a process-oriented GPU utilization rate, characterized by comprising:
for any process, obtaining a target GPU use start time and a target GPU use end time of each task in the process in a target sampling period of GPU utilization rate, wherein the target GPU use start time of any task represents the time when the task starts to use GPU resources in the target sampling period, and the target GPU use end time of the task represents the time when the task ends to use GPU resources in the target sampling period;
determining the total utilization time of the GPU of each task in the target sampling period according to the target GPU use starting time and the target GPU use ending time of each task;
And determining the GPU utilization rate of the process according to the total GPU utilization time and the length of the target sampling period.
2. The method according to claim 1, wherein determining the GPU total utilization time of each task in the target sampling period according to the target GPU utilization start time and the target GPU utilization end time of each task comprises:
determining the total idle time of the GPU corresponding to the process in the target sampling period according to the starting time and the ending time of the target sampling period and the target GPU use starting time and the target GPU use ending time of each task;
and determining the difference value between the length of the target sampling period and the total idle time of the GPU as the total utilization time of the GPU of each task in the target sampling period.
3. The method according to claim 2, wherein determining the GPU total idle time corresponding to the process in the target sampling period according to the start time and the end time of the target sampling period, and the target GPU usage start time and the target GPU usage end time of each task includes:
determining a first GPU idle time between the starting time of the target sampling period and the task in the process according to the starting time of the target sampling period and the earliest target GPU use starting time in the target GPU use starting time of each task;
Determining a second GPU idle time between each task according to the target GPU use start time and the target GPU use end time of each task;
determining a third GPU idle time between the task in the process and the ending time of the target sampling period according to the latest target GPU use ending time in the target GPU use ending time of each task and the ending time of the target sampling period;
and determining the total idle time of the GPU corresponding to the process in the target sampling period according to the sum of the first idle time of the GPU, the second idle time of the GPU and the third idle time of the GPU.
4. A method according to claim 3, wherein said determining a second GPU idle time between the respective tasks from a target GPU usage start time and a target GPU usage end time for the respective tasks comprises:
determining idle trigger tasks in the tasks according to the target GPU use start time and the target GPU use end time of the tasks; the idle trigger tasks do not comprise the latest task in the use ending time of the target GPU in each task, and for any idle trigger task, responding to the end of the idle trigger task, the process pauses the use of GPU resources;
And for each idle trigger task, respectively determining a second GPU idle time between the idle trigger task and a next task of the idle trigger task, wherein for any idle trigger task, the next task of the idle trigger task represents a task which uses GPU resources earliest after the use end time of the target GPU of the idle trigger task in the process.
5. The method according to claim 4, wherein the method further comprises:
for any task in the process, determining the task as an idle trigger task in response to the target GPU use ending time of the task not being between the target GPU use starting time and the target GPU use ending time of other tasks in the process, and the target GPU use ending time of the task not being the same as the target GPU use ending time of other tasks whose target GPU use starting time is earlier than the task.
6. The method according to any one of claims 1 to 5, wherein the obtaining, in the target sampling period of the GPU utilization rate, the target GPU use start time of each task in the process includes:
for any task in the process, responding to the fact that the GPU use starting time of the task is later than or equal to the starting time of the target sampling period, determining the GPU use starting time of the task as the target GPU use starting time of the task, wherein the GPU use starting time of the task represents the time when the task starts to use GPU resources;
Or,
for any task in the process, responding to the fact that the GPU use starting time of the task is earlier than the starting time of the target sampling period, and determining the starting time of the target sampling period as the target GPU use starting time of the task.
7. The method according to any one of claims 1 to 5, wherein the obtaining, in the target sampling period of the GPU utilization rate, the target GPU use end time of each task in the process includes:
for any task in the process, responding to the fact that the GPU use ending time of the task is earlier than or equal to the ending time of the target sampling period, determining the GPU use ending time of the task as the target GPU use ending time of the task, wherein the GPU use ending time of the task represents the time when the task finishes using GPU resources;
or,
and for any task in the process, responding to the fact that the GPU use end time of the task is later than the end time of the target sampling period, and determining the end time of the target sampling period as the target GPU use end time of the task.
8. The method according to any one of claims 1 to 5, further comprising:
And responding to the end of any task in the process, and storing the target GPU use starting time and the target GPU use ending time of the task.
9. The method of claim 8, wherein the tasks in the process are stored in order according to the starting time of the tasks.
10. The method according to any one of claims 1 to 5, wherein determining the GPU utilization of the process according to the GPU total utilization time and the length of the target sampling period comprises:
and determining the ratio of the total utilization time of the GPU to the length of the target sampling period as the GPU utilization rate of the process.
11. A process-oriented GPU utilization determining device, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring a target GPU use start time and a target GPU use end time of each task in a process in a target sampling period of GPU utilization rate, wherein the target GPU use start time of any task represents the time when the task starts to use GPU resources in the target sampling period, and the target GPU use end time of the task represents the time when the task ends to use GPU resources in the target sampling period;
The first determining module is used for determining the total utilization time of the GPU of each task in the target sampling period according to the target GPU use starting time and the target GPU use ending time of each task;
and the second determining module is used for determining the GPU utilization rate of the process according to the total GPU utilization time and the length of the target sampling period.
12. An electronic device, comprising:
one or more processors;
a memory for storing executable instructions;
wherein the one or more processors are configured to invoke the memory-stored executable instructions to perform the method of any of claims 1 to 10.
13. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 10.
CN202311507693.9A 2023-11-13 2023-11-13 Method, device, equipment and medium for determining GPU utilization rate of process Pending CN117453486A (en)


Publications (1)

Publication Number Publication Date
CN117453486A (en) 2024-01-26


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977302A (en) * 2017-11-24 2018-05-01 杭州迪普科技股份有限公司 CPU usage output method and device
CN109002377A (en) * 2018-07-26 2018-12-14 郑州云海信息技术有限公司 Processor detection method, processor detection device and computer equipment
CN111176966A (en) * 2019-12-26 2020-05-19 京信通信系统(中国)有限公司 Method, device and equipment for determining CPU utilization rate and storage medium
CN111427758A (en) * 2020-03-17 2020-07-17 北京百度网讯科技有限公司 Task calculation amount determining method and device and electronic equipment
CN115220921A (en) * 2022-09-19 2022-10-21 浙江大华技术股份有限公司 Resource scheduling method, image processor, image pickup device, and medium
CN115309507A (en) * 2022-08-08 2022-11-08 科东(广州)软件科技有限公司 Method, device, equipment and medium for calculating CPU resource occupancy rate
CN116740248A (en) * 2023-08-08 2023-09-12 摩尔线程智能科技(北京)有限责任公司 Control method, chip and device, controller, equipment and medium for distributing image blocks

Similar Documents

Publication Publication Date Title
CN109523187B (en) Task scheduling method, device and equipment
WO2020207454A1 (en) Information pushing method and device
US10339899B2 (en) Character string display method and apparatus
US11514263B2 (en) Method and apparatus for processing image
CN110162338B (en) Operation method, device and related product
CN111177433B (en) Method and apparatus for parallel processing of information
CN112925587A (en) Method and apparatus for initializing applications
CN107526623B (en) Data processing method and device
CN111309416B (en) Information display method, device and equipment of application interface and readable medium
CN112506581A (en) Method and device for rendering small program, electronic equipment and readable storage medium
CN110673886B (en) Method and device for generating thermodynamic diagrams
US11474924B2 (en) Graphics processing unit performance analysis tool
CN110825461B (en) Data processing method and device
CN117453486A (en) Method, device, equipment and medium for determining GPU utilization rate of process
CN111124523A (en) Method and apparatus for initializing applications
CN113792869B (en) Video processing method and device based on neural network chip and electronic equipment
CN115391204A (en) Test method and device for automatic driving service, electronic equipment and storage medium
CN114528433A (en) Template selection method and device, electronic equipment and storage medium
CN110083357B (en) Interface construction method, device, server and storage medium
CN111770385A (en) Card display method and device, electronic equipment and medium
CN116402674B (en) GPU command processing method and device, electronic equipment and storage medium
CN116466958B (en) Construction method and device of An Zhuo Rongqi, electronic equipment and storage medium
CN113312131B (en) Method and device for generating and operating marking tool
CN115033366A (en) Task scheduling method, device, electronic equipment, storage medium and program product
CN113986388B (en) Program set loading method, system, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination