US20210192674A1 - Methods and apparatus to improve operation of a graphics processing unit - Google Patents
- Publication number
- US20210192674A1 (U.S. application Ser. No. 17/096,590)
- Authority
- US
- United States
- Prior art keywords
- gpu
- instructions
- kernel
- records
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/302—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
- G06F11/3419—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment by assessing time
- G06F11/3423—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment by assessing time where the assessed time is active or idle time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
- G06F11/3433—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment for load management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/81—Threshold
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/865—Monitoring of software
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Quality & Reliability (AREA)
- Computer Hardware Design (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Debugging And Monitoring (AREA)
Abstract
Methods, apparatus, systems, and articles of manufacture are disclosed to improve operation of a graphics processing unit (GPU). An example apparatus includes an instruction generator to insert profiling instructions into a GPU kernel to generate an instrumented GPU kernel to be executed by a GPU, a trace analyzer to generate an occupancy map associated with the GPU executing the instrumented GPU kernel, a parameter calculator to determine one or more operating parameters of the GPU based on the occupancy map, and a processor optimizer to invoke a GPU driver to adjust a workload of the GPU based on the one or more operating parameters.
Description
- This patent arises from a continuation of U.S. patent application Ser. No. 16/129,525 (now U.S. Pat. No. ______), which was filed on Sep. 12, 2018. U.S. patent application Ser. No. 16/129,525 is hereby incorporated herein by reference in its entirety. Priority to U.S. patent application Ser. No. 16/129,525 is hereby claimed.
- This disclosure relates generally to computers and, more particularly, to methods and apparatus to improve operation of a graphics processing unit (GPU).
- Software developers seek to develop code that may be executed as efficiently as possible. To better understand code execution, profiling is used to measure different code execution statistics such as, for example, execution time, memory consumption, etc. In some examples, profiling is implemented by insertion of profiling instructions into the code. Such profiling instructions can be used to store and analyze information about the code execution.
- FIG. 1 is a block diagram illustrating an example binary instrumentation engine inserting profiling instructions into a GPU kernel in accordance with teachings of this disclosure.
- FIG. 2 depicts an example trace buffer generated in accordance with teachings of this disclosure.
- FIG. 3 is a block diagram of the example binary instrumentation engine of FIG. 1 in accordance with teachings of this disclosure.
- FIG. 4 depicts an example occupancy map generated in accordance with teachings of this disclosure.
- FIG. 5 is a flowchart representative of machine readable instructions which may be executed to implement the example binary instrumentation engine of FIGS. 1 and 3 to improve operation of a GPU.
- FIG. 6 is a flowchart representative of machine readable instructions which may be executed to implement the example binary instrumentation engine of FIGS. 1 and 3 to process the example trace buffer of FIG. 2 to generate the example occupancy map of FIG. 4.
- FIG. 7 is a block diagram of an example processing platform structured to execute the instructions of FIGS. 5-6 to implement the example binary instrumentation engine of FIGS. 1 and 3.
- The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts.
- A graphics processing unit (GPU) is an electronic circuit that executes instructions to modify contents of a buffer. Typically, the buffer is a frame buffer that is used to output information to a display device (e.g., a monitor, a touchscreen, etc.). Recently, GPUs have been used for tasks that are not necessarily related to generating output images.
- GPUs execute instruction packages commonly referred to as kernels, compute kernels, and/or shaders. Typically, the term shader is used when a kernel is used for graphics-related tasks such as, for example, DirectX, Open Graphics Library (OpenGL) tasks, pixel shader/shading tasks, vertex shader/shading tasks, etc. The term kernel is used for general purpose computational tasks such as, for example, Open Computing Language (OpenCL) tasks, C for Media tasks, etc. While example approaches disclosed herein use the term kernel, such approaches are equally well suited to be used on shaders. Such kernels roughly correspond to an inner loop of a program that is iterated multiple times. As used herein, a GPU kernel refers to a kernel in binary format. A GPU programmer develops kernels/shaders in a high-level programming language such as, for example, a High-Level Shader Language (HLSL), OpenCL, etc., and then compiles the code into a binary version of the kernel which is then executed by a GPU. Example approaches disclosed herein are applied to the binary version of the kernel.
- Developers want to create the most computationally efficient kernels to perform their desired task. To gain a better understanding of the performance of a kernel, developers use a profiler and/or profiling system to collect operational statistics (e.g., performance statistics) of the kernel. Profilers insert additional instructions into the kernel to collect such operational statistics. However, prior profilers and/or profiling systems are used to determine occupancy of a central processing unit (CPU). Prior profilers and/or profiling systems determine the occupancy of the CPU because an operating system running on the CPU provides visibility of the CPU utilization for each of the cores and threads of the CPU. GPUs, however, do not run an operating system and, therefore, lack the ability to measure busy and idle time intervals at the granularity of the execution units and hardware threads of the GPUs.
- Examples disclosed herein improve operation of a GPU by measuring operating parameters of the GPU and determining whether to adjust operation of the GPU based on the measured operating parameters. In some disclosed examples, one or more processors included in a central processing unit (CPU) determine one or more operating parameters (e.g., operational statistics, performance statistics, etc.) associated with the GPU including at least one of a busy time parameter, an idle time parameter, an occupancy time parameter, or a utilization parameter. As used herein, a busy time of the GPU refers to a time interval, a time duration, etc., when a hardware thread of the GPU is busy executing a computational task. As used herein, an idle time of the GPU refers to a time interval, a time duration, etc., when a hardware thread of the GPU is not executing a computational task. As used herein, an occupancy of the GPU refers to a set of busy and/or idle time intervals associated with an execution unit and/or hardware thread of the GPU during execution of one or more computational tasks. As used herein, utilization of the GPU refers to a ratio of the busy time to a total time associated with the execution of the one or more computational tasks.
- In some disclosed examples, the CPU inserts additional instructions into kernels to collect information corresponding to the one or more operating parameters associated with the kernels. Additional instructions may include profiling instructions to instruct the GPU to record and/or otherwise store timestamps associated with a start time, an end time, etc., of an execution of the kernel. For example, when the GPU executes a kernel that includes the additional instructions, the GPU may store a start time associated with starting an execution of the kernel and an end time associated with ending the execution of the kernel. The GPU may store the timestamps and a corresponding hardware thread identifier in a trace buffer in memory. In such examples, the CPU may obtain the trace buffer and determine the one or more operating parameters based on information included in the trace buffer. In some disclosed examples, the CPU can determine that the GPU can execute additional computational tasks, fewer additional tasks, etc., based on the one or more operating parameters and, thus, improve operation of the GPU, scheduling operations of the CPU, etc.
- FIG. 1 is a block diagram illustrating an example binary instrumentation engine 100 inserting example profiling instructions 102 into a first example GPU kernel 104 to generate a second example GPU kernel 106 to be executed by an example GPU 108. The second GPU kernel 106 is an instrumented GPU kernel. The GPU 108 may use the profiling instructions 102 to generate example profile data 110. The profile data 110 corresponds to data generated by the GPU 108 in response to executing the profiling instructions 102 included in the second kernel 106. The binary instrumentation engine 100 may obtain and analyze the profile data 110 to better understand the execution of the second kernel 106 by the GPU 108. The binary instrumentation engine 100 may determine to adjust operation of the GPU 108 based on analyzing the profile data 110.
- In some examples, the profiling instructions 102 create and/or store operational information such as, for example, counters, timestamps, etc., that can be used to better understand the execution of a kernel. For example, the profiling instructions 102 may profile and/or otherwise characterize an execution of the second kernel 106 by the GPU 108. In some examples, the profiling instructions 102 are inserted at a first address (e.g., a first position) of a kernel (e.g., the beginning of the first kernel 104) to initialize variables used for profiling. In some examples, the profiling instructions 102 are inserted at locations intermediate the original instructions (e.g., intermediate the instructions from the first kernel 104). In some examples, the profiling instructions 102 are inserted at a second address (e.g., a second position) of the kernel (e.g., after the instructions from the first kernel 104) and, when executed, cause the GPU 108 to collect and/or otherwise store metrics that are accessible by the binary instrumentation engine 100. In some examples, the profiling instructions 102 are inserted at the end of the kernel (e.g., the first kernel 104) to perform cleanup (e.g., freeing memory locations, etc.). However, such profiling instructions 102 may additionally or alternatively be inserted at any location or position and in any order.
- In the illustrated example of FIG. 1, an example CPU 112 includes the binary instrumentation engine 100, an example application 114, an example GPU driver 116, and an example GPU compiler 118. The application 114 may be used to display an output from the GPU 108 when the GPU 108 executes graphics-related tasks such as, for example, DirectX tasks, OpenGL tasks, pixel shader/shading tasks, vertex shader/shading tasks, etc. Additionally or alternatively, the application 114 may be used to display and/or otherwise process outputs from the GPU 108 when the GPU 108 executes non-graphics related tasks. Additionally or alternatively, the application 114 may be used by a GPU programmer to facilitate development of kernels/shaders in a high-level programming language such as, for example, HLSL, OpenCL, etc.
- In FIG. 1, the application 114 transmits tasks (e.g., computational tasks, graphics-related tasks, non-graphics related tasks, etc.) to the GPU driver 116. The GPU driver 116 receives the tasks and instructs the GPU compiler 118 to compile code associated with the tasks into a binary version (e.g., a binary format corresponding to binary code, binary instructions, machine readable instructions, etc.) to generate the first kernel 104. The GPU compiler 118 transmits the compiled binary version of the first kernel 104 to the GPU driver 116.
- The binary instrumentation engine 100 of FIG. 1 obtains the first kernel 104 (e.g., in a binary format) from the GPU driver 116. The binary instrumentation engine 100 instruments the first kernel 104 by inserting additional instructions such as the profiling instructions 102 into the first kernel 104. As used herein, an instrumented kernel refers to a kernel that includes profiling and/or tracing instructions to be executed to measure statistics or monitor an execution of the kernel. For example, the binary instrumentation engine 100 may modify the first kernel 104 to create an instrumented GPU kernel such as the second kernel 106. That is, the binary instrumentation engine 100 creates the second kernel 106 without executing any compilation of the GPU kernel. In this manner, already-compiled GPU kernels can be instrumented and/or profiled. The second kernel 106 is passed to the GPU 108 via example memory 120. For example, the binary instrumentation engine 100 may transmit the second kernel 106 to the GPU driver 116, which, in turn, stores the second kernel 106 in the memory 120 for retrieval by the GPU 108.
- The GPU 108 uses the profiling instructions 102 of FIG. 1 to generate the profile data 110. In FIG. 1, the profiling instructions 102 include a first example instruction 102 a of "A=RDTSC" inserted at a first position, where the first instruction 102 a corresponds to a read (RD) operation of a register (e.g., a hardware register) associated with a time-stamp counter (TSC) and a store operation of a first value of the register in a variable A. The profiling instructions 102 include a second example instruction 102 b of "B=RDTSC" inserted at a second position, where the second instruction 102 b corresponds to reading the register associated with the TSC and storing a second value of the register in a variable B. The profiling instructions 102 include a third example instruction 102 c of "Trace (A, B, HW-thread-ID)" at a third position, where the third instruction 102 c corresponds to generating a trace and storing the variables A, B, and an identifier (ID) of a hardware (HW) thread (HW-THREAD-ID) in the trace. For example, the trace may refer to a sequence of data records that are written (e.g., dynamically written) into a memory buffer (referred to herein as a trace buffer).
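- The sequence above can be summarized in code. The following C++-flavored sketch is illustrative only: the disclosure operates on compiled GPU kernel binaries rather than source code, and read_timestamp_counter, hardware_thread_id, append_trace_record, and original_kernel_body are hypothetical stand-ins for the RDTSC reads, the HW-THREAD-ID query, the trace-write primitive, and the GPU instructions 122.

```cpp
#include <cstdint>

// Hypothetical device-side primitives standing in for the injected
// binary instructions; these are not part of any real GPU API.
uint64_t read_timestamp_counter();  // the "RDTSC" read
uint32_t hardware_thread_id();      // the "HW-THREAD-ID" query
void append_trace_record(uint64_t a, uint64_t b,
                         uint32_t tid);  // "Trace(A, B, HW-thread-ID)"
void original_kernel_body();             // the original GPU instructions 122

// Shape of the instrumented kernel (the second kernel 106).
void instrumented_kernel() {
  uint64_t a = read_timestamp_counter();  // instruction 102 a: A=RDTSC
  original_kernel_body();                 // original work being profiled
  uint64_t b = read_timestamp_counter();  // instruction 102 b: B=RDTSC
  append_trace_record(a, b, hardware_thread_id());  // instruction 102 c
}
```

- Because the two timestamp reads bracket the original kernel body, the pair (A, B) bounds the busy interval of the hardware thread that executed the kernel.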
- In FIG. 1, the HW-THREAD-ID corresponds to a hardware thread that executed the second kernel 106 including example GPU instructions 122 disposed between the first instruction 102 a and the second instruction 102 b. In response to executing the profiling instructions 102 and the GPU instructions 122, the GPU 108 stores the trace that includes information included in the variables A, B, and HW-THREAD-ID in an example trace buffer 124 included in the profile data 110. The trace buffer 124 includes example records 126. For example, a first one of the records 126 in FIG. 1 is [A1, B1, 7], where A1 corresponds to a first timestamp, B1 corresponds to a second timestamp, and 7 corresponds to a hardware thread identifier, where the second timestamp is after the first timestamp. The first timestamp (A1) of the first one of the records 126 may correspond to when a hardware thread with a hardware thread identifier of 7 begins executing the instrumented GPU kernel 106. The second timestamp (B1) of the first one of the records 126 may correspond to when the hardware thread with the hardware thread identifier of 7 concludes executing the instrumented GPU kernel 106.
- In the illustrated example of FIG. 1, the memory 120 includes one or more kernels such as the second kernel 106, the profile data 110, and example GPU data 128. Alternatively, the memory 120 may not store one or more kernels. The data 128 corresponds to data generated by the GPU 108 in response to executing at least the second kernel 106. For example, the data 128 may correspond to graphics-related data, output information to a display device, etc.
- The profile data 110 includes the trace buffer 124, which is an example implementation of an example trace buffer 200 depicted in the illustrated example of FIG. 2. The trace buffer 200 of FIG. 2 represents an example format that may be used by the GPU 108 to generate the trace buffer 124 of FIG. 1. In FIG. 2, the trace buffer 200 is a buffer that includes a plurality of example records 202. In FIG. 2, the records 202 may correspond to the records 126 of FIG. 1. For example, a first one of the records 202 of FIG. 2 may correspond to the first one of the records 126 of FIG. 1. Each of the records 202 includes example data fields (e.g., data entries) 204, 206, 208 including a first example data field 204, a second example data field 206, and a third example data field 208. Alternatively, one or more of the records 202 may include fewer or more data fields than depicted in FIG. 2. In FIG. 2, the first data field 204 is a first data storage unit that stores a first value of a timestamp counter (A) associated with a hardware thread executing the second kernel 106. The second data field 206 is a second data storage unit that stores a second value of the timestamp counter (B), where the second value is greater than the first value. For example, the first value may correspond to a first time and the second value may correspond to a second time, where the second time is after or later than the first time. In FIG. 2, the third data field 208 is a third data storage unit that stores an identifier of the hardware thread (THREAD ID).
- In the illustrated example of FIG. 2, the trace buffer 200 is generated in an atomic manner. For example, the GPU 108 may generate the trace buffer 200 sequentially, where a first one of the records 202 is adjacent to a second one of the records 202 and the first one of the records 202 is generated prior to the second one of the records 202. The GPU 108 generates the records 202 from different hardware threads, and those records are intermixed in the trace buffer 200. For example, the trace buffer 200 may not be stored in chronological order, in order of hardware thread identifier, etc. However, two records k and m having the same hardware thread identifier have the following characteristic: if k < m, then Ak < Bk < Am < Bm, where Ak and Bk are the start and end timestamps of record k.
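- As a concrete illustration of the record layout and the ordering property described above, the following C++ sketch defines a record with the three data fields 204, 206, 208 and checks the per-thread invariant; the type and function names are hypothetical rather than taken from the disclosure.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One trace record: data fields 204 (A), 206 (B), and 208 (thread ID).
struct TraceRecord {
  uint64_t start;      // A: first timestamp-counter value
  uint64_t end;        // B: second timestamp-counter value
  uint32_t thread_id;  // hardware thread identifier
};

// Check that records k < m sharing a hardware thread identifier satisfy
// Ak < Bk < Am < Bm, i.e., the busy intervals of one hardware thread
// never overlap and appear in execution order.
bool intervals_well_ordered(const std::vector<TraceRecord>& trace) {
  for (std::size_t k = 0; k < trace.size(); ++k) {
    if (trace[k].start >= trace[k].end) return false;  // Ak < Bk
    for (std::size_t m = k + 1; m < trace.size(); ++m) {
      if (trace[k].thread_id == trace[m].thread_id &&
          trace[k].end >= trace[m].start) {
        return false;  // Bk < Am must hold within the same thread
      }
    }
  }
  return true;
}
```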
- Turning back to FIG. 1, the binary instrumentation engine 100 retrieves (e.g., iteratively retrieves, periodically retrieves, etc.) the trace buffer 124 from the memory 120. In some examples, the binary instrumentation engine 100 determines one or more operating parameters associated with the second kernel 106, and/or, more generally, the GPU 108. For example, the binary instrumentation engine 100 may determine a busy time parameter, an idle time parameter, an occupancy time parameter, and/or a utilization parameter. In some examples, the binary instrumentation engine 100 adjusts operation of the GPU 108 based on the one or more operating parameters. For example, the binary instrumentation engine 100 may instruct the CPU 112 to schedule an increased quantity of instructions to be performed by the GPU 108, a decreased quantity of instructions to be performed by the GPU 108, etc., based on the one or more operating parameters.
- FIG. 3 is a block diagram of the binary instrumentation engine 100 of FIG. 1 to improve operation of the GPU 108 of FIG. 1. The binary instrumentation engine 100 instruments binary shaders/kernels prior to sending them to the GPU 108. The binary instrumentation engine 100 collects traces including timestamps associated with when the instrumented code is executed by the GPU 108. The binary instrumentation engine 100 generates an occupancy map and/or one or more operating parameters based on the collected traces, where the occupancy map and/or the one or more operating parameters may be used to improve operation of the GPU 108, the CPU 112, etc. In the illustrated example of FIG. 3, the binary instrumentation engine 100 includes an example instruction generator 300, an example trace analyzer 310, an example parameter calculator 320, and an example processor optimizer 330.
- In the illustrated example of FIG. 3, the binary instrumentation engine 100 includes the instruction generator 300 to instrument kernels such as the first kernel 104 of FIG. 1. For example, the instruction generator 300 may access the first kernel 104 (e.g., access the first kernel 104 from memory included in the CPU 112). The instruction generator 300 may instrument the first kernel 104 to generate the second kernel 106 of FIG. 1. For example, the instruction generator 300 may generate and insert binary code associated with the profiling instructions 102 of FIG. 1 into the first kernel 104 to generate the second kernel 106. The instruction generator 300 includes means to generate binary code (e.g., binary instructions, machine readable instructions, etc.) based on the profiling instructions 102. The instruction generator 300 includes means to insert the generated binary code into the first kernel 104 at one or more places or positions within the first kernel 104 to generate the second kernel 106.
- In the illustrated example of FIG. 3, the binary instrumentation engine 100 includes the trace analyzer 310 to retrieve and/or otherwise collect the profile data 110 from the memory 120 of FIG. 1. The trace analyzer 310 includes means to extract the trace buffer 124 from the profile data 110. The trace analyzer 310 processes the trace buffer 124 by traversing the trace buffer 124 from a first position (e.g., a beginning) of the trace buffer 124 to a second position (e.g., an end) of the trace buffer 124. For example, a first one of the records 202 of FIG. 2 at the first position may have a lower hardware thread ID compared to a second one of the records 202 at the second position. In other examples, the first one of the records 202 at the first position may have lower timestamps compared to the second one of the records 202 at the second position.
- In some examples, the trace analyzer 310 includes means to group the records 202 into one or more sub-traces based on the hardware thread identifiers. For example, the trace analyzer 310 may sort and/or otherwise organize the records 202 into subsets or groups having the same hardware thread ID. In such examples, the trace analyzer 310 may generate new indices for ones of the records 202 that have the same hardware thread ID. For example, for two records k and m having the same hardware thread identifier where k < m, the trace analyzer 310 may assign a new index of k′ to the record k and a new index of m′ to the record m. For example, if a first one of the records 202 has an index of 24 (e.g., Record 24) and a hardware thread identifier of 234 and a second one of the records 202 has an index of 37 (e.g., Record 37) and the hardware thread identifier of 234, the trace analyzer 310 may assign an index of 0 to the first one of the records 202 and an index of 1 to the second one of the records 202.
- In some examples, the trace analyzer 310 traverses each of the sub-traces from ones of the records 202 having the lower indices to the ones of the records 202 having the higher indices. The trace analyzer 310 may generate a timeline (e.g., an occupancy timeline) associated with each of the records 202 in the sub-traces. For example, the trace analyzer 310 may select a first one of the records 202 in a sub-trace of interest, where the first one of the records 202 has timestamps represented by [A,B], where A refers to the first data field 204 and B refers to the second data field 206 of FIG. 2. The trace analyzer 310 may determine that a time interval spanning time A to time B is busy whereas the time outside of the time interval is idle. The trace analyzer 310 may generate (e.g., iteratively generate) timelines for each of the records 202 in one or more sub-traces of interest. The trace analyzer 310 may generate an occupancy map such as an example occupancy map 400 depicted in FIG. 4 based on the one or more timelines.
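- A minimal sketch of this grouping and timeline construction, reusing the hypothetical TraceRecord type from the earlier sketch: records are bucketed by hardware thread identifier to form the sub-traces (the position within each bucket plays the role of the re-assigned index), and each record contributes one busy interval [A,B] to its thread's timeline.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// One busy interval [A,B] on a hardware thread's timeline.
struct Interval {
  uint64_t begin;  // A
  uint64_t end;    // B
};

// Occupancy map: for each hardware thread, the busy intervals on its
// timeline; time between intervals is idle.
using OccupancyMap = std::map<uint32_t, std::vector<Interval>>;

OccupancyMap build_occupancy_map(const std::vector<TraceRecord>& trace) {
  OccupancyMap occupancy;
  // Bucketing by thread ID forms the sub-traces; vector order supplies
  // the new per-thread indices 0, 1, 2, ...
  for (const TraceRecord& r : trace) {
    occupancy[r.thread_id].push_back({r.start, r.end});
  }
  return occupancy;
}
```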
- In the illustrated example of FIG. 3, the binary instrumentation engine 100 includes the parameter calculator 320 to determine one or more operating parameters associated with the GPU 108 of FIG. 1. In some examples, the parameter calculator 320 includes means to determine a busy time parameter, an idle time parameter, an occupancy time parameter, and/or a utilization parameter associated with the GPU 108. In some examples, the parameter calculator 320 determines the one or more operating parameters based on the occupancy map 400 depicted in FIG. 4. For example, the parameter calculator 320 may determine a busy time parameter for a hardware thread by determining a quantity of time that the hardware thread is busy during a time period. In other examples, the parameter calculator 320 may calculate an idle time parameter for the hardware thread by determining a quantity of time that the hardware thread is idle during the time period. In yet other examples, the parameter calculator 320 may determine a utilization parameter by calculating a ratio of the busy time parameter to a total quantity of time associated with a time duration of interest.
- In some examples, the parameter calculator 320 determines aggregate operating parameters that are based on a quantity of hardware threads. For example, the parameter calculator 320 may calculate an aggregate utilization parameter by calculating a ratio of a quantity of busy hardware threads to a total quantity of hardware threads for a time duration or time period of interest.
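- Under the same assumptions, these parameters reduce to simple arithmetic over the occupancy map sketched above; the helpers below are illustrative, not a fixed implementation.

```cpp
// Busy time of one hardware thread: the sum of its busy-interval lengths.
uint64_t busy_time(const std::vector<Interval>& timeline) {
  uint64_t busy = 0;
  for (const Interval& iv : timeline) busy += iv.end - iv.begin;
  return busy;
}

// Utilization of one hardware thread over a measured period: busy/total.
double utilization(const std::vector<Interval>& timeline, uint64_t period) {
  return period ? static_cast<double>(busy_time(timeline)) / period : 0.0;
}

// Aggregate utilization: the fraction of the GPU's hardware threads that
// were busy at all during the measured period.
double aggregate_utilization(const OccupancyMap& occupancy,
                             uint32_t total_hw_threads) {
  if (total_hw_threads == 0) return 0.0;
  uint32_t busy_threads = 0;
  for (const auto& entry : occupancy) {
    if (!entry.second.empty()) ++busy_threads;
  }
  return static_cast<double>(busy_threads) / total_hw_threads;
}
```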
- In the illustrated example of FIG. 3, the binary instrumentation engine 100 includes the processor optimizer 330 to adjust operation of the CPU 112 and/or the GPU 108 based on the occupancy map, the one or more operating parameters, etc. In some examples, the processor optimizer 330 transmits the one or more operating parameters to the application 114 of FIG. 1. For example, the processor optimizer 330 may report and/or otherwise communicate a hardware thread utilization, an execution unit utilization, etc., associated with the GPU 108 to developers (e.g., software developers, processor designers, GPU engineers, etc.) with a performance analysis tool, a graphical user interface included in the performance analysis tool, etc. In such examples, the developers may improve their software by improving, for example, load balance of computational tasks, provisioning different data distribution among hardware threads, execution units, etc., of the GPU 108.
- In some examples, the processor optimizer 330 includes means to improve and/or otherwise optimize resource scheduling (e.g., hardware scheduling, memory allocation, etc.) by the CPU 112. For example, developers may develop and/or improve hardware scheduling functions or mechanisms by analyzing the one or more operating parameters associated with the GPU 108. In other examples, the processor optimizer 330 invokes hardware, software, firmware, and/or any combination of hardware, software, and/or firmware (e.g., the GPU driver 116, the CPU 112, etc.) to improve operation of the GPU 108. For example, the processor optimizer 330 may generate and transmit an instruction (e.g., a command, machine readable instructions, etc.) to the GPU driver 116, the CPU 112, etc., of FIG. 1. In response to receiving and/or otherwise executing the instruction, the GPU driver 116, the CPU 112, etc., is invoked to determine whether to adjust an operation of the GPU 108. For example, the GPU driver 116, and/or, more generally, the CPU 112 may be called to adjust scheduling of computational tasks, jobs, workloads, etc., to be executed by the GPU 108.
- In some examples, the processor optimizer 330 invokes the GPU driver 116 to analyze one or more operating parameters based on an occupancy map. For example, the GPU driver 116 (or the CPU 112) may compare an operating parameter to an operating parameter threshold (e.g., a busy threshold, an idle threshold, a utilization threshold, etc.). For example, when invoked, the GPU driver 116 (or the CPU 112) may determine that a utilization of the GPU 108 is 95%, corresponding to the GPU 108 being busy 95% of a measured time interval. The GPU driver 116 may compare the utilization of 95% to a utilization threshold of 80% and determine that the GPU 108 should not accept more computational tasks based on the utilization satisfying the utilization threshold (e.g., the utilization is greater than the utilization threshold). As used herein, a job or a workload may refer to a set of one or more computational tasks to be executed by one or more hardware threads.
- In other examples, when invoked by the processor optimizer 330, the GPU driver 116 (or the CPU 112) may determine that a utilization of the GPU 108 is 40%. The GPU driver 116 may compare the utilization of 40% to the utilization threshold of 80% and determine that the GPU 108 has available bandwidth to execute more computational tasks. For example, the GPU driver 116 may determine that the utilization of 40% does not satisfy the utilization threshold of 80%. In response to determining that the utilization of the GPU 108 does not satisfy the utilization threshold, the GPU driver 116 may adjust or modify a schedule of resources to facilitate tasks to be executed by the GPU 108. For example, the GPU driver 116 may increase a quantity of computational tasks that the GPU 108 is currently executing and/or will be executing based on the utilization parameter.
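- The decision logic in the 95% and 40% examples above amounts to a threshold comparison, sketched below; the 80% threshold value mirrors those examples, while the enum and function names are illustrative choices rather than part of the disclosure.

```cpp
// Utilization threshold from the examples above; illustrative value.
constexpr double kUtilizationThreshold = 0.80;

enum class WorkloadAction { kHoldSteady, kScheduleMoreTasks };

// Utilization at or above the threshold "satisfies" it: the GPU should
// not accept more computational tasks. Below it, the GPU has available
// bandwidth and its workload can be increased.
WorkloadAction decide_workload(double utilization) {
  return utilization >= kUtilizationThreshold
             ? WorkloadAction::kHoldSteady
             : WorkloadAction::kScheduleMoreTasks;
}
```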
- While an example manner of implementing the binary instrumentation engine 100 of FIG. 1 is illustrated in FIG. 3, one or more of the elements, processes, and/or devices illustrated in FIG. 3 may be combined, divided, re-arranged, omitted, eliminated, and/or implemented in any other way. Further, the example instruction generator 300, the example trace analyzer 310, the example parameter calculator 320, the example processor optimizer 330, and/or, more generally, the example binary instrumentation engine 100 of FIG. 1 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example instruction generator 300, the example trace analyzer 310, the example parameter calculator 320, the example processor optimizer 330, and/or, more generally, the example binary instrumentation engine 100 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example instruction generator 300, the example trace analyzer 310, the example parameter calculator 320, and/or the example processor optimizer 330 is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc., including the software and/or firmware. Further still, the example binary instrumentation engine 100 of FIG. 1 may include one or more elements, processes, and/or devices in addition to, or instead of, those illustrated in FIG. 3, and/or may include more than one of any or all of the illustrated elements, processes, and devices. As used herein, the phrase "in communication," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediary components, and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.
- FIG. 4 depicts an example occupancy map 400 generated by the binary instrumentation engine 100 of FIGS. 1 and 3. For example, the trace analyzer 310 of FIG. 3 may generate the occupancy map 400 based on one or more sub-traces included in the trace buffer 200 processed by the trace analyzer 310. In FIG. 4, the binary instrumentation engine 100 organized the records 202 into example sub-traces 402, 404, 406, 408, 410 including a first example sub-trace 402, a second example sub-trace 404, a third example sub-trace 406, a fourth example sub-trace 408, and a fifth example sub-trace 410. For example, a sub-trace may refer to a sequence of one or more records corresponding to the same hardware thread identifier.
- In the illustrated example of FIG. 4, the first, third, and fourth sub-traces 402, 406, 408 each include one of the records 202. In FIG. 4, the second and fifth sub-traces 404, 410 each include more than one of the records 202. For example, a first one and a second one of the records 202 included in the second sub-trace 404 have the same hardware thread ID of 1. Alternatively, the first through fifth sub-traces 402, 404, 406, 408, 410 may include any quantity of the records 202.
- In FIG. 4, the binary instrumentation engine 100 generates the occupancy map 400 by processing the records 202 included in the sub-traces 402, 404, 406, 408, 410. For example, the trace analyzer 310 may map the one or more records 202 included in the sub-traces 402, 404, 406, 408, 410 to an example time interval (e.g., a timeline, an occupancy timeline, a time duration, etc.) 412 of the occupancy map 400. For example, the trace analyzer 310 may map the record 202 of the first sub-trace 402 to the timeline 412 of the occupancy map defined by [A,B], where A corresponds to a first timestamp of hardware thread ID 1 and B corresponds to a second timestamp of the hardware thread ID 1, where the second timestamp is after the first timestamp. The time duration spanning from the first timestamp until the second timestamp corresponds to the timeline 412. For example, the trace analyzer 310 may map timelines associated with the records 202 (e.g., the timeline 412) to generate the occupancy map 400, where the timelines represent time durations during which the corresponding hardware threads are busy. In FIG. 4, the timeline 412 has a starting point at a first position corresponding to the first timestamp and has an end point at a second position corresponding to the second timestamp. The trace analyzer 310 represents, denotes, marks, etc., the time interval between the starting point and the end point as busy (e.g., represented in FIG. 4 as a rectangle) and represents the time interval outside of the starting point and the end point as idle (e.g., represented by empty space).
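- The busy/idle marking just described can also be read as a point query against the occupancy map: a hardware thread is busy at time t exactly when t falls inside one of its mapped intervals, and idle otherwise. A hypothetical helper, continuing the earlier sketches:

```cpp
// True if the hardware thread owning this timeline is busy at time t,
// i.e., t falls inside one of its busy intervals [A,B).
bool is_busy_at(const std::vector<Interval>& timeline, uint64_t t) {
  for (const Interval& iv : timeline) {
    if (t >= iv.begin && t < iv.end) return true;
  }
  return false;  // outside every interval: idle
}
```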
- In some examples, the trace analyzer 310 updates (e.g., iteratively updates, continuously updates, etc.) the occupancy map 400 based on (continuously) obtaining and (continuously) processing the trace buffer 200. In some examples, the parameter calculator 320 generates the one or more operating parameters based on the occupancy map 400. For example, the parameter calculator 320 may determine a utilization of hardware thread identifier 0 included in the GPU 108 by calculating a ratio of a busy time of the hardware thread identifier 0 with respect to a measured time period. In other examples, the parameter calculator 320 may determine an aggregate utilization of the GPU 108 by calculating a ratio of a first quantity of hardware threads that are busy to a second quantity of total hardware threads of the GPU 108 for a measured time period.
- Flowcharts representative of example hardware logic, machine readable instructions, hardware implemented state machines, and/or any combination thereof for implementing the binary instrumentation engine 100 of FIGS. 1 and 3 are shown in FIGS. 5-6. The machine readable instructions may be an executable program or portion of an executable program for execution by a computer processor such as the processor 712 shown in the example processor platform 700 discussed below in connection with FIG. 7. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 712, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 712 and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated in FIGS. 5-6, many other methods of implementing the example binary instrumentation engine 100 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware.
- As mentioned above, the example processes of FIGS. 5-6 may be implemented using executable instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory, and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.
- "Including" and "comprising" (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as the transition term in, for example, a preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open ended. The term "and/or" when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
- FIG. 5 is a flowchart representative of example machine readable instructions 500 which may be executed to implement the binary instrumentation engine 100 of FIGS. 1 and 3-4 to improve operation of the GPU 108 of FIG. 1. The machine readable instructions 500 begin at block 502, at which the binary instrumentation engine 100 generates binary instructions to be included in a kernel to be executed by a GPU. For example, the instruction generator 300 (FIG. 3) may instrument the first kernel 104 of FIG. 1 by generating binary instructions corresponding to the profiling instructions 102 of FIG. 1 and inserting the binary instructions into the first kernel 104 to generate the second kernel 106 of FIG. 1.
- At block 504, the binary instrumentation engine 100 instructs a GPU driver to transmit the kernel including the binary instructions to the GPU for execution. For example, the instruction generator 300 may transmit the second kernel 106 to the GPU driver 116 and instruct the GPU driver 116 to store the second kernel 106 in the memory 120. The GPU 108 may retrieve the second kernel 106 from the memory 120 and execute the second kernel 106.
- At block 506, the binary instrumentation engine 100 obtains a trace buffer associated with executing the kernel. For example, the trace analyzer 310 may retrieve the trace buffer 124 of FIG. 1 or the trace buffer 200 of FIG. 2 from the memory 120.
- At block 508, the binary instrumentation engine 100 processes the trace buffer to generate an occupancy map. For example, the trace analyzer 310 (FIG. 3) may sort and/or otherwise organize the records 202 of FIG. 2 into one or more sub-traces such as the sub-traces 402, 404, 406, 408, 410 of FIG. 4. In such examples, the trace analyzer 310 may map ones of the records 202 included in the sub-traces 402, 404, 406, 408, 410 to timelines to generate the occupancy map 400 of FIG. 4. An example process that may be used to implement block 508 is described below in connection with FIG. 6.
- At block 510, the binary instrumentation engine 100 determines operating parameter(s) of the GPU. For example, the parameter calculator 320 (FIG. 3) may determine one or more operating parameters such as a busy time parameter, an idle time parameter, an occupancy time parameter, and/or a utilization parameter associated with the GPU 108 executing the second kernel 106. In some examples, the parameter calculator 320 determines the one or more operating parameters based on the information included in the occupancy map 400 of FIG. 4 such as the timeline 412.
- At block 512, the CPU 112 (FIG. 1) determines whether to adjust a workload of the GPU based on the operating parameter(s). For example, the processor optimizer 330 (FIG. 3) may invoke the GPU driver 116 (FIG. 1) to compare a value of an operating parameter to an operating parameter threshold and determine whether the value satisfies the operating parameter threshold based on the comparison. For example, the GPU driver 116 may compare a utilization of 50% of the GPU 108 to a utilization threshold of 75% and determine that the utilization of 50% does not satisfy the utilization threshold of 75% based on the utilization of 50% being less than the utilization threshold of 75%. In such examples, the GPU driver 116 may determine to adjust and/or otherwise modify the workload of the GPU 108 based on the utilization of the GPU 108 not satisfying the utilization threshold. For example, the GPU driver 116 may adjust the workload of the GPU 108 by increasing a quantity of computational tasks to be executed by the GPU 108.
- If, at block 512, the CPU 112 determines not to adjust the workload of the GPU based on the operating parameter(s), control proceeds to block 516 to determine whether to generate additional binary instructions. If, at block 512, the CPU 112 determines to adjust the workload of the GPU based on the operating parameter(s), then, at block 514, the binary instrumentation engine 100 invokes the GPU driver to adjust the workload of the GPU. For example, the processor optimizer 330 may generate a command, an instruction, etc., to invoke the GPU driver 116 to adjust the workload of the GPU 108. For example, the GPU driver 116, and/or, more generally, the CPU 112 may determine to increase a quantity of computational tasks to be executed by the GPU 108 when invoked by the instruction generated by the processor optimizer 330.
- At block 516, the binary instrumentation engine 100 determines whether to generate additional binary instructions. For example, the instruction generator 300 may determine to instrument another kernel different from the first kernel 104. If, at block 516, the binary instrumentation engine 100 determines to generate additional binary instructions, control returns to block 502 to generate binary instructions to be included in another kernel to be executed by the GPU.
- If, at block 516, the binary instrumentation engine 100 determines not to generate additional binary instructions, then, at block 518, the binary instrumentation engine 100 determines whether to continue monitoring the GPU. For example, the trace analyzer 310 may determine to continue retrieving the trace buffer 124 either asynchronously or synchronously.
- If, at block 518, the binary instrumentation engine 100 determines to continue monitoring the GPU, control returns to block 506 to obtain the trace buffer associated with executing the kernel; otherwise the machine readable instructions 500 of FIG. 5 conclude.
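- Taken together, blocks 502-518 describe an instrument/collect/analyze loop on the CPU side. The sketch below strings the earlier hypothetical helpers together; submit_instrumented_kernel, fetch_trace_buffer, request_more_gpu_tasks, and continue_monitoring are assumed hooks standing in for the interactions with the GPU driver 116, not real API calls.

```cpp
#include <cstdint>
#include <vector>

// Assumed CPU-side hooks for the GPU-driver interactions.
void submit_instrumented_kernel();              // blocks 502-504
std::vector<TraceRecord> fetch_trace_buffer();  // block 506
void request_more_gpu_tasks();                  // block 514
bool continue_monitoring();                     // block 518

void monitor_gpu(uint32_t total_hw_threads) {
  submit_instrumented_kernel();                             // blocks 502-504
  do {
    std::vector<TraceRecord> trace = fetch_trace_buffer();  // block 506
    OccupancyMap occupancy = build_occupancy_map(trace);    // block 508
    double util =
        aggregate_utilization(occupancy, total_hw_threads); // block 510
    if (decide_workload(util) ==
        WorkloadAction::kScheduleMoreTasks) {               // block 512
      request_more_gpu_tasks();                             // block 514
    }
  } while (continue_monitoring());                          // block 518
}
```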
- FIG. 6 is a flowchart representative of the machine readable instructions 508 which may be executed to implement the example binary instrumentation engine 100 of FIGS. 1 and 3-4 to process the trace buffer 124 of FIG. 1 or the trace buffer 200 of FIG. 2 to generate the occupancy map 400 of FIG. 4. The machine readable instructions 508 begin at block 602, at which the binary instrumentation engine 100 groups records into sub-traces based on hardware thread identifier. For example, the trace analyzer 310 (FIG. 3) may organize the records 202 of FIG. 2 included in the trace buffer 200 based on hardware thread identifiers of the records 202 into the sub-traces 402, 404, 406, 408, 410 of FIG. 4.
- At block 604, the binary instrumentation engine 100 selects a sub-trace of interest to process. For example, the trace analyzer 310 may select the second sub-trace 404 to process. At block 606, the binary instrumentation engine 100 determines whether the sub-trace has more than one record. For example, the trace analyzer 310 may determine that the second sub-trace 404 has two of the records 202, where a first one of the records 202 has a first index of 2 (Record 2) and a second one of the records 202 has a second index of 3 (Record 3).
- If, at block 606, the binary instrumentation engine 100 determines that the sub-trace does not have more than one record, control proceeds to block 610 to select a record of interest to process. If, at block 606, the binary instrumentation engine 100 determines that the sub-trace has more than one record, then at block 608, the binary instrumentation engine 100 assigns new indices to the records. For example, the trace analyzer 310 may assign an index of 1 to the first one of the records 202 included in the second sub-trace 404 and assign an index of 2 to the second one of the records 202 included in the second sub-trace 404.
- At block 610, the binary instrumentation engine 100 selects a record of interest to process. For example, the trace analyzer 310 may select the first one of the records 202 included in the second sub-trace 404 to process. At block 612, the binary instrumentation engine 100 maps a time interval in the record to an occupancy map. For example, the trace analyzer 310 may map the time interval represented by [A,B] in the first one of the records 202 included in the second sub-trace 404 to the occupancy map 400. The trace analyzer 310 may designate the time interval from [A,B] as busy in the occupancy map 400 and designate the time interval outside of [A,B] as idle.
- At block 614, the binary instrumentation engine 100 determines whether to select another record of interest to process. For example, the trace analyzer 310 may determine to select the second one of the records 202 included in the second sub-trace 404 to process.
- If, at block 614, the binary instrumentation engine 100 determines to select another record of interest to process, control returns to block 610 to select another record of interest to process. If, at block 614, the binary instrumentation engine 100 determines not to select another record of interest to process, then, at block 616, the binary instrumentation engine 100 determines whether to select another sub-trace of interest to process. For example, the trace analyzer 310 may determine to select the third sub-trace 406 of the trace buffer 124 to process.
- If, at block 616, the binary instrumentation engine 100 determines to select another sub-trace of interest to process, control returns to block 604 to select another sub-trace of interest to process. If, at block 616, the binary instrumentation engine 100 determines not to select another sub-trace of interest to process, control returns to block 510 of the machine readable instructions 500 of FIG. 5 to determine operating parameter(s) of the GPU.
- FIG. 7 is a block diagram of an example processor platform 700 structured to execute the instructions of FIGS. 5-6 to implement the binary instrumentation engine of FIGS. 1 and 3-4. The processor platform 700 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device.
- The processor platform 700 of the illustrated example includes a processor 712. The processor 712 of the illustrated example is hardware. For example, the processor 712 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor 712 implements the example instruction generator 300, the example trace analyzer 310, the example parameter calculator 320, and the example processor optimizer 330 of FIG. 3.
- The processor 712 of the illustrated example includes a local memory 713 (e.g., a cache). The processor 712 of the illustrated example is in communication with a main memory including a volatile memory 714 and a non-volatile memory 716 via a bus 718. The volatile memory 714 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of random access memory device. The non-volatile memory 716 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 714, 716 is controlled by a memory controller.
- The processor platform 700 of the illustrated example also includes an interface circuit 720. The interface circuit 720 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI express interface.
- In the illustrated example, one or more input devices 722 are connected to the interface circuit 720. The input device(s) 722 permit(s) a user to enter data and/or commands into the processor 712. The input device(s) 722 can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, an isopoint device, and/or a voice recognition system.
- One or more output devices 724 are also connected to the interface circuit 720 of the illustrated example. The output devices 724 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or a speaker. The interface circuit 720 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.
- The interface circuit 720 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 726. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
- The processor platform 700 of the illustrated example also includes one or more mass storage devices 728 for storing software and/or data. Examples of such mass storage devices 728 include floppy disk drives, hard disk drives, compact disk drives, Blu-ray disk drives, redundant array of independent disks (RAID) systems, and digital versatile disk (DVD) drives.
- The machine executable instructions 732 of FIGS. 5-6 may be stored in the mass storage device 728, in the volatile memory 714, in the non-volatile memory 716, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.
- From the foregoing, it will be appreciated that example methods, apparatus, and articles of manufacture have been disclosed that improve operation of a processor, a graphics processing unit, etc. The disclosed methods, apparatus, and articles of manufacture improve the efficiency of using a computing device by adjusting a resource schedule based on the available bandwidth of resources. By increasing a quantity of computational tasks to be executed by a GPU based on the one or more operating parameters disclosed herein, the GPU may execute more computational tasks than prior systems. By determining the one or more operating parameters disclosed herein, developers can generate kernels that can be executed more quickly and more efficiently by GPUs than in prior systems. The disclosed methods, apparatus, and articles of manufacture are accordingly directed to one or more improvements in the functioning of a computer.
- The following pertain to further examples disclosed herein.
- Example 1 includes an apparatus to improve operation of a graphics processing unit (GPU), the apparatus comprising an instruction generator to insert profiling instructions into a GPU kernel to generate an instrumented GPU kernel, the instrumented GPU kernel to be executed by a GPU, a trace analyzer to generate an occupancy map associated with the GPU executing the instrumented GPU kernel, a parameter calculator to determine one or more operating parameters of the GPU based on the occupancy map, and a processor optimizer to invoke hardware to adjust a workload of the GPU based on the one or more operating parameters.
- Example 2 includes the apparatus of example 1, wherein the instruction generator is to insert the profiling instructions by inserting a first subset of the profiling instructions at a first address of the GPU kernel and inserting a second subset of the profiling instructions at a second address of the GPU kernel, the first address different from the second address.
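- As an illustrative sketch of Example 2 (and not the patent's own code), the routine below splices two subsets of profiling instructions into a kernel binary at two distinct addresses. The Binary alias, the byte-level treatment, and the boundary assumptions are placeholders.

```cpp
// Hypothetical splice of two profiling-instruction subsets into a kernel
// binary. Assumes first_addr <= second_addr and that both offsets fall on
// instruction boundaries.
#include <cstddef>
#include <cstdint>
#include <vector>

using Binary = std::vector<uint8_t>;

Binary Instrument(const Binary& kernel,
                  std::size_t first_addr, const Binary& first_subset,
                  std::size_t second_addr, const Binary& second_subset) {
    Binary out;
    out.reserve(kernel.size() + first_subset.size() + second_subset.size());
    // Original code up to the first address, then the first subset
    // (e.g., instructions that capture a start timestamp).
    out.insert(out.end(), kernel.begin(), kernel.begin() + first_addr);
    out.insert(out.end(), first_subset.begin(), first_subset.end());
    // Original code between the two addresses, then the second subset
    // (e.g., instructions that capture an end timestamp and emit a record).
    out.insert(out.end(), kernel.begin() + first_addr,
               kernel.begin() + second_addr);
    out.insert(out.end(), second_subset.begin(), second_subset.end());
    // Remainder of the original kernel.
    out.insert(out.end(), kernel.begin() + second_addr, kernel.end());
    return out;
}
```

- A production binary instrumentation engine would also patch branch targets and relocations after inserting bytes, which this sketch omits.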
- Example 3 includes the apparatus of example 1, wherein the instrumented GPU kernel is to cause the GPU to generate a trace buffer including timestamps and hardware thread identifiers, the trace buffer including one or more records, the one or more records each including a first data field corresponding to a first timestamp included in the timestamps, a second data field corresponding to a second timestamp included in the timestamps, and a third data field corresponding to one of the hardware thread identifiers.
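- One plausible in-memory layout for such a record is sketched below; the field widths and ordering are assumptions, since Example 3 fixes only the three data fields, not their encoding.

```cpp
// Hypothetical packed layout of a trace-buffer record (widths assumed).
#include <cstdint>

#pragma pack(push, 1)
struct TraceRecord {
    uint64_t first_timestamp;   // first data field (e.g., execution start)
    uint64_t second_timestamp;  // second data field (e.g., execution end)
    uint32_t hw_thread_id;      // third data field (hardware thread identifier)
};
#pragma pack(pop)

static_assert(sizeof(TraceRecord) == 20, "packed record occupies 20 bytes");
```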
- Example 4 includes the apparatus of example 1, wherein the trace analyzer is to generate the occupancy map by grouping one or more records of a trace buffer generated by the GPU into one or more sub-traces based on hardware thread identifiers included in the trace buffer, the one or more records having first indices, assigning second indices to the one or more records in the one or more sub-traces when the one or more sub-traces have more than one of the one or more records, the second indices different from the first indices, and mapping timelines associated with the one or more records to the occupancy map.
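- The re-indexing of Example 4 can be pictured as follows: each record keeps its original ("first") buffer index and gains a local ("second") index within its sub-trace. The container and field names below are illustrative assumptions.

```cpp
// Sketch of grouping records into sub-traces and assigning local indices.
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

struct IndexedRecord {
    std::size_t first_index;   // position in the original trace buffer
    std::size_t second_index;  // position within its sub-trace
};

std::map<uint32_t, std::vector<IndexedRecord>>
GroupAndReindex(const std::vector<uint32_t>& tid_of_each_record) {
    std::map<uint32_t, std::vector<IndexedRecord>> sub_traces;
    for (std::size_t i = 0; i < tid_of_each_record.size(); ++i) {
        auto& sub_trace = sub_traces[tid_of_each_record[i]];
        // The second index is the record's position in its sub-trace, which
        // is meaningful once a sub-trace holds more than one record.
        sub_trace.push_back({i, sub_trace.size()});
    }
    return sub_traces;
}
```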
- Example 5 includes the apparatus of example 4, wherein the trace analyzer is to map the timelines to the occupancy map by representing first time durations of the occupancy map corresponding to the timelines as busy and representing second time durations of the occupancy map as idle, the second time durations corresponding to time periods not included in the timelines.
- Example 6 includes the apparatus of example 1, wherein the one or more operating parameters include at least one of a busy time parameter, an idle time parameter, an occupancy time parameter, or a utilization parameter.
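- Assuming the occupancy map stores per-thread busy intervals as in the earlier sketch, the parameters of Example 6 could be derived as below; the observation-window arguments and the utilization formula busy/(busy+idle) are illustrative assumptions.

```cpp
// Hedged sketch: busy, idle, and utilization from an occupancy map.
// Assumes intervals are non-overlapping and clipped to the window.
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

using OccupancyMap =
    std::map<uint32_t, std::vector<std::pair<uint64_t, uint64_t>>>;

struct OperatingParameters {
    uint64_t busy_time = 0;    // total time hardware threads were busy
    uint64_t idle_time = 0;    // total time hardware threads were idle
    double utilization = 0.0;  // busy_time / (busy_time + idle_time)
};

OperatingParameters ComputeParameters(const OccupancyMap& occupancy,
                                      uint64_t window_begin,
                                      uint64_t window_end) {
    OperatingParameters p;
    for (const auto& [tid, intervals] : occupancy) {
        (void)tid;  // only the intervals matter for the totals
        for (const auto& [a, b] : intervals) p.busy_time += b - a;
    }
    // One full window of time per hardware thread observed.
    const uint64_t total = (window_end - window_begin) * occupancy.size();
    p.idle_time = total - p.busy_time;
    p.utilization = total ? static_cast<double>(p.busy_time) / total : 0.0;
    return p;
}
```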
- Example 7 includes the apparatus of example 1, wherein the hardware is to adjust the workload of the GPU by comparing a first one of the one or more operating parameters to a threshold, determining whether to increase a quantity of computational tasks to be executed by the GPU based on the comparison, and increasing the quantity of computational tasks when the first one of the one or more operating parameters satisfies the threshold.
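- A minimal reading of Example 7's adjustment is sketched below; the 0.90 threshold, the direction of the comparison, and the Dispatcher hook are assumptions for illustration, not the patent's interface.

```cpp
// Hypothetical threshold test: utilization below the threshold indicates
// spare GPU capacity, so the quantity of computational tasks is increased.
struct Dispatcher {
    virtual void IncreaseComputationalTasks() = 0;
    virtual ~Dispatcher() = default;
};

void AdjustWorkload(double utilization, Dispatcher& dispatcher,
                    double threshold = 0.90) {
    if (utilization < threshold) {
        dispatcher.IncreaseComputationalTasks();
    }
}
```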
- Example 8 includes a non-transitory computer readable medium comprising instructions which, when executed, cause a machine to at least insert profiling instructions into a GPU kernel to generate an instrumented GPU kernel, the instrumented GPU kernel to be executed by a GPU, generate an occupancy map associated with the GPU executing the instrumented GPU kernel, determine one or more operating parameters of the GPU based on the occupancy map, and adjust a workload of the GPU based on the one or more operating parameters.
- Example 9 includes the non-transitory computer readable medium of example 8, further including instructions which, when executed, cause the machine to at least insert a first subset of the profiling instructions at a first address of the GPU kernel and insert a second subset of the profiling instructions at a second address of the GPU kernel, the first address different from the second address.
- Example 10 includes the non-transitory computer readable medium of example 8, wherein the instrumented GPU kernel is to cause the GPU to generate a trace buffer including timestamps and hardware thread identifiers, the trace buffer including one or more records, the one or more records each including a first data field corresponding to a first timestamp included in the timestamps, a second data field corresponding to a second timestamp included in the timestamps, and a third data field corresponding to one of the hardware thread identifiers.
- Example 11 includes the non-transitory computer readable medium of example 8, further including instructions which, when executed, cause the machine to at least group one or more records of a trace buffer generated by the GPU into one or more sub-traces based on hardware thread identifiers included in the trace buffer, the one or more records having first indices, assign second indices to the one or more records in the one or more sub-traces when the one or more sub-traces have more than one of the one or more records, the second indices different from the first indices, and map timelines associated with the one or more records to the occupancy map.
- Example 12 includes the non-transitory computer readable medium of example 11, further including instructions which, when executed, cause the machine to at least represent first time durations of the occupancy map corresponding to the timelines as busy and represent second time durations of the occupancy map as idle, the second time durations corresponding to time periods not included in the timelines.
- Example 13 includes the non-transitory computer readable medium of example 8, wherein the one or more operating parameters include at least one of a busy time parameter, an idle time parameter, an occupancy time parameter, or a utilization parameter.
- Example 14 includes the non-transitory computer readable medium of example 8, further including instructions which, when executed, cause the machine to at least compare a first one of the one or more operating parameters to a threshold, determine whether to increase a quantity of computational tasks to be executed by the GPU based on the comparison, and increase the quantity of computational tasks when the first one of the one or more operating parameters satisfies the threshold.
- Example 15 includes an apparatus to improve operation of a graphics processing unit (GPU), the apparatus comprising means for inserting profiling instructions into a GPU kernel to generate an instrumented GPU kernel, the instrumented GPU kernel to be executed by a GPU, means for generating an occupancy map associated with the GPU executing the instrumented GPU kernel, means for determining one or more operating parameters of the GPU based on the occupancy map, and means for adjusting a workload of the GPU based on the one or more operating parameters.
- Example 16 includes the apparatus of example 15, wherein the means for inserting the profiling instructions is to insert a first subset of the profiling instructions at a first address of the GPU kernel and insert a second subset of the profiling instructions at a second address of the GPU kernel, the first address different from the second address.
- Example 17 includes the apparatus of example 15, wherein the instrumented GPU kernel is to cause the GPU to generate a trace buffer including timestamps and hardware thread identifiers, the trace buffer including one or more records, the one or more records each including a first data field corresponding to a first timestamp included in the timestamps, a second data field corresponding to a second timestamp included in the timestamps, and a third data field corresponding to one of the hardware thread identifiers.
- Example 18 includes the apparatus of example 15, wherein the means for generating the occupancy map is to group one or more records of a trace buffer generated by the GPU into one or more sub-traces based on hardware thread identifiers included in the trace buffer, the one or more records having first indices, assign second indices to the one or more records in the one or more sub-traces when the one or more sub-traces have more than one of the one or more records, the second indices different from the first indices, and map timelines associated with the one or more records to the occupancy map.
- Example 19 includes the apparatus of example 18, wherein the means for generating the occupancy map is to map the timelines to the occupancy map by representing first time durations of the occupancy map corresponding to the timelines as busy and representing second time durations of the occupancy map as idle, the second time durations corresponding to time periods not included in the timelines.
- Example 20 includes the apparatus of example 15, wherein the one or more operating parameters include at least one of a busy time parameter, an idle time parameter, an occupancy time parameter, or a utilization parameter.
- Example 21 includes the apparatus of example 15, wherein the means for adjusting the workload of the GPU is to compare a first one of the one or more operating parameters to a threshold, determine whether to increase a quantity of computational tasks to be executed by the GPU based on the comparison, and increase the quantity of computational tasks when the first one of the one or more operating parameters satisfies the threshold.
- Example 22 includes a method to improve operation of a graphics processing unit (GPU), the method comprising inserting profiling instructions into a GPU kernel to generate an instrumented GPU kernel, the instrumented GPU kernel to be executed by a GPU, generating an occupancy map associated with the GPU executing the instrumented GPU kernel, determining one or more operating parameters of the GPU based on the occupancy map, and adjusting a workload of the GPU based on the one or more operating parameters.
- Example 23 includes the method of example 22, wherein the instrumented GPU kernel is to cause the GPU to generate a trace buffer including timestamps and hardware thread identifiers, the trace buffer including one or more records, the one or more records each including a first data field corresponding to a first timestamp included in the timestamps, a second data field corresponding to a second timestamp included in the timestamps, and a third data field corresponding to one of the hardware thread identifiers.
- Example 24 includes the method of example 22, further including grouping one or more records of a trace buffer generated by the GPU into one or more sub-traces based on hardware thread identifiers included in the trace buffer, the one or more records having first indices, assigning second indices to the one or more records in the one or more sub-traces when the one or more sub-traces have more than one of the one or more records, the second indices different from the first indices, and mapping timelines associated with the one or more records to the occupancy map.
- Example 25 includes the method of example 22, further including comparing a first one of the one or more operating parameters to a threshold, determining whether to increase a quantity of computational tasks to be executed by the GPU based on the comparison, and increasing the quantity of computational tasks when the first one of the one or more operating parameters satisfies the threshold.
- Although certain example methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent.
Claims (21)
1. (canceled)
2. An apparatus to improve operation of a graphics processing unit (GPU), the apparatus comprising:
instructions; and
at least one processor to execute the instructions to:
populate a trace buffer based on one or more records, corresponding ones of the records having a hardware thread identifier and a timestamp, the one or more records generated in response to an execution of an instrumented GPU kernel by the GPU;
generate one or more sub-traces based on at least some of the hardware thread identifiers;
determine one or more timelines associated with the timestamps of the one or more sub-traces;
generate an occupancy map associated with the GPU based on the one or more timelines; and
adjust a workload of the GPU based on the occupancy map.
3. The apparatus of claim 2, wherein the one or more records include a first record having a first index, the one or more sub-traces include a first sub-trace including the first record, and the at least one processor is to assign a second index to the first record in response to a determination the first sub-trace includes more than one of the one or more records.
4. The apparatus of claim 2, wherein the instructions are first instructions, and the at least one processor is to insert profiling instructions into a GPU kernel to generate the instrumented GPU kernel, the instrumented GPU kernel to be executed by a GPU.
5. The apparatus of claim 4, wherein the at least one processor is to insert the profiling instructions by inserting a first subset of the profiling instructions at a first address of the GPU kernel and inserting a second subset of the profiling instructions at a second address of the GPU kernel, the first address different from the second address.
6. The apparatus of claim 2, wherein the at least one processor is to map the one or more timelines to the occupancy map by representing first time durations of the occupancy map corresponding to the timelines as busy and representing second time durations of the occupancy map as idle, the second time durations corresponding to time periods not included in the timelines.
7. The apparatus of claim 2, wherein the at least one processor is to determine one or more operating parameters of the GPU based on the occupancy map, the workload of the GPU to be adjusted based on the one or more operating parameters.
8. The apparatus of claim 7, wherein the one or more operating parameters include at least one of a busy time parameter, an idle time parameter, an occupancy time parameter, or a utilization parameter.
9. A non-transitory computer readable storage medium comprising instructions that, when executed, cause at least one processor to at least:
populate a trace buffer based on one or more records, corresponding ones of the records having a hardware thread identifier and a timestamp, the one or more records generated in response to an execution of an instrumented graphics processing unit (GPU) kernel by a GPU;
generate one or more sub-traces based on at least some of the hardware thread identifiers;
determine one or more timelines associated with the timestamps of the one or more sub-traces;
generate an occupancy map associated with the GPU based on the one or more timelines; and
adjust a workload of the GPU based on the occupancy map.
10. The non-transitory computer readable storage medium of claim 9, wherein the one or more records include a first record having a first index, the one or more sub-traces include a first sub-trace including the first record, and the instructions, when executed, cause the at least one processor to assign a second index to the first record in response to a determination the first sub-trace includes more than one of the one or more records.
11. The non-transitory computer readable storage medium of claim 9, wherein the instructions are first instructions and, when executed, the first instructions cause the at least one processor to insert profiling instructions into a GPU kernel to generate the instrumented GPU kernel, the instrumented GPU kernel to be executed by a GPU.
12. The non-transitory computer readable storage medium of claim 11, wherein the first instructions, when executed, cause the at least one processor to insert the profiling instructions by inserting a first subset of the profiling instructions at a first address of the GPU kernel and inserting a second subset of the profiling instructions at a second address of the GPU kernel, the first address different from the second address.
13. The non-transitory computer readable storage medium of claim 9, wherein the instructions, when executed, cause the at least one processor to map the one or more timelines to the occupancy map by representing first time durations of the occupancy map corresponding to the timelines as busy and representing second time durations of the occupancy map as idle, the second time durations corresponding to time periods not included in the timelines.
14. The non-transitory computer readable storage medium of claim 9, wherein the instructions, when executed, cause the at least one processor to determine one or more operating parameters of the GPU based on the occupancy map, the workload of the GPU to be adjusted based on the one or more operating parameters.
15. The non-transitory computer readable storage medium of claim 14, wherein the one or more operating parameters include at least one of a busy time parameter, an idle time parameter, an occupancy time parameter, or a utilization parameter.
16. A method to improve operation of a graphics processing unit (GPU), the method comprising:
populating a trace buffer based on one or more records in response to an execution of an instrumented GPU kernel by the GPU, corresponding ones of the records having a hardware thread identifier and a timestamp;
generating one or more sub-traces based on at least some of the hardware thread identifiers;
determining one or more timelines associated with the timestamps of the one or more sub-traces;
generating an occupancy map associated with the GPU based on the one or more timelines; and
adjusting a workload of the GPU based on the occupancy map.
17. The method of claim 16, wherein the one or more records include a first record having a first index, the one or more sub-traces include a first sub-trace including the first record, and further including assigning a second index to the first record in response to a determination the first sub-trace includes more than one of the one or more records.
18. The method of claim 16, further including inserting profiling instructions into a GPU kernel to generate the instrumented GPU kernel.
19. The method of claim 18, further including inserting the profiling instructions by inserting a first subset of the profiling instructions at a first address of the GPU kernel and inserting a second subset of the profiling instructions at a second address of the GPU kernel, the first address different from the second address.
20. The method of claim 16, further including mapping the one or more timelines to the occupancy map by representing first time durations of the occupancy map corresponding to the timelines as busy and representing second time durations of the occupancy map as idle, the second time durations corresponding to time periods not included in the timelines.
21. The method of claim 16, further including determining one or more operating parameters of the GPU based on the occupancy map, the workload of the GPU to be adjusted based on the one or more operating parameters.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/096,590 US20210192674A1 (en) | 2018-09-12 | 2020-11-12 | Methods and apparatus to improve operation of a graphics processing unit |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/129,525 US10867362B2 (en) | 2018-09-12 | 2018-09-12 | Methods and apparatus to improve operation of a graphics processing unit |
US17/096,590 US20210192674A1 (en) | 2018-09-12 | 2020-11-12 | Methods and apparatus to improve operation of a graphics processing unit |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/129,525 Continuation US10867362B2 (en) | 2018-09-12 | 2018-09-12 | Methods and apparatus to improve operation of a graphics processing unit |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210192674A1 true US20210192674A1 (en) | 2021-06-24 |
Family
ID=65229662
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/129,525 Active 2038-09-20 US10867362B2 (en) | 2018-09-12 | 2018-09-12 | Methods and apparatus to improve operation of a graphics processing unit |
US17/096,590 Abandoned US20210192674A1 (en) | 2018-09-12 | 2020-11-12 | Methods and apparatus to improve operation of a graphics processing unit |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/129,525 Active 2038-09-20 US10867362B2 (en) | 2018-09-12 | 2018-09-12 | Methods and apparatus to improve operation of a graphics processing unit |
Country Status (1)
Country | Link |
---|---|
US (2) | US10867362B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023141370A1 (en) * | 2022-01-18 | 2023-07-27 | Commscope Technologies Llc | Optimizing total core requirements for virtualized systems |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108198124B (en) * | 2017-12-27 | 2023-04-25 | 上海联影医疗科技股份有限公司 | Medical image processing method, medical image processing device, computer equipment and storage medium |
US10949330B2 (en) * | 2019-03-08 | 2021-03-16 | Intel Corporation | Binary instrumentation to trace graphics processor code |
US11900123B2 (en) * | 2019-12-13 | 2024-02-13 | Advanced Micro Devices, Inc. | Marker-based processor instruction grouping |
US11605147B2 (en) * | 2020-03-27 | 2023-03-14 | Tata Consultancy Services Limited | Method and system for tuning graphics processing unit (GPU) parameters of a GPU kernel |
US20210117202A1 (en) * | 2020-12-03 | 2021-04-22 | Intel Corporation | Methods and apparatus to generate graphics processing unit long instruction traces |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6769054B1 (en) * | 2001-02-26 | 2004-07-27 | Emc Corporation | System and method for preparation of workload data for replaying in a data storage environment |
US20030135609A1 (en) * | 2002-01-16 | 2003-07-17 | Sun Microsystems, Inc. | Method, system, and program for determining a modification of a system resource configuration |
US10255088B2 (en) * | 2016-05-13 | 2019-04-09 | Red Hat Israel, Ltd. | Modification of write-protected memory using code patching |
US10162676B2 (en) * | 2016-08-15 | 2018-12-25 | International Business Machines Corporation | Social objectives-based workload resolution in a cloud environment |
US10817289B2 (en) * | 2017-10-03 | 2020-10-27 | Nvidia Corp. | Optimizing software-directed instruction replication for GPU error detection |
- 2018-09-12: US application 16/129,525 filed; issued as patent US10867362B2 (status: Active)
- 2020-11-12: US application 17/096,590 filed; published as US20210192674A1 (status: Abandoned)
Also Published As
Publication number | Publication date |
---|---|
US20190043158A1 (en) | 2019-02-07 |
US10867362B2 (en) | 2020-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210192674A1 (en) | Methods and apparatus to improve operation of a graphics processing unit | |
US20230281104A1 (en) | Methods and apparatus to perform instruction-level graphics processing unit (gpu) profiling based on binary instrumentation | |
US10559057B2 (en) | Methods and apparatus to emulate graphics processing unit instructions | |
Mey et al. | Score-P: A unified performance measurement system for petascale applications | |
US20190317880A1 (en) | Methods and apparatus to improve runtime performance of software executing on a heterogeneous system | |
WO2019241921A1 (en) | Systems and methods for automated compiling | |
US8745622B2 (en) | Standalone software performance optimizer system for hybrid systems | |
US8725461B2 (en) | Inferring effects of configuration on performance | |
US10331538B2 (en) | Information processing apparatus and program execution status display method | |
DE102020119519A1 (en) | METHODS AND DEVICES FOR ENABLING OUT-OF-ORDER PIPELINE EXECUTION OF STATIC REPLACEMENT OF A WORKLOAD | |
US10922779B2 (en) | Techniques for multi-mode graphics processing unit profiling | |
US11120521B2 (en) | Techniques for graphics processing unit profiling using binary instrumentation | |
KR20180096780A (en) | Method and apparatus for data mining from core trace | |
US20230418613A1 (en) | Methods and apparatus to insert profiling instructions into a graphics processing unit kernel | |
KR20210021261A (en) | Methods and apparatus to configure heterogenous components in an accelerator | |
EP4009176A1 (en) | Methods and apparatus to generate graphics processing unit long instruction traces | |
Zhang et al. | Understanding the performance of GPGPU applications from a data-centric view | |
CN108986012B (en) | Shader parser | |
US10198784B2 (en) | Capturing commands in a multi-engine graphics processing unit | |
US20220100512A1 (en) | Deterministic replay of a multi-threaded trace on a multi-threaded processor | |
US20210232969A1 (en) | Methods and apparatus to process a machine learning model in a multi-process web browser environment | |
Jiang et al. | Moneo: Monitoring fine-grained metrics nonintrusively in AI infrastructure | |
JPWO2017135219A1 (en) | Design support apparatus, design support method, and design support program | |
JP2013101563A (en) | Program conversion apparatus, program conversion method and conversion program | |
Hu et al. | ALTIS: Modernizing GPGPU Benchmarking |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | AS | Assignment | Owner name: INTEL CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEVIT-GUREVICH, KONSTANTIN;BEREZALSKY, MICHAEL;ITZHAKI, NOAM;AND OTHERS;REEL/FRAME:055674/0417. Effective date: 20180912
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION