EP3465428B1 - Sample driven profile guided optimization with precise correlation - Google Patents

Sample driven profile guided optimization with precise correlation

Info

Publication number
EP3465428B1
Authority
EP
European Patent Office
Prior art keywords
block
program
count
basic block
basic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP17727442.0A
Other languages
German (de)
French (fr)
Other versions
EP3465428A1 (en)
Inventor
Wenlei He
Ten Tzen
Pratap Joseph CHANDAR
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC
Publication of EP3465428A1
Application granted
Publication of EP3465428B1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 8/44 Encoding
    • G06F 8/443 Optimisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466 Performance evaluation by tracing or monitoring

Definitions

  • the present embodiments relate to techniques for compiling software programs for execution on computing systems and more particularly, to methods, devices, and systems that create profile data based on sampled instructions for use in profile guided optimization.
  • Morph is a combination of operating system and compiler technology that provides a practical framework for the advanced compiler optimizations needed to support continued improvements in application performance.
  • the Morph Monitor is an operating system kernel component that implements continuous, low overhead profiling and program monitoring.
  • the Morph Editor is a compiler component that implements re-optimization, transforming compiler intermediate form into an executable.
  • the Morph Manager is a system component that manages profile information, including the automatic invocation of re-optimization.
  • US 6 275 981 B1 relates to a technique for relating profile data generated by monitoring the execution of an optimized machine code computer program back to the source language description of the computer program.
  • Logical line numbers are associated with the basic blocks of the intermediate code representation of the computer program and actual line numbers are associated with each instruction of the intermediate code representation of the computer program.
  • the logical line numbers remain fixed to basic blocks, while actual line numbers remain fixed to intermediate code instructions.
  • US 2009/0094590 A1 relates to techniques for generating an optimization insensitive behavior profile.
  • a source identifier is assigned to each instruction in an original control flow graph representing a program code prior to optimization.
  • the identifiers identify a basic block associated with the instruction or a group of basic blocks.
  • a source identifier in the set of source identifiers is assigned to instructions in an optimized control flow graph representing the program code after optimizing the program code.
  • the instructions in the optimized control flow graph are mapped to the original control flow graph using the set of source identifiers to form a mapping transformation.
  • Behaviour profile data associated with the optimized program code is moved to basic blocks in the original control flow graph using the mapping transformation to form the optimization insensitive behaviour profile.
  • Compiler optimizations are used to generate executable code that runs efficiently and makes efficient use of resources. These optimizations benefit from the knowledge of how the program will execute with production inputs.
  • Profile guided optimization obtains information or profile data generated from sample runs of the program to guide the compiler in performing optimizations that better suit a target environment. This profile data can include a count of the number of times instructions associated with a basic block are executed. In this manner, the compiler can optimize the more frequently executed areas of the program more aggressively than less frequently executed areas.
  • correlation data is generated which maps a processor instruction sampled from a sample run with a basic block associated with the source code corresponding to the processor instruction.
  • the correlation data includes a control flow representation for each function of the program which identifies the basic blocks of the function.
  • Each basic block contains a range of relative virtual addresses corresponding to instructions associated with the basic block in the source code control flow.
  • the block count from the sample run is then used to update the block count of the corresponding basic block.
  • the correlation data with the block counts is then considered the sample profile data which is then used to derive edge counts which are used to determine the optimizations to perform on certain portions of the program and the degree of optimization.
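The correlation step described above can be sketched in Python: each basic block owns one or more half-open RVA ranges, and a sampled instruction's RVA is resolved to the owning block before its count is accumulated. The data layout and names here are illustrative assumptions, not structures taken from the patent.

```python
from bisect import bisect_right

# Hypothetical correlation data: each basic block owns a half-open
# RVA range [start, end), sorted and non-overlapping.
BLOCK_RANGES = [
    (0x1000, 0x1010, "B1"),
    (0x1010, 0x1024, "B2"),
    (0x1024, 0x1040, "B3"),
]

def block_for_rva(rva):
    """Return the basic-block id whose RVA range contains `rva`, else None."""
    starts = [start for start, _, _ in BLOCK_RANGES]
    i = bisect_right(starts, rva) - 1
    if i >= 0:
        start, end, block = BLOCK_RANGES[i]
        if start <= rva < end:
            return block
    return None

def accumulate(samples):
    """Accumulate sampled (rva, count) pairs into per-block totals."""
    counts = {}
    for rva, count in samples:
        block = block_for_rva(rva)
        if block is not None:
            counts[block] = counts.get(block, 0) + count
    return counts
```

Because the ranges are sorted and disjoint, the binary search keeps each lookup logarithmic in the number of blocks.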
  • a user may visualize the sample profile data, edit the source code, run additional sample runs to obtain more block counts, and recompile the program, iteratively in any sequence, in order to obtain the desired level of optimization for an intended use.
  • a compiler typically performs code optimizations in order to generate code that executes faster and uses memory efficiently. These code optimizations can include inlining, code hoisting, dead storage elimination, eliminating common sub-expression, loop unrolling, code motion, induction variable elimination, reduction in strength, and so forth.
  • a traditional compiler performs these code optimizations based on static source code files without knowledge of the input that will be used at runtime which cannot be obtained from the static source code files.
  • Profile guided optimization is a technique that uses information generated from sample runs of a program to guide the compiler in performing optimizations to better suit the intended target environment.
  • the information collected during the sample runs is referred to as profile data.
  • the sample runs can be based on test scenarios with real-world inputs.
  • the compiler can be guided to decide which optimizations are best suited for the target environment and how to perform the optimizations. For example, the knowledge of which sections of the program are most heavily executed allows the compiler to optimize the more heavily executed sections than those areas of the program that are used less frequently.
  • Instrumentation PGO is one type of PGO technique.
  • probes or special instructions are inserted into a binary representation of the program. The probes are used to collect information during execution of the program.
  • the instrumented binary is then compiled to form an instrumented executable that is run with test input.
  • the profile data resulting from these sample runs is then used in a subsequent compilation of the program, without the instrumented code, to optimize the program.
  • instrumentation-based PGO requires that modifications be made to the program in order to instrument the program to collect the profile data. The additional code in the instrumented program increases the execution time of the program and alters the quality of the profile data.
  • Hardware-event sampling is another PGO technique that has the advantage of not requiring any modifications to the program.
  • Certain processors have hardware performance counters that are used to count events or actions that occur during the operation of the processor. For example, some processors include counters for counting memory accesses, cache misses, instructions that are executed, instructions that are not executed, branch mis-predictions, clock cycles and so forth.
  • a performance monitoring unit (PMU) is often coupled to a processor and used to read a counter when the counter overflows. The PMU generates an interrupt to the program when the counter overflows and records the address of the currently executed instruction and the value of one or more counters.
  • a raw sample, however, identifies only the address of an executed instruction, which by itself is difficult to correlate precisely with the program's structure. The subject matter disclosed herein overcomes this limitation by generating correlation data that maps the address of the sampled instruction to a basic block of the program.
  • This mapping utilizes a relative virtual address (RVA) range that is associated with each basic block and the sampled instruction.
  • the block counts are then used to derive edge counts and identify which areas in the program are more heavily traveled than other less frequently used areas. This information is crucial for the compiler in optimizing the program for faster execution with efficient memory usage.
  • the correlation data includes a control flow representation of each function of the program along with a block-to-RVA map.
  • the block-to-RVA map identifies ranges of RVAs that are associated with a particular basic block.
  • the identity of the basic blocks in a program is not known until the compiler performs control flow analysis.
  • the intermediate code representation (IR) statements used by the compiler are not associated with an address.
  • Before code linkage, the compiler generates an offset for each IR statement that is relative to a position within a function. This offset is part of the range of offsets associated with a basic block.
  • the start RVA of a function is known so the linker replaces the offset with a relative virtual address that is based from the start RVA of its associated function.
  • the present technique has several benefits over prior solutions.
  • the correlation data is more precise since the correlation data is generated on compiler generated code with a direct mapping from the compiler generated code to the basic block in the source code control flow representation, rather than relying on source code line number information.
  • the correlation data is more efficient since there is a direct mapping from a hardware instruction to a block through the RVA. This is more efficient than other techniques that may utilize debugging information to map to a line of source code since it is a direct mapping that does not rely on intermediate references, such as debugging information.
  • mapping to a basic block produces a more accurate block count since multiple basic blocks may be associated with a line of source code.
  • Compiler optimizations change the code such that the instructions are spread out and may belong to different disjoint RVA ranges. This is attributable to optimizations such as control flow optimization and code duplication that move the instructions to multiple areas in the program or in the case of macro expansion in the source code.
  • Mapping to a source code line number fails to differentiate multiple blocks that utilize a particular source code statement.
  • the mapping (e.g., the SPD file) allows the compiler to store hints about how the raw sampled counts should be adjusted in the control flow representation of the correlation data.
  • a hint can be based on how an optimization transformed the program and can be used later to more accurately recover the count from any inconsistencies in the sampling.
  • FIG. 1 shows an exemplary configuration of a system that generates the correlation data 102, collects samples to generate the sample profile data 104, and compiles the program with the sample profile data 106.
  • correlation data 122 is generated from an initial compilation (compiler 110) of the program.
  • a corresponding object file 112 is generated which includes object code 116 and correlation data 118.
  • the correlation data 118 includes a control flow representation for each function of the source code 108 along with a block-to-offset map for each basic block in the control flow.
  • the object code files 112 are then linked (linker 114) and a corresponding image file 120 or executable is generated.
  • the linker 114 updates the correlation data 122 with the relative virtual addresses (i.e., virtual addresses) associated with each basic block that is based from the start RVA of its associated function.
  • the image file 120 is executed during one or more sample runs 124 using various inputs and outputs hardware instruction traces from each sample run 124.
  • the hardware instruction traces or sample data 126 from the multiple sample runs 124 are collected and formatted into sample profile trace ("SPT") data 130 by collector 128.
  • SPT data 130 contains a series of opcode, RVA, and count triplets associated with the sampled instructions.
  • the SPT data 130 is then correlated with the correlation data 122 to generate a sample program data ("SPD") file 134.
  • the SPD file 134 includes the correlation data with block counts.
  • a block count represents the number of times the hardware instructions of a basic block are executed. The block count is based on the counts from the instruction traces.
  • the SPD file 134 is then used in a subsequent profile optimization compilation (compiler 136) of the source code files 108, 140. This compilation also updates the data in the SPD file 134 generating an updated SPD 144.
  • the updated SPD 144 contains correlation data that reflects the current source code and also contains counts carried over from the input SPD for functions that have not been edited.
  • the linker 138 receives the object code files 142 and the updated SPD file 144 from this compilation and forms an image file 148 suitable for execution (block 152).
  • a developer may update the SPD file (block 154) with additional sample runs using different inputs.
  • the SPD file may be a rolling profile where counts expire over time when additional sample data is generated.
  • the block counts may be weighted by date and the older counts may be retired when more recent sample data is generated.
  • the existing block counts may be discarded and replaced with the counts from the new sample data.
  • Block counts that are older than a predetermined threshold may be discarded and replaced with the counts from the new sample data.
  • a weight can be associated with the sample data based on a time the sample data is obtained and a block count may be based on a weighted average of the counts.
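One way the rolling-profile policies above (expiry of old counts, date-based weighting) might be combined is sketched below. The half-life decay and the expiry threshold are assumed values for illustration only; the patent does not specify a particular policy.

```python
# Illustrative rolling-profile merge: old per-block counts are
# down-weighted by age and discarded entirely past an expiry threshold.
def merge_counts(old, new, old_age_days, half_life_days=30.0, expiry_days=180.0):
    """old, new: {block_id: count}. Returns the merged per-block counts."""
    merged = dict(new)
    if old_age_days >= expiry_days:
        return merged               # old counts expired: keep only new data
    weight = 0.5 ** (old_age_days / half_life_days)
    for block, count in old.items():
        merged[block] = merged.get(block, 0) + int(count * weight)
    return merged
```

A fresh profile (`old_age_days=0`) merges at full weight; an ancient one contributes nothing.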
  • the developer may edit the source code (block 156) which may be re-optimized using the SPD file 134 having the additional inputs.
  • the developer may visualize the SPD file 146 (block 150) for further analysis and then update the SPD file (block 154) with additional sample runs.
  • the system is not constrained to any particular sequence of actions and is configured so that the developer may repeatedly generate a SPD file, optimize the program with a specific version of the SPD, execute the program, visualize the SPD file, edit the source code, and re-optimize the program in any intended manner.
  • FIG. 2 depicts an exemplary method 200 for compiling each source code file 202 of a program. It should be noted that the method 200 may be representative of some or all of the operations executed by one or more aspects described herein and that the method can include more or fewer operations than those described in Fig. 2 .
  • Compilation is the process of transforming source code into executable code.
  • the compilation is performed in two phases by a front end compiler 220 and a back end compiler 224.
  • the front end compiler 220 transforms the source code file 202 into an intermediate code representation 210 and the back end compiler 224 transforms the intermediate code representation 210 into an executable, such as an object file 218.
  • the front end compiler 220 may operate in phases which include lexical analysis (block 204), semantic and syntactic analysis (block 206) and intermediate code generation (block 208).
  • Lexical analysis (block 204) analyzes a stream of characters into tokens.
  • Syntactic analysis (block 206) or parsing takes the tokens and checks if the arrangement of tokens adheres to the grammar of the underlying programming language.
  • the syntactic analysis (block 206) outputs a parse tree or syntax tree.
  • Semantic analysis (block 206) checks the parse tree for semantic correctness, such as performing type checking and checking if the number of arguments in a function call is correct, and the like.
  • Intermediate code generation (block 208) generates the intermediate code representation which is a machine and language independent version of the source code.
  • the intermediate code representation may be represented in bytecodes, common intermediate language (“CIL”), GNU register transfer language (“GNU RTL”), parse tree, tree representation, or any other type of data structure or format used by a compiler or language virtual machine.
  • the back end compiler 224 receives the intermediate code representation 210 and scans this code to generate a control flow representation for each function in the program.
  • a control flow representation is a data structure abstracting the control flow behavior of a function.
  • a function is a group of source code statements that performs a specific task.
  • a function can be from a built-in library of functions or user-defined in the source code.
  • the control flow representation is a data structure that represents the flow through a program by grouping the IR statements into basic blocks where each basic block contains sequentially-ordered IR statements that have only one entry point and only one exit point.
  • the basic blocks in a control flow representation represent a sequence of instructions with a single entry and a single exit point and, the edges connecting the basic blocks represent the flow of control within a function from one basic block to another basic block.
  • the back end compiler 224 also generates the correlation data which includes a control flow representation of a function and a block-to-offset map.
  • the block-to-offset map associates each basic block of a function with a range of offsets. Each offset is associated with an instruction that is part of the basic block and is relative to the start of the function.
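A minimal sketch of producing such a block-to-offset map during code emission, assuming for illustration that blocks are emitted in order with a fixed instruction size (neither assumption comes from the patent):

```python
# Build a block-to-offset map: each basic block gets a half-open
# offset range relative to the start of its function.
def build_block_to_offset_map(blocks, instr_size=4):
    """blocks: list of (block_id, n_instructions) in emission order.
    Returns {block_id: (start_offset, end_offset)}."""
    offset = 0
    table = {}
    for block_id, n_instrs in blocks:
        start = offset
        offset += n_instrs * instr_size
        table[block_id] = (start, offset)   # half-open [start, end)
    return table
```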
  • the back end compiler 224 then performs various optimizations on the intermediate code representation 210 which may include one or more of the following, without limitation: inlining; code hoisting; dead code elimination; eliminating common sub-expression; loop unrolling; induction variable elimination; reduction in strength; and so forth.
  • Inlining expands the body of a function with the source code of the function in order to eliminate the overhead associated with calling and returning from the function.
  • Code hoisting moves loop-invariant source code statements out of a loop so that the statement is only executed once rather than at each loop iteration.
  • Dead code elimination refers to eliminating code that is unreachable or is not executed.
  • Common sub-expression elimination refers to eliminating an expression that was previously computed and where the values of the expression have not changed since the previous computation.
  • Loop unrolling reduces the number of iterations in a loop by replicating the source code of the body of the loop.
  • Induction variable elimination combines multiple induction variables into a single induction variable. Reduction in strength replaces a computationally intensive operation with a less computationally intensive operation.
  • the final phase of the compilation is the code generation phase (block 216).
  • the back end compiler 224 assigns variables to registers (i.e., register allocation) and generates machine instructions for the target processing unit that is output to an object code file 218.
  • the object code file 218 includes the object code and correlation data. This combined object code file and correlation data 218 is sent to a linker for further processing.
  • a linker 308 receives each of the object code files 302 and links them into a single executable or image file 310 (block 304).
  • the linker 308 assembles the correlation data in each of the object files into a single correlation data file 312 (block 306).
  • the linker 308 replaces the offset ranges in each function's basic block with RVA ranges (block 306).
  • the start RVA of each function may be stored in the SPD.
  • the RVA ranges are relative to the start RVA for the corresponding function.
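The linker's rewrite of function-relative offsets into RVA ranges can be sketched as a simple translation by the function's start RVA, assuming a half-open (start, end) range representation:

```python
# Rewrite each block's function-relative offset range as an RVA range,
# as the linker does once the function's start RVA is known.
def offsets_to_rvas(block_map, function_start_rva):
    """block_map: {block_id: (start_offset, end_offset)}."""
    return {
        block: (function_start_rva + start, function_start_rva + end)
        for block, (start, end) in block_map.items()
    }
```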
  • the image file 120 is executed with real world inputs representing different scenarios in various sample runs 124 to generate sample data 126.
  • samples of hardware instructions (i.e., sample data 126) are generated.
  • there are several performance monitoring tools that can be used to generate the sample data 126, such as, without limitation, Microsoft's xperf, Intel's VTune, Oracle's Hardware Activity Reporter (HAR), Event Tracing for Windows (ETW), and the like.
  • the format of the data that is output from these different performance monitoring tools differs.
  • the collector 128 receives the sample data 126 in the different formats and converts them into a uniform format, the sample profile trace ("SPT") format.
  • SPT format includes, at least, an opcode, a RVA, and a count.
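The collector's normalization step might look like the sketch below. The tool-specific record shapes shown are hypothetical, since each real tool's output format differs and none is specified here; only the uniform (opcode, RVA, count) triplet output follows the text.

```python
# Hypothetical normalization of two tool-specific sample formats into
# uniform SPT triplets (opcode, rva, count). Field names are assumptions.
def to_spt(tool, records):
    spt = []
    if tool == "etw":
        for r in records:           # assumed dict form, e.g. {"op", "ip", "hits"}
            spt.append((r["op"], r["ip"], r["hits"]))
    elif tool == "vtune":
        for r in records:           # assumed tuple form (rva, opcode, samples)
            rva, opcode, samples = r
            spt.append((opcode, rva, samples))
    return spt
```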
  • the SPT data 130 is then used by the SPD converter 132 to generate the SPD 134 along with the correlation data 122.
  • Fig. 4 illustrates an exemplary method 400 of the SPD converter 132.
  • the SPD converter 132 receives the SPT data 130 and the correlation data 122.
  • the correlation data 122 is an SPD file without block counts.
  • the SPD converter 132 reads each SPT record (block 402) and searches for a control flow representation in the correlation data 122 that matches the SPT record (block 404). This is done by matching the RVA in the SPT record with a control flow representation having a RVA range that includes the RVA in the SPT record (block 404).
  • the corresponding block identifier is found based on the RVA in the SPT record being within the RVA range of the matching block (block 406).
  • the SPD converter 132 determines if the RVA range of the sample includes an inlinee.
  • the correlation data includes the block-to-RVA map of each inlined function, one map for each inlined instance. If so (block 408-yes), the inlinee's control flow representation is found in the correlation data (block 410) and the process continues (block 406) to search the inlinee's control flow representation until it reaches a block that does not have an inlinee (block 408-no). Once found (block 408-no), the block count for the block is updated with the count from the SPT record (block 412). If the sample does not belong to an inline range (block 408-no), then the block count for the block is updated with the count from the SPT record (block 412). This process is repeated until all the SPT records are processed (block 402).
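Method 400 can be sketched as a recursive lookup over nested block-to-RVA maps, descending into inlinee maps until a leaf block (one with no inlinee containing the RVA) receives the count. The nested-dictionary layout and names are assumptions for illustration, not the patent's file format.

```python
# Hypothetical nested correlation data: each block has a half-open RVA
# "range" and zero or more inlinee control flow representations.
FOO_INLINED = {"fB1": {"range": (0x100, 0x110), "inlinees": []}}
MAIN_CFR = {
    "B1": {"range": (0x0F0, 0x100), "inlinees": []},
    "B2": {"range": (0x100, 0x140), "inlinees": [FOO_INLINED]},
}

def find_leaf_block(cfr, rva):
    """Descend into inlinee maps until the innermost block containing
    `rva` is found; returns (block_id, info) or None."""
    for block_id, info in cfr.items():
        start, end = info["range"]
        if start <= rva < end:
            for inlinee in info["inlinees"]:
                leaf = find_leaf_block(inlinee, rva)
                if leaf is not None:
                    return leaf
            return (block_id, info)
    return None

def apply_spt(cfr, spt_records):
    """Add each SPT (opcode, rva, count) record to its leaf block's count."""
    for _opcode, rva, count in spt_records:
        hit = find_leaf_block(cfr, rva)
        if hit is not None:
            _block_id, info = hit
            info["count"] = info.get("count", 0) + count
```

A sample inside B2's range that also falls inside the inlined foo() instance is credited to foo()'s block, not to B2 itself, matching the inlinee handling described above.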
  • An edge count is the number of times an edge into (i.e., incoming edge) or out (i.e., outgoing edge) of a basic block of a control flow representation is traveled or executed.
  • An incoming edge count of a block is the number of times execution transferred from the source block of the edge (i.e. a predecessor block) to the basic block and an outgoing edge count of a basic block is the number of times execution transferred from the basic block to the sink block of the edge (i.e. a successor block).
  • the count of all incoming edges of a basic block should equal the count of all outgoing edges for that block.
  • the post processing heuristically adjusts the block counts and edge counts so that the block counts and edge counts meet the flow conservation rule, that is the sum of incoming edges' count equals the sum of outgoing edges' count for each basic block.
  • this post processing can alter an edge count to be the average of the incoming edge count and the outgoing edge count.
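A sketch of checking the flow conservation rule over an edge-count table; any repair heuristic, such as the averaging mentioned above, would run on the blocks this check flags. The table layout is an illustrative assumption. The main() edge counts given later in Fig. 6D (55; 45; 10; 45) are used here as a passing example.

```python
# Flow-conservation check: for every listed (interior) block, the sum
# of incoming edge counts must equal the sum of outgoing edge counts.
def flow_violations(edges, blocks):
    """edges: {(src, dst): count}. Returns {block: (in_sum, out_sum)}
    for each block that violates the rule."""
    bad = {}
    for b in blocks:
        incoming = sum(c for (src, dst), c in edges.items() if dst == b)
        outgoing = sum(c for (src, dst), c in edges.items() if src == b)
        if incoming != outgoing:
            bad[b] = (incoming, outgoing)
    return bad

# Consistent example: main()'s edge counts from Fig. 6D.
MAIN_EDGES = {("B1", "B2"): 55, ("B2", "B3"): 45,
              ("B2", "B4"): 10, ("B3", "B4"): 45}
```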
  • the technology described herein is not constrained to this particular heuristic and that other heuristics may be employed.
  • each source code file of the program is compiled using the SPD file to optimize the program.
  • the compilation 500 may be performed by a single compiler or using a front end and back end compiler and is considered a profile optimization compiler 136 since it utilizes the SPD file 134.
  • the compilation may be performed in multiple phases including a lexical analysis phase (block 504), a semantic and syntactic analysis phase (block 506), an intermediate code generation phase (block 508) as described previously.
  • the intermediate code generation phase (block 508) generates an intermediate code representation of the source code.
  • the optimization phase (block 514) performs the code optimizations and the code generation phase (block 516) generates the object code file 518.
  • Prior to the optimization phase, the compiler analyzes the SPD file 134 to determine which code optimizations to perform on which areas of the program (block 512). For example, the compiler may determine which functions of the program, and which blocks of a function, are executed more frequently and therefore warrant more aggressive optimization. The block counts may be used to determine which functions of the program should be inlined. The code layout may be optimized by placing the more frequently executed code together and the less frequently executed code together.
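One hedged illustration of how block counts might drive such decisions: pick a "hot" set of functions covering most observed samples for aggressive optimization (inlining, hot code layout) and treat the rest as cold. The coverage threshold and greedy selection policy are assumptions, not values from the patent.

```python
# Classify functions as hot or cold by cumulative sample coverage:
# the hot set is the smallest prefix (by descending count) covering
# `hot_fraction` of all executed samples.
def classify_functions(func_counts, hot_fraction=0.9):
    """func_counts: {name: total_block_count}. Returns (hot, cold)."""
    total = sum(func_counts.values())
    hot, cold, covered = [], [], 0
    for name, count in sorted(func_counts.items(), key=lambda kv: -kv[1]):
        if covered < hot_fraction * total:
            hot.append(name)
            covered += count
        else:
            cold.append(name)
    return hot, cold
```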
  • a user may, at any time, want to create a new SPD file with different sample data or add additional sample data to an existing SPD file. In either situation, the SPD file may be updated accordingly. Additional sample runs may be performed (block 124) that generate new sample data 126 that the collector 128 formats into SPT data 130.
  • the SPD converter 132 may either merge the new SPT data with correlation data to create a new SPD file or merge the new SPT data with an existing SPD file.
  • a user may, at any time, edit the source code of the program (block 156) and recompile the program to further optimize the program (block 136).
  • FIGs. 6A-6D illustrate the creation of the SPD file for an exemplary program.
  • In Fig. 6A there is shown an exemplary system and process 600 having an exemplary source code listing 602 for the function foo() before the initial compilation and linkage.
  • There is shown an illustration of a control flow representation 604 for the function foo().
  • the control flow representation 604 includes five basic blocks labeled B1 - B5.
  • Basic block B1 represents the code statement int foo (int x) or the function prolog
  • basic block B2 represents the code statement if (x&1)
  • basic block B3 represents the code statements bar(x); bar(x + 1)
  • basic block B4 represents code statements apel(x); apel(x-1); and
  • basic block B5 represents code statement return 3.
  • Fig. 6B shows a second exemplary source code program 620 including the function main() which inlines the function foo() shown in Fig. 6A .
  • the corresponding control flow representation 622 includes four basic blocks labeled B1 - B4.
  • Basic block B1 represents the code statement main()
  • basic block B2 represents the code statement if foo(x) with the control flow representation for the function foo() inlined
  • basic block B3 represents the code statement printf("non-zero")
  • basic block B4 represents the code statement return 0.
  • Fig. 6C illustrates the correlation data 630 for the second exemplary source code program 620 main() shown in Fig. 6B .
  • the correlation data 630 for the function main() is shown as a table that includes a separate entry for each basic block B1 - B4 in main () 636, 638, 642, 644.
  • the correlation data 630 for main()'s basic block B2 includes the correlation data 606 for foo().
  • the correlation data is a hierarchical table where a basic block includes the correlation data for all inlined functions. For example, if foo() were to include inlined code for apel(), then the correlation data for apel() would be part of the correlation table for foo().
  • Fig. 6C also shows the SPT data 632.
  • the SPD converter 132 obtains the SPT data 632 and merges the counts in the SPT data 632 with the block counts of the corresponding basic block in the correlation data 630.
  • the basic block corresponding to the SPT data record has a range of RVAs that match the RVA in the SPT data record.
  • the block counts in the correlation data 630 are zero.
  • the block counts appear in the correlation data thereby generating the SPD file 634.
  • the SPD file 634 includes the correlation data 630 and the block counts from the SPT data 632.
  • In Fig. 6D there is shown the SPD file 634 without edge counts and the corresponding SPD file 646 with edge counts.
  • the SPD converter 132 adds in the edge counts for each edge in each basic block of each control flow representation.
  • edge counts 648 for the control flow representation of foo()'s inlined instance
  • edge counts 650 for the control flow representation of main().
  • the edge counts 648 for foo() 's inlined instance include an edge count of 55 for the edge B1->B2, an edge count of 15 for the edge B2->B3, an edge count of 40 for the edge B2->B4, an edge count of 15 for the edge B3->B5, and an edge count of 40 for the edge B4->B5.
  • the edge counts 650 for main() include an edge count of 55 for the edge B1->B2, an edge count of 45 for the edge B2->B3, an edge count of 10 for the edge B2->B4, and an edge count of 45 for the edge B3->B4.
  • the generation of the SPD file in the manner described above has several advantages.
  • a sampled instruction is not traced back to a particular source code line number or debug line number, rather it is traced back to the basic block that is associated with its intermediate code representation.
  • the block count considers instructions that are inlined and have relocated to different blocks of the program due to optimization thereby producing a more accurate block count.
  • the mapping from a hardware instruction to a basic block is a direct mapping that utilizes only the RVA.
  • the block counts are part of the control flow representation of the program, it is easier for the compiler's optimizer to access them since the optimizer utilizes a control flow graph to perform most optimizations.
  • a computer system can include one or more processors, a memory connected to one or more processors, one or more compilers, a linker and a SPD converter.
  • the one or more compilers include one or more modules that when loaded into the memory cause the one or more processors to compile and optimize a source code file and generate correlation data.
  • the correlation data includes a control flow graph of the source code in the form of a control flow representation and a block-to-offset map.
  • the control flow representation includes a plurality of basic blocks.
  • the linker includes one or more modules that when loaded into the memory cause the one or more processors to link the optimized object files of a program into an image file and to replace the offsets in the block-to-offset map with a corresponding relative virtual address (RVA).
  • RVA is a relative address from a start address of the function associated with a basic block in the image file.
  • the SPD converter includes one or more modules that when loaded into the memory cause the one or more processors to obtain sample data from sample runs, which is converted into a SPT format and used to generate block and edge counts for each basic block of a function, which are stored in a SPD file. Additional sample runs may be made and the sample data from those sample runs may be merged into the block and edge counts in the SPD file.
  • the old block counts may be expired if older than a threshold or replaced with the new counts.
  • a weight may be associated with the sample data based on a time the sample data was obtained and a weighted average of the counts may be used.
  • At least one of the one or more compilers is a profile optimization compiler that utilizes the block and/or edge counts of the sample profile data to optimize the program.
  • a method of using a system such as the system described above can include operations such as obtaining sample data from execution of a program that includes a relative virtual address of a hardware instruction and a count of a number of times the hardware instruction has been executed.
  • the relative virtual address of the hardware instruction is matched to a corresponding basic block of a source code control flow representation of the program.
  • a block count associated with the matching basic block is updated to include the count, and the block count and its associated edge counts are used in the optimization of the program.
  • Correlation data is generated during a compilation of the program to map the relative virtual address of the hardware instruction to its associated basic block.
  • the correlation data also includes the control flow representation of the program including the basic blocks and a block-to-offset map.
  • the block-to-offset map is converted into a block-to-RVA map based on a linked image of the program.
  • Sample data from sample runs is obtained and converted into a format that includes the RVA and a count associated with the hardware instruction.
  • the block count associated with a basic block considers each inlined function that is included in a basic block.
  • the block counts may be updated with sample data from additional sample runs. These block counts may be merged into the existing block counts and the merged block counts may be used to re-optimize the program.
  • the program may be edited and re-optimized using the existing block counts or the merged block counts.
  • a device can include one or more processors, a memory connected to at least one processor, and at least one module loaded into the memory causing the at least one processor to generate a control flow representation for at least one function of the program that includes at least one basic block having a range of virtual addresses associated with instructions that are part of a basic block and a block count.
  • the at least one module obtains a hardware instruction including a virtual address and a count.
  • the virtual address of the instruction sample is used to find a basic block of the control flow representation that is associated with the instruction sample.
  • the block count associated with the matching basic block is updated.
  • the program is optimized using the data from the control flow representation including the block counts.
  • Fig. 7 depicts a first exemplary operating environment 700 that includes an integrated development environment ("IDE”) 702 and a common language runtime (“CLR”) 704.
  • the IDE 702 may be, e.g., Visual Studio, NetBeans, Eclipse, JetBrains, NetCode, etc.
  • the IDE 702 may allow a user (e.g., developer, programmer, designer, coder, etc.) to design, code, compile, test, run, edit, debug or build a program, set of programs, web sites, web applications, packages, and web services in a computing device.
  • Software programs include source code 710 created in one or more source code languages (e.g., Visual Basic, Visual J#, C++, C#, J#, Java Script, APL, COBOL, Pascal, Eiffel, Haskell, ML, Oberon, Perl, Python, Scheme, Smalltalk and the like).
  • the IDE 702 may provide a native code development environment or may provide a managed code development environment that runs on a language virtual machine or may provide a combination thereof.
  • the IDE 702 may provide a managed code development environment using the .NET framework that may include a user interface 706, a source code editor 708, a front-end compiler 712, a collector 716, a SPD converter 720, and a visualizer 722.
  • a user can create and/or edit the source code according to known software programming techniques and the specific logical and syntactical rules associated with a particular source language via the user interface 706 and the source code editor 708 in the IDE 702. Thereafter, the source code 710 can be compiled via a front end compiler 712, whereby an intermediate code representation 726 of the program and correlation data 714 is created.
  • the IDE 702 may generate a SPD file 724 using a collector 716 that collects and aggregates sample data 718 from various sample runs of the program and a SPD converter 720 that generates block counts from the sample data into a SPD file 724 which can be viewed by a user through a visualizer 722.
  • Object code or native code 730A-730N is created using a language specific compiler or back end compiler 728 from the intermediate code representation 726 and the SPD file 724 when the program is executed. That is, when an intermediate code representation 726 is executed, it is compiled and linked (linker 732) while being executed into the appropriate machine language for the platform it is being executed on, thereby making the image file 734 portable across several platforms.
  • programs may be compiled to native code machine language (not shown) appropriate for its intended platform.
  • the IDE 702 may operate on a first computing device 740 and the CLR 704 may operate on a second computing device 736 that is distinct from the first computing device 740. In another aspect of the invention, the IDE 702 and CLR 704 may operate on the same computing device.
  • the computing devices 736, 740 may be any type of electronic device, such as, without limitation, a mobile device, a personal digital assistant, a mobile computing device, a smart phone, a cellular telephone, a handheld computer, a server, a server array or server farm, a web server, a network server, a blade server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, or combination thereof.
  • the first and second computing devices 736, 740 may be communicatively coupled through a communication framework 738.
  • the communication framework 738 facilitates communications between the computing devices.
  • the communications framework 738 may embody any well-known communication techniques, such as techniques suitable for use with packet-switched networks (e.g., public networks such as the Internet, private networks such as enterprise intranet, and so forth), circuit-switched networks (e.g., the public switched telephone network), or a combination of packet-switched networks and circuit-switched networks (with suitable gateways and translators).
  • Attention now turns to Fig. 8 and a discussion of a second exemplary operating environment.
  • the operating environment 800 is exemplary and is not intended to suggest any limitation as to the functionality of the embodiments.
  • the embodiments may be applied to an operating environment 800 utilizing at least one computing device 802.
  • the computing device 802 may be any type of electronic device, such as, without limitation, a mobile device, a personal digital assistant, a mobile computing device, a smart phone, a cellular telephone, a handheld computer, a server, a server array or server farm, a web server, a network server, a blade server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, or combination thereof.
  • the operating environment 800 may be configured in a network environment, a distributed environment, a multi-processor environment, or a stand-alone computing device having access to remote or local storage devices.
  • the computing device 802 may include one or more processors 804, a communication interface 806, a storage device 808, one or more input devices 810, one or more performance monitoring units (PMU) 812, output devices 816, and a memory 814.
  • a processor 804 may be any commercially available processor and may include dual microprocessors and multi-processor architectures.
  • the communication interface 806 facilitates wired or wireless communications between the computing device 802 and other devices.
  • the storage device 808 may be a computer-readable medium that does not contain propagating signals, such as modulated data signals transmitted through a carrier wave.
  • Examples of the storage device 808 include without limitation RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, all of which do not contain propagating signals, such as modulated data signals transmitted through a carrier wave.
  • the input devices 810 may include a keyboard, mouse, pen, voice input device, touch input device, etc., and any combination thereof.
  • the output devices 816 may include a display, speakers, printers, etc., and any combination thereof.
  • the PMU 812 includes a set of special purpose registers (e.g., hardware counters) that store counts of hardware-related activities and related hardware/software that reads these registers.
  • the memory 814 may be any non-transitory computer-readable storage media that may store executable procedures, applications, and data.
  • the computer-readable storage media does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. It may be any type of non-transitory memory device (e.g., random access memory, read-only memory, etc.), magnetic storage, volatile storage, non-volatile storage, optical storage, DVD, CD, floppy disk drive, etc. that does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave.
  • the memory 814 may also include one or more external storage devices or remotely located storage devices that do not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave.
  • the memory 814 may contain instructions, components, and data.
  • a component is a software program that performs a specific function and is otherwise known as a module, application, and the like.
  • the memory may include an operating system 820, a front end compiler 822, a back end compiler 824, a linker 826, a collector 828, a SPD converter 830, a source code editor 832, a visualizer 834, source code files 836, object code files 838, sample data 840, SPT data 842, correlation data 844, SPD file 846, and various other applications, components, and data 848.

Description

    BACKGROUND Field
  • The present embodiments relate to techniques for compiling software programs for execution on computing systems and more particularly, to methods, devices, and systems that create profile data based on sampled instructions for use in profile guided optimization.
  • XIAOLAN ZHANG ET AL, "System support for automatic profiling and optimization", OPERATING SYSTEMS REVIEW, ACM, NEW YORK, NY, US, (19971001), vol. 31, no. 5, doi:10.1145/269005.266640, ISSN 0163-5980, pages 15 - 26 relates to a framework for automatic collection and management of profile information and application of profile driven optimization called Morph. Morph is a combination of operating system and compiler technology that provides a practical framework for the advanced compiler optimizations needed to support continued improvements in application performance. The Morph Monitor is an operating system kernel component that implements continuous, low overhead profiling and program monitoring. The Morph Editor is a compiler component that implements re-optimization, transforming, compiler intermediate form into an executable. The Morph Manager is a system component that manages profile information, including the automatic invocation of re-optimization.
  • US 6 275 981 B1 relates to a technique for relating profile data generated by monitoring the execution of an optimized machine code computer program back to the source language description of the computer program. Logical line numbers are associated with the basic blocks of the intermediate code representation of the computer program and actual line numbers are associated with each instruction of the intermediate code representation of the computer program. During optimization of the intermediate code, the logical line numbers remain fixed to basic blocks, while actual line numbers remain fixed to intermediate code instructions.
  • US 2009/0094590 A1 relates to techniques for generating an optimization insensitive behavior profile. A source identifier is assigned to each instruction in an original control flow graph representing a program code prior to optimization. The identifiers identify a basic block associated with the instruction or a group of basic blocks. A source identifier in the set of source identifiers is assigned to instructions in an optimized control flow graph representing the program code after optimizing the program code. The instructions in the optimized control flow graph are mapped to the original control flow graph using the set of source identifiers to form a mapping transformation. Behaviour profile data associated with the optimized program code is moved to basic blocks in the original control flow graph using the mapping transformation to form the optimization insensitive behaviour profile.
  • SUMMARY
  • It is the object of the present invention to improve prior art systems. This object is solved by the subject matter of the independent claims. Preferred embodiments are defined by the dependent claims.
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
  • Compiler optimizations are used to generate executable code that runs efficiently and makes efficient use of resources. These optimizations benefit from the knowledge of how the program will execute with production inputs. Profile guided optimization obtains information or profile data generated from sample runs of the program to guide the compiler in performing optimizations that better suit a target environment. This profile data can include a count of the number of times instructions associated with a basic block are executed. In this manner, the compiler can optimize the more frequently executed areas of the program more aggressively than less frequently executed areas.
  • In order to correlate the counts with the source code, correlation data is generated which maps a processor instruction sampled from a sample run with a basic block associated with the source code corresponding to the processor instruction. The correlation data includes a control flow representation for each function of the program which identifies the basic blocks of the function. Each basic block contains a range of relative virtual addresses corresponding to instructions associated with the basic block in the source code control flow. The block count from the sample run is then used to update the block count of the corresponding basic block. The correlation data with the block counts is then considered the sample profile data which is then used to derive edge counts which are used to determine the optimizations to perform on certain portions of the program and the degree of optimization.
  • A user may visualize the sample profile data, edit the source code, run additional sample runs to obtain more block counts, and recompile the program, iteratively in any sequence, in order to obtain the desired level of optimization for an intended use.
  • These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive of aspects as claimed.
  • BRIEF DESCRIPTION OF DRAWINGS
    • Fig. 1 illustrates an exemplary system for optimizing a software program utilizing sample driven profile guided optimization.
    • Fig. 2 is a flow diagram illustrating an exemplary method for performing a compilation to generate the correlation data.
    • Fig. 3 is a flow diagram illustrating an exemplary method for linking object files into an image file and updating the correlation data with relative virtual addresses.
    • Fig. 4 is a block diagram illustrating an exemplary method for generating the sample profile data with block and edge counts.
    • Fig. 5 is a block diagram illustrating an exemplary method for optimizing a program with the SPD file.
    • Figs. 6A - 6D illustrate an example of generating the sample profile data for an exemplary program.
    • Fig. 7 is a block diagram illustrating a first exemplary computing or operating environment.
    • Fig. 8 is a block diagram illustrating a second exemplary computing or operating environment.
    DETAILED DESCRIPTION Overview
  • A compiler typically performs code optimizations in order to generate code that executes faster and uses memory efficiently. These code optimizations can include inlining, code hoisting, dead storage elimination, common sub-expression elimination, loop unrolling, code motion, induction variable elimination, reduction in strength, and so forth. A traditional compiler performs these code optimizations based on static source code files, without knowledge of the input that will be used at runtime, which cannot be obtained from the static source code files.
  • Profile guided optimization (PGO) is a technique that uses information generated from sample runs of a program to guide the compiler in performing optimizations to better suit the intended target environment. The information collected during the sample runs is referred to as profile data. The sample runs can be based on test scenarios with real-world inputs. With the profile data based on real-world inputs, the compiler can be guided to decide which optimizations are best suited for the target environment and how to perform the optimizations. For example, the knowledge of which sections of the program are most heavily executed allows the compiler to optimize the more heavily executed sections more aggressively than those areas of the program that are used less frequently.
  • Instrumentation PGO is one type of PGO technique. In this technique, probes or special instructions are inserted into a binary representation of the program. The probes are used to collect information during execution of the program. The instrumented binary is then compiled to form an instrumented executable that is run with test input. The profile data resulting from these sample runs is then used in a subsequent compilation of the program, without the instrumented code, to optimize the program. However, instrumentation-based PGO requires that modifications be made to the program in order to instrument the program to collect the profile data. The additional code in the instrumented program increases the execution time of the program and alters the quality of the profile data.
  • Hardware-event sampling is another PGO technique that has the advantage of not requiring any modifications to the program. Certain processors have hardware performance counters that are used to count events or actions that occur during the operation of the processor. For example, some processors include counters for counting memory accesses, cache misses, instructions that are executed, instructions that are not executed, branch mis-predictions, clock cycles and so forth. A performance monitoring unit (PMU) is often coupled to a processor and used to read a counter when the counter overflows. The PMU generates an interrupt to the program when the counter overflows and records the address of the currently executed instruction and the value of one or more counters. However, there needs to be a way to map the address of the sampled instruction to its corresponding source location in order for the information from the counters to be of use for the compiler's optimization.
  • The subject matter disclosed herein overcomes this limitation by generating correlation data that maps the address of the sampled instruction to a basic block of the program. This mapping utilizes a relative virtual address (RVA) range that is associated with each basic block and the sampled instruction. In this manner, the count associated with the sampled instruction can be added to the block count of its corresponding basic block. The block counts are then used to derive edge counts and identify which areas in the program are more heavily traveled than other less frequently used areas. This information is crucial for the compiler in optimizing the program for faster execution with efficient memory usage.
  • The correlation data includes a control flow representation of each function of the program along with a block-to-RVA map. The block-to-RVA map identifies ranges of RVAs that are associated with a particular basic block. The identity of the basic blocks in a program is not known until the compiler performs control flow analysis. The intermediate code representation (IR) statements used by the compiler are not associated with an address. Before code linkage, the compiler generates an offset for each IR statement that is relative to a position within a function. This offset is part of the range of offsets associated with a basic block. At link time, the start RVA of a function is known so the linker replaces the offset with a relative virtual address that is based from the start RVA of its associated function.
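The link-time rewrite described above can be sketched as follows; the map shapes are assumptions for illustration, since the text does not specify a concrete layout:

```python
def offsets_to_rvas(block_offset_map, function_start_rvas):
    """Replace function-relative offsets with RVAs at link time.

    block_offset_map: {function: {block: (start_offset, end_offset)}},
        offsets relative to the start of the function (pre-link).
    function_start_rvas: {function: start RVA of the function in the image}.
    Returns {(function, block): (start_rva, end_rva)} - the block-to-RVA map.
    """
    block_rva_map = {}
    for func, blocks in block_offset_map.items():
        base = function_start_rvas[func]  # known only once the image is laid out
        for block, (start_off, end_off) in blocks.items():
            block_rva_map[(func, block)] = (base + start_off, base + end_off)
    return block_rva_map
```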
  • The present technique has several benefits over prior solutions. The correlation data is more precise since the correlation data is generated on compiler generated code with a direct mapping from the compiler generated code to the basic block in the source code control flow representation, rather than relying on source code line number information.
  • The correlation data is more efficient since there is a direct mapping from a hardware instruction to a block through the RVA. This is more efficient than other techniques that may utilize debugging information to map to a line of source code since it is a direct mapping that does not rely on intermediate references, such as debugging information.
  • In addition, mapping to a basic block produces a more accurate block count since multiple basic blocks may be associated with a line of source code. Compiler optimizations change the code such that the instructions are spread out and may belong to different disjoint RVA ranges. This is attributable to optimizations such as control flow optimization and code duplication that move the instructions to multiple areas in the program or in the case of macro expansion in the source code. Mapping to a source code line number fails to differentiate multiple blocks that utilize a particular source code statement.
  • In addition, the mapping (e.g., SPD file) allows the compiler to store hints about how the raw sampled counts should be adjusted in the control flow representation of the correlation data. A hint can be based on how an optimization transformed the program and can be used later to more accurately recover the counts from any inconsistencies introduced by the sampling.
  • Sample Driven Profile Guided Optimization With Precise Correlation
  • Attention now turns to Figure 1 which illustrates an exemplary system and process 100 configured for sample driven profile guided optimization. Fig. 1 shows an exemplary configuration of a system that generates the correlation data 102, collects samples to generate the sample profile data 104, and compiles the program with the sample profile data 106. Initially, correlation data 122 is generated from an initial compilation (compiler 110) of the program. As each source code file 108 of a program is compiled, a corresponding object file 112 is generated which includes object code 116 and correlation data 118. The correlation data 118 includes a control flow representation for each function of the source code 108 along with a block-to-offset map for each basic block in the control flow. The object code files 112 are then linked (linker 114) and a corresponding image file 120 or executable is generated. The linker 114 updates the correlation data 122 with the relative virtual addresses (i.e., virtual addresses) associated with each basic block that is based from the start RVA of its associated function.
  • Next, the image file 120 is executed during one or more sample runs 124 using various inputs and outputs hardware instruction traces from each sample run 124. The hardware instruction traces or sample data 126 from the multiple sample runs 124 are collected and formatted into sample profile trace ("SPT") data 130 by collector 128. The SPT data 130 contains a series of opcode, RVA, and count triplets associated with the sampled instructions. The SPT data 130 is then correlated with the correlation data 122 to generate a sample program data ("SPD") file 134.
  • The SPD file 134 includes the correlation data with block counts. A block count represents the number of times the hardware instructions of a basic block are executed. The block count is based on the counts from the instruction traces. The SPD file 134 is then used in a subsequent profile optimization compilation (compiler 136) of the source code files 108, 140. This compilation also updates the data in the SPD file 134 generating an updated SPD 144. The updated SPD 144 contains correlation data that reflects the current source code and also contains counts carried over from the input SPD for functions that have not been edited. The linker 138 receives the object code files 142 and the updated SPD file 144 from this compilation and forms an image file 148 suitable for execution (block 152).
  • Although Fig. 1 depicts the system and process in a particular configuration, it should be noted that the developer is able to alter the steps shown in Fig. 1 in any intended manner. For example, a developer may update the SPD file (block 154) with additional sample runs using different inputs. In one aspect, the SPD file may be a rolling profile where counts expire over time when additional sample data is generated. For example, the block counts may be weighted by date and the older counts may be retired when more recent sample data is generated. Alternatively, the existing block counts may be discarded and replaced with the counts from the new sample data. Block counts that are older than a predetermined threshold may be discarded and replaced with the counts from the new sample data. A weight can be associated with the sample data based on a time the sample data is obtained and a block count may be based on a weighted average of the counts.
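One of the merge policies described above, a weighted average of old and new counts, can be sketched as follows; the weight parameter is a hypothetical knob, not something the text fixes:

```python
def merge_block_counts(existing, new, new_weight=0.7):
    """Merge counts from a new sample run into existing block counts
    using a weighted average, so older profile data decays over time
    (a sketch of one rolling-profile policy described above).

    existing, new: {block_id: count}; new_weight in (0, 1].
    """
    merged = {}
    for block in set(existing) | set(new):
        old_c = existing.get(block, 0)
        new_c = new.get(block, 0)
        merged[block] = round((1 - new_weight) * old_c + new_weight * new_c)
    return merged
```

Setting new_weight to 1.0 reproduces the alternative policy of discarding the existing counts outright.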
  • In addition, the developer may edit the source code (block 156) which may be re-optimized using the SPD file 134 having the additional inputs. In addition, the developer may visualize the SPD file 146 (block 150) for further analysis and then update the SPD file (block 154) with additional sample runs. The system is not constrained to any particular sequence of actions and is configured so that the developer may repeatedly generate a SPD file, optimize the program with a specific version of the SPD, execute the program, visualize the SPD file, edit the source code, and re-optimize the program in any intended manner.
  • Attention now turns to a further discussion of the compilation shown in Fig. 1 (compiler 110) which is used to generate the correlation data and object files. Turning to Fig. 2, there is shown an exemplary method 200 for compiling each source code file 202 of a program. It should be noted that the method 200 may be representative of some or all of the operations executed by one or more aspects described herein and that the method can include more or less operations than that which is described in Fig. 2.
  • Compilation is the process of transforming source code into executable code. In one aspect, the compilation is performed in two phases by a front end compiler 220 and a back end compiler 224. The front end compiler 220 transforms the source code file 202 into an intermediate code representation 210 and the back end compiler 224 transforms the intermediate code representation 210 into an executable, such as an object file 218.
  • As shown in Fig. 2, the front end compiler 220 may operate in phases which include lexical analysis (block 204), semantic and syntactic analysis (block 206) and intermediate code generation (block 208). Lexical analysis (block 204) analyzes a stream of characters into tokens. Syntactic analysis (block 206) or parsing takes the tokens and checks if the arrangement of tokens adheres to the grammar of the underlying programming language. The syntactic analysis (block 206) outputs a parse tree or syntax tree. Semantic analysis (block 206) checks the parse tree for semantic correctness, such as performing type checking and checking if the number of arguments in a function call is correct, and the like. Intermediate code generation (block 208) generates the intermediate code representation which is a machine and language independent version of the source code. The intermediate code representation may be represented in bytecodes, common intermediate language ("CIL"), GNU register transfer language ("GNU RTL"), parse tree, tree representation, or any other type of data structure or format used by a compiler or language virtual machine.
  • The back end compiler 224 receives the intermediate code representation 210 and scans this code to generate a control flow representation for each function in the program. A control flow representation is a data structure abstracting the control flow behavior of a function. A function is a group of source code statements that performs a specific task and can come from a built-in library of functions or be user-defined in the source code. The control flow representation represents the flow through a program by grouping the IR statements into basic blocks, where each basic block contains sequentially-ordered IR statements with a single entry point and a single exit point, and the edges connecting the basic blocks represent the flow of control within a function from one basic block to another.
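Grouping IR statements into basic blocks is conventionally done with the leader rule: the first statement, any branch target, and any statement following a branch each start a new block. A minimal sketch over a toy IR (the instruction encoding is an assumption for illustration):

```python
def find_leaders(instrs):
    """Return the sorted indices of basic-block leaders in a toy IR.

    instrs: list of dicts with an 'op' and, for branch ops, a 'target' index.
    Each leader index starts a basic block that extends to the next leader.
    """
    leaders = {0}  # the first statement always starts a block
    for i, ins in enumerate(instrs):
        if ins["op"] in ("jmp", "br"):
            leaders.add(ins["target"])      # a branch target starts a block
            if i + 1 < len(instrs):
                leaders.add(i + 1)          # so does the fall-through point
    return sorted(leaders)
```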
  • The back end compiler 224 also generates the correlation data which includes a control flow representation of a function and a block-to-offset map. The block-to-offset map associates each basic block of a function with a range of offsets. Each offset is associated with an instruction that is part of the basic block and is relative to the start of the function.
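A minimal sketch of such a block-to-offset map, assuming a dictionary from block identifier to a list of (start offset, length) ranges; the offsets shown are invented for illustration and are relative to the start of the function:

```python
# Hypothetical block-to-offset map: block id -> [(start_offset, length), ...].
block_to_offset = {
    1: [(0x00, 0x08)],
    2: [(0x08, 0x06)],
    3: [(0x0E, 0x10)],
}

def block_for_offset(block_map, offset):
    """Return the block id whose offset range covers `offset`, or None."""
    for block_id, ranges in block_map.items():
        for start, length in ranges:
            if start <= offset < start + length:
                return block_id
    return None
```

A block may own more than one range because optimization can split a block's instructions into discontiguous regions of the function body.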
  • The back end compiler 224 then performs various optimizations on the intermediate code representation 210 which may include one or more of the following, without limitation: inlining; code hoisting; dead code elimination; common sub-expression elimination; loop unrolling; induction variable elimination; reduction in strength; and so forth. Inlining replaces a call to a function with the body of the function in order to eliminate the overhead associated with calling and returning from the function. Code hoisting moves loop-invariant source code statements out of a loop so that the statement is only executed once rather than at each loop iteration. Dead code elimination refers to eliminating code that is unreachable or is never executed. Common sub-expression elimination refers to eliminating an expression that was previously computed where the values of the expression have not changed since the previous computation. Loop unrolling reduces the number of iterations in a loop by replicating the source code of the body of the loop. Induction variable elimination combines multiple induction variables into a single induction variable. Reduction in strength replaces a computationally intensive operation with a less computationally intensive operation. It should be noted that the aspects of the invention described herein are not limited to the foregoing compiler optimizations. Other compiler optimizations may be utilized as well, and the optimizations described herein are not intended to suggest any limitation to the functionality of the aspects.
  • The final phase of the compilation is the code generation phase (block 216). The back end compiler 224 assigns variables to registers (i.e., register allocation) and generates machine instructions for the target processing unit that is output to an object code file 218. In one aspect, the object code file 218 includes the object code and correlation data. This combined object code file and correlation data 218 is sent to a linker for further processing.
  • Turning to Fig. 3, there is shown in further detail the actions of the linker 114 of Fig. 1. A linker 308 receives each of the object code files 302 and links them into a single executable or image file 310 (block 304). The linker 308 assembles the correlation data in each of the object files into a single correlation data file 312 (block 306). The linker 308 replaces the offset ranges in each function's basic block with RVA ranges (block 306). Alternatively, the start RVA of each function may be stored in the SPD. The RVA ranges are relative to the start RVA for the corresponding function.
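The linker's offset-to-RVA rewrite can be sketched as follows, assuming foo() is laid out at a hypothetical start RVA of 0x1000 in the linked image:

```python
def offsets_to_rvas(block_map, function_start_rva):
    """Rewrite each block's (offset, length) ranges as (RVA, length) ranges
    by adding the function's start RVA, as the linker does after layout.
    `block_map` maps block id -> [(start_offset, length), ...]."""
    return {
        block_id: [(function_start_rva + start, length)
                   for start, length in ranges]
        for block_id, ranges in block_map.items()
    }

# foo() placed at RVA 0x1000 (an illustrative address, not from the patent).
block_to_rva = offsets_to_rvas({1: [(0x00, 0x08)], 2: [(0x08, 0x06)]}, 0x1000)
```

The alternative mentioned above, storing only each function's start RVA in the SPD, would defer this addition until the sample RVAs are matched.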
  • Attention now turns back to Fig. 1 for a more detailed description of the manner in which the sample profile data is generated. The image file 120 is executed with real world inputs representing different scenarios in various sample runs 124 to generate sample data 126. During the execution of each sample run 124, samples of hardware instructions (i.e., sample data 126) are generated. There are various performance monitoring tools that can be used to generate the sample data 126, such as without limitation, Microsoft's xperf, Intel's Vtune, Oracle's Hardware Activity Reporter (HAR), Event Tracing for Windows (ETW), and the like. The format of the data that is output from these different performance monitoring tools differs. The collector 128 receives the sample data 126 in the different formats and converts them into a uniform format, Sample Data Trace ("SPT") format. The SPT format includes, at least, an opcode, a RVA, and a count. The SPT data 130 is then used by the SPD converter 132 to generate the SPD 134 along with the correlation data 122.
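A sketch of the collector's normalization step. The raw-sample shape below is hypothetical (each monitoring tool emits its own format); only the three fields the SPT format is said to hold, at minimum, are kept:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SPTRecord:
    """Uniform sample record holding the minimum SPT fields."""
    opcode: str
    rva: int
    count: int

def to_spt(raw_samples):
    """Normalize tool-specific samples (here a hypothetical dict shape) into
    SPT records, summing counts for duplicate (opcode, RVA) pairs."""
    merged = {}
    for s in raw_samples:
        key = (s["opcode"], s["rva"])
        merged[key] = merged.get(key, 0) + s["count"]
    return [SPTRecord(op, rva, c) for (op, rva), c in sorted(merged.items())]

records = to_spt([
    {"opcode": "mov", "rva": 0x1008, "count": 3},
    {"opcode": "mov", "rva": 0x1008, "count": 2},
    {"opcode": "add", "rva": 0x1010, "count": 1},
])
```

Aggregating duplicates in the collector keeps the SPT data compact before the SPD converter walks it.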
  • Attention now turns to Fig. 4 which illustrates an exemplary method 400 of the SPD converter 132. The SPD converter 132 receives the SPT data 130 and the correlation data 122. The correlation data 122 is an SPD file without block counts. The SPD converter 132 reads each SPT record (block 402) and searches for a control flow representation in the correlation data 122 that matches the SPT record (block 404). This is done by matching the RVA in the SPT record with a control flow representation having a RVA range that includes the RVA in the SPT record (block 404). The corresponding block identifier is found based on the RVA in the SPT record being within the RVA range of the matching block (block 406).
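The matching of blocks 404-406 can be sketched as a range lookup over per-function block-to-RVA maps; the data shapes and addresses are illustrative:

```python
def attribute_sample(functions, rva, count):
    """Find the (function, block) a sampled RVA falls in and return the
    block-count update it implies. `functions` maps a function name to its
    block-to-RVA map: block id -> [(start_rva, length), ...]."""
    for fname, block_map in functions.items():
        for block_id, ranges in block_map.items():
            for start, length in ranges:
                if start <= rva < start + length:
                    return (fname, block_id, count)
    return None  # the sample fell outside every known function

# Illustrative map: foo() occupies RVAs 0x1000-0x100D across two blocks.
funcs = {"foo": {1: [(0x1000, 8)], 2: [(0x1008, 6)]}}
```

In practice the ranges would be kept sorted so the lookup can binary-search rather than scan linearly, but the linear form makes the matching rule explicit.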
  • The SPD converter 132 then determines if the RVA range of the sample includes an inlinee. The correlation data includes the block-to-RVA map of each inlined function, one map for each inlined instance. If so (block 408-yes), the inlinee's control flow representation is found in the correlation data (block 410) and the process continues (block 406), searching the inlinee's control flow representation until it reaches a block that does not have an inlinee (block 408-no). Once found (block 408-no), the block count for the block is updated with the count from the SPT record (block 412). If the sample does not belong to an inline range (block 408-no), then the block count for the block is updated with the count from the SPT record (block 412). This process is repeated until all the SPT records are processed (block 402).
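A sketch of the inlinee descent of blocks 408-412, assuming a hypothetical nested-dictionary shape in which each block carries its own count and, for each inlined instance, the inlinee's RVA ranges and nested correlation data:

```python
def update_block_count(block, rva, count):
    """Walk into nested inlinee maps until reaching a block with no inlinee
    covering the RVA, then add the count there. `block` is a hypothetical
    dict: {'count': int, 'inlinees': [{'ranges': [(start, len)], ...}, ...]}."""
    for inlinee in block.get("inlinees", []):
        for start, length in inlinee["ranges"]:
            if start <= rva < start + length:
                # The sample belongs to the inlined instance: recurse into it.
                return update_block_count(inlinee, rva, count)
    # No inlinee covers this RVA: the count lands on this block.
    block["count"] = block.get("count", 0) + count
    return block

# A block at RVAs 0-15 whose instructions at 16-23 came from an inlinee.
sample_block = {"count": 0,
                "inlinees": [{"ranges": [(16, 8)], "count": 0}]}
update_block_count(sample_block, 18, 5)  # inside the inlinee's range
update_block_count(sample_block, 4, 2)   # outside any inlinee
```

Because the descent recurses, counts land on the innermost inlined instance, which is what lets each inlined copy of a function accumulate its own profile.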
  • Once all the SPT records are processed, the block counts for each block are post-processed and then used to estimate edge counts (block 414). An edge count is the number of times an edge into (i.e., incoming edge) or out of (i.e., outgoing edge) a basic block of a control flow representation is traveled or executed. An incoming edge count of a basic block is the number of times execution transferred from the source block of the edge (i.e., a predecessor block) to the basic block, and an outgoing edge count of a basic block is the number of times execution transferred from the basic block to the sink block of the edge (i.e., a successor block). Ideally, the count of all incoming edges of a basic block should equal the count of all outgoing edges for that block. The post-processing heuristically adjusts the block counts and edge counts so that they meet the flow conservation rule, that is, the sum of the incoming edges' counts equals the sum of the outgoing edges' counts for each basic block. For example, this post-processing can alter an edge count to be the average of the incoming edge count and the outgoing edge count. However, it should be noted that the technology described herein is not constrained to this particular heuristic and that other heuristics may be employed.
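One possible edge-count heuristic — not necessarily the one the patent uses — is sketched below. The block counts for foo() are inferred from the edge counts shown in Fig. 6D, so the sketch can be checked against those values:

```python
def estimate_edge_counts(block_counts, successors):
    """Simple heuristic: give edge u->v the sink's count when v has a single
    predecessor, else the source's count when u has a single successor,
    else the smaller of the two block counts."""
    preds = {b: [] for b in block_counts}
    for u, succs in successors.items():
        for v in succs:
            preds[v].append(u)
    edges = {}
    for u, succs in successors.items():
        for v in succs:
            if len(preds[v]) == 1:
                edges[(u, v)] = block_counts[v]
            elif len(succs) == 1:
                edges[(u, v)] = block_counts[u]
            else:
                edges[(u, v)] = min(block_counts[u], block_counts[v])
    return edges

# foo()'s diamond, with block counts inferred from Fig. 6D's edge counts.
counts = {1: 55, 2: 55, 3: 15, 4: 40, 5: 55}
succ = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}
edges = estimate_edge_counts(counts, succ)
```

With these counts the heuristic reproduces the Fig. 6D values (B1->B2 = 55, B2->B3 = 15, B2->B4 = 40, B3->B5 = 15, B4->B5 = 40), and the flow conservation rule holds at every interior block.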
  • Returning to Fig. 1, once the SPD file 134 is generated, each source code file of the program is compiled using the SPD file to optimize the program. This is described more in Fig. 5. The compilation 500 may be performed by a single compiler or using a front end and back end compiler and is considered a profile optimization compiler 136 since it utilizes the SPD file 134. The compilation may be performed in multiple phases including a lexical analysis phase (block 504), a semantic and syntactic analysis phase (block 506), an intermediate code generation phase (block 508) as described previously. The intermediate code generation phase (block 508) generates an intermediate code representation of the source code. The optimization phase (block 514) performs the code optimizations and the code generation phase (block 516) generates the object code file 518.
  • Prior to the optimization phase, the compiler analyzes the SPD file 134 to determine which code optimizations to perform on which areas of the program (block 512). For example, the compiler may determine which functions of the program, and which blocks of a function, are executed more frequently and therefore warrant more aggressive optimization. The block counts may be used to determine which functions of the program should be inlined. The code layout may be optimized by placing the more frequently executed code together and the less frequently executed code together.
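A hypothetical hotness heuristic of the kind block 512 might apply — ranking functions by their sampled counts and marking as hot the smallest set that covers most of the samples. The coverage threshold is an assumption chosen for illustration:

```python
def hot_functions(function_counts, threshold=0.9):
    """Rank functions by total sampled count and return, hottest first, the
    smallest prefix whose counts cover `threshold` of all samples."""
    total = sum(function_counts.values())
    hot, covered = [], 0
    for name, count in sorted(function_counts.items(), key=lambda kv: -kv[1]):
        if covered >= threshold * total:
            break
        hot.append(name)
        covered += count
    return hot
```

Functions selected this way would be candidates for inlining and for placement together in the hot region of the code layout; the remainder would be laid out in the cold region.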
  • Turning back to Fig. 1, attention now turns to updating the SPD (block 154). A user may, at any time, want to create a new SPD file with different sample data or add additional sample data to an existing SPD file. In either situation, the SPD file may be updated accordingly. Additional sample runs may be performed (block 124) that generate new sample data 126 that the collector 128 formats into SPT data 130. The SPD converter 132 may either merge the new SPT data with correlation data to create a new SPD file or merge the new SPT data with an existing SPD file. Additionally, a user may, at any time, edit the source code of the program (block 156) and recompile the program to further optimize the program (block 136).
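The merge of counts from new sample runs into an existing SPD can be sketched as a summation over (function, block) keys; the key shape is illustrative:

```python
def merge_counts(existing, new):
    """Merge block counts from new sample runs into an existing SPD-style
    mapping {(function, block_id): count}, summing overlapping keys."""
    merged = dict(existing)
    for key, count in new.items():
        merged[key] = merged.get(key, 0) + count
    return merged

updated = merge_counts({("foo", 2): 10},
                       {("foo", 2): 5, ("foo", 3): 1})
```

Merging with an empty existing mapping covers the other case described above, creating a fresh SPD file from correlation data and new SPT data alone.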
  • Attention now turns to Figs. 6A-6D which illustrate the creation of the SPD file for an exemplary program. Turning to Fig. 6A, there is shown an exemplary system and process 600 having an exemplary source code listing 602 for the function foo() before the initial compilation and linkage. There is shown an illustration of a control flow representation 604 for the function foo(). The control flow representation 604 includes five basic blocks labeled B1 - B5. Basic block B1 represents the code statement int foo (int x) or the function prolog, basic block B2 represents the code statement if (x&1), basic block B3 represents the code statements bar(x); bar(x+1);, basic block B4 represents the code statements baz(x); baz(x-1);, and basic block B5 represents the code statement return 3.
  • The correlation data 606 for the function foo() is shown as a table that includes a separate entry for each basic block, 608, 610, 612, 614, 616. Each entry includes a unique block identifier for each basic block in the function foo(), a block count, one or more RVA ranges where each RVA range includes a start offset and a length, a successors field which denotes the basic blocks that follow, and an inlinee field that denotes whether the block has inlined code. As shown in Fig. 6A, the correlation data 606 has no block counts (i.e., block count = 0) and no inlined code (i.e., inline = N).
  • Fig. 6B shows a second exemplary source code program 620 including the function main() which inlines the function foo() shown in Fig. 6A. The corresponding control flow representation 622 includes four basic blocks labeled B1 - B4. Basic block B1 represents the code statement main(), basic block B2 represents the code statement if foo(x) with the control flow representation for the function foo() inlined, basic block B3 represents the code statement printf("non-zero") and basic block B4 represents the code statement return 0.
  • Fig. 6C illustrates the correlation data 630 for the second exemplary source code program 620 main() shown in Fig. 6B. The correlation data 630 for the function main() is shown as a table that includes a separate entry for each basic block B1 - B4 in main () 636, 638, 642, 644. The correlation data 630 for main()'s basic block B2 includes the correlation data 606 for foo(). The inline field for B2 of main() indicates that B2 has an inlined function (i.e., inline = Y) and includes the correlation data for foo() 640. It should be noted that the correlation data is a hierarchical table where a basic block includes the correlation data for all inlined functions. For example, if foo() were to include inlined code for baz(), then the correlation data for baz() would be part of the correlation table for foo().
  • Fig. 6C also shows the SPT data 632. The SPD converter 132 obtains the SPT data 632 and merges the counts in the SPT data 632 with the block counts of the corresponding basic block in the correlation data 630. As noted above, the basic block corresponding to the SPT data record has a range of RVAs that match the RVA in the SPT data record. The block counts in the correlation data 630 are zero. However, after the SPT data 632 is merged into the correlation data 630, the block counts appear in the correlation data thereby generating the SPD file 634. The SPD file 634 includes the correlation data 630 and the block counts from the SPT data 632.
  • Turning to Fig. 6D, there is shown the SPD file 634 without edge counts and the corresponding SPD file 646 with edge counts. When the SPD converter 132 performs the post processing, the SPD converter 132 adds in the edge counts for each edge in each basic block of each control flow representation. As shown in SPD file 646, there are edge counts 648 for the control flow representation of foo()'s inlined instance and edge counts 650 for the control flow representation of main(). The edge counts 648 for foo()'s inlined instance include an edge count of 55 for the edge B1->B2, an edge count of 15 for the edge B2->B3, an edge count of 40 for the edge B2->B4, an edge count of 15 for the edge B3->B5, and an edge count of 40 for the edge B4->B5. The edge counts 650 for main() include an edge count of 55 for the edge B1->B2, an edge count of 45 for the edge B2->B3, an edge count of 10 for the edge B2->B4, and an edge count of 45 for the edge B3->B4.
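The flow conservation rule can be checked directly against the edge counts listed above for both control flow representations:

```python
def check_flow_conservation(edges, entry, exit_block):
    """For every block other than the entry and exit, the sum of incoming
    edge counts must equal the sum of outgoing edge counts."""
    blocks = {b for edge in edges for b in edge}
    for b in blocks - {entry, exit_block}:
        incoming = sum(c for (u, v), c in edges.items() if v == b)
        outgoing = sum(c for (u, v), c in edges.items() if u == b)
        if incoming != outgoing:
            return False
    return True

# Edge counts from Fig. 6D: foo()'s inlined instance, then main().
foo_edges = {(1, 2): 55, (2, 3): 15, (2, 4): 40, (3, 5): 15, (4, 5): 40}
main_edges = {(1, 2): 55, (2, 3): 45, (2, 4): 10, (3, 4): 45}
```

For foo()'s inlined instance, B2 receives 55 and emits 15 + 40 = 55; for main(), B2 receives 55 and emits 45 + 10 = 55, so both representations satisfy the rule.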
  • The generation of the SPD file in the manner described above has several advantages. A sampled instruction is not traced back to a particular source code line number or debug line number; rather, it is traced back to the basic block that is associated with its intermediate code representation. In this manner, the block count accounts for instructions that are inlined and have been relocated to different blocks of the program due to optimization, thereby producing a more accurate block count. The mapping from a hardware instruction to a basic block is a direct mapping that utilizes only the RVA. Furthermore, since the block counts are part of the control flow representation of the program, it is easier for the compiler's optimizer to access them since the optimizer utilizes a control flow graph to perform most optimizations.
  • In accordance with aspects of the subject matter described herein, a computer system can include one or more processors, a memory connected to one or more processors, one or more compilers, a linker and a SPD converter. The one or more compilers include one or more modules that when loaded into the memory cause the one or more processors to compile and optimize a source code file and generate correlation data. The correlation data includes a control flow graph of the source code in the form of a control flow representation and a block-to-offset map. The control flow representation includes a plurality of basic blocks. The linker includes one or more modules that when loaded into the memory causes the one or more processors to link the optimized object files of a program into an image file and to replace the offsets in the block-to-offset map with a corresponding relative virtual address (RVA). The RVA is a relative address from a start address of the function associated with a basic block in the image file. The SPD converter includes one or more modules that when loaded into the memory causes the one or more processors to obtain sample data from sample runs which is converted into a SPT format and used to generate block and edge counts for each basic block of a function which is stored in a SPD file. Additional sample runs may be made and the sample data from those sample runs may be merged into the block and edge counts in the SPD file. The old block counts may be expired if older than a threshold or replaced with the new counts. A weight may be associated with the sample data based on a time the sample data was obtained and a weighted average of the counts may be used. At least one of the one or more compilers is a profile optimization compiler that utilizes the block and/or edge counts of the sample profile data to optimize the program.
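The expiration and weighting policies mentioned above can be sketched as follows. The exponential decay, the choice of half-life, and the four-half-life expiry threshold are all assumptions made for illustration; the description only requires that old counts can expire past a threshold and that counts can be combined as a time-based weighted average:

```python
import time

def weighted_block_count(samples, half_life_s=7 * 24 * 3600, now=None):
    """Combine per-run counts for one block into a weighted average.
    `samples` is a list of (timestamp, count) pairs, one per sample run.
    Runs older than four half-lives are expired (dropped) entirely."""
    now = time.time() if now is None else now
    total_weight, weighted_sum = 0.0, 0.0
    for timestamp, count in samples:
        age = now - timestamp
        if age > 4 * half_life_s:
            continue  # expired: older than the threshold
        weight = 0.5 ** (age / half_life_s)  # newer runs weigh more
        total_weight += weight
        weighted_sum += weight * count
    return weighted_sum / total_weight if total_weight else 0.0
```

With a half-life of 100 seconds, a run taken now (count 80) and a run one half-life old (count 20) average to (80 + 0.5 x 20) / 1.5 = 60.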
  • A method of using a system such as the system described above can include operations such as obtaining sample data from execution of a program that includes a relative virtual address of a hardware instruction and a count of a number of times the hardware instruction has been executed. The relative virtual address of the hardware instruction is matched to a corresponding basic block of a source code control flow representation of the program. A block count associated with the matching basic block is updated to include the count, and the block count and its associated edge counts are used in the optimization of the program. Correlation data is generated during a compilation of the program to map the relative virtual address of the hardware instruction to its associated basic block. The correlation data also includes the control flow representation of the program including the basic blocks and a block-to-offset map. The block-to-offset map is converted into a block-to-RVA map based on a linked image of the program. Sample data from sample runs is obtained and converted into a format that includes the RVA and a count associated with the hardware instruction. The block count associated with a basic block considers each inlined function that is included in a basic block. The block counts may be updated with sample data from additional sample runs. These block counts may be merged into the existing block counts and the merged block counts may be used to re-optimize the program. The program may be edited and re-optimized using the existing block counts or the merged block counts.
  • A device can include one or more processors, a memory connected to at least one processor, and at least one module loaded into the memory causing the at least one processor to generate a control flow representation for at least one function of the program that includes at least one basic block having a range of virtual addresses associated with instructions that are part of a basic block and a block count. The at least one module obtains a hardware instruction including a virtual address and a count. The virtual address of the instruction sample is used to find a basic block of the control flow representation that is associated with the instruction sample. The block count associated with the matching basic block is updated. The program is optimized using the data from the control flow representation including the block counts.
  • Examples of Suitable Computing Environments
  • Attention now turns to a discussion of exemplary operating environments. Fig. 7 depicts a first exemplary operating environment 700 that includes an integrated development environment ("IDE") 702 and a common language runtime ("CLR") 704. The IDE 702 (e.g., Visual Studio, NetBeans, Eclipse, JetBrains, NetCode, etc.) may allow a user (e.g., developer, programmer, designer, coder, etc.) to design, code, compile, test, run, edit, debug or build a program, set of programs, web sites, web applications, packages, and web services in a computing device. Software programs include source code 710 created in one or more source code languages (e.g., Visual Basic, Visual J#, C++, C#, J#, JavaScript, APL, COBOL, Pascal, Eiffel, Haskell, ML, Oberon, Perl, Python, Scheme, Smalltalk and the like).
  • The IDE 702 may provide a native code development environment, may provide a managed code development environment that runs on a language virtual machine, or may provide a combination thereof. The IDE 702 may provide a managed code development environment using the .NET framework that may include a user interface 706, a source code editor 708, a front-end compiler 712, a collector 716, a SPD converter 720, and a visualizer 722. A user can create and/or edit the source code according to known software programming techniques and the specific logical and syntactical rules associated with a particular source language via the user interface 706 and the source code editor 708 in the IDE 702. Thereafter, the source code 710 can be compiled via a front end compiler 712, whereby an intermediate code representation 726 of the program and correlation data 714 are created.
  • Additionally, the IDE 702 may generate the SPD file 724 using a collector 716 that collects and aggregates sample data 718 from various sample runs of the program and a SPD converter 720 that generates block counts from the sample data into a SPD file 724, which can be viewed by a user through the visualizer 722. Object code or native code 730A-730N is created using a language specific compiler or back end compiler 728 from the intermediate code representation 726 and the SPD file 724 when the program is executed. That is, when an intermediate code representation 726 is executed, it is compiled and linked (linker 732) while being executed into the appropriate machine language for the platform it is being executed on, thereby making the image file 734 portable across several platforms. Alternatively, in other embodiments, programs may be compiled to native code machine language (not shown) appropriate for the intended platform.
  • In one aspect of the invention, the IDE 702 may operate on a first computing device 740 and the CLR 704 may operate on a second computing device 736 that is distinct from the first computing device 740. In another aspect of the invention, the IDE 702 and CLR 704 may operate on the same computing device. The computing devices 736, 740 may be any type of electronic device, such as, without limitation, a mobile device, a personal digital assistant, a mobile computing device, a smart phone, a cellular telephone, a handheld computer, a server, a server array or server farm, a web server, a network server, a blade server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, or combination thereof.
  • The first and second computing devices 736, 740 may be communicatively coupled through a communication framework 738. The communication framework 738 facilitates communications between the computing devices. The communications framework 738 may embody any well-known communication techniques, such as techniques suitable for use with packet-switched networks (e.g., public networks such as the Internet, private networks such as enterprise intranet, and so forth), circuit-switched networks (e.g., the public switched telephone network), or a combination of packet-switched networks and circuit-switched networks (with suitable gateways and translators).
  • Although the foregoing operating environment has been described with respect to the .NET framework, the technology described herein is not constrained to any particular software framework, programming language, compiler collection, operating system, operating system platform, compiler infrastructure project, and the like. The techniques described herein can be employed in the GNU compiler collection (GCC) and the Low-Level Virtual Machine (LLVM) compiler infrastructure and other compiler and operating systems.
  • Attention now turns to Fig. 8 and a discussion of a second exemplary operating environment. It should be noted that the operating environment 800 is exemplary and is not intended to suggest any limitation as to the functionality of the embodiments. The embodiments may be applied to an operating environment 800 utilizing at least one computing device 802. The computing device 802 may be any type of electronic device, such as, without limitation, a mobile device, a personal digital assistant, a mobile computing device, a smart phone, a cellular telephone, a handheld computer, a server, a server array or server farm, a web server, a network server, a blade server, an Internet server, a work station, a mini-computer, a mainframe computer, a supercomputer, a network appliance, a web appliance, a distributed computing system, multiprocessor systems, or combination thereof. The operating environment 800 may be configured in a network environment, a distributed environment, a multi-processor environment, or a stand-alone computing device having access to remote or local storage devices.
  • The computing device 802 may include one or more processors 804, a communication interface 806, a storage device 808, one or more input devices 810, one or more performance monitoring units (PMU) 812, output devices 816, and a memory 814. A processor 804 may be any commercially available processor and may include dual microprocessors and multi-processor architectures. The communication interface 806 facilitates wired or wireless communications between the computing device 802 and other devices. The storage device 808 may be computer-readable medium that does not contain propagating signals, such as modulated data signals transmitted through a carrier wave. Examples of the storage device 808 include without limitation RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, all of which do not contain propagating signals, such as modulated data signals transmitted through a carrier wave. The input devices 810 may include a keyboard, mouse, pen, voice input device, touch input device, etc., and any combination thereof. The output devices 816 may include a display, speakers, printers, etc., and any combination thereof. The PMU 812 includes a set of special purpose registers (e.g., hardware counters) that store counts of hardware-related activities and related hardware/software that reads these registers.
  • The memory 814 may be any non-transitory computer-readable storage media that may store executable procedures, applications, and data. The computer-readable storage media does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. It may be any type of non-transitory memory device (e.g., random access memory, read-only memory, etc.), magnetic storage, volatile storage, non-volatile storage, optical storage, DVD, CD, floppy disk drive, etc. that does not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave. The memory 814 may also include one or more external storage devices or remotely located storage devices that do not pertain to propagated signals, such as modulated data signals transmitted through a carrier wave.
  • The memory 814 may contain instructions, components, and data. A component is a software program that performs a specific function and is otherwise known as a module, application, and the like. The memory may include an operating system 820, a front end compiler 822, a back end compiler 824, a linker 826, a collector 828, a SPD converter 830, a source code editor 832, a visualizer 834, source code files 836, object code files 838, sample data 840, SPT data 842, correlation data 844, SPD file 846, and various other applications, components, and data 848.
  • Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (6)

  1. A computer-implemented method, comprising:
    during a compilation of a program, generating (102) correlation data (118) including a control flow representation of the program including one or more basic blocks and a block-to-offset map, wherein the block-to-offset map associates each basic block of a function with a range of offsets and wherein each offset is associated with an instruction that is part of the basic block and is relative to the start of the function;
    converting the block-to-offset map into a block to relative virtual address map based on a linked image of the program, wherein a relative virtual address is a relative address from a start address of a function associated with a basic block in the image;
    obtaining (104) sample data (126) from at least one execution of the program, the sample data including a relative virtual address of a hardware instruction and a count, the count indicating a number of times the hardware instruction was executed;
    matching the relative virtual address of the hardware instruction with at least one basic block of the program that is associated with the relative virtual address of the hardware instruction;
    updating a block count of the matching basic block with the count; and
    optimizing the program based on the block count.
  2. The method of claim 1, wherein updating a block count of the matching basic block further comprises:
    identifying an inlined function associated with the matching basic block;
    traversing a control flow representation of the matching basic block to a control flow representation of the inlined function that has no further inlined functions; and
    updating a block count of a basic block associated with the inlined function.
  3. The method of claim 1, further comprising:
    obtaining additional sample data from a plurality of sample runs;
    merging counts from the additional sample data into existing block counts; and
    re-optimizing the program based on the merged block counts.
  4. A system configured to perform a computer-implemented method, the method comprising:
    during a compilation of a program, generating (102) correlation data (118) including a control flow representation of the program including one or more basic blocks and a block-to-offset map, wherein the block-to-offset map associates each basic block of a function with a range of offsets and wherein each offset is associated with an instruction that is part of the basic block and is relative to the start of the function;
    converting the block-to-offset map into a block to relative virtual address map based on a linked image of the program, wherein a relative virtual address is a relative address from a start address of a function associated with a basic block in the image;
    obtaining (104) sample data (126) from at least one execution of the program, the sample data including a relative virtual address of a hardware instruction and a count, the count indicating a number of times the hardware instruction was executed;
    matching the relative virtual address of the hardware instruction with at least one basic block of the program that is associated with the relative virtual address of the hardware instruction;
    updating a block count of the matching basic block with the count; and
    optimizing the program based on the block count.
  5. The system of claim 4, wherein the system is further configured to obtain new sample data from additional sample runs and to expire block counts from prior sample runs that are older than a threshold.
  6. The system of claim 4, wherein the system is configured to associate a weight to the sample data based on a time the sample data is obtained and compute a block count for a basic block based on a weighted average of counts from the sample data.
EP17727442.0A 2016-05-25 2017-05-17 Sample driven profile guided optimization with precise correlation Active EP3465428B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/163,684 US11003428B2 (en) 2016-05-25 2016-05-25 Sample driven profile guided optimization with precise correlation
PCT/US2017/032990 WO2017205118A1 (en) 2016-05-25 2017-05-17 Sample driven profile guided optimization with precise correlation

Publications (2)

Publication Number Publication Date
EP3465428A1 EP3465428A1 (en) 2019-04-10
EP3465428B1 true EP3465428B1 (en) 2020-10-07

Family

ID=58993203

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17727442.0A Active EP3465428B1 (en) 2016-05-25 2017-05-17 Sample driven profile guided optimization with precise correlation

Country Status (4)

Country Link
US (1) US11003428B2 (en)
EP (1) EP3465428B1 (en)
CN (1) CN109863473B (en)
WO (1) WO2017205118A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10248554B2 (en) * 2016-11-14 2019-04-02 International Business Machines Corporation Embedding profile tests into profile driven feedback generated binaries
US20190108006A1 (en) 2017-10-06 2019-04-11 Nvidia Corporation Code coverage generation in gpu by using host-device coordination
US10503626B2 (en) 2018-01-29 2019-12-10 Oracle International Corporation Hybrid instrumentation framework for multicore low power processors
US11468881B2 (en) * 2019-03-29 2022-10-11 Samsung Electronics Co., Ltd. Method and system for semantic intelligent task learning and adaptive execution
US11221835B2 (en) * 2020-02-10 2022-01-11 International Business Machines Corporation Determining when to perform and performing runtime binary slimming
CN111427582B (en) * 2020-03-30 2023-06-09 飞腾信息技术有限公司 RTL code management method, device, equipment and computer readable storage medium
CN111538505B (en) * 2020-04-23 2023-04-25 保定康强医疗器械制造有限公司 Slitter editing and grammar checking system
US20220067538A1 (en) * 2020-09-03 2022-03-03 Intuit Inc. Methods and systems for generating knowledge graphs from program source code
US11947966B2 (en) 2021-10-11 2024-04-02 International Business Machines Corporation Identifying computer instructions enclosed by macros and conflicting macros at build time
US11481200B1 (en) * 2021-10-11 2022-10-25 International Business Machines Corporation Checking source code validity at time of code update
US20230161684A1 (en) * 2021-11-23 2023-05-25 International Business Machines Corporation Identification of program path profile for program optimization
US11593080B1 (en) 2021-12-17 2023-02-28 International Business Machines Corporation Eliminating dead stores
US11886847B2 (en) 2022-01-31 2024-01-30 Next Silicon Ltd Matching binary code to intermediate representation code

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0689141A3 (en) 1994-06-20 1997-10-15 At & T Corp Interrupt-based hardware support for profiling system performance
US6374367B1 (en) 1997-11-26 2002-04-16 Compaq Computer Corporation Apparatus and method for monitoring a computer system to guide optimization
US6275981B1 (en) * 1998-11-12 2001-08-14 Hewlett-Packard Company Method and system for correlating profile data dynamically generated from an optimized executable program with source code statements
US6971091B1 (en) 2000-11-01 2005-11-29 International Business Machines Corporation System and method for adaptively optimizing program execution by sampling at selected program points
US7036116B2 (en) * 2001-03-23 2006-04-25 International Business Machines Corporation Percolating hot function store/restores to colder calling functions
US7827543B1 (en) 2004-02-28 2010-11-02 Oracle America, Inc. Method and apparatus for profiling data addresses
US7721274B1 (en) * 2004-05-05 2010-05-18 Sun Microsystems, Inc. Intelligent processing of external object references for dynamic linking
US8234638B2 (en) * 2004-12-28 2012-07-31 Hercules Software, Llc Creating a relatively unique environment for computing platforms
US20070079294A1 (en) 2005-09-30 2007-04-05 Robert Knight Profiling using a user-level control mechanism
US20070150866A1 (en) * 2005-12-22 2007-06-28 International Business Machines Corporation Displaying parameters associated with call statements
US8479174B2 (en) * 2006-04-05 2013-07-02 Prevx Limited Method, computer program and computer for analyzing an executable computer file
US8176475B2 (en) 2006-10-31 2012-05-08 Oracle America, Inc. Method and apparatus for identifying instructions associated with execution events in a data space profiler
US8370821B2 (en) * 2007-08-21 2013-02-05 International Business Machines Corporation Method for enabling profile-based call site tailoring using profile gathering of cloned functions
US8214817B2 (en) 2007-10-09 2012-07-03 International Business Machines Corporation Detecting change in program behavior for adaptive code optimization
US8423980B1 (en) 2008-09-18 2013-04-16 Google Inc. Methods for handling inlined functions using sample profiles
US8387026B1 (en) 2008-12-24 2013-02-26 Google Inc. Compile-time feedback-directed optimizations using estimated edge profiles from hardware-event sampling
US8566559B2 (en) 2011-10-10 2013-10-22 Microsoft Corporation Runtime type identification of native heap allocations
CN104238998B (en) * 2013-06-18 2018-01-19 华为技术有限公司 Command processing method and device
US9454659B1 (en) * 2014-08-15 2016-09-27 Securisea, Inc. Software vulnerabilities detection system and methods
US9582251B2 (en) * 2014-11-14 2017-02-28 Cavium, Inc. Algorithm to achieve optimal layout of decision logic elements for programmable network devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
US11003428B2 (en) 2021-05-11
US20170344349A1 (en) 2017-11-30
CN109863473A (en) 2019-06-07
CN109863473B (en) 2022-05-17
EP3465428A1 (en) 2019-04-10
WO2017205118A1 (en) 2017-11-30

Similar Documents

Publication Publication Date Title
EP3465428B1 (en) Sample driven profile guided optimization with precise correlation
CN109564540B (en) System, method, and apparatus for debugging of JIT compiler
US9170787B2 (en) Componentization of compiler functionality
WO2022042685A1 (en) Deriving profile data for compiler optimization
US8458679B2 (en) May-constant propagation
JP2018510445A (en) Domain-specific system and method for improving program performance
US11599478B2 (en) Reduced instructions to generate global variable addresses
Novillo SamplePGO: the power of profile guided optimizations without the usability burden
Dot et al. Analysis and optimization of engines for dynamically typed languages
US10853041B2 (en) Extensible instrumentation
Sharygin et al. Runtime specialization of PostgreSQL query executor
US20240103821A1 (en) Optimising computer program code
US20090037690A1 (en) Dynamic Pointer Disambiguation
Ortin et al. Cnerator: A Python application for the controlled stochastic generation of standard C source code
JP7344259B2 (en) Pattern transformation methods, apparatus, electronic devices, computer storage media and computer program products in deep learning frameworks
JP2023016738A (en) Method, computer program and computer for improving technological process of programming computer using dynamic programming language (type inference in dynamic languages)
Nguyen et al. Retargetable optimizing compilers for quantum accelerators via a multilevel intermediate representation
Patel et al. Recent trends in embedded system software performance estimation
Aguilar et al. Towards parallelism extraction for heterogeneous multicore android devices
Liva et al. Semantics-driven extraction of timed automata from Java programs
Lameed et al. Optimizing MATLAB feval with dynamic techniques
Singh et al. Fast And Automatic Floating Point Error Analysis With CHEF-FP
Engelke et al. Compile-Time Analysis of Compiler Frameworks for Query Compilation
Zhong et al. Py2Cy: a genetic improvement tool to speed up python
Nguyen et al. Enabling Retargetable Optimizing Compilers for Quantum Accelerators via a Multi-Level Intermediate Representation

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20181121

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602017025014

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06F0009450000

Ipc: G06F0008410000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 8/41 20180101AFI20200326BHEP

Ipc: G06F 11/34 20060101ALI20200326BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20200430

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 1321873

Country of ref document: AT

Kind code of ref document: T

Effective date: 20201015

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602017025014

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20201007

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1321873

Country of ref document: AT

Kind code of ref document: T

Effective date: 20201007

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210108

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210208

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210107

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210207

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210107

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602017025014

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

26N No opposition filed

Effective date: 20210708

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210517

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210531

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210531

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20210531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210517

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210207

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210531

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230505

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201007

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20170517

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230420

Year of fee payment: 7

Ref country code: DE

Payment date: 20230419

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230420

Year of fee payment: 7