WO2007131089A2 - Code translation and pipeline optimization - Google Patents

Code translation and pipeline optimization

Info

Publication number: WO2007131089A2 (PCT/US2007/068110)
Authority: WO (WIPO, PCT)
Prior art keywords: translated, target application, application code, code block, block
Application number: PCT/US2007/068110
Other languages: French (fr)
Other versions: WO2007131089A3 (en)
Inventors: Victor Suba, Stewart Saragaison, Brian Watson
Original assignee: Sony Computer Entertainment Inc.
Priority claimed from U.S. Application No. 11/740,636 (US7568189B2)
Application filed by Sony Computer Entertainment Inc.
Publication of WO2007131089A2
Publication of WO2007131089A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 8/44 Encoding
    • G06F 8/445 Exploiting fine grain parallelism, i.e. parallelism at instruction level
    • G06F 8/4451 Avoiding pipeline stalls
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45504 Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
    • G06F 9/45516 Runtime code conversion or optimisation

Definitions

  • the invention is related to emulation software for executing applications on a computer or information processing device other than the one for which the applications were originally written.
  • Applications are typically developed to be executed by computer systems of a particular type or that meet certain specifications. Developers specify the functions of an application as source code expressed in one or more programming languages. Source code is typically designed to be easily written and understood by human developers. Development applications, such as compilers, assemblers, linkers, and interpreters, convert an application expressed as source code into binary code or object code modules, which are in a format capable of being executed by the intended computer system.
  • the binary code or object code format typically is adapted to the architecture of the intended computer system, including the number and type of microprocessors; the arrangement of memory and storage; and the audio, video, networking, and other input and output subsystems.
  • the computer system originally intended to execute an application is referred to as a target computer system.
  • Often, it is desirable to be able to execute applications on different types of computer systems other than the one for which the applications were originally written. For example, users with a new computer system, such as a video game console, may still wish to use applications previously purchased for other types of computer systems, such as older video game consoles.
  • a computer system that is of a different type than the target computer system originally intended for an application is referred to as a host computer system.
  • Emulation is another solution for executing applications on host computer systems.
  • Emulation software and/or hardware enables the host computer system to mimic the functionality of the target computer system.
  • a host computer system using the appropriate emulation will ideally respond to an application's binary code in the same or similar way as the target computer system.
  • One of the simplest types of emulation is a software interpreter that sequentially analyzes each instruction in an application's binary code modules, creates one or more equivalent instructions for the host computer system, and then executes the equivalent instructions.
  • the emulator also typically includes data structures adapted to represent the state of the emulated target computer system.
  • the emulator also may include software virtual machine functions or modules adapted to mimic the hardware functions of the emulated target computer system and to interface hardware resources of the host computer system with the application.
  • a more complicated type of emulation employs binary translation to convert large portions of an application's binary code modules into corresponding portions of host computer system instructions prior to execution. Binary translation can be performed statically, i.e. prior to the execution of the application by the host computer system, or dynamically, i.e. during the execution of other portions of the application by the host computer system.
  • Translated portions, or blocks, of the application can be cached, thereby amortizing the performance penalty associated with emulation for frequently executed portions of the application, such as loops, functions, and subroutines.
  • Translated blocks of the application can also be optimized for execution by the host computer system, taking advantage of application information known in advance or determined while running portions of the application.
  • It thus is desirable for emulators to provide improved performance when executing applications on a host computer system. It is further desirable for emulators to optimize translated code to take advantage of unique hardware features of the host computer system.
  • Embodiments in accordance with the present invention include an emulator using code translation and recompilation to execute target computer system applications on a host computer system.
  • application code is partitioned into application code blocks of related instructions. Function calls and returns, jump table calls, and conditional branches can delineate boundaries between application code blocks.
  • application code block groups are sized to comply with branch instruction restrictions. When an application code block group is selected for execution, a cache tag of the application code block group is used to determine if a corresponding translated code block group is available and valid. If not, the application code block is translated into a corresponding translated code block and executed.
  • sequentially executed translated code blocks are located in adjacent portions of memory to improve performance when switching between translated code blocks.
  • the emulator uses a link register of the host computer system to prefetch instructions and data from the second translated code block.
  • the emulator verifies the function return address with a return address stored by the target virtual machine in case a function modifies its return address.
  • When translating application code blocks, the emulator takes into account structural hazards such as updates to status flags and other registers lagging behind their respective instructions. Code analysis is used to identify instructions susceptible to structural hazards due to dependence on a value set by a preceding instruction. The emulator then identifies the preceding instruction creating the value in question, and adds instructions preserving or recreating this value until it is accessed. The added instructions may modify a status flag value of the host computer system to match the behavior of the status flag register of the target computer system.
  • Figure 1 illustrates a method of translating and executing application code in an emulator according to an embodiment of the invention
  • Figure 2 illustrates an example partitioning of application code into translated code blocks according to an embodiment of the invention
  • Figure 3 illustrates a method of sizing translated code blocks according to an embodiment of the invention
  • Figures 4A-4B illustrate an example method of mapping function calls from application code to an optimal format for the host computer system according to an embodiment of the invention
  • Figure 5 illustrates a method of compensating for status flag differences according to an embodiment of the invention
  • Figure 6 illustrates an example hardware system suitable for implementing an embodiment of the invention
  • Figure 7 illustrates an example processor suitable for implementing an embodiment of the invention
  • Figure 8 illustrates an example target computer system capable of being emulated using embodiments of the invention.
  • Figure 9 illustrates an example emulator architecture on a host computer system capable of emulating the target computer system of Figure 8.
  • Figure 1 illustrates a method 100 of translating and executing application code in an emulator in accordance with one embodiment of the present invention.
  • the emulator partitions the application code into blocks of related instructions. Groups of related blocks, such as blocks from the same function, are chained together to form block groups. Each block group is translated or recompiled to a format capable of execution by the host computer system.
  • Method 100 begins at step 105, which sets the start of a block of application code to be translated to the beginning of the application code or any other application entry point, such as the beginning of a function.
  • Step 110 traces forward through the application code from the block start point to identify one or more block end points.
  • Block end points are indicated by application code instructions that change the control flow of the application, such as a branch instruction, a function call, a function return, or a jump table call.
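
As a rough illustration (not part of the patent), the following C++ sketch shows the kind of forward trace step 110 performs: scan straight-line code until the first control-flow instruction, which marks a block end point. The decoded-instruction types here (TargetInstruction, OpKind) are assumptions made for the sketch, not structures described in the application.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical decoded view of target application code; a real emulator would
// decode raw target binary instructions instead.
enum class OpKind { Arithmetic, Load, Store, Branch, Call, Return, JumpTable };

struct TargetInstruction {
    uint32_t address;  // address in the emulated (target) address space
    OpKind   kind;
};

// Step 110 (sketch): scan forward from the block start point until the first
// instruction that changes control flow; that instruction marks a block end.
std::size_t findBlockEnd(const std::vector<TargetInstruction>& code,
                         std::size_t startIndex) {
    for (std::size_t i = startIndex; i < code.size(); ++i) {
        switch (code[i].kind) {
            case OpKind::Branch:
            case OpKind::Call:
            case OpKind::Return:
            case OpKind::JumpTable:
                return i;       // block ends at the control-flow instruction
            default:
                break;          // straight-line code: keep scanning
        }
    }
    return code.empty() ? 0 : code.size() - 1;  // no control flow found
}
```
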
  • Step 115 translates the set of application code instructions defined from the block start point to the block end points into a format capable of being executed by the host computer system.
  • Embodiments of step 115 can use any code translation or recompilation technique known in the art to accomplish this task.
  • Step 120 caches the translated code block groups.
  • the blocks of a block group are chained or linked together according to the control flow of the application.
  • step 120 computes a cache tag for each translated code block or alternatively, a single cache tag for an entire block group of translated code blocks. The cache tag is used to determine whether the cached translated code block is still a valid translation.
  • the cache tag of a translated code block or block group is a checksum based upon its corresponding untranslated application code blocks.
  • the cache tag is or is derived from an effective memory address of corresponding untranslated application code blocks.
  • these types of cache tags can be used to match application code blocks with corresponding cached translated code blocks, regardless of the memory location of the application code block.
  • the cache tag is, or is derived from, the memory address of the corresponding untranslated application code blocks.
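
The three cache tag variants can be pictured with a small sketch. The structure below is an assumption for illustration only; in particular, the patent does not specify a checksum algorithm, so a simple rotate-and-add hash stands in for whatever checksum an implementation would use.

```cpp
#include <cstdint>
#include <vector>

// Cache tag variants described above, gathered in one structure for
// illustration. Which field is used as the lookup key depends on the embodiment.
struct CacheTag {
    uint32_t checksum;          // checksum of the untranslated application code
    uint32_t effectiveAddress;  // source address, stable even if the code is relocated
    uint32_t actualAddress;     // memory address where the code currently resides
};

// Stand-in checksum (simple rotate-and-add); the patent does not prescribe a
// particular checksum algorithm.
uint32_t blockChecksum(const std::vector<uint8_t>& untranslatedBytes) {
    uint32_t sum = 0;
    for (uint8_t b : untranslatedBytes) {
        sum = (sum << 5) | (sum >> 27);  // rotate left by 5
        sum += b;
    }
    return sum;
}
```
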
  • Step 125 executes the translated code block group.
  • Embodiments of the emulator execute translated code blocks on the same processor that executes method 100, or on a different processor or processor core element. As discussed above, multiple blocks of a block group may be chained or linked together according to the control flow of the application.
  • the end of a translated code block includes a conditional or unconditional branch instruction used to select the next translated block in the block group to be executed.
  • the host system follows these instructions to execute the translated code blocks of a block group in the sequence specified by the control flow of the application.
  • the end of a translated block can include an instruction calling the emulator or code translation application at the end of the block group, allowing the host system to continue executing the steps of method 100.
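
A minimal sketch of these two block-ending behaviors, assuming a purely hypothetical TranslatedBlock representation (conditional successors are omitted for brevity): a block either chains directly to the next translated block in its group or hands control back to the emulator so method 100 can continue.

```cpp
#include <functional>

// Toy model of how a translated code block can end: chained to the next block
// of its group, or calling back into the emulator so method 100 continues.
struct TranslatedBlock {
    std::function<void()> body;            // host-native work of the block
    TranslatedBlock*      chainedNext;     // next block in the group, if chained
    bool                  callEmulatorAtEnd;
};

void runBlockGroup(TranslatedBlock* block, const std::function<void()>& emulatorEntry) {
    while (block != nullptr) {
        block->body();                     // execute the translated block
        if (block->callEmulatorAtEnd) {
            emulatorEntry();               // hand control back to the emulator
            return;
        }
        block = block->chainedNext;        // follow the chain within the group
    }
}
```
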
  • Step 130 determines the location of the next block group of application code to be executed.
  • static code analysis techniques can be used to identify the next block of application code to be executed in advance of runtime.
  • dynamic code analysis techniques are used to monitor the execution of a translated code block group to determine the next block group of application code at runtime.
  • step 130 makes this determination when the execution of the current translated code block is complete.
  • step 130 determines the block start location of the next block group of application code from static or dynamic code analysis of the most recently executed translated code block. Step 130 then traces forward through the application code to identify one or more ends of code blocks in the block group, similar to step 110.
  • Step 135 determines whether the next block group has already been translated and stored in the translated code block cache. In an embodiment, step 135 determines a cache tag value, such as a checksum, effective memory address, or actual memory address, of the next application code block group. Step 135 then compares this cache tag value with the cache tag previously stored in association with translated code blocks in the translated code block cache. If the two cache tag values match, then the cached translated code block group is a valid representation of the application code block. Step 140 then selects the translated code block group from the translated code block cache. Method 100 then proceeds to step 125 to execute the selected translated code block group.
  • step 145 sets the block start and end points to the boundaries of the next block group of application code.
  • Method 100 then proceeds to step 115 to translate the next block of application code into a corresponding translated code block and cache and execute the newly translated code block. Steps 115 through 145 may be similarly repeated for each block of application code as the emulator processes and executes the application.
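
The overall translate-cache-execute cycle of steps 105 through 145 can be summarized as a dispatcher loop. The sketch below is illustrative only; the helper functions and the toy BlockGroup type are assumptions, and a real emulator would translate and execute actual target code rather than the stand-ins shown.

```cpp
#include <cstdint>
#include <unordered_map>

// Toy block group: in a real emulator this would hold host-native code.
struct BlockGroup {
    uint32_t startAddress;  // start of the corresponding application code
    uint32_t nextStart;     // where control flows after the group (stand-in)
};

static uint32_t computeCacheTag(uint32_t start) {
    return start;           // stand-in; a real tag is a checksum or effective address
}

static BlockGroup translateBlockGroup(uint32_t start) {
    return BlockGroup{start, start + 0x40};  // steps 110-120 (toy "translation")
}

static uint32_t executeBlockGroup(const BlockGroup& group) {
    return group.nextStart;  // steps 125-130: run host code, report next start
}

void emulatorLoop(uint32_t entryPoint, int maxGroups) {
    std::unordered_map<uint32_t, BlockGroup> cache;  // translated code block cache
    uint32_t start = entryPoint;                     // step 105
    for (int i = 0; i < maxGroups; ++i) {
        uint32_t tag = computeCacheTag(start);       // step 135
        auto it = cache.find(tag);
        if (it == cache.end()) {                     // cache miss: steps 145/115/120
            it = cache.emplace(tag, translateBlockGroup(start)).first;
        }
        start = executeBlockGroup(it->second);       // steps 140/125/130
    }
}
```
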
  • an embodiment of the emulator caches translated code blocks. Before executing a cached code block, a cache tag value of the virtual machine memory storing application code is compared with the cache tag of the corresponding cached translated code block. This ensures that the cached translated code block is a valid representation of the application code at the time of execution.
  • some applications employ relocatable code, which can be positioned at different places in memory. If the cache tag for evaluating the validity of a cached translated code block group is derived from a fixed memory address or a checksum of a fixed range of memory, the cache tag value for the code block group will change each time the relocatable code is moved to a different part of memory, even if the relocatable code itself doesn't change. Thus, even though the translated code block cache may already include a translated version of the relocatable code, a cache miss will occur and the emulator will retranslate the same application code each time it is moved to a new location. As a result, the emulator performance degrades substantially.
  • cache tag values are determined for code block groups based on application code block group boundaries, rather than fixed ranges of memory addresses.
  • a checksum of this application code block group is created. This checksum is compared with the checksums previously stored in association with the translated code block cache. If the application code block group checksum matches a checksum associated with a cached translated code block, this translated code block is executed.
  • the cache tag is based on an effective or source memory address of the application code block group.
  • an application might copy a block group of relocatable code from a fixed location in main memory into different locations in a scratchpad or execution memory.
  • the effective address of the block group is the memory address in main memory, which does not change.
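
One way to picture effective-address tagging is a lookup that maps the current execution address of relocated code back to its fixed source address before the cache is consulted. The relocation-tracking structure below is an assumption of this sketch, not something described in the patent.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Relocation tracking assumed for this sketch: each entry records a copy of a
// code block group from its fixed source location into an execution area.
struct Relocation {
    uint32_t destStart;    // where the copy was placed (e.g., scratchpad)
    uint32_t destSize;
    uint32_t sourceStart;  // fixed location in main memory
};

// Map the address where the code currently executes back to its effective
// (source) address, which is what the translated code block cache is keyed on.
std::optional<uint32_t> effectiveAddress(uint32_t execAddress,
                                         const std::vector<Relocation>& relocations) {
    for (const Relocation& r : relocations) {
        if (execAddress >= r.destStart && execAddress < r.destStart + r.destSize) {
            return r.sourceStart + (execAddress - r.destStart);
        }
    }
    return std::nullopt;  // not relocated: the execution address is already effective
}
```
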
  • Figure 2 illustrates an example partitioning 200 of application code into translated code blocks according to an embodiment of the invention.
  • the original application code 205 is partitioned into code blocks along boundaries defined by control flow instructions, such as conditional branch instructions, jump tables, function calls, and function returns.
  • Related application code blocks are then chained together to form a block group.
  • application code 205 represents function code of an application.
  • Block group 210 comprises code block 215B, corresponding with portion 215A of the application code; code block 220B, corresponding with portion 220A of the application code; code block 225B, corresponding with portion 225A of the application code; and code block 230B, corresponding with portion 230A of the application code.
  • the code blocks of block group 210 are chained together according to the control flow of the application code 205.
  • the conditional branch at the end of block 215B can direct the host computer system to execute either code block 220B or 225B.
  • the application code blocks are translated from a target computer system format into a set of corresponding translated code blocks capable of being executed by the host computer system.
  • Some types of host computer systems have restrictions on the distance in address space between a conditional branch or other control flow instruction and the branch destination or destination address. Complying with the restrictions can be made more difficult because translated code blocks are often larger than their corresponding portions of target computer system code. Thus, the translated code block groups should be sized so that the host computer system restrictions are not violated.
  • Figure 3 illustrates a method 300 of sizing translated code blocks in accordance with one embodiment.
  • Step 305 selects a candidate translated code block for potential inclusion in a block group and specifies a possible location in the translated code block group for the candidate translated code block.
  • Step 310 evaluates the translated block group including the selected candidate translated code block to determine if all of the branch instructions comply with branch size restrictions of the host computer system.
  • step 310 compares the size of the translated block group including the candidate code block to a maximum size limit.
  • step 310 uses static or dynamic code analysis to determine the potential destination addresses for each branch or control flow instruction. These destination addresses are then individually compared with their respective instructions to determine if the maximum size limit is violated.
  • If any branch size restriction would be violated, step 320 starts a new translated code block group and adds the candidate code block to this new block group. Method 300 then proceeds back to step 305 to select another candidate code block for inclusion in the new translated code block group.
  • Otherwise, step 315 adds the candidate code block to the translated code block group.
  • Method 300 then proceeds back to step 305 to select another candidate code block for possible inclusion in the translated code block group.
  • the translated code blocks of the translated code block group may be rearranged to comply with the branch size restrictions of the host computer system.
  • multiple branch instructions can be chained together to allow for larger distances between source and destination addresses.
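
A conservative version of the sizing check in method 300 can be sketched as follows: a candidate block joins the current group only while the whole group stays within the host's branch reach, otherwise a new group is started. The reach limit used here is an arbitrary placeholder, and the per-branch destination analysis of step 310 is reduced to a total-size check for brevity.

```cpp
#include <cstdint>
#include <vector>

struct TranslatedCodeBlock {
    uint32_t sizeBytes;  // size of the translated (host) code for this block
};

// Assumed host branch reach; the actual limit depends on the host instruction set.
constexpr uint32_t kMaxBranchReachBytes = 32 * 1024;

// Conservative form of step 310: if the whole group fits within the branch
// reach, no branch between blocks of the group can violate the restriction.
bool groupFitsWithCandidate(const std::vector<TranslatedCodeBlock>& group,
                            const TranslatedCodeBlock& candidate) {
    uint32_t total = candidate.sizeBytes;
    for (const TranslatedCodeBlock& b : group) {
        total += b.sizeBytes;
    }
    return total <= kMaxBranchReachBytes;
}

void addCandidate(std::vector<std::vector<TranslatedCodeBlock>>& groups,
                  const TranslatedCodeBlock& candidate) {
    if (groups.empty() || !groupFitsWithCandidate(groups.back(), candidate)) {
        groups.emplace_back();            // step 320: start a new block group
    }
    groups.back().push_back(candidate);   // step 315: add the candidate to the group
}
```
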
  • the emulator attempts to store translated code blocks corresponding to adjacent application code blocks in adjacent portions of memory. Preserving adjacency between translated code blocks in block groups can improve branching performance for some types of host computer systems.
  • Figures 4A and 4B illustrate an example application of this embodiment of the invention.
  • Figure 4A illustrates a function call and return mechanism 400 in an application for an example target computer system.
  • the target application 405 includes code for an example function X 407 and an example function Y 409.
  • function X 407 includes a function call instruction 412, which directs the target computer system 415 to execute function Y 409.
  • the target computer system 415 stores the return address 416 for the function call in return address register 417.
  • the return address is typically the address of the instruction immediately following the function call instruction 412.
  • some types of target computer systems and function call instructions set the return address to the location of a different instruction.
  • the return address is stored in a stack or other memory instead of a register 417.
  • the previous value of the return address register 417 is stored in a stack or other memory to allow for multiple levels of function calls and function recursion.
  • After storing the appropriate return address 416, the target computer system 415 begins to execute function Y 409. When this is complete, a function return instruction 420 directs the target computer system 415 to resume execution of function X 407 beginning with the instruction at the previously stored return address. In response to the function return instruction 420, the target computer system 415 retrieves 422 the previously stored return address from the return address register 417. Using this return address, the target computer system 415 resumes execution 424 of function X 407 at the appropriate location.
  • Figure 4B illustrates a corresponding function call and return mechanism 430 for a translated application executed by a host computer system according to an embodiment of the invention.
  • a host computer system 435 executes a translated target computer system application 432 corresponding with application 405 discussed above.
  • Translated application 432 includes translated block group X' 437 and translated block group Y' 439, which correspond with functions X 407 and Y 409 of the target computer system application, respectively.
  • Translated block group X' 437 includes translated code blocks 440 and 445.
  • The target application code is partitioned into code blocks by control flow instructions, such as the translated function call instruction 442, which corresponds with untranslated function call instruction 412.
  • the emulator attempts to store translated code blocks 440 and 445 in adjacent portions of memory to facilitate the transfer of execution between translated application code blocks.
  • translated code block 440 ends with one or more translated function call instructions 442 that direct the host computer system to execute block group Y' 439, which corresponds with the function Y 409 of the original untranslated application.
  • the host computer system 435 stores 448 the function return address in the host link register 450.
  • the host link register 450 is a specialized register of the host computer 435 adapted to store function return addresses. Often, the host computer system 435 is adapted to prefetch one or more instructions beginning at the function return address stored in a link register. This reduces or eliminates pipeline stalls upon returning from a function.
  • the host computer system 435 stores 448 the address of the first instruction following the translated function call instruction 442 in the host link register 450.
  • this return address corresponds with the first instruction of translated code block 445.
  • If translated code blocks 440 and 445 cannot be stored in adjacent portions of host computer system memory, an additional instruction must be added to translated code block 440 following the translated function call to jump to translated code block 445.
  • In addition to storing the return address in the host link register 450, the host computer system 435 also stores 452 a target memory space return address value in a target virtual machine return address register 455.
  • the target memory space return address value stored in the target virtual machine return address register 455 corresponds with the return address value that would have been stored by the target computer system 415 in its return address register 417 in response to the function call instruction 412.
  • the target virtual machine return address register 455 is a portion of the emulator virtual machine mimicking the state and functions of return address register 417 of the target computer system 415.
  • the target virtual machine return address register 455 can be mapped directly to a register of the host computer system 435 or assigned to a location in the host computer system 435 memory. Additional virtual machine software code can be associated with the target virtual machine return address register 455 to mimic the state and functions of the return address register 417 of the target computer system 415.
  • After storing the return address for the translated application code block in the host link register 450 and the corresponding target memory space return address in the target virtual machine return address register 455, the host computer system 435 begins to execute translated block group Y' 439, corresponding to the function Y 409 in the target application.
  • the host computer system 435 executes the one or more translated code blocks 460 of block group Y' 439 to perform the same or equivalent operations as function Y 409.
  • One or more translated function return instructions 465 direct the host computer system 435 to resume execution of translated block group X' 437.
  • Some target computer applications may overwrite the return address stored in the return address register 417 with a different address. This may be done so that a function returns to a different location in an application than it was initially called from.
  • an embodiment of the emulator directs the host computer system 435 to retrieve 467 the target memory space return address previously stored in the target virtual machine return address register 455 in response to the translated function return instruction 465.
  • the retrieved target memory space return address is converted to a corresponding memory address in the host computer system.
  • the host computer system 435 then writes 469 the converted return address to the host link register 450.
  • the host computer system 435 prefetches instructions and data starting at the address stored in the link register to avoid a pipeline stall when branching between translated code blocks. In this example, these prefetched instructions and data are part of translated code block 445. If the converted return address is the same as the return address previously stored in the host link register 450 by the translated function call 442, the host computer system 435 ignores the write 469 to the host link register 450 and retains the prefetched instructions and data of translated code block 445. The host computer system 435 can then begin executing the translated code block 445 of translated block group X' 437. Under this condition, the host computer system 435 avoids a pipeline stall and its associated performance penalty when jumping from the execution of translated block group Y' 439 to translated code block 445 of block group X' 437.
  • If the converted return address differs from the return address previously stored in the host link register 450, the host computer system 435 discards the prefetched instructions and data and executes translated code blocks beginning at the return address specified by the target virtual machine return address register 455. This condition may occur if the target computer application overwrites the return address stored in the return address register 417 with a different address. Under this condition, the host computer system 435 will experience a pipeline stall and its associated performance penalty when returning from the execution of translated block group Y' 439. However, applications with this behavior are relatively rare compared to the default function call and return mechanism.
  • Embodiments of the invention can include variations of the above described behavior depending upon the type of target computer system, host computer system, and translated target applications. For example, if the target application never modifies the contents of the target computer system return address register 417 (or if the target computer system prohibits this behavior), then the host computer system may omit writing the contents of the counterpart target virtual machine return address register 455 to the host link register 450 prior to returning from a translated function call. Moreover, if the target application never reads the contents of the target computer system return address register 417, except when returning from a function call, then the target virtual machine return address register 455 can be omitted entirely.
  • the target virtual machine return address register 455 can store the return address expressed in host address space, rather than the target address space. Additional functions associated with the target virtual machine return address register 455 can translate this return address between the host address space and the target address space as needed. This may improve performance of the emulator if the translated target application infrequently accesses the virtual machine return address register 455.
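
The call/return handling of Figure 4B can be summarized in a short sketch. The register model, the address-space conversion, and the function names below are assumptions made for illustration; on real hardware these operations would be emitted as host instructions rather than written in C++.

```cpp
#include <cstdint>

// Abstract model of the registers involved (an assumption of this sketch).
struct HostState {
    uint32_t linkRegister;     // host link register 450 (return-address prefetch)
    uint32_t vmReturnAddress;  // target virtual machine return address register 455
};

// Stand-in conversion between target and host address spaces.
static uint32_t targetToHostAddress(uint32_t targetAddr) {
    return 0x10000000u + targetAddr;
}

// Behavior emitted around a translated function call (instruction 442).
void translatedCall(HostState& host, uint32_t hostReturnAddr, uint32_t targetReturnAddr) {
    host.linkRegister    = hostReturnAddr;    // host prefetches from here (block 445)
    host.vmReturnAddress = targetReturnAddr;  // mirrors target register 417
    // ... branch to translated block group Y' ...
}

// Behavior emitted for a translated function return (instruction 465):
// returns the host address at which execution resumes.
uint32_t translatedReturn(HostState& host) {
    uint32_t converted = targetToHostAddress(host.vmReturnAddress);
    if (converted == host.linkRegister) {
        return host.linkRegister;   // common case: prefetched code is kept, no stall
    }
    host.linkRegister = converted;  // rare case: return address was overwritten
    return converted;               // accept the pipeline stall and jump there
}
```
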
  • step 410 uses a modified translated instruction to push the correct starting address for the next translated code block into the link register.
  • Some target computer systems have unique structural characteristics that need to be taken into account for emulation to operate correctly on the host computer system. For example, the value of a target computer system status flag register, which stores status flags such as the sign, zero, overflow, underflow, divide by zero, and carry bits, may lag its corresponding instruction by several processor cycles due to pipelining and other characteristics. In this example, if an add instruction would cause a status flag value to be set, such as the carry bit flag being set to "1", this status flag value would not appear in the status flag register until several processor cycles after the add instruction was executed.
  • the emulator must compensate to ensure that the correct status flag register values are synchronized with the appropriate instructions.
  • One approach is to copy the status flag values from the status flag register of the host computer system to a buffer after every translated instruction. The buffer values can then be synchronized with the appropriate translated instructions.
  • this approach is very time-consuming and can decrease emulator performance.
  • The host computer system status flags may behave differently than their counterparts in the target computer system. Some types of host computer systems may incur large performance penalties in accessing their status flags. Moreover, some host computer systems may not even have counterparts to some or all of the status flags of the target computer system.
  • FIG. 5 illustrates a method 500 of compensating for status flag differences according to an embodiment of the invention.
  • Step 505 identifies an application code instruction accessing a status flag register value.
  • the identified instruction can be an instruction that reads a value from the status flag register of the target computer system or an instruction that behaves differently based on a value from the target computer system status flag register, such as some types of conditional branch instruction.
  • Step 510 traces back in the application code to identify one or more instructions potentially generating the status flag value accessed by the instruction identified in step 505.
  • step 510 takes into account any lag in the target computer system between the time when an instruction is executed and when the status flag register is updated with the appropriate value. Some status flags are "sticky", in the sense that once they are set, they remain at that value until read or reset by the target computer system. For these types of status flags, step 510 identifies one or more instructions potentially responsible for setting the status flag value.
  • Step 515 analyzes one or more translated instructions corresponding with the application code instructions identified in steps 505 and 510.
  • step 515 modifies the translated code block.
  • step 515 adds instructions to the translated code block to preserve a status flag value of the host computer system in a register or memory for later use by the translated application code. Additionally, step 515 modifies the translated instruction accessing the status flag value to refer to the stored status flag values, rather than the current values of the status flag register.
  • step 515 adds instructions to the translated code block to correct for differences in setting status flag values. For example, if an instruction executed on the target computer system would set a status flag value, such as a sign bit, but its corresponding translated instruction does not do the same thing in the status flag register of the host computer system, then step 515 can add instructions to compensate for this behavior.
  • step 515 adds instructions to the translated code block to recreate the status flag values expected by the translated target application. This may be required if the host computer system does not have a status flag corresponding to a status flag of the target computer system. Additionally, this embodiment of step 515 may be used if accessing status flags in the host computer system decreases performance more than simply recreating the status flag value with additional instructions.
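
As one concrete (and hypothetical) example of step 515, a translated add whose carry flag is consumed later can recreate the carry and zero bits explicitly and preserve them in emulator state, so the consuming instruction never has to read a host status flag register. The flag layout below is an assumption for the sketch.

```cpp
#include <cstdint>

// Emulator-side storage for recreated target status flags (an assumption of
// this sketch; the real layout is implementation-specific).
struct EmulatedFlags {
    bool carry;
    bool zero;
};

// Translated form of a target "add" whose flags are consumed later (step 515):
// the carry and zero bits are recreated explicitly and preserved for later use.
uint32_t translatedAddWithFlags(uint32_t a, uint32_t b, EmulatedFlags& flags) {
    uint32_t result = a + b;
    flags.carry = result < a;     // unsigned wraparound means the carry bit is set
    flags.zero  = (result == 0);
    return result;
}

// Translated form of a conditional branch that depends on the carry flag:
// it reads the preserved value rather than a host status flag register.
bool translatedBranchIfCarry(const EmulatedFlags& flags) {
    return flags.carry;
}
```
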
  • Figure 6 illustrates an example hardware system suitable for implementing an embodiment of the invention.
  • Figure 6 is a block diagram of a computer system 1000, such as a personal computer, video game console, personal digital assistant, or other digital device, suitable for practicing an embodiment of the invention.
  • Computer system 1000 includes a central processing unit (CPU) 1005 for running software applications and optionally an operating system.
  • CPU 1005 may be comprised of one or more processing cores.
  • Memory 1010 stores applications and data for use by the CPU 1005.
  • Storage 1015 provides nonvolatile storage for applications and data and may include fixed disk drives, removable disk drives, flash memory devices, and CD-ROM, DVD-ROM, Blu-ray, HD-DVD, or other optical storage devices.
  • User input devices 1020 communicate user inputs from one or more users to the computer system 1000, examples of which may include keyboards, mice, joysticks, touch pads, touch screens, still or video cameras, and/or microphones.
  • Network interface 1025 allows computer system 1000 to communicate with other computer systems via an electronic communications network, and may include wired or wireless communication over local area networks and wide area networks such as the Internet.
  • An audio processor 1055 is adapted to generate analog or digital audio output from instructions and/or data provided by the CPU 1005, memory 1010, and/or storage 1015.
  • the components of computer system 1000, including CPU 1005, memory 1010, data storage 1015, user input devices 1020, network interface 1025, and audio processor 1055 are connected via one or more data buses 1060.
  • a graphics subsystem 1030 is further connected with data bus 1060 and the components of the computer system 1000.
  • the graphics subsystem 1030 includes a graphics processing unit (GPU) 1035 and graphics memory 1040.
  • Graphics memory 1040 includes a display memory (e.g., a frame buffer) used for storing pixel data for each pixel of an output image.
  • Graphics memory 1040 can be integrated in the same device as GPU 1035, connected as a separate device with GPU 1035, and/or implemented within memory 1010. Pixel data can be provided to graphics memory 1040 directly from the CPU 1005.
  • CPU 1005 provides the GPU 1035 with data and/or instructions defining the desired output images, from which the GPU 1035 generates the pixel data of one or more output images.
  • the data and/or instructions defining the desired output images can be stored in memory 1010 and/or graphics memory 1040.
  • the GPU 1035 includes 3D rendering capabilities for generating pixel data for output images from instructions and data defining the geometry, lighting, shading, texturing, motion, and/or camera parameters for a scene.
  • the GPU 1035 can further include one or more programmable execution units capable of executing shader programs.
  • the graphics subsystem 1030 periodically outputs pixel data for an image from graphics memory 1040 to be displayed on display device 1050.
  • Display device 1050 is any device capable of displaying visual information in response to a signal from the computer system 1000, including CRT, LCD, plasma, and OLED displays.
  • Computer system 1000 can provide the display device 1050 with an analog or digital signal.
  • CPU 1005 is one or more general-purpose microprocessors having one or more processing cores. Further embodiments of the invention can be implemented using one or more CPUs with microprocessor architectures specifically adapted for highly parallel and computationally intensive applications, such as media and interactive entertainment applications.
  • Figure 7 illustrates an example processor 2000 suitable for implementing an embodiment of the invention.
  • Processor 2000 includes a number of processor elements, each capable of executing independent programs in parallel.
  • Processor 2000 includes PPE processor element 2005.
  • PPE processor element is a general-purpose processor of CISC, RISC, or other type of microprocessor architecture known in the art.
  • PPE processor element 2005 is a 64-bit, multithreaded RISC architecture microprocessor, such as the PowerPC architecture.
  • PPE processor element 2005 can include a cache memory 2007 partitioned into one, two, or more levels of caches temporarily holding data and instructions to be executed by PPE processor element 2005.
  • processor 2000 includes a number of SPE processor elements 2010.
  • Processor 2000 includes eight SPE processor elements 2010A-2010H; however, other example processors can include a different number of SPE processor elements.
  • SPE processor elements 2010 are adapted for stream processing of data. In stream processing, a program is executed repeatedly on each item in a large set of data.
  • the SPE processor elements 2010 may include instruction execution units capable of executing SIMD instructions on multiple data operands simultaneously.
  • SPE processor elements 2010 may also include instruction units capable of executing single-instruction, single-data (SISD) instructions for more general processing tasks.
  • Each SPE processor element, such as SPE processor element 2010A, includes local data and instruction storage 2012A. Data and instructions can be transferred to and from the local data and instruction storage 2012A via DMA unit 2014A.
  • the DMA units, such as unit 2014A, are capable of transferring data to and from each of the SPE processor elements 2010 without processor supervision, enabling the SPE processor elements 2010 to process data continuously without stalling.
  • Data and instructions are input and output by the processor 2000 via memory and I/O interfaces 2015. Data and instructions can be communicated between the memory and I/O interfaces 2015, the PPE processor element 2005, and SPE processor elements 2010 via processor bus 2020.
  • Embodiments of the invention can be used to improve emulator performance and compatibility for a variety of different types of target computer systems, including general computer system 1000 shown above.
  • Figure 8 illustrates another example target computer system 3000 capable of being emulated using embodiments of the invention.
  • Target computer system 3000 illustrates the hardware architecture of the Sony Playstation 2 video game console.
  • Target computer system 3000 includes a variety of components connected via a central data bus 3002. These components include a CPU core 3005; a pair of vector processing units, VPU0 3010 and VPU1 3015; a graphics processing unit interface 3020; an image processing unit 3030; an I/O interface 3035; a DMA controller 3040; and a memory interface 3045.
  • Target computer system 3000 includes a private bus 3007 between CPU core 3005 and vector processing unit VPU0 3010 and a private bus 3019 between vector processing unit VPU1 3015 and graphics processing unit interface 3020.
  • components 3005, 3010, 3015, 3020, 3030, 3035, 3040 and 3045 are included within a processor chip 3060.
  • Processor chip 3060 is connected with graphics processing unit 3025 via graphics bus 3022 and with memory 3050 via memory bus 3055. Additional external components, such as sound and audio processing components, network interfaces, and optical storage components 3065, are omitted from Figure 8 for clarity.
  • Figure 9 illustrates an example emulator architecture 4000 on a host computer system capable of emulating the target computer system 3000 of Figure 8.
  • emulator architecture 4000 is implemented on a host computer system including a processor similar to processor 2000 of Figure 7.
  • In emulator architecture 4000, PPE processor element 4005 executes one or more emulator threads that provide functions including emulator control; device drivers; a vector processing unit VPU1 code translator; CPU core emulation including code interpreters and translators; and vector processing unit VPU0 emulation.
  • SPE processor element 4010A executes one or more emulation threads that provide functions including DMA controller emulation; vector processing unit VPU1 interface emulation; and graphics processing unit interface arbitration.
  • SPE processor element 4010B executes one or more emulation threads that execute the translated or recompiled vector processing unit VPU1 code.
  • SPE processor element 4010C executes one or more emulation threads that emulate the image processing unit.
  • SPE processor element 4010D executes one or more emulation threads that emulate the I/O interface functions.
  • SPE processor element 4010E executes one or more emulation threads that emulate the functions of sound and audio processors.
  • SPE processor element 4010F executes one or more emulation threads that emulate the functions of the graphics processing unit interface.
  • additional emulation threads executed by PPE processor element 4005 and/or SPE processor elements can emulate the functionality of the graphics processing unit of the target computer system or translate graphics processing instructions to a format compatible with the graphics processing unit of the host computer system (omitted for clarity from Figure 9).
  • the host computer system can include a graphics processing unit similar to or compatible with the graphics processing unit of the target computer system.
  • embodiments of the invention can be utilized to improve the performance of multithreaded emulation and virtual machine applications.
  • embodiments of the invention can be used to emulate video game consoles such as the Playstation, Playstation 2, and PSP systems; x86-based computer and video game systems; PowerPC-based computer and video game systems; and Java, .NET, and other virtual machine and runtime environments.

Abstract

An emulator uses code translation and recompilation to execute target computer system applications on a host computer system. Target application code is partitioned into target application code blocks, and related target application code blocks are combined into block groups and translated. Translated application code block groups are sized to comply with restrictions on branch instruction size. Upon selecting an application code block group for execution, a cache tag is used to determine if a corresponding translated code block group is available and valid. If not, the block group is translated and executed. Sequentially executed translated code blocks are located in adjacent portions of memory to improve performance when switching between translated code blocks. The emulator may use a link register of the host computer system to prefetch instructions and data from translated code blocks. The emulator also takes into account structural hazards in translating instructions.

Description

CODE TRANSLATION AND PIPELINE OPTIMIZATION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application No. 60/797,761, filed May 3, 2006, entitled "Code Translation and Pipeline Optimization," Attorney Docket No. 026340-004900US; which is related to U.S. Provisional Patent Application No. 60/763,568, filed January 30, 2006, entitled "Branch Prediction Thread Management," Attorney Docket No. 026340-004700US; U.S. Provisional Patent Application No. 60/797,435, filed May 3, 2006, entitled "DMA and Graphics Interface Emulation," Attorney Docket No. 026340-004800US; U.S. Provisional Patent Application No. 60/797,762, filed May 3, 2006, entitled "Stall Prediction Thread Management," Attorney Docket No. 026340-004710US; U.S. Provisional Patent Application No. 60/746,267, filed May 3, 2006, entitled "Translation Block Invalidation Prehints in Emulation of a Target System on a Host System;" U.S. Provisional Application No. 60/746,268, filed May 3, 2006, entitled "Register Mapping in Emulation of a Target System on a Host System;" and U.S. Provisional Patent Application No. 60/746,273, filed May 3, 2006, entitled "Method and Apparatus for Resolving Clock Management Issue in Emulation Involving Both Interpreted and Translated Code," all of which are hereby incorporated herein by reference for all purposes.
BACKGROUND OF THE INVENTION
[0002] The invention is related to emulation software for executing applications on a computer or information processing device other than the one for which the applications were originally written. Applications are typically developed to be executed by computer systems of a particular type or that meet certain specifications. Developers specify the functions of an application as source code expressed in one or more programming languages. Source code is typically designed to be easily written and understood by human developers. Development applications, such as compilers, assemblers, linkers, and interpreters, convert an application expressed as source code into binary code or object code modules, which are in a format capable of being executed by the intended computer system. The binary code or object code format typically is adapted to the architecture of the intended computer system, including the number and type of microprocessors; the arrangement of memory and storage; and the audio, video, networking, and other input and output subsystems. The computer system originally intended to execute an application is referred to as a target computer system.
[0003] Often, it is desirable to be able to execute applications on different types of computer systems other than the one for which the applications were originally written. For example, users with a new computer system, such as a video game console, may still wish to use applications previously purchased for other types of computer systems, such as older video game consoles. A computer system that is of a different type than the target computer system originally intended for an application is referred to as a host computer system.
[0004] One solution for executing applications on host computer systems, i.e. types of computer systems other than the one for which the applications were originally written, is to modify the application. Application source code can be modified, or ported, to a different type of computer system. However, this is difficult, time-consuming, and expensive if there are substantial differences between the target computer system and the host computer system.
[0005] Emulation is another solution for executing applications on host computer systems. Emulation software and/or hardware enables the host computer system to mimic the functionality of the target computer system. A host computer system using the appropriate emulation will ideally respond to an application's binary code in the same or similar way as the target computer system.
[0006] One of the simplest types of emulation is a software interpreter that sequentially analyzes each instruction in an application's binary code modules, creates one or more equivalent instructions for the host computer system, and then executes the equivalent instructions. The emulator also typically includes data structures adapted to represent the state of the emulated target computer system. The emulator also may include software virtual machine functions or modules adapted to mimic the hardware functions of the emulated target computer system and to interface hardware resources of the host computer system with the application.
[0007] Because of the overhead associated with constantly analyzing and converting application instructions into equivalent host computer system instructions, software interpreters often require orders of magnitude more processing performance on a host computer system to execute an application at the same speed as the target computer system. Thus, for applications requiring real-time emulation, software interpreters are often too slow to be used when the host computer system is not substantially faster than the target computer system.
[0008] A more complicated type of emulation employs binary translation to convert large portions of an application's binary code modules into corresponding portions of host computer system instructions prior to execution. Binary translation can be performed statically, i.e. prior to the execution of the application by the host computer system, or dynamically, i.e. during the execution of other portions of the application by the host computer system. Translated portions, or blocks, of the application can be cached, thereby amortizing the performance penalty associated with emulation for frequently executed portions of the application, such as loops, functions, and subroutines. Translated blocks of the application can also be optimized for execution by the host computer system, taking advantage of application information known in advance or determined while running portions of the application.
[0009] It thus is desirable for emulators to provide improved performance when executing applications on a host computer system. It is further desirable for emulators to optimize translated code to take advantage of unique hardware features of the host computer system.
BRIEF SUMMARY OF THE INVENTION
[0010] Embodiments in accordance with the present invention include an emulator using code translation and recompilation to execute target computer system applications on a host computer system. In one embodiment, application code is partitioned into application code blocks of related instructions. Function calls and returns, jump table calls, and conditional branches can delineate boundaries between application code blocks. In an embodiment, application code block groups are sized to comply with branch instruction restrictions. When an application code block group is selected for execution, a cache tag of the application code block group is used to determine if a corresponding translated code block group is available and valid. If not, the application code block is translated into a corresponding translated code block and executed.
[0011] In one embodiment, sequentially executed translated code blocks are located in adjacent portions of memory to improve performance when switching between translated code blocks. In a further embodiment, when a function call from a first translated code block will return to a second translated code block, the emulator uses a link register of the host computer system to prefetch instructions and data from the second translated code block. In still a further embodiment, the emulator verifies the function return address with a return address stored by the target virtual machine in case a function modifies its return address.
[0012] In an embodiment, when translating application code blocks, the emulator takes into account structural hazards such as updates to status flags and other registers lagging behind their respective instructions. Code analysis is used to identify instructions susceptible to structural hazards due to dependence on a value set by a preceding instruction. The emulator then identifies the preceding instruction creating the value in question, and adds instructions preserving or recreating this value until it is accessed. The added instructions may modify a status flag value of the host computer system to match the behavior of the status flag register of the target computer system.
[0013] A further understanding of the nature and the advantages of the inventions disclosed herein may be realized by reference to the remaining portions of the specification and the attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The invention will be described with reference to the drawings, in which:
[0015] Figure 1 illustrates a method of translating and executing application code in an emulator according to an embodiment of the invention;
[0016] Figure 2 illustrates an example partitioning of application code into translated code blocks according to an embodiment of the invention;
[0017] Figure 3 illustrates a method of sizing translated code blocks according to an embodiment of the invention;
[0018] Figures 4A-4B illustrate an example method of mapping function calls from application code to an optimal format for the host computer system according to an embodiment of the invention;
[0019] Figure 5 illustrates a method of compensating for status flag differences according to an embodiment of the invention;
[0020] Figure 6 illustrates an example hardware system suitable for implementing an embodiment of the invention;
[0021] Figure 7 illustrates an example processor suitable for implementing an embodiment of the invention;
[0022] Figure 8 illustrates an example target computer system capable of being emulated using embodiments of the invention; and
[0023] Figure 9 illustrates an example emulator architecture on a host computer system capable of emulating the target computer system of Figure 8.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Figure 1 illustrates a method 100 of translating and executing application code in an emulator in accordance with one embodiment of the present invention. In this embodiment, the emulator partitions the application code into blocks of related instructions. Groups of related blocks, such as blocks from the same function, are chained together to form block groups. Each block group is translated or recompiled to a format capable of execution by the host computer system. Method 100 begins at step 105, which sets the start of a block of application code to be translated to the beginning of the application code or any other application entry point, such as the beginning of a function.
[0025] Step 110 traces forward through the application code from the block start point to identify one or more block end points. In an embodiment, block end points are indicated by application code instructions that change the control flow of the application, such as a branch instruction, a function call, a function return, or a jump table call.
[0026] Step 115 translates the set of application code instructions defined from the block start point to the block end points into a format capable of being executed by the host computer system. Embodiments of step 115 can use any code translation or recompilation technique known in the art to accomplish this task.
[0027] Step 120 caches the translated code block groups. In an embodiment, the blocks of a block group are chained or linked together according to the control flow of the application. In an embodiment, step 120 computes a cache tag for each translated code block or, alternatively, a single cache tag for an entire block group of translated code blocks. The cache tag is used to determine whether the cached translated code block is still a valid translation.
[0028] In an embodiment, the cache tag of a translated code block or block group is a checksum based upon its corresponding untranslated application code blocks. In another embodiment, the cache tag is, or is derived from, an effective memory address of the corresponding untranslated application code blocks. As discussed in detail below, these types of cache tags can be used to match application code blocks with corresponding cached translated code blocks, regardless of the memory location of the application code block. In still another embodiment, the cache tag is, or is derived from, the memory address of the corresponding untranslated application code blocks.
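As one concrete illustration of a checksum-style cache tag, the following C++ sketch hashes the bytes of an untranslated application code block with FNV-1a; the particular hash function is an assumption chosen for brevity, and any checksum over the block contents could serve as the tag.

#include <cstddef>
#include <cstdint>

// Compute a cache tag as a checksum over the untranslated application code block.
uint64_t block_checksum(const uint8_t* block, std::size_t len) {
    uint64_t hash = 0xcbf29ce484222325ull;     // FNV-1a 64-bit offset basis
    for (std::size_t i = 0; i < len; ++i) {
        hash ^= block[i];
        hash *= 0x100000001b3ull;              // FNV-1a 64-bit prime
    }
    return hash;
}

The same tag is computed when a block is first translated and again before a cached translation is reused, so a match indicates that the cached translated code block still corresponds to the application code.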
[0029] Step 125 executes the translated code block group. Embodiments of the emulator execute translated code blocks on the same processor or on a different processor or processor core element that executes method 100. As discussed above, multiple blocks of a block group may be chained or linked together according to the control flow of the application. In one embodiment, the end of a translated code block includes a conditional or unconditional branch instruction used to select the next translated block in the block group to be executed. During step 125, the host system follows these instructions to execute the translated code blocks of a block group in the sequence specified by the control flow of the application. In a further embodiment, the end of a translated block can include an instruction calling the emulator or code translation application at the end of the block group, allowing the host system to continue executing the steps of method 100.
[0030] Step 130 determines the location of the next block group of application code to be executed. In an embodiment, static code analysis techniques can be used to identify the next block of application code to be executed in advance of runtime. In another embodiment, if the next block of application code to be executed cannot be determined statically, dynamic code analysis techniques are used to monitor the execution of a translated code block group to determine the next block group of application code at runtime. In further embodiments, step 130 makes this determination when the execution of the current translated code block is complete.
[0031] In an embodiment, step 130 determines the block start location of the next block group of application code from static or dynamic code analysis of the most recently executed translated code block. Step 130 then traces forward through the application code to identify one or more ends of code blocks in the block group, similar to step 110.
[0032] Step 135 determines whether the next block group has already been translated and stored in the translated code block cache. In an embodiment, step 135 determines a cache tag value, such as a checksum, effective memory address, or actual memory address, of the next application code block group. Step 135 then compares this cache tag value with the cache tags previously stored in association with translated code blocks in the translated code block cache. If the two cache tag values match, then the cached translated code block group is a valid representation of the application code block. Step 140 then selects the translated code block group from the translated code block cache. Method 100 then proceeds to step 125 to execute the selected translated code block group.
[0033] Conversely, if step 135 determines that the translated code block cache does not have a valid representation (or any representation at all) of the next block group of application code, step 145 sets the block start and end points to the boundaries of the next block group of application code. Method 100 then proceeds to step 115 to translate the next block of application code into a corresponding translated code block, cache it, and execute the newly translated code block. Steps 115 through 145 may be repeated in this manner for each block of application code as the emulator processes and executes the application.
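The overall control loop implied by steps 105 through 145 can be summarized in the following C++ sketch. The Emu structure, its callbacks, and the TranslatedBlockGroup type are hypothetical placeholders for the emulator machinery described above, not a definitive implementation.

#include <cstdint>
#include <functional>
#include <memory>
#include <unordered_map>

struct TranslatedBlockGroup {
    // Host-format instructions for one block group would live here.
};

struct Emu {
    std::unordered_map<uint64_t, std::unique_ptr<TranslatedBlockGroup>> cache;
    std::function<uint64_t(uint32_t)> cache_tag_for;   // checksum- or address-based tag
    std::function<std::unique_ptr<TranslatedBlockGroup>(uint32_t)> translate;
    std::function<uint32_t(const TranslatedBlockGroup&)> execute;  // returns next start

    void run(uint32_t entry_point) {
        uint32_t start = entry_point;                   // step 105
        for (;;) {                                      // until the application exits
            uint64_t tag = cache_tag_for(start);        // steps 130 and 135
            auto it = cache.find(tag);
            if (it == cache.end()) {                    // cache miss: translate and cache
                it = cache.emplace(tag, translate(start)).first;   // steps 115 and 120
            }
            start = execute(*it->second);               // steps 125 and 140
        }
    }
};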
[0034] As discussed above, an embodiment of the emulator caches translated code blocks. Before executing a cached code block, a cache tag value of the virtual machine memory storing application code is compared with the cache tag of the corresponding cached translated code block. This ensures that the cached translated code block is a valid representation of the application code at the time of execution.
[0035] However, some applications employ relocatable code, which can be positioned at different places in memory. If the cache tag for evaluating the validity of a cached translated code block group is derived from a fixed memory address or a checksum of a fixed range of memory, the cache tag value for the code block group will change each time the relocatable code is moved to a different part of memory, even if the relocatable code itself does not change. Thus, even though the translated code block cache may already include a translated version of the relocatable code, a cache miss will occur and the emulator will retranslate the same application code each time it is moved to a new location. As a result, emulator performance degrades substantially.
[0036] To overcome this problem in accordance with one embodiment, cache tag values are determined for code block groups based on application code block group boundaries, rather than fixed ranges of memory addresses. In one implementation of this embodiment, when a block group of application code is selected for execution and identified, a checksum of this application code block group is created. This checksum is compared with the checksums previously stored in association with the translated code block cache. If the application code block group checksum matches a checksum associated with a cached translated code block, this translated code block is executed.
[0037] In another implementation of this embodiment, the cache tag is based on an effective or source memory address of the application code block group. For example, an application might copy a block group of relocatable code from a fixed location in main memory into different locations in a scratchpad or execution memory. In this example, the effective address of the block group is the memory address in main memory, which does not change. By using this memory address to create the cache tag, the translated code block cache is effective with relocatable code.
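A minimal C++ sketch of this effective-address tagging follows, assuming the emulator records each copy of relocatable code from main memory into execution memory; the CopyRecord bookkeeping shown is an illustrative assumption rather than the disclosed implementation.

#include <cstdint>
#include <iterator>
#include <map>

struct CopyRecord {
    uint32_t source_addr;   // fixed location of the relocatable code in main memory
    uint32_t dest_addr;     // where the code was copied for execution
    uint32_t length;
};

// Map an execution address back to the fixed source address used as the cache tag.
// The map is keyed by destination (execution) address.
uint64_t effective_address_tag(const std::map<uint32_t, CopyRecord>& copies_by_dest,
                               uint32_t exec_addr) {
    auto it = copies_by_dest.upper_bound(exec_addr);
    if (it != copies_by_dest.begin()) {
        const CopyRecord& c = std::prev(it)->second;
        if (exec_addr >= c.dest_addr && exec_addr < c.dest_addr + c.length) {
            return c.source_addr + (exec_addr - c.dest_addr);   // stable tag
        }
    }
    return exec_addr;   // not known to be relocated: use the execution address itself
}

Because the tag is derived from the unchanging source address, moving the relocatable code to a new execution address still produces a hit in the translated code block cache.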
[0038] Figure 2 illustrates an example partitioning 200 of application code into translated code blocks according to an embodiment of the invention. In this example, the original application code 205 is partitioned into code blocks along boundaries defined by control flow instructions, such as conditional branch instructions, jump tables, function calls, and function returns. Related application code blocks are then chained together to form a block group.
[0039] For example, application code 205 represents function code of an application. Block group 210 comprises code block 215B, corresponding with portion 215A of the application code; code block 220B, corresponding with portion 220A of the application code; code block 225B, corresponding with portion 225A of the application code; and code block 230B, corresponding with portion 230A of the application code.
[0040] The code blocks of block group 210 are chained together according to the control flow of the application code 205. For example, the conditional branch at the end of block 215B can direct the host computer system to execute either code block 220B or 225B.
[0041] The application code blocks are translated from a target computer system format into a set of corresponding translated code blocks capable of being executed by the host computer system. Some types of host computer systems have restrictions on the distance in address space between a conditional branch or other control flow instruction and the branch destination or destination address. Complying with the restrictions can be made more difficult because translated code blocks are often larger than their corresponding portions of target computer system code. Thus, the translated code block groups should be sized so that the host computer system restrictions are not violated.
[0042] Figure 3 illustrates a method 300 of sizing translated code blocks in accordance with one embodiment. Step 305 selects a candidate translated code block for potential inclusion in a block group and specifies a possible location in the translated code block group for the candidate translated code block. Step 310 evaluates the translated block group including the selected candidate translated code block to determine if all of the branch instructions comply with branch size restrictions of the host computer system.
[0043] In an embodiment, step 310 compares the size of the translated block group including the candidate code block to a maximum size limit. In another embodiment, step 310 uses static or dynamic code analysis to determine the potential destination addresses for each branch or control flow instruction. These destination addresses are then individually compared with their respective instructions to determine if the maximum size limit is violated.
[0044] If the translated code block group with the candidate code block does not comply with the branch size restrictions of the host computer system, step 320 starts a new translated code block group and adds the candidate code block to this new block group. Method 300 then proceeds back to step 305 to select another candidate code block for inclusion in the new translated code block group.
[0045] Conversely, if the translated code block group with the candidate code block does comply with the branch size restrictions of the host computer system, step 315 adds the candidate code block to the translated code block group. Method 300 then proceeds back to step 305 to select another candidate code block for possible inclusion in the translated code block group.
[0046] In further embodiments, the translated code blocks of the translated code block group may be rearranged to comply with the branch size restrictions of the host computer system. In still another embodiment, multiple branch instructions can be chained together to allow for larger distances between source and destination addresses.
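As a simplified illustration of the sizing policy of method 300, the following C++ sketch adds candidate translated code blocks to a group only while the group stays within a single maximum branch displacement, and otherwise starts a new group. The data structures and the particular limit are assumptions for this example; the actual restriction depends on the branch encoding of the host instruction set.

#include <cstddef>
#include <vector>

struct TranslatedBlock { std::size_t size_bytes; };

struct BlockGroup {
    std::vector<TranslatedBlock> blocks;
    std::size_t total_bytes = 0;
};

constexpr std::size_t kMaxBranchSpan = 32 * 1024 * 1024;   // illustrative host limit

// Steps 305 through 320: add the candidate to the current group if the group still
// fits within the branch span; otherwise close the group and start a new one.
void place_block(std::vector<BlockGroup>& groups, const TranslatedBlock& candidate) {
    if (groups.empty() ||
        groups.back().total_bytes + candidate.size_bytes > kMaxBranchSpan) {
        groups.emplace_back();                              // step 320: new block group
    }
    groups.back().blocks.push_back(candidate);              // step 315: add candidate
    groups.back().total_bytes += candidate.size_bytes;
}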
[0047] In an embodiment, the emulator attempts to store translated code blocks corresponding to adjacent application code blocks in adjacent portions of memory. Preserving adjacency between translated code blocks in block groups can improve branching performance for some types of host computer systems. Figures 4A and 4B illustrate an example application of this embodiment of the invention.
[0048] Figure 4A illustrates a function call and return mechanism 400 in an application for an example target computer system. In this example 400, the target application 405 includes code for an example function X 407 and an example function Y 409. Function X 407 includes a function call instruction 412, which directs the target computer system 415 to execute function Y 409.
[0049] In response to function call instruction 412, the target computer system 415 stores the return address 416 for the function call in return address register 417. The return address is typically the address of the instruction immediately following the function call instruction 412. However, some types of target computer systems and function call instructions set the return address to the location of a different instruction. In some types of target computer systems, the return address is stored in a stack or other memory instead of a register 417. In still further types of target computer systems, the previous value of the return address register 417 is stored in a stack or other memory to allow for multiple levels of function calls and function recursion.
[0050] After storing the appropriate return address 416, the target computer system 415 begins to execute function Y 409. When this is complete, a function return instruction 420 directs the target computer system 415 to resume execution of function X 407 beginning with the instruction at the previously stored return address. In response to the function return instruction 420, the target computer system 415 retrieves 422 the previously stored return address from the return address register 417. Using this return address, the target computer system 415 resumes execution 424 of function X 407 at the appropriate location.
[0051] Figure 4B illustrates a corresponding function call and return mechanism 430 for a translated application executed by a host computer system according to an embodiment of the invention. In this example 430, a host computer system 435 executes a translated target computer system application 432 corresponding with application 405 discussed above. Translated application 432 includes translated block group X' 437 and translated block group Y' 439, which correspond with functions X 407 and Y 409 of the target computer system application, respectively.
[0052] Translated block group X' 437 includes translated code blocks 440 and 445. In an embodiment, the target application code is partitioned into code blocks by control flow instructions, such as the translated function call instruction 442, which corresponds with untranslated function call instruction 412. Moreover, as translated code blocks 440 and 445 correspond with adjacent portions of the untranslated application, an embodiment of the emulator attempts to store translated code blocks 440 and 445 in adjacent portions of memory to facilitate the transfer of execution between translated application code blocks.
[0053] In this embodiment, translated code block 440 ends with one or more translated function call instructions 442 that direct the host computer system to execute block group Y' 439, which corresponds with the function Y 409 of the original untranslated application.
[0054] In response to the translated function call instruction 442, the host computer system 435 stores 448 the function return address in the host link register 450. The host link register 450 is a specialized register of the host computer 435 adapted to store function return addresses. Often, the host computer system 435 is adapted to prefetch one or more instructions beginning at the function return address stored in a link register. This reduces or eliminates pipeline stalls upon returning from a function.
[0055] In an embodiment, the host computer system 435 stores 448 the address of the first instruction following the translated function call instruction 442 in the host link register 450. When translated code blocks 440 and 445 are arranged in adjacent portions of host computer system memory, this return address corresponds with the first instruction of translated code block 445. When translated code blocks 440 and 445 cannot be stored in adjacent portions of host computer system memory, an additional instruction must be added to translated code block 440 following the translated function call to jump to translated code block 445.
[0056] In addition to storing the return address in the host link register 450, the host computer system 435 also stores 452 a target memory space return address value in a target virtual machine return address register 455. The target memory space return address value stored in the target virtual machine return address register 455 corresponds with the return address value that would have been stored by the target computer system 415 in its return address register 417 in response to the function call instruction 412. The target virtual machine return address register 455 is a portion of the emulator virtual machine mimicking the state and functions of return address register 417 of the target computer system 415. The target virtual machine return address register 455 can be mapped directly to a register of the host computer system 435 or assigned to a location in the host computer system 435 memory. Additional virtual machine software code can be associated with the target virtual machine return address register 455 to mimic the state and functions of the return address register 417 of the target computer system 415.
[0057] After storing the return address for the translated application code block in the host link register 450 and the corresponding target memory space return address in target virtual machine return address register 455, the host computer system 435 begins to execute translated block group Y' 439, corresponding to the function Y 409 in the target application. The host computer system 435 executes the one or more translated code blocks 460 of block group Y' 439 to perform the same or equivalent operations as function Y 409. At the end of translated block group Y' 439, one or more translated function return instructions 465 directs the host computer system 435 to resume execution of translated block group X' 437.
[0058] Some target computer applications may overwrite the return address stored in the return address register 417 with a different address. This may be done so that a function returns to a different location in an application than it was initially called from. To account for this behavior, an embodiment of the emulator directs the host computer system 435 to retrieve 467 the target memory space return address previously stored in the target virtual machine return address register 455 in response to the translated function return instruction 465. In this embodiment, the retrieved target memory space return address is converted to a corresponding memory address in the host computer system.
[0059] The host computer system 435 then writes 469 the converted return address to the host link register 450. As discussed above, the host computer system 435 prefetches instructions and data starting at the address stored in the link register to avoid a pipeline stall when branching between translated code blocks. In this example, these prefetched instructions and data are part of translated code block 445. If the converted return address is the same as the return address previously stored in the host link register 450 by the translated function call 442, the host computer system 435 ignores the write 469 to the host link register 450 and retains the prefetched instructions and data of translated code block 445. The host computer system 435 can then begin executing the translated code block 445 of translated block group X' 437. Under this condition, the host computer system 435 avoids a pipeline stall and its associated performance penalty when jumping from the execution of translated block group Y' 439 to translated code block 445 of block group X' 437.
[0060] Conversely, if the converted return address is different than the return address previously stored in the host link register 450 by the translated function call 442, the host computer system 435 discards the prefetched instructions and data and executes translated code blocks beginning at the return address specified by the target virtual machine return address register 455. This condition may occur if the target computer application overwrites the return address stored in the return address register 417 with a different address. Under this condition, the host computer system 435 will experience a pipeline stall and its associated performance penalty when jumping from the execution of translated block group Y' 439 to translated code block 445 of block group X' 437. However, applications with this behavior are relatively rare compared to the default function call and return mechanism.
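The call and return handling of Figure 4B can be summarized by the following C++ sketch, which models the host link register 450 and the target virtual machine return address register 455 as plain fields; the address-conversion callback is an assumption standing in for the emulator's target-to-host address mapping, and the sketch is conceptual rather than host assembly.

#include <cstdint>
#include <functional>

struct HostState {
    uint64_t link_register = 0;        // models host link register 450
    uint64_t vm_return_address = 0;    // models target VM return address register 455
};

// Converts a target-space address to the host address of its translated code.
using AddressMap = std::function<uint64_t(uint64_t)>;

// Emitted at a translated function call (442): record both return addresses.
void on_translated_call(HostState& s, uint64_t host_return, uint64_t target_return) {
    s.link_register = host_return;         // host prefetches from this address
    s.vm_return_address = target_return;   // mirrors target register 417
}

// Emitted at a translated function return (465): verify the target return address
// and return the host address at which execution should resume.
uint64_t on_translated_return(HostState& s, const AddressMap& to_host) {
    uint64_t converted = to_host(s.vm_return_address);
    if (converted != s.link_register) {
        // The application rewrote its return address: discard the prefetched
        // instructions and branch to the converted address (pipeline stall expected).
        s.link_register = converted;
    }
    return s.link_register;   // common case: resume at the prefetched translated block
}

In the common case the converted return address equals the value already stored in the modeled link register, so execution simply resumes at the prefetched translated code block without a pipeline stall.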
[0061] Embodiments of the invention can include variations of the above described behavior depending upon the type of target computer system, host computer system, and translated target applications. For example, if the target application never modifies the contents of the target computer system return address register 417 (or if the target computer system prohibits this behavior), then the host computer system may omit writing the contents of the counterpart target virtual machine return address register 455 to the host link register 450 prior to returning from a translated function call. Moreover, if the target application never reads the contents of the target computer system return address register 417, except when returning from a function call, then the target virtual machine return address register 455 can be omitted entirely.
[0062] In a still further embodiment, the target virtual machine return address register 455 can store the return address expressed in host address space, rather than the target address space. Additional functions associated with the target virtual machine return address register 455 can translate this return address between the host address space and the target address space as needed. This may improve performance of the emulator if the translated target application infrequently accesses the virtual machine return address register 455.
[0063] Sometimes, the next translated code block cannot be stored adjacent to the previous translated code block. In these situations, an embodiment of the emulator uses a modified translated instruction to push the correct starting address for the next translated code block into the host link register.
[0064] Some target computer systems have unique structural characteristics that need to be taken into account for emulation to operate correctly on the host computer system. For example, the value of a target computer system status flag register, which stores status flags such as the sign, zero, overflow, underflow, divide by zero, and carry bits, may lag its corresponding instruction by several processor cycles due to pipelining and other characteristics. In this example, if an add instruction would cause a status flag value to be set, such as the carry bit flag being set to "1", this status flag value would not appear in the status flag register until several processor cycles after the add instruction was executed.
[0065] If the lag times for updating status flag register values (or other state information) are different for the target and host computer systems, the emulator must compensate to ensure that the correct status flag register values are synchronized with the appropriate instructions. One approach is to copy the status flag values from the status flag register of the host computer system to a buffer after every translated instruction. The buffer values can then be synchronized with the appropriate translated instructions. However, this approach is very time-consuming and can decrease emulator performance.
[0066] However, in some types of host computer systems, the host computer system status flags may behave differently from their counterparts in the target computer system. Some types of host computer systems may incur large performance penalties in accessing their status flags. Moreover, some host computer systems may not even have counterparts to some or all of the status flags of the target computer system.
[0067] An alternative approach stores status flag values to a register or buffer only when needed. Figure 5 illustrates a method 500 of compensating for status flag differences according to an embodiment of the invention. Step 505 identifies an application code instruction accessing a status flag register value. The identified instruction can be an instruction that reads a value from the status flag register of the target computer system or an instruction that behaves differently based on a value from the target computer system status flag register, such as some types of conditional branch instructions.
[0068] Step 510 traces back in the application code to identify one or more instructions potentially generating the status flag value accessed by the instruction identified in step 505. In an embodiment, step 510 takes into account any lag in the target computer system between the time when an instruction is executed and when the status flag register is updated with the appropriate value. Some status flags are "sticky", in the sense that once they are set, they remain at that value until read or reset by the target computer system. For these types of status flags, step 510 identifies one or more instructions potentially responsible for setting the status flag value.
[0069] Step 515 analyzes one or more translated instructions corresponding with the application code instructions identified in steps 505 and 510. If a difference between the target and host computer systems would cause the translated code to operate incorrectly, for example by accessing the wrong value in the status flag register, step 515 modifies the translated code block. In an embodiment, step 515 adds instructions to the translated code block to preserve a status flag value of the host computer system in a register or memory for later use by the translated application code. Additionally, step 515 modifies the translated instruction accessing the status flag value to refer to the stored status flag values, rather than the current values of the status flag register.
[0070] In a further embodiment, step 515 adds instructions to the translated code block to correct for differences in setting status flag values. For example, if an instruction executed on the target computer system would set a status flag value, such as a sign bit, but its corresponding translated instruction does not do the same thing in the status flag register of the host computer system, then step 515 can add instructions to compensate for this behavior.
[0071] In still a further embodiment, step 515 adds instructions to the translated code block to recreate the status flag values expected by the translated target application. This may be required if the host computer system does not have a status flag corresponding to a status flag of the target computer system. Additionally, this embodiment of step 515 may be used if accessing status flags in the host computer system decreases performance more than simply recreating the status flag value with additional instructions.
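As one illustration of recreating a status flag value rather than reading it from the host status flag register, the following C++ sketch preserves the operands of a flag-producing 32-bit add and recomputes the carry flag only at the point of use. The SavedFlagSource layout and helper names are assumptions introduced for this example.

#include <cstdint>

struct SavedFlagSource {
    uint32_t a = 0, b = 0;     // operands of the most recent carry-producing add
};

// Emitted alongside the translated add instead of reading the host carry flag.
inline void record_add_operands(SavedFlagSource& s, uint32_t a, uint32_t b) {
    s.a = a;
    s.b = b;
}

// Emitted where the target application would later read the carry flag (step 505).
inline bool recreate_carry(const SavedFlagSource& s) {
    uint64_t wide = static_cast<uint64_t>(s.a) + static_cast<uint64_t>(s.b);
    return (wide >> 32) != 0;   // carry out of bit 31, matching the target semantics
}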
[0072] Figure 6 illustrates an example hardware system suitable for implementing an embodiment of the invention. Figure 6 is a block diagram of a computer system 1000, such as a personal computer, video game console, personal digital assistant, or other digital device, suitable for practicing an embodiment of the invention. Computer system 1000 includes a central processing unit (CPU) 1005 for running software applications and optionally an operating system. CPU 1005 may be comprised of one or more processing cores. Memory 1010 stores applications and data for use by the CPU 1005. Storage 1015 provides nonvolatile storage for applications and data and may include fixed disk drives, removable disk drives, flash memory devices, and CD-ROM, DVD-ROM, Blu-ray, HD-DVD, or other optical storage devices. User input devices 1020 communicate user inputs from one or more users to the computer system 1000, examples of which may include keyboards, mice, joysticks, touch pads, touch screens, still or video cameras, and/or microphones. Network interface 1025 allows computer system 1000 to communicate with other computer systems via an electronic communications network, and may include wired or wireless communication over local area networks and wide area networks such as the Internet. An audio processor 1055 is adapted to generate analog or digital audio output from instructions and/or data provided by the CPU 1005, memory 1010, and/or storage 1015. The components of computer system 1000, including CPU 1005, memory 1010, data storage 1015, user input devices 1020, network interface 1025, and audio processor 1055 are connected via one or more data buses 1060.
[0073] A graphics subsystem 1030 is further connected with data bus 1060 and the components of the computer system 1000. The graphics subsystem 1030 includes a graphics processing unit (GPU) 1035 and graphics memory 1040. Graphics memory 1040 includes a display memory (e.g., a frame buffer) used for storing pixel data for each pixel of an output image. Graphics memory 1040 can be integrated in the same device as GPU 1035, connected as a separate device with GPU 1035, and/or implemented within memory 1010. Pixel data can be provided to graphics memory 1040 directly from the CPU 1005. Alternatively, CPU 1005 provides the GPU 1035 with data and/or instructions defining the desired output images, from which the GPU 1035 generates the pixel data of one or more output images. The data and/or instructions defining the desired output images can be stored in memory 1010 and/or graphics memory 1040. In an embodiment, the GPU 1035 includes 3D rendering capabilities for generating pixel data for output images from instructions and data defining the geometry, lighting, shading, texturing, motion, and/or camera parameters for a scene. The GPU 1035 can further include one or more programmable execution units capable of executing shader programs.
[0074] The graphics subsystem 1030 periodically outputs pixel data for an image from graphics memory 1040 to be displayed on display device 1050. Display device 1050 is any device capable of displaying visual information in response to a signal from the computer system 1000, including CRT, LCD, plasma, and OLED displays. Computer system 1000 can provide the display device 1050 with an analog or digital signal.
[0075] In embodiments of the invention, CPU 1005 is one or more general-purpose microprocessors having one or more processing cores. Further embodiments of the invention can be implemented using one or more CPUs with microprocessor architectures specifically adapted for highly parallel and computationally intensive applications, such as media and interactive entertainment applications. Figure 7 illustrates an example processor 2000 suitable for implementing an embodiment of the invention.
[0076] Processor 2000 includes a number of processor elements, each capable of executing independent programs in parallel. Processor 2000 includes PPE processor element 2005. The PPE processor element 2005 is a general-purpose processor of a CISC, RISC, or other type of microprocessor architecture known in the art. In one example, PPE processor element 2005 is a 64-bit, multithreaded RISC architecture microprocessor, such as the PowerPC architecture. PPE processor element 2005 can include a cache memory 2007 partitioned into one, two, or more levels of caches temporarily holding data and instructions to be executed by PPE processor element 2005.
[0077] For additional performance, processor 2000 includes a number of SPE processor elements 2010. In this example, processor 2000 includes eight SPE processor elements 2010A-2010H; however, other example processors can include a different number of SPE processor elements. SPE processor elements 2010 are adapted for stream processing of data. In stream processing, a program is executed repeatedly on each item in a large set of data. To facilitate stream processing, the SPE processor elements 2010 may include instruction execution units capable of executing SIMD instructions on multiple data operands simultaneously. SPE processor elements 2010 may also include instruction units capable of executing single-instruction, single-data (SISD) instructions for more general processing tasks.
[0078] Each SPE processor element, such as SPE processor element 2010A, includes local data and instruction storage 2012A. Data and instructions can be transferred to and from the local data and instruction storage 2012A via DMA unit 2014A. The DMA units, such as unit 2014A, are capable of transferring data to and from each of the SPE processor elements 2010 without processor supervision, enabling the SPE processor elements 2010 to process data continuously without stalling.
[0079] Data and instructions are input and output by the processor 2000 via memory and I/O interfaces 2015. Data and instructions can be communicated between the memory and I/O interfaces 2015, the PPE processor element 2005, and SPE processor elements 2010 via processor bus 2020.
[0080] Embodiments of the invention can be used to improve emulator performance and compatibility for a variety of different types of target computer systems, including general computer system 1000 shown above. Figure 8 illustrates another example target computer system 3000 capable of being emulated using embodiments of the invention.
[0081] Target computer system 3000 illustrates the hardware architecture of the Sony Playstation 2 video game console. Target computer system 3000 includes a variety of components connected via a central data bus 3002. These components include a CPU core 3005; a pair of vector processing units, VPU0 3010 and VPU1 3015; a graphics processing unit interface 3020; an image processing unit 3030; an I/O interface 3035; a DMA controller 3040; and a memory interface 3045. In addition to the central data bus 3002, target computer system 3000 includes a private bus 3007 between CPU core 3005 and vector processing unit VPU0 3010 and a private bus 3019 between vector processing unit VPU1 3015 and graphics processing unit interface 3020.
[0082] In some applications, components 3005, 3010, 3015, 3020, 3030, 3035, 3040 and 3045 are included within a processor chip 3060. Processor chip 3060 is connected with graphics processing unit 3025 via graphics bus 3022 and with memory 3050 via memory bus 3055. Additional external components, such as sound and audio processing components, network interfaces, and optical storage components 3065, are omitted from Figure 8 for clarity.
[0083] Figure 9 illustrates an example emulator architecture 4000 on a host computer system capable of emulating the target computer system 3000 of Figure 8. In this example, emulator architecture 4000 is implemented on a host computer system including a processor similar to processor 2000 of Figure 7.
[0084] In emulator architecture 4000, PPE processor element 4005 executes one or more emulator threads that provide functions including emulator control; device drivers; a vector processing unit VPU1 code translator; CPU core emulation including code interpreters and translators; and vector processing unit VPU0 emulation.
[0085] SPE processor element 4010A executes one or more emulation threads that provide functions including DMA controller emulation; vector processing unit VPU1 interface emulation; and graphics processing unit interface arbitration.
[0086] SPE processor element 4010B executes one or more emulation threads that execute the translated or recompiled vector processing unit VPU1 code. SPE processor element 4010C executes one or more emulation threads that emulate the image processing unit. SPE processor element 4010D executes one or more emulation threads that emulate the I/O interface functions. SPE processor element 4010E executes one or more emulation threads that emulate the functions of sound and audio processors. SPE processor element 4010F executes one or more emulation threads that emulate the functions of the graphics processing unit interface.
[0087] In some implementations, additional emulation threads executed by PPE processor element 4005 and/or SPE processor elements can emulate the functionality of the graphics processing unit of the target computer system or translate graphics processing instructions to a format compatible with the graphics processing unit of the host computer system (omitted for clarity from Figure 9). In other implementations, the host computer system can include a graphics processing unit similar to or compatible with the graphics processing unit of the target computer system.
[0088] Additionally, embodiments of the invention can be utilized to improve the performance of multithreaded emulation and virtual machine applications. For example, embodiments of the invention can be used to emulate video game consoles such as the Playstation, Playstation 2, and PSP systems; x86-based computer and video game systems; PowerPC-based computer and video game systems; and Java, .NET, and other virtual machine and runtime environments.
[0089] Further embodiments can be envisioned to one of ordinary skill in the art from the specification and figures. In other embodiments, combinations or sub-combinations of the above disclosed invention can be advantageously made. The block diagrams of the architecture and flow charts are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.
[0090] The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.

Claims

WHAT IS CLAIMED IS:
1. A method of executing an application of a target computer using a host computer, the method comprising: identifying at least one target application code block to be executed by the host computer; determining if a translated code cache includes at least one translated code block corresponding with the identified at least one target application code block; in response to a determination that the translated code cache includes at least one corresponding translated code block, executing the at least one corresponding translated code block; and in response to a determination that the translated code cache does not include at least one corresponding translated code block: translating the at least one target application code block to a host computer format; storing the at least one translated target application code block in the translated code cache; and executing the at least one translated target application code block.
2. The method of claim 1, wherein storing the at least one translated target application code block in the translated code cache comprises: determining a cache tag value for the at least one translated target application code block based on an attribute of the at least one target application code block; and storing the cache tag value in association with the at least one translated target application code block.
3. The method of claim 1, wherein determining if the translated code cache includes at least one translated code block corresponding with the identified at least one target application code block comprises: determining a cache tag value for the at least one target application code block based on an attribute of the at least one target application code block; comparing the cache tag value with at least one previously stored cache tag value, wherein each previously stored cache tag value is associated with at least one translated code block; and selecting at least one corresponding translated code block from the translated code cache in response to the cache tag value for the at least one target application code block matching a previously stored cache tag value.
4. The method of claim 3, wherein the attribute is a checksum of the at least one target application code block.
5. The method of claim 3, wherein the attribute is a fixed effective memory address associated with the at least one target application code block.
6. The method of claim 5, wherein the at least one target application code block includes relocatable target application code adapted to be executed from different memory addresses.
7. The method of claim 1, wherein identifying at least one target application code block to be executed by the host computer comprises: selecting a block group start point in the target application code; and tracing the control flow of the target application code from the block group start point to at least one block end point.
8. The method of claim 7, wherein each block end point includes a control flow instruction of the target application code.
9. The method of claim 8, wherein the control flow instruction includes a function call instruction.
10. The method of claim 8, wherein the control flow instruction includes a branch instruction.
11. The method of claim 8, wherein the control flow instruction includes a function return instruction.
12. The method of claim 8, wherein the control flow instruction includes a jump table call instruction.
13. The method of claim 7, wherein the block group start point includes a function entry point.
14. The method of claim 7, wherein the block group start point includes an instruction intended to be executed following a control flow instruction.
15. The method of claim 7, wherein the block group start point and a plurality of block end points define a block group including at least two target application code blocks partitioned according to the plurality of block end points.
16. The method of claim 15, wherein the block group includes two target application code blocks adapted to be executed in sequence, and wherein a corresponding translated block group includes at least two corresponding translated code blocks arranged in adjacent portions of host computer memory, such that the host computer can execute the at least two corresponding translated code blocks in sequence without branching.
17. The method of claim 15, wherein the block group includes a first target application code block including a function call and a second target application code block adapted to be executed upon returning from the function call, and wherein a corresponding translated block group includes first and second translated code blocks corresponding with the first and second target application code blocks, wherein the first translated code block includes a host computer function call instruction and the first and second translated code blocks are arranged in adjacent portions of host computer memory, such that a pipeline stall in the host computer is prevented upon returning from the host computer function call instruction to execute the second translated code block.
18. The method of claim 15, wherein identifying at least one target application code block to be executed by the host computer comprises: determining a branch size restriction for a first target application code block; identifying at least one target application code block to be executed after the first target application code block; determining if a location of the identified target application code block relative to the first target application code block complies with the branch size restriction; and in response to the determination that the location of the identified target application code block relative to the first target application code block does not comply with the branch size restriction: removing the first target application code block from the block group; and adding the first target application code block to a new block group.
19. The method of claim 15, wherein identifying at least one target application code block to be executed by the host computer comprises: determining a branch size restriction for the target application code blocks of the block group; determining a block group size restriction based on the branch size restriction; determining if the block group complies with the block group size restriction; and in response to the determination that the block group does not comply with the block group size restriction: removing at least one target application code block from the block group; and adding the removed target application code blocks to a new block group.
20. The method of claim 1, wherein translating the at least one target application code block to a host computer format comprises: identifying a first instruction of the target application code block accessing a status flag value; identifying at least one second instruction of the target application code block potentially providing the status flag value to the first instruction; creating at least one status flag host instruction corresponding to each of the second instructions of the target application code block and adapted to store the status flag value outside of a status flag register of the host computer system; creating at least one first host instruction corresponding with the first instruction of the target application code and adapted to access the status flag value provided by the status flag host instruction.
21. The method of claim 20, wherein the status flag host instruction includes an instruction adapted to recreate a status flag value of the target computer system.
22. The method of claim 20, wherein the status flag host instruction includes an instruction adapted to read a status flag value from the status flag register of the host computer.
23. The method of claim 22, wherein the status flag host instruction includes an instruction adapted to modify a status flag value of the status flag register of the host computer to match a behavior of a status flag register of the target computer.
24. A computer program product embedded in a computer readable medium for executing an application of a target computer using a host computer, comprising: program code for identifying at least one target application code block to be executed by the host computer; program code for determining if a translated code cache includes at least one translated code block corresponding with the identified at least one target application code block; program code for executing the at least one corresponding translated code block in response to a determination that the translated code cache includes at least one corresponding translated code block; and program code for, in response to a determination that the translated code cache does not include at least one corresponding translated code block: translating the at least one target application code block to a host computer format; storing the at least one translated target application code block in the translated code cache; and executing the at least one translated target application code block.
PCT/US2007/068110 2006-05-03 2007-05-03 Code translation and pipeline optimization WO2007131089A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US79776106P 2006-05-03 2006-05-03
US60/797,761 2006-05-03
US11/740,636 2007-04-26
US11/740,636 US7568189B2 (en) 2006-05-03 2007-04-26 Code translation and pipeline optimization

Publications (2)

Publication Number Publication Date
WO2007131089A2 true WO2007131089A2 (en) 2007-11-15
WO2007131089A3 WO2007131089A3 (en) 2008-09-04

Family

ID=38668545

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/068110 WO2007131089A2 (en) 2006-05-03 2007-05-03 Code translation and pipeline optimization

Country Status (1)

Country Link
WO (1) WO2007131089A2 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6223339B1 (en) * 1998-09-08 2001-04-24 Hewlett-Packard Company System, method, and product for memory management in a dynamic translator
US6820255B2 (en) * 1999-02-17 2004-11-16 Elbrus International Method for fast execution of translated binary code utilizing database cache for low-level code correspondence
US20050015758A1 (en) * 2003-07-15 2005-01-20 Geraint North Shared code caching method and apparatus for program code conversion

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7568189B2 (en) 2006-05-03 2009-07-28 Sony Computer Entertainment Inc. Code translation and pipeline optimization
WO2014143055A1 (en) * 2013-03-15 2014-09-18 Intel Corporation Mechanism for facilitating dynamic and efficient management of translation buffer prefetching in software programs at computing systems
CN105378683A (en) * 2013-03-15 2016-03-02 英特尔公司 Mechanism for facilitating dynamic and efficient management of translation buffer prefetching in software programs at computing systems
US9460022B2 (en) 2013-03-15 2016-10-04 Intel Corporation Mechanism for facilitating dynamic and efficient management of translation buffer prefetching in software programs at computing systems

Also Published As

Publication number Publication date
WO2007131089A3 (en) 2008-09-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07761797

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07761797

Country of ref document: EP

Kind code of ref document: A2