WO2013066988A1 - Method and system for workitem synchronization - Google Patents

Method and system for workitem synchronization

Info

Publication number
WO2013066988A1
WO2013066988A1 (PCT/US2012/062768)
Authority
WO
WIPO (PCT)
Prior art keywords
barrier
workitems
group
workitem
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2012/062768
Other languages
English (en)
French (fr)
Inventor
Lee W. HOWES
Benedict R. GASTER
Michael C. HOUSTON
Michael Mantor
Mark Leather
Norman Rubin
Brian D. EMBERLING
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Advanced Micro Devices Inc
Original Assignee
Advanced Micro Devices Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Advanced Micro Devices Inc filed Critical Advanced Micro Devices Inc
Priority to CN201280053875.7A priority Critical patent/CN103917959B/zh
Priority to JP2014540034A priority patent/JP5984952B2/ja
Priority to EP12784403.3A priority patent/EP2774037B1/en
Priority to KR1020147012038A priority patent/KR101871961B1/ko
Publication of WO2013066988A1 publication Critical patent/WO2013066988A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/524Deadlock detection or avoidance
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003Arrangements for executing specific machine instructions
    • G06F9/30076Arrangements for executing specific machine instructions to perform miscellaneous control operations, e.g. NOP
    • G06F9/30087Synchronisation or serialisation instructions
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/522Barrier synchronisation

Definitions

  • the present invention relates generally to workitem synchronization.
  • GPU: graphics processing unit
  • SIMD: single instruction, multiple data
  • CPU: central processing unit
  • the CPU functions as the host or controlling processor and hands off specialized functions, such as graphics processing, to other processors such as GPUs.
  • Multi-core CPUs where each CPU has multiple processing cores, offer processing capabilities for specialized functions (e.g., graphics processing) similar to those available on the GPU.
  • One or more of the computation cores of multi-core CPUs or GPUs can be part of the same die (e.g., AMD Fusion™) or, alternatively, in different dies (e.g., Intel Xeon™ with NVIDIA GPU).
  • hybrid cores having characteristics of both CPU and GPU (e.g., CellSPE™, Intel Larrabee™)
  • GPGPU style of computing advocates using the CPU to primarily execute control code and to offload performance critical data-parallel code to the GPU.
  • the GPU is primarily used as an accelerator.
  • the combination of multi-core CPUs and GPGPU computing model encompasses both CPU cores and GPU cores as accelerator targets. Many of the multi-core CPU cores have performance that is comparable to GPUs in many areas.
  • OpenCL provides a compiler and a runtime environment in which code can be compiled and executed within a heterogeneous, or other, computing system.
  • SIMT: single instruction, multiple thread
  • FIG. 1A illustrates the use of a single workitem (referred to in FIG. 1A as a kernel) to load a value into a group-shared memory space from which other workitems can obtain the loaded value.
  • the workitem that loads the value, as well as other workitems in the workgroup, are blocked from proceeding beyond the barrier until all workitems in the group reach the barrier.
  • FIG. 1B illustrates a library function including a barrier instruction.
  • FIG. 1C is a kernel that calls the library function. The code in FIG. 1C illustrates an operation to block all workitems that call the corresponding library until the designated workitem has loaded the data to the shared area.
  • FIG. 1D illustrates an example where placing a call to the barrier inside a library may lead to incorrect operations. For example, calling a library function that includes the barrier instruction from a kernel having a conditional, in which one of the conditionals does not have a call to the function, may lead to deadlock. This is because while the barrier would release only when all workitems of a group have reached it, one or more workitems, for which a condition is not fulfilled, would not reach the barrier at all.
  • barrier instruction must be encountered (i.e., reached in the instruction stream) by all workitems in a workgroup executing the kernel. If the barrier instruction is inside a conditional statement, then all workitems must enter the conditional if any workitem enters the conditional statement and executes the barrier. If the barrier instruction is inside a loop, all workitems must execute the barrier instruction for each iteration of the loop before any are allowed to continue execution beyond the barrier.
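  • the deadlock hazard of FIGs. 1B-1D can be made concrete with a short sketch. The C++ below is a reconstruction under stated assumptions, not the figures' literal pseudocode: the Barrier class, the shared slot, the loaded value, and the id-based condition are all illustrative.

```cpp
#include <condition_variable>
#include <mutex>

// Minimal conventional barrier: wait() blocks until `size` workitems arrive.
class Barrier {
public:
    explicit Barrier(int size) : size_(size) {}
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        if (++count_ == size_) {        // last arriver releases everyone
            count_ = 0;
            ++gen_;
            cv_.notify_all();
            return;
        }
        unsigned g = gen_;
        cv_.wait(lk, [&] { return g != gen_; });
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    int count_ = 0;
    int size_;
    unsigned gen_ = 0;
};

// FIG. 1B-style library function: one workitem loads a value into the
// group-shared slot, then everyone waits until the load has happened.
void loadFunction(Barrier& b, int id, int* shared) {
    if (id == 0) *shared = 42;  // illustrative load by the selected workitem
    b.wait();
}

// FIG. 1D-style kernel: workitems failing the condition never call
// loadFunction(), so they never reach the barrier inside it, and the
// workitems that did call it block forever -- deadlock.
void theKernel(Barrier& b, int id, int* shared) {
    if (id % 2 == 0) {               // placeholder condition
        loadFunction(b, id, shared);
    }
    // odd workitems skip the call entirely and never reach the barrier
}
```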
  • a technique for a workitem to indicate that it is permanently leaving a synchronization group such that subsequent barriers that occur in the execution of workitems in that synchronization group do not wait for the workitem that had announced its departure from the group.
  • a further technique is disclosed by which a workitem can rejoin the synchronization group in order to continue to synchronize with other workitems of that group. The disclosed techniques may yield substantial advantages in improved processing efficiency and flexibility in programming in various situations.
  • the disclosed method, system, and computer program product embodiments include executing a barrier skip instruction by a first workitem from the group, and responsive to the executed barrier skip instruction, reconfiguring a barrier to synchronize other workitems from the group in a plurality of points in a sequence without requiring the first workitem to reach the barrier in any of the plurality of points.
  • FIGs. 1A-1D illustrate conventional barrier synchronization examples in pseudocode.
  • FIG. 2A illustrates a barrier skip instruction (in pseudocode) according to an embodiment of the present invention.
  • FIG. 2B illustrates a kernel with a barrier skip instruction using a library call (in pseudocode), according to the embodiment.
  • FIG. 3 illustrates a barrier reset instruction (in pseudocode), according to the embodiment.
  • FIG. 4A-4C illustrate pseudocode samples of exemplary use cases for barrier synchronization, according to the embodiment.
  • FIG. 5 illustrates an exemplary flow over time of several workitems, according to the embodiment.
  • FIG. 6 illustrates a method for workitem synchronization, according to the embodiment.
  • FIG. 7 illustrates a block diagram of a system for workitem synchronization, according to an embodiment.
  • FIG. 8 illustrates a block diagram of a workitem synchronization module, according to an embodiment.
  • Embodiments of the present invention may be used in any computer system, computing device, entertainment system, media system, game systems, communication device, personal digital assistant, or any system using one or more processors.
  • the present invention may be particularly useful where the system comprises a heterogeneous computing system.
  • a "heterogeneous computing system,” as the term is used herein, is a computing system in which multiple kinds of processors are available.
  • a "workgroup" is a collection of related workitems; two or more workitems that are issued for execution in parallel form a "wavefront".
  • a workgroup may comprise one or more wavefronts.
  • teachings of this disclosure may be applied to synchronize workitems across any one or more processors and/or groups of processes that have access to a shared memory.
  • kernel refers to a program and/or processing logic that is executed as one or more workitems in parallel having the same code base. It should be noted that, in some embodiments, the terms "workitem" and "thread" are interchangeable.
  • the interchangeability, in this disclosure, of "workitem" and "thread" is illustrative, for example, of the flexible simulated or true independence of workitem execution embodied in the model in embodiments.
  • Embodiments of the present invention can significantly improve the performance of systems by enabling more efficient and more flexible synchronization between concurrent workitems.
  • a GPU, multi-core CPU, or other processor that executes a very large number of concurrent workitems, for example, using a SIMD or SIMT framework
  • the embodiments improve efficiency by enabling some workitems to leave and/or re-join a group of workitems that synchronize their execution at several points in the instruction flow. For example, when a particular workitem requires no further synchronization with the rest of the synchronization group, it may issue a barrier skip instruction to permanently remove itself from the synchronization group.
  • a barrier reset instruction may be issued subsequently if the particular workitem has to be included again in the synchronization group.
  • the skip instruction, in effect, declares that the corresponding workitem will not reach the barrier at any point before the barrier is reset.
  • the reset instruction allows the same barrier to be reused.
  • FIG. 2A illustrates a function (in pseudocode) in which a barrier skip instruction is issued.
  • barrier b is declared, and a skip instruction is issued after exiting a conditional loop. Within the loop, the workitem waits on b until a specified condition is satisfied.
  • All workitems executing the "theKernel" function may not iterate the same, or even a similar, number of times while in the loop.
  • upon exiting the loop, the skip instruction is issued.
  • the issuing of the skip instruction by an exiting workitem indicates to other workitems that have not yet left the loop that the exiting workitem will not reach the barrier again.
  • the barrier can be reconfigured to avoid waiting for the exiting workitem in the current and subsequent instantiations until, at least, another instruction, referred to as a "barrier reset," is issued.
  • the barrier can be reconfigured such that the number of workitems required to reach the barrier can be reduced to account for the exiting workitem.
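  • a minimal C++ rendering of the FIG. 2A pattern follows. The SkipBarrier type and the helper names (conditionSatisfied, doWork, independentWork) are assumptions for illustration, not the patent's code; a count-based sketch of SkipBarrier itself appears after the method 600 discussion below.

```cpp
#include "skip_barrier.h"  // hypothetical header providing the SkipBarrier
                           // type sketched after the method 600 discussion

bool conditionSatisfied(int id);  // placeholder predicate and work
void doWork(int id);              // functions, not the patent's code
void independentWork(int id);

// Reconstruction of the FIG. 2A pattern: synchronize on every loop
// iteration, then permanently leave the synchronization group on exit.
void theKernel(SkipBarrier& b, int id) {
    while (!conditionSatisfied(id)) {  // iteration counts may differ per workitem
        doWork(id);
        b.wait();   // synchronize this iteration with the rest of the group
    }
    b.skip();       // this workitem will never reach b again: exempt it from
                    // the current and all subsequent instances (until a reset)
    independentWork(id);  // continue without further synchronization
}
```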
  • conventionally, using barrier instructions (e.g., barrier() or barrier().wait()), workitems that included a barrier function could not be handled efficiently.
  • Such an approach would clearly be wasteful in environments where the workitems could have different execution paths.
  • Another conventional synchronization instruction is barrier arrive.
  • the arrive instruction only releases the caller from the current barrier. For example, if in FIG. 2A, the skip instruction is replaced with an arrive instruction, the barrier would still require the exiting workitem to reach the barrier in the subsequent barrier instances. Thus, the arrive instruction does not provide a mechanism by which a workitem can permanently remove itself from a synchronization group.
  • FIG. 2B is an illustration of a library function "loadFunction" in pseudocode that allows a selected workitem to copy data to a shared space with other workitems in accordance with the present invention.
  • the library function includes two calls to a barrier wait instruction (e.g., b.wait()).
  • the two barrier wait instructions will be issued by any workitem that calls the library function.
  • the kernel function "theKernel” calls loadFunction when a specified condition is satisfied. Otherwise, it issues a skip instruction.
  • the skip instruction in the else portion of the conditional ensures that all workitems that do not satisfy the condition call a skip instruction.
  • all workitems that fail to satisfy the condition would be exempted from both subsequent instances of the barrier, and the barrier instances would not wait on these workitems.
  • the instances of the barrier in an instruction flow may be referred to as synchronization points.
  • the two synchronization points correspond to the two calls to barrier wait (b.wait()) in the library function. Deadlock would not occur because each workitem either calls the loadFunction and thereby reaches the barrier at the two synchronization points, or calls the skip instruction exempting itself from having to reach any subsequent instances of the barrier.
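  • the FIG. 2B pattern can be sketched as follows, under the same assumed SkipBarrier interface; loadFunction's body, produceValue, and the choice of workitem 0 as the loader are illustrative, not the figure's literal code.

```cpp
#include "skip_barrier.h"  // hypothetical header, as in the previous sketch

bool conditionSatisfied(int id);  // placeholders, not the patent's code
float produceValue();

// Library function with two barrier waits, per FIG. 2B: a selected
// workitem copies data into the group-shared area.
void loadFunction(SkipBarrier& b, int id, float* shared) {
    b.wait();                                  // synchronization point 1
    if (id == 0) shared[0] = produceValue();   // selected workitem loads
    b.wait();                                  // synchronization point 2:
                                               // the load is now visible
}

void theKernel(SkipBarrier& b, int id, float* shared) {
    if (conditionSatisfied(id)) {
        loadFunction(b, id, shared);  // reaches both synchronization points
    } else {
        b.skip();  // exempt from both points, so the barrier does not
                   // wait on this workitem and no deadlock occurs
    }
}
```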
  • FIG. 3 is an illustration, in pseudocode, of use of the barrier reset instruction (e.g., b.reset), according to an embodiment.
  • the kernel function "theKernel" calls loadFunction when a specified condition is satisfied, as illustrated in FIG. 2B. Otherwise, it issues a skip instruction followed by a reset instruction.
  • the issuing of the skip instruction ensures that all workitems that fail to satisfy the condition would be exempted from both subsequent synchronization points.
  • the barrier at the synchronization points would not wait on these workitems.
  • the barrier reset instruction resets the barrier to its original configuration.
  • the skip instruction, in effect, reduces the size of the synchronization group for the barrier
  • the reset instruction reverses the effect of any preceding skip instructions issued by workitems in the group.
  • the implementation of the reset instruction requires that any pending barriers are synchronized before the reset is completed.
  • the barrier reset is implemented as a self-synchronizing instruction that causes a synchronization point across all workitems involved in the barrier instance to which the reset is applied.
  • a user may include one or more synchronization points to ensure synchronization at the reset instruction.
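  • a sketch of the FIG. 3 skip-then-reset pattern, under the same assumed SkipBarrier interface and placeholder names as above:

```cpp
#include "skip_barrier.h"  // hypothetical header, as above

bool conditionSatisfied(int id);                           // placeholders
void loadFunction(SkipBarrier& b, int id, float* shared);  // as in FIG. 2B

// FIG. 3 pattern: a skip followed by a reset. The skip exempts the
// workitem from loadFunction's two synchronization points; the reset
// then restores the barrier's original group size so it can be reused.
void theKernel(SkipBarrier& b, int id, float* shared) {
    if (conditionSatisfied(id)) {
        loadFunction(b, id, shared);  // two synchronization points
    } else {
        b.skip();   // leave the synchronization group for now
        b.reset();  // rejoin; per the text the reset is self-synchronizing,
                    // completing only after pending instances are released
    }
    // from here on, the barrier again covers the whole group
}
```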
  • FIG. 4A is an illustration, in pseudocode, of use of a barrier having a defined size independent of its group size, according to an embodiment.
  • barrier b may be created with an original synchronization group size of 16, regardless of how many workitems are in the work group from which the barrier is called. Creating barrier b with a defined size of 16 enables any process calling a wait instruction on barrier b to synchronize with any fifteen of the workitems in the group that synchronizes on barrier b.
  • a barrier with a defined size may be created for an application in which it is known that only a particular number of the workitems would satisfy a defined condition. In this manner, as shown in FIG. 4A, the workitems that satisfy the condition can synchronize on barrier b, while the others take the "else" path. Because the barrier is only waiting for the defined number of workitems which meet the condition, no deadlock condition would occur, and workitems that do not satisfy the condition are not required to issue a skip instruction.
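  • a sketch of the FIG. 4A defined-size pattern, again under the assumed SkipBarrier interface; the static declaration stands in for the semantics described later, under which subsequent declarations of barrier b receive a reference to the already created barrier.

```cpp
#include "skip_barrier.h"  // hypothetical header, as above

bool conditionSatisfied(int id);  // placeholders; the example assumes
void otherWork(int id);           // exactly 16 workitems satisfy it

// FIG. 4A pattern: a barrier created with a defined size of 16,
// independent of the workgroup size.
void theKernel(int id) {
    static SkipBarrier b(16);     // releases once 16 workitems reach it
    if (conditionSatisfied(id)) {
        b.wait();                 // synchronize with the other fifteen
    } else {
        otherWork(id);            // no skip needed: the barrier only
                                  // counts the 16 that meet the condition
    }
}
```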
  • barriers are used to synchronize two groups of workitems.
  • One group satisfies a specified condition.
  • the other group not satisfying that condition may be synchronized separately using barriers b1 and b2.
  • the workitem belongs to the b1 synchronization group or the b2 synchronization group and can issue a skip instruction to the barrier of the group it does not belong to.
  • the size of barrier b2 (i.e., the number of workitems for which b2 enforces synchronization) is represented by the number of workitems that have not issued a skip instruction upon b1. In FIG. 4C this number is the number of workitems that satisfied the first condition.
  • Other embodiments may require the underlying implementation of the barrier to update b2 when b1 is skipped.
  • the b1 skip operation may be protected by having a synchronization point before it.
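  • a sketch of the two-group pattern of FIGs. 4B-4C, following the statement above that each workitem issues a skip instruction to the barrier of the group it does not belong to; group and helper names are illustrative assumptions.

```cpp
#include "skip_barrier.h"  // hypothetical header, as above

bool conditionSatisfied(int id);  // placeholders, not the patent's code
void groupOneWork(int id);
void groupTwoWork(int id);

// Two barriers synchronize two disjoint groups: each workitem skips the
// barrier of the other group, then waits on its own group's barrier.
void theKernel(SkipBarrier& b1, SkipBarrier& b2, int id) {
    if (conditionSatisfied(id)) {  // member of the b1 group
        b2.skip();                 // never reaches b2
        groupOneWork(id);
        b1.wait();                 // synchronize within group 1 only
    } else {                       // member of the b2 group
        b1.skip();                 // never reaches b1
        groupTwoWork(id);
        b2.wait();                 // synchronize within group 2 only
    }
}
```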
  • FIG. 5 is an illustration of an exemplary flow 500 over time of several workitems.
  • T1, T2, T3, and T4 start concurrently or substantially concurrently.
  • a first barrier wait instruction in each workitem causes it to synchronize at synchronization point 501.
  • Synchronizing at synchronization point 501 involves the workitems T1, T2, T3, and T4 waiting for the last workitem among them to arrive at 501, and then resuming execution concurrently or substantially concurrently.
  • the barrier arrive instruction notifies the next instance of the barrier to not wait on T4, and T4 proceeds without having to synchronize at the second instance of the barrier.
  • T1 issues a barrier skip instruction.
  • the barrier skip instruction notifies any subsequent instances of the barrier to not wait on T1, and T1 proceeds without having to synchronize at the subsequent instances of the barrier.
  • T1 issues a barrier reset instruction. This resets the barrier to its original configuration.
  • the original barrier was configured to synchronize on all four workitems T1, T2, T3 and T4.
  • the reset causes T1 to synchronize with T2, T3 and T4 at synchronization point 507.
  • the synchronization at point 507 may be achieved by implementing the reset as a self-synchronizing instruction or by user-specified synchronization instructions associated with the reset.
  • T1-T4 proceed to synchronize at synchronization point 508 based upon a wait instruction in each workitem.
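  • the per-workitem instruction sequences of flow 500 can be summarized in code. This is an illustration under the assumed SkipBarrier interface; the interleaving described in the comments (T4's arrive and T1's skip landing before T2 and T3 reach the second instance, and T1's reset completing after that instance is released) is taken from the flow just described and is assumed rather than enforced by the sketch.

```cpp
#include "skip_barrier.h"  // hypothetical header, as above

// Instruction sequence of workitem T1 in FIG. 5.
void t1(SkipBarrier& b) {
    b.wait();   // 501: all four workitems synchronize
    b.skip();   // subsequent instances no longer wait on T1
    b.reset();  // restore the original four-workitem configuration;
                // assumed to complete after pending instances release
    b.wait();   // 507: T1 synchronizes with T2-T4 again
    b.wait();   // 508
}

// Instruction sequence of workitem T4.
void t4(SkipBarrier& b) {
    b.wait();   // 501
    b.arrive(); // counts toward the second instance without blocking
    b.wait();   // 507 (assumed to run after the second instance releases)
    b.wait();   // 508
}

// Instruction sequence of workitems T2 and T3.
void t2_or_t3(SkipBarrier& b) {
    b.wait();   // 501
    b.wait();   // second instance: releases once T2 and T3 arrive,
                // because T4 has arrived and T1 has skipped
    b.wait();   // 507
    b.wait();   // 508
}
```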
  • FIG. 6 is an illustration of an exemplary method 600 for workitem synchronization in accordance with an embodiment.
  • a group of workitems is started.
  • the workitems may be a plurality of workitems of identical code.
  • the actual instruction sequence executed in respective workitems of identical code may be the same, or different, depending on conditional evaluations and the like.
  • the workitems are not all identical code, and may comprise any workitems that share one or more synchronization points with each other.
  • the plurality of workitems may be started concurrently or non-concurrently.
  • the workitems may be executed on a CPU, GPU, on two or more GPUs, on two or more cores of a CPU, or any combination of one or more GPU and one or more CPU cores.
  • the plurality of workitems is a workgroup that executes on one processing element of a GPU.
  • a barrier b is created.
  • Barrier b can be created, or instantiated, by executing an instruction on a workitem that declares barrier b.
  • an instance of the barrier is created in a memory.
  • Workitems that subsequently declare barrier b receive a reference to the already created barrier b.
  • the underlying implementation of barriers may be different in the respective frameworks and/or systems.
  • a counting semaphore is an exemplary mechanism by which a barrier with the above semantics can be implemented.
  • creation of the barrier b object in memory includes initializing one or more memory locations and/or registers in dynamic memory and/or hardware. For example, in relation to barrier b, several counts are required to be maintained as described below. The barrier object as well as all counts can be maintained in dynamic memory with the appropriate concurrency control mechanism in writing and reading to those memory locations. According to another embodiment, whereas an object corresponding to barrier b is instantiated in dynamic memory, the corresponding counts are maintained in specific hardware registers.
  • barrier b is initialized with a release threshold that is equal to the number of workitems in the group that was started at operation 602. According to another embodiment, the barrier b is created (at operation 604) with a defined size regardless of the number of workitems in the group.
  • the release threshold is initialized to the defined size.
  • a synchronization instruction may be one of, but is not limited to, a barrier wait, a barrier arrive, a barrier skip, and a barrier reset.
  • the visit count is updated.
  • the visit count is incremented by one to indicate that workitem x reached the barrier.
  • the sum of the updated visit count and the skip count is compared to the release threshold. If the sum is equal to or greater than the release threshold, then workitem x is the last workitem to arrive at the barrier, and the barrier is released at operation 618. Releasing the barrier, according to an embodiment, causes one or more count values to be reset and the blocked workitems to resume execution.
  • the release of the barrier causes the resetting of the visit count at operation 620 and resuming of all workitems blocked on barrier at operation 622.
  • the visit count is reset to 0. Note that resetting the visit count erases any effects of barrier arrive instructions that occurred earlier. However, resetting the visit count does not erase the effects of any barrier skip instructions that occurred earlier.
  • the resetting of only the visit count, as done in the event of a barrier release from a block, can be considered as a "part-reset" operation on the barrier.
  • the term "part-reset”, as used herein, conveys that the portion of the barrier that applies to workitems that have not issued a skip instruction is reset.
  • blocked workitems resume execution.
  • blocked workitems may be waiting on a semaphore, and the semaphore is reset so that the blocked workitems can resume execution.
  • a counting semaphore implemented in hardware or software, can be used to block workitems.
  • workitem x is blocked.
  • the blocking of workitem x may, according to an embodiment, be performed by causing the workitem to wait on a semaphore associated with the barrier.
  • Operation 619 represents the continuation of execution of workitem x upon release of the barrier, for example, by operations 618-622.
  • a barrier reset instruction may be implemented as a self-synchronizing instruction and may cause a synchronization point across all workitems.
  • a user may include one or more synchronization points to ensure synchronization at the reset instruction.
  • step 638 If it is determined at step 638, that the instruction is not a barrier reset instruction then, at operation 639, workitem x may continue execution. Following operation 639, processing may continue to operation 608, when the next synchronization instruction is encountered in the instruction stream.
  • instead of maintaining a separate skip count and release threshold for a barrier, only the release threshold may be maintained. For example, the release threshold may be decremented to reflect that a workitem has executed a barrier skip instruction. Then, when a barrier reset instruction is issued, the resetting of counts would include restoring the release threshold to its original value. In effect, according to this approach, workitems that execute a barrier skip may be considered as leaving the synchronization group.
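  • the count bookkeeping of operations 608-640 (visit count, skip count, release threshold, part-reset) can be collected into a single host-side C++ sketch. This is a single-process illustration in which a mutex and condition variable stand in for the hardware semaphore and registers described above; it is an assumption for exposition, not the patent's implementation, and the self-synchronizing aspect of reset is reduced to a comment.

```cpp
#include <condition_variable>
#include <mutex>

// Count-based sketch of the barrier semantics in method 600.
class SkipBarrier {
public:
    explicit SkipBarrier(int groupSize) : threshold_(groupSize) {}

    // Barrier wait (operations 612-622): count the visit, release the
    // barrier if the threshold is met, otherwise block (operation 616).
    void wait() {
        std::unique_lock<std::mutex> lk(m_);
        ++visits_;
        if (visits_ + skips_ >= threshold_) {
            release();                       // operations 618-622
        } else {
            unsigned gen = generation_;
            cv_.wait(lk, [&] { return gen != generation_; });
        }
    }

    // Barrier arrive: count toward the current instance only, without
    // blocking. Its effect is erased by the part-reset at release.
    void arrive() {
        std::lock_guard<std::mutex> lk(m_);
        ++visits_;
        if (visits_ + skips_ >= threshold_) release();
    }

    // Barrier skip (operations 630-636): exempt the caller from the
    // current and all subsequent instances. Skip effects survive the
    // part-reset at release.
    void skip() {
        std::lock_guard<std::mutex> lk(m_);
        ++skips_;
        if (visits_ + skips_ >= threshold_) release();
    }

    // Barrier reset (operations 638-640): restore the original
    // configuration. The self-synchronizing behavior described in the
    // text (pending instances complete before the reset takes effect)
    // is assumed by the caller here, not enforced by this sketch.
    void reset() {
        std::lock_guard<std::mutex> lk(m_);
        visits_ = 0;
        skips_ = 0;
    }

private:
    // "Part-reset" (operations 618-622): reset only the visit count --
    // erasing earlier arrive effects but not skip effects -- and wake
    // all workitems blocked on the barrier.
    void release() {
        visits_ = 0;
        ++generation_;
        cv_.notify_all();
    }

    std::mutex m_;
    std::condition_variable cv_;
    const int threshold_;   // release threshold / synchronization group size
    int visits_ = 0;        // visit count (waits and arrives this instance)
    int skips_ = 0;         // skip count (workitems that left the group)
    unsigned generation_ = 0;
};
```

  • in this sketch, the last counted workitem performs the part-reset and wakes the blocked workitems, matching operations 618-622, while skip effects persist across barrier instances until reset() is called.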
  • FIG. 7 is a block diagram illustration of a system for workitem synchronization in accordance with an embodiment.
  • an example heterogeneous computing system 700 can include one or more CPUs, such as CPU 701, and one or more GPUs, such as GPU 702.
  • Heterogeneous computing system 700 can also include system memory 703, persistent storage device 704, system bus 705, an input/output device 706, and a barrier synchronizer 709.
  • CPU 701 can include a commercially available control processor or a custom control processor.
  • CPU 701 executes the control logic that controls the operation of heterogeneous computing system 700.
  • CPU 701 can be a multi-core CPU, such as a multi-core CPU with two CPU cores 741 and 742.
  • CPU 701 in addition to any control circuitry, includes CPU cache memories 743 and 744 of CPU cores 741 and 742, respectively.
  • CPU cache memories 743 and 744 can be used to temporarily store instructions and/or parameter values during the execution of an application on CPU cores 741 and 742, respectively.
  • CPU cache memory 743 can be used to temporarily store one or more control logic instructions, values of variables, or values of constant parameters, from the system memory 703 during the execution of control logic instructions on CPU core 741.
  • CPU 701 can also include specialized vector instruction processing units.
  • CPU core 742 can include a Streaming SIMD Extensions (SSE) unit that can efficiently process vectored instructions.
  • SSE: Streaming SIMD Extensions
  • CPU 701 can include more or fewer CPU cores than in the example shown, and can also have either no cache memories, or more complex cache memory hierarchies.
  • GPU 702 can include a commercially available graphics processor or a custom-designed graphics processor. GPU 702, for example, can execute specialized code for computations such as graphics processing.
  • GPU 702 can be used to execute graphics functions such as graphics pipeline computations and rendering of image on a display.
  • GPU 702 includes a GPU global cache memory 710 and one or more compute units 712 and 713.
  • a graphics memory 707 can be included in, or coupled to, GPU 702.
  • Each of compute units 712 and 713 is associated with a GPU local memory, 714 and 715, respectively.
  • Each compute unit includes one or more GPU processing elements (PE).
  • PE: GPU processing element
  • compute unit 712 includes GPU processing elements 721 and 722
  • compute unit 713 includes GPU PEs 723 and 724.
  • Each GPU processing element 721, 722, 723, and 724 is associated with at least one private memory (PM) 731, 732, 733, and 734, respectively.
  • Each GPU PE can include one or more scalar and vector floating-point units.
  • the GPU PEs can also include special purpose units such as inverse-square root units and sine/cosine units.
  • GPU global cache memory 710 can be coupled to a system memory such as system memory 703, and/or graphics memory such as graphics memory 707.
  • System memory 703 can include at least one non-persistent memory such as dynamic random access memory (DRAM).
  • System memory 703 can store processing logic instructions, constant values and variable values during execution of portions of applications or other processing logic.
  • control logic and/or other processing logic of barrier synchronizer 709 can reside within system memory 703 during execution of barrier synchronizer 709 by CPU 701.
  • processing logic refers to control flow instructions, instructions for performing computations, and instructions for associated access to resources.
  • Persistent memory 704 includes one or more storage devices capable of storing digital data such as magnetic disk, optical disk, or flash memory. Persistent memory 704 can, for example, store at least parts of instruction logic of barrier synchronizer 709. At the startup of heterogeneous computing system 700, the operating system and other application software can be loaded into system memory 703 from persistent storage 704.
  • System bus 705 can include a Peripheral Component Interconnect (PCI) bus.
  • PCI: Peripheral Component Interconnect
  • System bus 705 can also include a network such as a local area network (LAN), along with the functionality to couple components, including components of heterogeneous computing system 700.
  • LAN: local area network
  • Input/output interface 706 includes one or more interfaces connecting user input/output devices such as keyboard, mouse, display and/or touch screen.
  • user input can be provided through a keyboard and mouse connected to user interface 706 of heterogeneous computing system 700.
  • the output of heterogeneous computing system 700 can be output to a display through user interface 706.
  • Graphics memory 707 is coupled to system bus 705 and to GPU 702. Graphics memory 707 is, in general, used to store data transferred from system memory 703 for fast access by the GPU. For example, the interface between GPU 702 and graphics memory 707 can be several times faster than the system bus interface 705.
  • Barrier synchronizer 709 includes logic to synchronize functions and processing logic on either or both GPU 702 and CPU 701. Barrier synchronizer 709 may be configured to synchronize workitems globally across groups of processors in a computer, in each individual processor, and/or within each processing element of a processor. Barrier synchronizer 709 is further described in relation to FIG. 8 below. A person of skill in the art will understand that barrier synchronizer can be implemented using software, firmware, hardware, or any combination thereof. When implemented in software, for example, barrier synchronizer 709 can be a computer program written in C or OpenCL, that when compiled and executing resides in system memory 703. In source code form and/or compiled executable form, barrier synchronizer 709 can be stored in persistent memory 704.
  • barrier synchronizer 709 is specified in a hardware description language such as Verilog, RTL, netlists, to enable ultimately configuring a manufacturing process through the generation of maskworks/photomasks to generate a hardware device embodying aspects of the invention described herein.
  • a hardware description language such as Verilog, RTL, netlists
  • heterogeneous computing system 700 can include more or fewer components than shown in FIG. 7.
  • heterogeneous computing system 700 can include one or more network interfaces, and/or software applications such as the OpenCL framework.
  • FIG. 8 is an illustration of barrier synchronizer 800, according to an embodiment.
  • Barrier synchronizer 800 includes a workitem blocking module 802, a barrier release module 804, a barrier workitem group module 806, a barrier skip module 808, and a barrier reset module 810. Moreover, barrier synchronizer 800 can include barrier registers 812. According to an embodiment, barrier synchronizer 800 is included in barrier synchronizer 709.
  • Workitem blocking module 802 operates to block one or more workitems on a barrier.
  • a barrier can be implemented using a semaphore (e.g., counting semaphore) and registers.
  • Workitem blocking may be implemented by causing blocked workitems to wait upon the semaphore.
  • the semaphore may be implemented in hardware or software. Workitems may be blocked when a barrier wait instruction is encountered.
  • Workitem blocking module can, for example, include processing logic to implement operations, including operation 616 of method 600.
  • Barrier release module 804 operates to release a barrier when a sufficient number of workitems have reached it.
  • a barrier can be implemented using a semaphore, and releasing the barrier may include releasing the semaphore.
  • Workitems can be released when a workitem encountering a barrier wait instruction turns out to be the last workitem needed to satisfy the number of workitems required to reach the barrier.
  • Barrier release module can, for example, include processing logic to implement operations, including operations 618-622 of method 600.
  • Barrier workitem group module 806 operates to keep track of the synchronization groups among the various executing workitems. Barrier workitem group module 806 may also operate to create and initialize the barriers according to group makeup. According to an embodiment, barrier workitem group module 806 can, for example, include processing logic to implement operations, including operations 602-606 of method 600.
  • Barrier skip module 808 operates to implement the barrier skip instructions.
  • barrier skip instruction would cause the skip count for the corresponding barrier to be updated.
  • barrier skip module 808 can include processing logic to implement operations, including operations 630-636 of method 600.
  • Barrier reset module 810 operates to implement the barrier reset instructions.
  • barrier reset instruction would cause the visit count and skip counts related to the barrier to be reset, and/or the release threshold adjusted.
  • Barrier reset module 810 can, for example, include processing logic to implement operations, including operations 638-640 of method 600.
  • Barrier registers 812 includes hardware and/or software registers that relate to the barriers.
  • Barrier registers 812 can include barrier records 814, each comprising a plurality of fields.
  • An exemplary barrier record 814 includes a barrier identifier 822, a lock 824, a blocked workitem count 826, an arrived workitem count 828, a skipped workitem count 830, and threshold 832.
  • Barrier identifier 822 can be a pointer, index, or other identifier to uniquely identify the memory location or register of the barrier.
  • Lock 824 may be a pointer or reference to a semaphore or other entity upon which the processes are blocked.
  • Blocked workitem count 826 is the number of workitems that are blocked upon the barrier.
  • Arrived workitem count 828 is the number of workitems that have issued a barrier arrive instruction.
  • Skipped workitem count 830 is the number of workitems that have issued a barrier skip instruction.
  • Threshold 832 is the release threshold, or group size of the synchronization group.
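  • taken together, an exemplary barrier record 814 can be rendered as a plain struct. The sketch below paraphrases the field names from the text; the 32-bit widths are assumptions for illustration.

```cpp
#include <cstdint>

// Plain-struct rendering of an exemplary barrier record 814.
struct BarrierRecord {
    std::uint32_t barrier_id;     // barrier identifier 822: pointer/index/id
    std::uint32_t lock;           // lock 824: reference to the blocking semaphore
    std::uint32_t blocked_count;  // blocked workitem count 826
    std::uint32_t arrived_count;  // arrived workitem count 828
    std::uint32_t skipped_count;  // skipped workitem count 830
    std::uint32_t threshold;      // threshold 832: release threshold / group size
};
```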

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)
PCT/US2012/062768 2011-11-03 2012-10-31 Method and system for workitem synchronization Ceased WO2013066988A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201280053875.7A CN103917959B (zh) 2011-11-03 2012-10-31 Method and system for workitem synchronization
JP2014540034A JP5984952B2 (ja) 2011-11-03 2012-10-31 Method and system for workitem synchronization
EP12784403.3A EP2774037B1 (en) 2011-11-03 2012-10-31 Method and system for workitem synchronization
KR1020147012038A KR101871961B1 (ko) 2011-11-03 2012-10-31 Method and system for workitem synchronization

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/288,833 2011-11-03
US13/288,833 US8607247B2 (en) 2011-11-03 2011-11-03 Method and system for workitem synchronization

Publications (1)

Publication Number Publication Date
WO2013066988A1 true WO2013066988A1 (en) 2013-05-10

Family

ID=47172902

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/062768 Ceased WO2013066988A1 (en) 2011-11-03 2012-10-31 Method and system for workitem synchronization

Country Status (6)

Country Link
US (1) US8607247B2 (en)
EP (1) EP2774037B1 (en)
JP (1) JP5984952B2 (en)
KR (1) KR101871961B1 (en)
CN (1) CN103917959B (en)
WO (1) WO2013066988A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9304940B2 (en) 2013-03-15 2016-04-05 Intel Corporation Processors, methods, and systems to relax synchronization of accesses to shared memory

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007002855A2 (en) * 2005-06-29 2007-01-04 Neopath Networks, Inc. Parallel filesystem traversal for transparent mirroring of directories and files
US9092272B2 (en) * 2011-12-08 2015-07-28 International Business Machines Corporation Preparing parallel tasks to use a synchronization register
US10585801B2 (en) 2012-11-26 2020-03-10 Advanced Micro Devices, Inc. Prefetch kernels on a graphics processing unit
JP5994601B2 (ja) * 2012-11-27 2016-09-21 Fujitsu Ltd Parallel computer, parallel computer control program, and parallel computer control method
US9697003B2 (en) 2013-06-07 2017-07-04 Advanced Micro Devices, Inc. Method and system for yield operation supporting thread-like behavior
US10402234B2 (en) * 2016-04-15 2019-09-03 Nec Corporation Fine-grain synchronization in data-parallel jobs
US10402235B2 (en) * 2016-04-15 2019-09-03 Nec Corporation Fine-grain synchronization in data-parallel jobs for distributed machine learning
US10223436B2 (en) * 2016-04-27 2019-03-05 Qualcomm Incorporated Inter-subgroup data sharing
US10929944B2 (en) 2016-11-23 2021-02-23 Advanced Micro Devices, Inc. Low power and low latency GPU coprocessor for persistent computing
US20180239532A1 (en) * 2017-02-23 2018-08-23 Western Digital Technologies, Inc. Techniques for performing a non-blocking control sync operation
US11353868B2 (en) 2017-04-24 2022-06-07 Intel Corporation Barriers and synchronization for machine learning at autonomous machines
AU2018289605B2 (en) * 2017-06-22 2023-04-27 Icat Llc High throughput processors
GB2569271B (en) * 2017-10-20 2020-05-13 Graphcore Ltd Synchronization with a host processor
GB2569098B (en) * 2017-10-20 2020-01-08 Graphcore Ltd Combining states of multiple threads in a multi-threaded processor
GB2569273B (en) * 2017-10-20 2020-01-01 Graphcore Ltd Synchronization in a multi-tile processing arrangement
GB2569274B (en) 2017-10-20 2020-07-15 Graphcore Ltd Synchronization amongst processor tiles
DE102018205392A1 (de) * 2018-04-10 2019-10-10 Robert Bosch Gmbh Method and device for error handling in communication between distributed software components
DE102018205390A1 (de) * 2018-04-10 2019-10-10 Robert Bosch Gmbh Method and device for error handling in communication between distributed software components
US10824481B2 (en) * 2018-11-13 2020-11-03 International Business Machines Corporation Partial synchronization between compute tasks based on threshold specification in a computing system
US11449339B2 (en) * 2019-09-27 2022-09-20 Red Hat, Inc. Memory barrier elision for multi-threaded workloads
CN112749019B (zh) * 2019-10-29 2025-08-29 Nvidia Corp High-performance synchronization mechanism for coordinating operations on a computer system
US11080051B2 (en) * 2019-10-29 2021-08-03 Nvidia Corporation Techniques for efficiently transferring data to a processor
US11409579B2 (en) * 2020-02-24 2022-08-09 Intel Corporation Multiple independent synchonization named barrier within a thread group
US11231881B2 (en) * 2020-04-02 2022-01-25 Dell Products L.P. Raid data storage device multi-step command coordination system
US12314760B2 (en) * 2021-09-27 2025-05-27 Advanced Micro Devices, Inc. Garbage collecting wavefront
US11816349B2 (en) 2021-11-03 2023-11-14 Western Digital Technologies, Inc. Reduce command latency using block pre-erase
US20230289242A1 (en) * 2022-03-10 2023-09-14 Nvidia Corporation Hardware accelerated synchronization with asynchronous transaction support
CN114896079B (zh) * 2022-05-26 2023-11-24 Shanghai Biren Intelligent Technology Co Ltd Instruction execution method, processor, and electronic device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090037707A1 (en) * 2007-08-01 2009-02-05 Blocksome Michael A Determining When a Set of Compute Nodes Participating in a Barrier Operation on a Parallel Computer are Ready to Exit the Barrier Operation

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5930807A (en) * 1997-04-23 1999-07-27 Sun Microsystems Apparatus and method for fast filtering read and write barrier operations in garbage collection system
JP3810631B2 (ja) * 2000-11-28 2006-08-16 Fujitsu Ltd Recording medium storing information processing program
JP4448784B2 (ja) 2005-03-15 2010-04-14 Hitachi Ltd Synchronization method and program for parallel computer
US7587555B2 (en) * 2005-11-10 2009-09-08 Hewlett-Packard Development Company, L.P. Program thread synchronization
US7555607B2 (en) * 2005-11-10 2009-06-30 Hewlett-Packard Development Company, L.P. Program thread syncronization for instruction cachelines
US7660961B2 (en) * 2007-04-03 2010-02-09 Sun Microsystems, Inc. Concurrent evacuation of the young generation
KR101458028B1 (ko) * 2007-05-30 2014-11-04 Samsung Electronics Co Ltd Parallel processing apparatus and method
US8140773B2 (en) * 2007-06-27 2012-03-20 Bratin Saha Using ephemeral stores for fine-grained conflict detection in a hardware accelerated STM
US8719514B2 (en) * 2007-06-27 2014-05-06 Intel Corporation Software filtering in a transactional memory system
JP2009176116A (ja) * 2008-01-25 2009-08-06 Univ Waseda Multiprocessor system and synchronization method for multiprocessor system
US20100281082A1 (en) * 2009-04-30 2010-11-04 Tatu Ylonen Oy Ltd Subordinate Multiobjects
JP5304194B2 (ja) * 2008-11-19 2013-10-02 Fujitsu Ltd Barrier synchronization device, barrier synchronization system, and control method of barrier synchronization device
US8370577B2 (en) * 2009-06-26 2013-02-05 Microsoft Corporation Metaphysically addressed cache metadata
US8229907B2 (en) * 2009-06-30 2012-07-24 Microsoft Corporation Hardware accelerated transactional memory system with open nested transactions
US8316194B2 (en) * 2009-12-15 2012-11-20 Intel Corporation Mechanisms to accelerate transactions using buffered stores
US8402218B2 (en) * 2009-12-15 2013-03-19 Microsoft Corporation Efficient garbage collection and exception handling in a hardware accelerated transactional memory system
US8280866B2 (en) * 2010-04-12 2012-10-02 Clausal Computing Oy Monitoring writes using thread-local write barrier buffers and soft synchronization
US20110264880A1 (en) * 2010-04-23 2011-10-27 Tatu Ylonen Oy Ltd Object copying with re-copying concurrently written objects
US9069545B2 (en) * 2011-07-18 2015-06-30 International Business Machines Corporation Relaxation of synchronization for iterative convergent computations

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090037707A1 (en) * 2007-08-01 2009-02-05 Blocksome Michael A Determining When a Set of Compute Nodes Participating in a Barrier Operation on a Parallel Computer are Ready to Exit the Barrier Operation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"ATI Stream Computing OpenCL Programming Guide", 1 June 2010, ADVANCED MICRO DEVICES, article "ATI Stream Computing OpenCL Programming Guide", XP055025523 *
JAYANTH GUMMARAJU ET AL: "Efficient Implementation of GPGPU Synchronization Primitives on CPUs", CF'10, MAY 17-19, 2010, BERTINORO, ITALY., 17 May 2010 (2010-05-17), pages 85 - 86, XP055047061, Retrieved from the Internet <URL:http://delivery.acm.org/10.1145/1790000/1787295/p85-gummaraju.pdf> [retrieved on 20121207] *
SHIVALI AGARWAL ET AL: "Distributed Generalized Dynamic Barrier Synchronization", 2 January 2011, DISTRIBUTED COMPUTING AND NETWORKING, SPRINGER BERLIN HEIDELBERG, BERLIN, HEIDELBERG, PAGE(S) 143 - 154, ISBN: 978-3-642-17678-4, XP019159068 *
SHUCAI XIAO ET AL: "Inter-block GPU communication via fast barrier synchronization", PARALLEL&DISTRIBUTED PROCESSING (IPDPS), 2010 IEEE INTERNATIONAL SYMPOSIUM ON, IEEE, PISCATAWAY, NJ, USA, 19 April 2010 (2010-04-19), pages 1 - 12, XP031950238, ISBN: 978-1-4244-6442-5, DOI: 10.1109/IPDPS.2010.5470477 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9304940B2 (en) 2013-03-15 2016-04-05 Intel Corporation Processors, methods, and systems to relax synchronization of accesses to shared memory
US10235175B2 (en) 2013-03-15 2019-03-19 Intel Corporation Processors, methods, and systems to relax synchronization of accesses to shared memory

Also Published As

Publication number Publication date
CN103917959A (zh) 2014-07-09
US20130117750A1 (en) 2013-05-09
KR20140088550A (ko) 2014-07-10
US8607247B2 (en) 2013-12-10
KR101871961B1 (ko) 2018-08-02
EP2774037A1 (en) 2014-09-10
JP5984952B2 (ja) 2016-09-06
CN103917959B (zh) 2017-11-14
EP2774037B1 (en) 2019-09-25
JP2014532937A (ja) 2014-12-08

Similar Documents

Publication Publication Date Title
EP2774037B1 (en) Method and system for workitem synchronization
US10467013B2 (en) Method and system for yield operation supporting thread-like behavior
US9424099B2 (en) Method and system for synchronization of workitems with divergent control flow
US11803380B2 (en) High performance synchronization mechanisms for coordinating operations on a computer system
US11847508B2 (en) Convergence among concurrently executing threads
US20140157287A1 (en) Optimized Context Switching for Long-Running Processes
US10915364B2 (en) Technique for computational nested parallelism
JP5701487B2 (ja) Indirect function call instructions in a synchronized parallel thread processor
KR101759266B1 (ko) Mapping processing logic having data-parallel threads across processors
WO2021000282A1 (en) System and architecture of pure functional neural network accelerator
US9612863B2 (en) Hardware device for accelerating the execution of a systemC simulation in a dynamic manner during the simulation
CN112749019B (zh) High-performance synchronization mechanism for coordinating operations on a computer system
US20150379172A1 (en) Device and method for accelerating the update phase of a simulation kernel
US20120151145A1 (en) Data Driven Micro-Scheduling of the Individual Processing Elements of a Wide Vector SIMD Processing Unit
Ashley-Rollman et al. Simulating multi-million-robot ensembles
KR20210091817A (ko) Merged data path for triangle and box intersection testing in ray tracing
US20250165292A1 (en) Data processor
Cheramangalath et al. GPU Architecture and Programming Challenges
Dinavahi et al. Many-Core Processors
CN120693601A (zh) Graphics work streaming techniques for distributed architectures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12784403

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2014540034

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20147012038

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2012784403

Country of ref document: EP