WO2011028986A2 - Processing unit that enables asynchronous task dispatch - Google Patents

Processing unit that enables asynchronous task dispatch

Info

Publication number
WO2011028986A2
Authority
WO
WIPO (PCT)
Prior art keywords
task
tasks
type
processing unit
computer
Prior art date
Application number
PCT/US2010/047786
Other languages
English (en)
Other versions
WO2011028986A3 (fr)
Inventor
Michael Mantor
Rex McCrary
Original Assignee
Advanced Micro Devices, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=43501178&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=WO2011028986(A2). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Advanced Micro Devices, Inc.
Priority to EP10779865.4A (EP2473920B8)
Priority to IN2726DEN2012 (IN2012DN02726A)
Priority to JP2012528081A (JP5791608B2)
Priority to CN201080049174.7A (CN102640115B)
Publication of WO2011028986A2
Publication of WO2011028986A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3885 Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/461 Saving or restoring of program or task context
    • G06F9/463 Program control block organisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5017 Task decomposition

Definitions

  • the present invention is generally directed to computing operations performed in computer systems. More particularly, the present invention is directed to a processing unit, such as a graphics-processing unit (GPU), that performs computing operations and applications thereof.
  • a GPU is a complex integrated circuit that is adapted to perform data-parallel computing tasks, such as graphics-processing tasks.
  • a GPU may, for example, execute graphics-processing tasks required by an end-user application, such as a video-game application.
  • the GPU may be a discrete (i.e., separate) device and/or package or may be included in the same device and/or package as another processor (e.g., a central processing unit (CPU)).
  • GPUs are frequently integrated into routing or bridge devices such as, for example, Northbridge devices.
  • End-user applications typically interface with a GPU through an application-programming interface (API).
  • An API allows the end-user application to output graphics data and commands in a standardized format, rather than in a format that is dependent on the GPU.
  • Several types of APIs are commercially available, including DirectX® developed by Microsoft Corporation of Redmond, Washington and OpenGL® promulgated by the Khronos Group.
  • the API communicates with a driver.
  • the driver translates standard code received from the API into a native format of instructions understood by the GPU.
  • the driver is typically written by the manufacturer of the GPU.
  • the GPU then executes the instructions from the driver.
  • a graphics-processing task performed by a GPU typically involves complex mathematical computations, such as matrix and vector operations.
  • a GPU may execute a plurality of different threads (sequence of instructions).
  • Each thread may comprise a shader program, such as a geometry shader, a pixel shader, a vertex shader, or the like.
  • Each thread is typically associated with a set of state data (such as texture handles, shader constants, transform matrices, or the like) that is locally stored in data-storage units of the GPU.
  • the locally stored state data is called a context.
  • the GPU includes an array of processing elements, called a shader core.
  • the array of processing elements is organized into single-instruction, multiple-data (SIMD) devices.
  • Multiple threads may be issued to the shader core at the same time, with the data needed to execute each thread (e.g., shader program) being distributed in parallel to different processing elements of the shader core.
  • the different processing elements may then perform operations on the data in parallel.
  • a GPU can perform the complex mathematical computations required for a graphics-processing task more quickly than a typical central-processing unit (CPU).
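
For illustration, the SIMD organization described in the preceding bullets can be modeled in a few lines of C++. This is a toy sketch with hypothetical names, not the patent's hardware: one instruction stream is applied across many lanes, each lane standing in for one processing element holding its own data element.

```cpp
#include <array>
#include <cstdio>

// Toy SIMD device: one instruction stream applied across kLanes lanes,
// each lane (processing element) holding its own data element.
constexpr int kLanes = 8;

void simd_multiply_add(std::array<float, kLanes>& lanes, float scale, float bias) {
    // Conceptually, every lane executes this single instruction at the same time.
    for (int lane = 0; lane < kLanes; ++lane)
        lanes[lane] = lanes[lane] * scale + bias;
}

int main() {
    std::array<float, kLanes> vertex_x{0, 1, 2, 3, 4, 5, 6, 7};
    simd_multiply_add(vertex_x, 2.0f, 0.5f);  // one instruction, eight data elements
    for (float x : vertex_x) std::printf("%.1f ", x);
    std::printf("\n");
}
```
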
  • To provide tasks to a GPU, an operating-system (OS) scheduler stores the tasks in a command buffer.
  • a conventional GPU processes one command buffer at a time.
  • the OS scheduler serially places tasks in the command buffer, and the GPU typically processes the tasks in the order in which they are placed in the command buffer.
  • the GPU may process tasks out of the order in which they were placed in the command buffer. For example, the GPU may interrupt the execution of a first task to execute a more-important (e.g., low-latency) task that was placed in the command buffer after the first task.
  • a conventional GPU performs a context switch. That is, the state data associated with the threads of the first task are swapped into back-up storage units maintained by the conventional GPU, and new state data associated with the threads (e.g., shader programs) of the more-important (e.g., low-latency) task are retrieved and placed in the data-storage units of the shader core.
  • the shader core executes the threads (e.g., shader programs) of the more-important (e.g., low-latency) task based on the new state data in the data-storage units.
  • the shader core can resume executing the threads of the first task.
  • context switching allows a GPU to process tasks out of the order in which they were placed in the command buffer
  • context switching is problematic for several reasons. As an initial matter, a substantial amount of time is required to perform a context switch, thereby limiting the performance of the GPU.
  • context switching requires additional local memory (e.g., back-up storage units) to store the context that is being switched. The additional local memory takes up precious chip area, resulting in a larger GPU.
  • context switching makes the GPU ineffective at processing low-latency, high-priority tasks.
  • To prepare the shader core for executing a low-latency, high-priority task, a conventional GPU must perform a context switch.
  • The time associated with this context switch (e.g., hundreds of clock cycles) delays the execution of the low-latency, high-priority task.
  • Embodiments of the present invention meet the above-described needs by providing methods, apparatuses, and systems for enabling asynchronous task dispatch and applications thereof.
  • an embodiment of the present invention provides a processing unit that includes a plurality of virtual engines and a shader core.
  • the plurality of virtual engines is configured to (i) receive, from an operating system (OS), a plurality of tasks substantially in parallel with each other and (ii) load a set of state data associated with each of the plurality of tasks.
  • the shader core is configured to execute the plurality of tasks substantially in parallel based on the set of state data associated with each of the plurality of tasks.
  • the processing unit may also include a scheduling module that schedules the plurality of tasks to be issued to the shader core.
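
A minimal software model of this arrangement is sketched below. All type and function names (Task, VirtualEngine, shader_core_execute) are hypothetical illustrations rather than the patent's implementation; the point is that state data travels with each task, so the core can interleave work from several engines without the save/restore traffic of a context switch.

```cpp
#include <cstdio>
#include <deque>
#include <string>
#include <vector>

// Hypothetical model: per-task state data travels with the task,
// so the shader core never performs a context switch.
struct StateData { std::string shader_constants; };  // stand-in for a real context

struct Task {
    std::string name;
    StateData state;  // loaded by the virtual engine that received the task
};

struct VirtualEngine {
    std::deque<Task> queue;  // tasks received from the OS, in parallel with other engines
    void receive(Task t) { queue.push_back(std::move(t)); }
};

// The single shader core draws from every engine, each task carrying its own state.
void shader_core_execute(std::vector<VirtualEngine>& engines) {
    for (bool busy = true; busy;) {
        busy = false;
        for (auto& e : engines) {  // interleave engines: no context switch needed
            if (e.queue.empty()) continue;
            Task t = std::move(e.queue.front());
            e.queue.pop_front();
            std::printf("executing %s with state [%s]\n",
                        t.name.c_str(), t.state.shader_constants.c_str());
            busy = true;
        }
    }
}

int main() {
    std::vector<VirtualEngine> engines(2);
    engines[0].receive({"low-latency compute", {"ctx A"}});
    engines[1].receive({"standard graphics", {"ctx B"}});
    shader_core_execute(engines);  // both streams proceed without swapping contexts
}
```
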
  • the processing unit is defined in software.
  • a computer-program product includes a computer-readable storage medium containing instructions which, if executed on a computing device, define the processing unit.
  • the processing unit is included in a computing system.
  • the computing system includes a memory, a first processing unit, a second processing unit, and a bus coupled to the memory, the first processing unit, and the processing unit.
  • An example computing system may include, but is not limited to, a supercomputer, a desktop computer, a laptop computer, a video-game console, an embedded device, a handheld device (e.g., a mobile telephone, smart phone, MP3 player, a camera, a GPS device, or the like), or some other device that includes or is configured to include a processing unit.
  • Another embodiment of the present invention provides a computer-implemented method for processing tasks in a processing unit.
  • This computer-implemented method includes several operations. In a first operation, a plurality of tasks are received, from an operating system (OS), in parallel with each other. In a second operation, a set of state data associated with each of the plurality of tasks is loaded. In a third operation, the plurality of tasks are executed substantially in parallel in a shader core based on the set of state data associated with each of the plurality of tasks.
  • This computer-implemented method may also include scheduling the plurality of tasks to be issued to the shader core.
  • A further embodiment of the present invention provides a computer-implemented method for providing tasks to a processing unit.
  • This method includes several operations.
  • In a first operation, a plurality of tasks are received from one or more applications, wherein each task includes an indication of a priority type.
  • In a second operation, the processing unit is provided with the plurality of tasks and the indication of the priority type associated with each task.
  • instructions stored on a computer-readable storage medium of a computer-program product may cause a computing device to perform this method, if the instructions are executed by the computing device.
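
The application-facing half of this method might look roughly as follows. This is a sketch with hypothetical names (PriorityType, AppTask, submit_to_processing_unit); the patent does not prescribe an API. Each task carries an explicit priority-type indication when it is handed off for scheduling.

```cpp
#include <cstdio>
#include <vector>

enum class PriorityType { LowLatency, Standard, Background };  // hypothetical task types

struct AppTask {
    const char* description;
    PriorityType priority;  // the indication the application attaches to each task
};

// Stand-in for the OS-facing submission call: the processing unit receives
// both the task and its priority indication.
void submit_to_processing_unit(const AppTask& task) {
    std::printf("submitted '%s' (priority %d)\n",
                task.description, static_cast<int>(task.priority));
}

int main() {
    std::vector<AppTask> tasks = {
        {"physics step", PriorityType::LowLatency},
        {"frame render", PriorityType::Standard},
    };
    for (const auto& t : tasks) submit_to_processing_unit(t);
}
```
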
  • FIG. 1 is a block diagram illustrating an example computer system in accordance with an embodiment of the present invention.
  • FIG. 2 is a block diagram of an example GPU in accordance with an embodiment of the present invention.
  • FIGS. 3A and 3B are block diagrams illustrating example work flows for issuing tasks to virtual engines of a GPU in accordance with an embodiment of the present invention.
  • FIG. 4 illustrates a more-detailed example work flow for issuing tasks to virtual engines of a GPU in accordance with an embodiment of the present invention.
  • FIG. 5 depicts a block diagram of an example computer system in which an embodiment of the present invention may be implemented.
  • Embodiments of the present invention provide a processing unit that enables asynchronous task dispatch and applications thereof.
  • References to "one embodiment," "an embodiment," "an example embodiment," etc. indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
  • a processing unit includes a plurality of virtual engines embodied on a single shader core.
  • Each virtual engine is configured to receive data-parallel processing tasks (e.g., graphics-processing tasks and general-compute tasks) and independently execute these tasks on the single shader core.
  • the processing unit may execute two or more different streams of processing tasks (such as a first stream of low-latency processing tasks and a second stream of standard graphics-processing tasks) without requiring a context switch. Executing two or more different streams of processing tasks provides the low-latency benefits of context switching without the overhead associated with stopping and draining the processing unit of data.
  • embodiments of the present invention enable multiple contexts to exist and be executed (substantially) simultaneously in a single shader core.
  • the GPU processes a plurality of command buffers.
  • Low-latency processing tasks may, for example, be placed in a first command buffer
  • standard graphics-processing tasks may, for example, be placed in a second command buffer.
  • a first virtual engine of the GPU retrieves the low-latency processing tasks
  • a second virtual engine of the GPU retrieves the standard graphics-processing tasks. Tasks from each virtual engine are then issued to a single shader core substantially in parallel with each other.
  • resources of the shader core are partitioned in space and/or time.
  • a first (e.g., low-latency) task from a first virtual engine is issued to a first subset of processing elements (SIMDs) of the shader core
  • a second (e.g., standard graphics) task from a second virtual engine is issued to a second subset of processing elements (SIMDs) of the shader core.
  • the first and second tasks share a percentage of time of the processing elements (SIMDs) of the shader core.
  • the GPU includes a scheduling module to schedule the tasks from the two or more different virtual engines for execution on the shader core.
  • Sharing the resources of the GPU to provide a plurality of virtual engines in accordance with embodiments of the present invention improves the use of the GPU resources, especially on large chips.
  • Two or more streams of tasks can be issued to a single shader core, enabling the GPU to efficiently use computational and input/output facilities.
  • shared resources of the GPU shader core can be divided between concurrent tasks based on demand, priority, and/or preset limits, while temporarily enabling any one task to (substantially) fully consume the resources of the GPU.
  • FIG. 1 is a block diagram of a computing system 100 according to an embodiment.
  • Computing system 100 includes a CPU 102, a GPU 110, and may optionally include a coprocessor 112.
  • CPU 102 and GPU 110 are shown as separate blocks. This is for illustrative purposes only, and not limitation. A person skilled in the relevant art(s) will understand that CPU 102 and GPU 110 may be included in separate packages or may be combined in a single package or integrated circuit.
  • Computing system 100 also includes a system memory 104 that may be accessed by CPU 102, GPU 110, and coprocessor 112.
  • computing system 100 may comprise a supercomputer, a desktop computer, a laptop computer, a video-game console, an embedded device, a handheld device (e.g., a mobile telephone, smart phone, MP3 player, a camera, a GPS device, or the like), or some other device that includes or is configured to include a GPU.
  • GPU 110 assists CPU 102 by performing certain special functions (such as, graphics-processing tasks and data-parallel, general-compute tasks), usually faster than CPU 102 could perform them in software.
  • GPU 110 includes a plurality of virtual engines that share resources of a single shader core. In this way, the plurality of virtual engines of GPU 110 can execute a plurality of tasks substantially in parallel.
  • GPU 110 may be integrated into a chipset and/or CPU 102. Additional details of GPU 110 are provided below.
  • Coprocessor 112 also assists CPU 102.
  • Coprocessor 112 may comprise, but is not limited to, a floating point coprocessor, a GPU, a networking coprocessor, and other types of coprocessors and processors as would be apparent to a person skilled in the relevant art(s).
  • Bus 114 may be any type of bus used in computer systems, including a peripheral component interface (PCI) bus, an accelerated graphics port (AGP) bus, a PCI Express (PCIE) bus, or another type of bus whether presently available or developed in the future.
  • computing system 100 further includes local memory 106 and local memory 108.
  • Local memory 106 is coupled to GPU 110 and may also be coupled to bus 114.
  • Local memory 108 is coupled to coprocessor 112 and may also be coupled to bus 114.
  • Local memories 106 and 108 are available to GPU 110 and coprocessor 112 respectively in order to provide faster access to certain data (such as data that is frequently used) than would be possible if the data were stored in system memory 104.
  • GPU 110 and coprocessor 112 decode instructions in parallel with CPU 102 and execute only those instructions intended for them.
  • CPU 102 sends instructions intended for GPU 110 and coprocessor 112 to respective command buffers.
  • computing system 100 may also include or be coupled to a display device (e.g., cathode-ray tube, liquid crystal display, plasma display, or the like).
  • the display device is used to display content to a user (such as, when computing system 100 comprises a computer, video-game console, or handheld device).
  • GPU 110 includes a plurality of virtual engines embodied on a shader core. Each virtual engine is configured to execute a stream of processing tasks provided by an OS scheduler, wherein each processing task of a given stream may include a plurality of individual processing threads. Because GPU 110 includes a plurality of virtual engines, GPU 110 can execute the different streams of processing tasks from the OS scheduler without requiring a context switch. In fact, in embodiments GPU 110 (substantially) simultaneously executes tasks from a plurality of streams, which correspond to a plurality of different contexts, in a single shader core by sharing resources of the shader core among the tasks.
  • FIG. 2 is a block diagram illustrating example hardware components of GPU 110. As illustrated in FIG. 2, GPU 110 includes a command processor 230, input logic (including a vertex analyzer 208, scan converter 212, and arbitration logic 222), a shader core 214, output logic 224, and a memory system 210. Each of these components is described below.
  • Command processor 230 receives tasks (e.g., graphics-processing and general-compute tasks) from one or more command buffers filled by the OS scheduler. As illustrated in FIG. 2, command processor 230 includes a plurality of virtual engines that share resources of GPU 110. The different virtual engines of command processor 230 process different types of tasks.
  • command processor 230 includes a first background engine 202A, a second background engine 202B, a real-time low-latency engine 202C, a primary 3D engine 202D, and a low-latency 3D engine 202E.
  • Background engines 202 process low-priority tasks. It is to be appreciated, however, that command processor 230 may include other types of virtual engines. Background engines 202 take over the resources of GPU 110 only when no other virtual engines are using the resources of GPU 110.
  • Real-time low-latency engine 202C has priority access to the resources of GPU 110 in order to process high-priority tasks.
  • Primary 3D engine 202D processes standard graphics-processing tasks, and low-latency 3D engine 202E processes high-priority graphics-processing tasks.
  • Low-latency 3D engine 202E has priority access to the graphics-processing resources of GPU 110.
  • Input logic arbitrates which tasks are issued to shader core 214.
  • input logic implements a software routine to schedule the tasks for execution in shader core 214 based on the availability of the resources of shader core 214 and the relative priority of the various tasks.
  • input logic includes graphics pre-processing logic (which prepares graphics- processing tasks for issuance to shader core 214) and arbitration logic 222 (which provides tasks to shader core 214).
  • Graphics pre-processing logic includes vertex analyzer 208 and scan converter 212.
  • Tasks from primary 3D engine 202D and low-latency 3D engine 202E are sent to the graphics pre-processing logic.
  • First-in, first-out (FIFO) buffer 204A receives the tasks from primary 3D engine 202D
  • FIFO buffer 204B receives the tasks from low-latency 3D engine 202E.
  • Multiplexer 206 provides tasks from one of FIFO buffers 204 to vertex analyzer 208.
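
A rough sketch of this FIFO-and-multiplexer stage is given below. The names are hypothetical, and the selection policy (low-latency work wins whenever its FIFO holds entries) is an assumption, since the text leaves the policy open.

```cpp
#include <cstdio>
#include <optional>
#include <queue>
#include <string>

using GraphicsTask = std::string;

// Two FIFOs corresponding to FIFO buffers 204A (primary 3D) and 204B (low-latency 3D).
std::queue<GraphicsTask> primary_fifo, low_latency_fifo;

// The multiplexer selects which FIFO feeds the vertex analyzer next.
// Assumed policy (the patent leaves it open): low-latency work wins ties.
std::optional<GraphicsTask> mux_select() {
    auto take = [](std::queue<GraphicsTask>& q) {
        GraphicsTask t = std::move(q.front());
        q.pop();
        return t;
    };
    if (!low_latency_fifo.empty()) return take(low_latency_fifo);
    if (!primary_fifo.empty()) return take(primary_fifo);
    return std::nullopt;  // nothing to issue this cycle
}

int main() {
    primary_fifo.push("draw terrain");
    low_latency_fifo.push("draw HUD overlay");
    while (auto t = mux_select()) std::printf("to vertex analyzer: %s\n", t->c_str());
}
```
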
  • Vertex analyzer 208 identifies shader programs associated with a graphics-processing and/or general-compute task and schedules when each shader program can be launched in shader core 214 based on input and output data that will be available. In addition to scheduling shader programs for launch, vertex analyzer 208 also generates pointers to a vertex buffer and includes connectivity data. The pointers are used to read vertices from a vertex buffer. If a vertex has already been processed and is stored in the vertex buffer, vertex analyzer 208 may read that vertex from the vertex buffer, so that a vertex is only processed one time.
  • the connectivity data specifies how vertices fit together to make a primitive (e.g., triangle), so that the primitive can be rasterized properly.
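
The vertex-reuse behavior can be sketched as a small cache keyed by vertex index (illustrative only, not the hardware's actual reuse mechanism): connectivity data is simply a list of indices, and a vertex referenced by several primitives is processed once.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

struct Vertex { float x, y; };

// Expensive per-vertex work (e.g., a vertex shader) we want to run once per vertex.
Vertex process_vertex(const Vertex& v) { return {v.x * 2.0f, v.y * 2.0f}; }

int main() {
    // Vertex buffer plus connectivity data: indices describing two triangles
    // that share an edge (vertices 1 and 2 appear in both primitives).
    std::vector<Vertex> vertex_buffer = {{0, 0}, {1, 0}, {0, 1}, {1, 1}};
    std::vector<int> connectivity = {0, 1, 2, /* second triangle: */ 1, 3, 2};

    std::unordered_map<int, Vertex> processed;  // reuse cache, keyed by index
    for (int idx : connectivity) {
        if (processed.count(idx)) continue;      // already processed: reuse it
        processed[idx] = process_vertex(vertex_buffer[idx]);
        std::printf("processed vertex %d once\n", idx);
    }
    std::printf("6 references, %zu vertex-shader runs\n", processed.size());
}
```
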
  • Vertex analyzer 208 sends graphics-processing tasks to scan converter 212 and sends general-compute tasks to arbitration logic 222.
  • Scan converter 212 traverses the primitives to determine pixels to be processed by shader core 214.
  • Scan converter 212 then sends the pixels to arbitration logic 222.
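
A minimal scan-conversion sketch follows (illustrative only; real hardware is considerably more elaborate). It walks a triangle's bounding box and emits the pixel centers covered by the primitive, i.e., the pixels that would be handed on toward shader core 214.

```cpp
#include <algorithm>
#include <cstdio>

struct P { float x, y; };

// Signed area test: > 0 when (x, y) lies to the left of edge a->b.
float edge(const P& a, const P& b, float x, float y) {
    return (b.x - a.x) * (y - a.y) - (b.y - a.y) * (x - a.x);
}

// Minimal scan conversion: walk the triangle's bounding box and emit the
// pixel centers that fall inside all three edges.
void scan_convert(const P& v0, const P& v1, const P& v2) {
    int x0 = (int)std::min({v0.x, v1.x, v2.x}), x1 = (int)std::max({v0.x, v1.x, v2.x});
    int y0 = (int)std::min({v0.y, v1.y, v2.y}), y1 = (int)std::max({v0.y, v1.y, v2.y});
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            float cx = x + 0.5f, cy = y + 0.5f;  // sample at the pixel center
            if (edge(v0, v1, cx, cy) >= 0 && edge(v1, v2, cx, cy) >= 0 &&
                edge(v2, v0, cx, cy) >= 0)
                std::printf("pixel (%d,%d) -> shader core\n", x, y);
        }
}

int main() { scan_convert({0, 0}, {4, 0}, {0, 4}); }
```
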
  • Arbitration logic 222 includes a plurality of multiplexers to provide the tasks from the different virtual engines of command processor 230 to shader core 214.
  • Shader core 214 includes a plurality of processing elements 220 for executing the tasks provided to GPU 110.
  • Processing elements 220 are arranged as SIMD devices, enabling shader core 214 to execute a plurality of data-parallel processing tasks (substantially) simultaneously.
  • processing elements 220 of shader core 214 are partitioned in space and/or time.
  • For example, a first (e.g., low-latency) task from a first virtual engine (e.g., real-time low-latency engine 202C) is issued to a first subset of processing elements 220, and a second (e.g., standard graphics) task from a second virtual engine (e.g., primary 3D engine 202D) is issued to a second subset of processing elements 220.
  • Each subset of processing elements 220 then independently executes the task it received.
  • Shader core 214 also includes one or more local data shares (LDS) 228 for storing data used by processing elements 220 to execute the processing tasks provided by the OS scheduler.
  • LDS 228 stores state data associated with each task to be executed by shader core 214.
  • LDS 228 stores the state data of a plurality of different contexts, enabling shader core 214 to (substantially) simultaneously execute a plurality of different tasks from the OS scheduler associated with the plurality of different contexts without requiring a context switch.
  • Intermediate results of processing elements 220 may be reprocessed in shader core 214.
  • processing elements 220 may implement a plurality of different shader programs (e.g., geometry shader, vertex shader, pixel shader, tessellation shader, or the like) to complete a single graphics-processing task provided by the OS scheduler.
  • the intermediate results of the different shader programs are sent back to vertex analyzer 208 and/or scan converter 212 and eventually recirculated to processing elements 220.
  • the final results are provided to output logic 224.
  • Output logic 224 includes a plurality of buffers, including write-combining caches, depth buffers, and color buffers.
  • the write-combining caches combine data to be written to off-chip memory, enabling efficient access to off-chip memory.
  • the depth buffers buffer results for z-testing.
  • the color buffers buffer results for color blending.
  • Memory system 210 includes one or more on-chip caches and one or more off-chip memory interfaces. Memory system 210 is coupled to each of command processor 230, vertex analyzer 208, scan converter 212, shader core 214, and output logic 224. When data is needed by any of these components to execute a shader program, a request is made to the on-chip cache of memory system 210. If there is a hit in the on-chip cache (i.e., the requested data is in the on-chip cache), the data is forwarded to the component that requested it.
  • If there is a miss in the on-chip cache (i.e., the requested data is not in the on-chip cache), the requested data must be retrieved from off-chip memory (e.g., system memory 104 of FIG. 1) via the off-chip memory interface of memory system 210. After the data is retrieved from off-chip memory, the data is forwarded to the component that requested it. In addition, the data is also stored in the on-chip cache using cache memory techniques that are well known to persons skilled in the relevant art(s).
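
That hit/miss flow reads naturally as a lookup with an off-chip fallback. The sketch below uses hypothetical names and a map as a stand-in for a real cache with tags and eviction.

```cpp
#include <cstdio>
#include <unordered_map>
#include <vector>

using Address = unsigned;
using CacheLine = int;

std::unordered_map<Address, CacheLine> on_chip_cache;  // memory system 210's cache
std::vector<CacheLine> off_chip_memory(1024, 42);      // e.g., system memory 104

// Request path described above: hit -> forward immediately;
// miss -> fetch from off-chip memory, then fill the cache and forward.
CacheLine request(Address addr) {
    auto it = on_chip_cache.find(addr);
    if (it != on_chip_cache.end()) {
        std::printf("hit  @%u\n", addr);
        return it->second;                    // forwarded to the requesting component
    }
    std::printf("miss @%u -> off-chip fetch\n", addr);
    CacheLine line = off_chip_memory[addr];   // retrieved via the off-chip interface
    on_chip_cache[addr] = line;               // stored for later requests
    return line;
}

int main() {
    request(7);  // miss: goes off-chip
    request(7);  // hit: served on-chip
}
```
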
  • GPU 110 is configured to execute a plurality of streams of processing tasks provided by an OS scheduler.
  • the plurality of streams of processing tasks may be generated by a single application or more than one application.
  • FIG. 3 A illustrates an example in which shader core 214 (substantially) simultaneously executes two (or more) different streams of processing tasks, wherein the streams of processing tasks are generated by a single application 302.
  • Application 302 may be, for example, an end-user application that generates graphics-processing tasks (such as a video-game application, a computer-aided design (CAD) application, or the like) or an end-user application that generates general-compute tasks (e.g., mathematical algorithms, physics simulations, or the like) to be executed on a GPU.
  • application 302 generates a first task 308A and a second task 308B.
  • Each task 308 that application 302 generates includes a priority type.
  • application 302 may indicate that first task 308A is a low-latency, high-priority task and second task 308B is a standard priority task.
  • OS scheduler 310 receives the tasks generated by application 302 and issues the tasks to different virtual engines of GPU 110. For example, OS scheduler 310 issues first task 308A to a first virtual engine 312A and issues second task 308B to a second virtual engine 312B. The tasks from each virtual engine 312 are then (substantially) simultaneously executed by shader core 214 of GPU 110.
  • FIG. 3B illustrates an example in which shader core 214 (substantially) simultaneously executes two (or more) different streams of processing tasks, wherein each stream is generated by a different application.
  • a first processing task 330A is generated by a first application 302 A
  • a second processing task 330B is generated by a second application 302B.
  • Each task 330 includes a priority type.
  • OS scheduler 310 receives tasks 330 and issues them to different virtual engines of GPU 110. For example, OS scheduler 310 issues first task 330A to a first virtual engine 332A and issues second task 330B to a second virtual engine 332B. The tasks from each virtual engine 332 are then (substantially) simultaneously executed by shader core 214 of GPU 110.
  • GPU 110 receives both the tasks and the priority types.
  • application 302 provides bits to an API indicating the priority type of each task.
  • the API provides this information to the driver of GPU 110.
  • GPU 110 includes a scheduling module that schedules the tasks to be executed on shader core 214 based, at least in part, on the priority type specified by the application.
  • shader core 214 may (substantially) simultaneously execute two or more streams of processing tasks generated by one or more applications.
  • FIG. 4 is an example workflow, illustrating various layers of software and hardware between one or more applications running on a computing system (e.g., computing system 100) and GPU 110 included in the computing system.
  • CPU 102 provides the primary functionality required by applications 402.
  • CPU 102 may include a plurality of cores 412A-N, wherein first application 402A runs primarily on a first core 412A and second application 402B runs primarily on a second core 412N.
  • Tasks 404 may comprise data-parallel processing tasks (e.g., graphics-processing tasks, general-compute tasks, or the like) that GPU 110 can likely perform faster than CPU 102 could perform them in software.
  • Each task 404 includes an indication of the priority type as specified by the applications 402 (like the priority type included in the tasks illustrated in FIGS. 3A and 3B).
  • an OS scheduler would provide tasks 404 to a single command buffer in a serial fashion, and a conventional GPU would serially process the tasks.
  • OS scheduler 310 provides each task 404 to one of a plurality of command buffers 420A-N based on the priority type specified by the applications 402. For example, OS scheduler 310 provides a first type of task (e.g., high-priority tasks) to first command buffer 420A, a second type of task (e.g., graphics-processing tasks) to second command buffer 420B, and so on (a sketch of this routing follows these bullets).
  • GPU 110 includes a plurality of virtual engines 432A-N, each configured to service one of command buffers 420A-N.
  • a first virtual engine 432A is configured to service first command buffer 420A
  • a second virtual engine 432B is configured to service second command buffer 420B
  • an N-th virtual engine 432N is configured to service N-th command buffer 420N.
  • Tasks from virtual engines 432 are then (substantially) simultaneously executed by shader core 214 as described above.
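
Putting the FIG. 4 flow together, the routing from OS scheduler 310 through command buffers to per-buffer virtual engines might be sketched as follows (hypothetical names; integer buffer indices stand in for priority types):

```cpp
#include <cstdio>
#include <queue>
#include <string>
#include <vector>

struct Task { std::string name; int type; };  // type indexes a command buffer

constexpr int kNumBuffers = 3;
std::vector<std::queue<Task>> command_buffers(kNumBuffers);

// OS-scheduler side: route each task to the command buffer for its priority type,
// instead of serializing everything into a single buffer.
void os_schedule(Task t) { command_buffers[t.type].push(std::move(t)); }

// Device side: virtual engine i services command buffer i.
void service_engines() {
    for (int i = 0; i < kNumBuffers; ++i) {
        while (!command_buffers[i].empty()) {
            std::printf("engine %d -> shader core: %s\n",
                        i, command_buffers[i].front().name.c_str());
            command_buffers[i].pop();
        }
    }
}

int main() {
    os_schedule({"interrupt-driven compute", 0});  // e.g., high-priority buffer
    os_schedule({"scene render", 1});              // e.g., graphics buffer
    service_engines();
}
```
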
  • a scheduling module 434 of GPU 110 schedules the tasks to be executed by shader core 214 based on at least the following conditions: (i) the priority type specified by applications 402; (ii) the relative priority between the tasks 404 processed by virtual engines 432; and (iii) the availability of resources within shader core 214.
  • scheduling module 434 may divide the resources of shader core 214 between concurrent tasks 404 based on demand, priority, and/or preset limits, while temporarily enabling any one of tasks 404 to fully consume the resources of GPU 110.
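
One plausible reading of that division policy as code is sketched below. The patent does not fix the arithmetic; proportional-by-priority sharing and the clamping rule are illustrative assumptions.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct ReadyTask {
    const char* name;
    int priority;      // higher = more important (from the application's indication)
    int preset_limit;  // max SIMDs this task may hold; 0 means "no limit"
};

// Divide total_simds among the ready tasks by priority weight, honoring limits.
std::vector<int> divide_simds(const std::vector<ReadyTask>& tasks, int total_simds) {
    int weight_sum = 0;
    for (const auto& t : tasks) weight_sum += t.priority;
    std::vector<int> grant(tasks.size(), 0);
    for (size_t i = 0; i < tasks.size(); ++i) {
        int share = total_simds * tasks[i].priority / std::max(weight_sum, 1);
        if (tasks[i].preset_limit > 0) share = std::min(share, tasks[i].preset_limit);
        grant[i] = share;
    }
    return grant;
}

int main() {
    // A lone ready task may consume (substantially) the whole core: grants "20".
    for (int g : divide_simds({{"alone", 1, 0}}, 20)) std::printf("%d ", g);
    std::printf("\n");
    // Concurrent tasks split the SIMDs by priority, subject to preset limits.
    for (int g : divide_simds({{"low-latency", 3, 0}, {"graphics", 1, 8}}, 20))
        std::printf("%d ", g);
    std::printf("\n");
}
```
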
  • Embodiments of the present invention may be implemented using hardware, software or a combination thereof and may be implemented in one or more computer systems or other processing systems.
  • An example of a computer system 500 is shown in FIG. 5.
  • Computer system 500 includes one or more processors, such as processor 504.
  • Processor 504 may be a general purpose processor (such as, a CPU 102) or a special purpose processor (such as, a GPU 110).
  • Processor 504 is connected to a communication infrastructure 506 (e.g., a communications bus, cross-over bar, or network).
  • Computer system 500 includes a display interface 502 that forwards graphics, text, and other data from communication infrastructure 506 (or from a frame buffer not shown) for display on display unit 530.
  • Computer system 500 also includes a main memory 508, preferably random access memory (RAM), and may also include a secondary memory 510.
  • the secondary memory 510 may include, for example, a hard disk drive 512 and/or a removable storage drive 514, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, etc.
  • the removable storage drive 514 reads from and/or writes to a removable storage unit 518 in a well known manner.
  • Removable storage unit 518 represents a floppy disk, magnetic tape, optical disk, etc. which is read by and written to by removable storage drive 514.
  • the removable storage unit 518 includes a computer usable storage medium having stored therein computer software and/or data.
  • secondary memory 510 may include other similar devices for allowing computer programs or other instructions to be loaded into computer system 500.
  • Such devices may include, for example, a removable storage unit 522 and an interface 520. Examples of such may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an erasable programmable read only memory (EPROM), or programmable read only memory (PROM)) and associated socket, and other removable storage units 522 and interfaces 520, which allow software and data to be transferred from the removable storage unit 522 to computer system 500.
  • Computer system 500 may also include a communications interface 524.
  • Communications interface 524 allows software and data to be transferred between computer system 500 and external devices.
  • Examples of communications interface 524 may include a modem, a network interface (such as an Ethernet card), a communications port, a Personal Computer Memory Card International Association (PCMCIA) slot and card, etc.
  • Software and data transferred via communications interface 524 are in the form of signals 528 which may be electronic, electromagnetic, optical or other signals capable of being received by communications interface 524. These signals 528 are provided to communications interface 524 via a communications path (e.g., channel) 526.
  • This channel 526 carries signals 528 and may be implemented using wire or cable, fiber optics, a telephone line, a cellular link, a radio frequency (RF) link and other communications channels.
  • The term "computer-readable storage medium" is used to generally refer to media such as removable storage drive 514 and a hard disk installed in hard disk drive 512. These computer program products provide software to computer system 500.
  • Computer programs are stored in main memory 508 and/or secondary memory 510. Computer programs may also be received via communications interface 524. Such computer programs, when executed, enable the computer system 500 to perform the features of the present invention, as discussed herein. In particular, the computer programs, when executed, enable the processor 504 to perform the features of the present invention. Accordingly, such computer programs represent controllers of the computer system 500.
  • the software may be stored in a computer program product and loaded into computer system 500 using removable storage drive 514, hard drive 512 or communications interface 524.
  • the control logic when executed by the processor 504, causes the processor 504 to perform the functions of embodiments of the invention as described herein.
  • processing units may also be embodied in software disposed, for example, in a computer-readable medium configured to store the software (e.g., a computer-readable program code).
  • the program code causes the enablement of embodiments of the present invention, including the following embodiments: (i) the functions of the systems and techniques disclosed herein (such as providing tasks to GPU 110, scheduling tasks in GPU 110, executing tasks in GPU 110, or the like); (ii) the fabrication of the systems and techniques disclosed herein (such as the fabrication of GPU 110); or (iii) a combination of the functions and fabrication of the systems and techniques disclosed herein.
  • the program code can be disposed in any known computer-readable medium including semiconductor, magnetic disk, or optical disk (such as CD-ROM, DVD-ROM). As such, the code can be transmitted over communication networks including the Internet and internets. It is understood that the functions accomplished and/or structure provided by the systems and techniques described above can be represented in a core (such as a GPU core) that is embodied in program code and may be transformed to hardware as part of the production of integrated circuits.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)

Abstract

A processing unit that includes a plurality of virtual engines and a shader core is described. The plurality of virtual engines is configured to (i) receive, from an operating system (OS), a plurality of tasks substantially in parallel with each other and (ii) load a set of state data associated with each of the plurality of tasks. The shader core is configured to execute the plurality of tasks substantially in parallel based on the set of state data associated with each of the plurality of tasks. The processing unit may also include a scheduling module that schedules the plurality of tasks to be issued to the shader core.
PCT/US2010/047786 2009-09-03 2010-09-03 Processing unit that enables asynchronous task dispatch WO2011028986A2 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP10779865.4A EP2473920B8 (fr) 2009-09-03 2010-09-03 Processing unit comprising a command processor with multiple buffers to allow asynchronous parallel dispatch of tasks of different type to a shader core
IN2726DEN2012 IN2012DN02726A (fr) 2009-09-03 2010-09-03
JP2012528081A JP5791608B2 (ja) 2009-09-03 2010-09-03 Processing unit enabling asynchronous task dispatch
CN201080049174.7A CN102640115B (zh) 2009-09-03 2010-09-03 Graphics processing unit including a command processor having multiple buffers to enable asynchronous parallel dispatch of different types of work on a shader core

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US23971209P 2009-09-03 2009-09-03
US61/239,712 2009-09-03
US12/874,134 US8854381B2 (en) 2009-09-03 2010-09-01 Processing unit that enables asynchronous task dispatch
US12/874,134 2010-09-01

Publications (2)

Publication Number Publication Date
WO2011028986A2 (fr) 2011-03-10
WO2011028986A3 WO2011028986A3 (fr) 2011-11-24

Family

ID=43501178

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/047786 WO2011028986A2 (fr) 2009-09-03 2010-09-03 Processing unit that enables asynchronous task dispatch

Country Status (7)

Country Link
US (1) US8854381B2 (fr)
EP (1) EP2473920B8 (fr)
JP (1) JP5791608B2 (fr)
KR (1) KR101642105B1 (fr)
CN (1) CN102640115B (fr)
IN (1) IN2012DN02726A (fr)
WO (1) WO2011028986A2 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013025823A (ja) * 2011-07-18 2013-02-04 Apple Inc Virtual GPU
WO2013090773A3 (fr) * 2011-12-14 2013-08-08 Advanced Micro Devices, Inc. Policies for shader resource allocation in a shader core
WO2013090605A3 (fr) * 2011-12-14 2014-05-08 Advanced Micro Devices, Inc. Saving and restoring shader context state and restoring a faulted APD wavefront
CN104205174A (zh) * 2012-04-04 2014-12-10 Qualcomm Inc. Patched shading in graphics processing
CN104662531A (zh) * 2012-04-23 2015-05-27 Hewlett-Packard Development Company, L.P. Statistical analysis using a graphics processing unit
KR101563098B1 (ko) * 2011-12-15 2015-10-23 Qualcomm Incorporated Graphics processing unit with command processor

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9430281B2 (en) 2010-12-16 2016-08-30 Advanced Micro Devices, Inc. Heterogeneous enqueuing and dequeuing mechanism for task scheduling
US9378560B2 (en) * 2011-06-17 2016-06-28 Advanced Micro Devices, Inc. Real time on-chip texture decompression using shader processors
US9195501B2 (en) * 2011-07-12 2015-11-24 Qualcomm Incorporated Instruction culling in graphics processing unit
US9430807B2 (en) * 2012-02-27 2016-08-30 Qualcomm Incorporated Execution model for heterogeneous computing
US9996394B2 (en) 2012-03-01 2018-06-12 Microsoft Technology Licensing, Llc Scheduling accelerator tasks on accelerators using graphs
US20130311548A1 (en) * 2012-05-15 2013-11-21 Nvidia Corporation Virtualized graphics processing for remote display
CN103064657B (zh) * 2012-12-26 2016-09-28 深圳中微电科技有限公司 Method and apparatus for implementing parallel processing of multiple applications on a single processor
US20140267327A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Graphics Processing using Multiple Primitives
US9424079B2 (en) 2013-06-27 2016-08-23 Microsoft Technology Licensing, Llc Iteration support in a heterogeneous dataflow engine
EP3049959A1 (fr) * 2013-09-27 2016-08-03 Hewlett Packard Enterprise Development LP Processing a hybrid flow associated with a service class
US10198788B2 (en) * 2013-11-11 2019-02-05 Oxide Interactive Llc Method and system of temporally asynchronous shading decoupled from rasterization
WO2015074239A1 (fr) * 2013-11-22 2015-05-28 Intel Corporation Method and apparatus for improving the efficiency of chained tasks in a graphics processing unit
WO2015103376A1 (fr) * 2014-01-06 2015-07-09 Johnson Controls Technology Company Vehicle having multiple user interface operating domains
JP6507169B2 (ja) * 2014-01-06 2019-04-24 Johnson Controls Technology Company Vehicle having multiple user interface operating domains
US9530174B2 (en) 2014-05-30 2016-12-27 Apple Inc. Selective GPU throttling
WO2015194133A1 (fr) * 2014-06-19 2015-12-23 NEC Corporation Arithmetic device, arithmetic device control method, and storage medium storing an arithmetic device control program
US10133597B2 (en) * 2014-06-26 2018-11-20 Intel Corporation Intelligent GPU scheduling in a virtualization environment
KR102263326B1 (ko) 2014-09-18 2021-06-09 Samsung Electronics Co., Ltd. Graphics processing unit and method of processing graphics data using the same
US10423414B2 (en) * 2014-11-12 2019-09-24 Texas Instruments Incorporated Parallel processing in hardware accelerators communicably coupled with a processor
US10210655B2 (en) * 2015-09-25 2019-02-19 Intel Corporation Position only shader context submission through a render command streamer
CN106598705B (zh) * 2015-10-15 2020-08-11 菜鸟智能物流控股有限公司 Asynchronous task scheduling method, apparatus and system, and electronic device
US9830677B2 (en) * 2016-03-03 2017-11-28 International Business Machines Corporation Graphics processing unit resource sharing
KR102577184B1 (ko) * 2016-05-24 2023-09-11 Samsung Electronics Co., Ltd. Electronic device and method of operating the same
US20180033114A1 (en) * 2016-07-26 2018-02-01 Mediatek Inc. Graphics Pipeline That Supports Multiple Concurrent Processes
CN106648551A (zh) * 2016-12-12 2017-05-10 中国航空工业集团公司西安航空计算技术研究所 A hybrid graphics processor instruction processing system
US11609791B2 (en) * 2017-11-30 2023-03-21 Advanced Micro Devices, Inc. Precise suspend and resume of workloads in a processing unit
US10540824B1 (en) * 2018-07-09 2020-01-21 Microsoft Technology Licensing, Llc 3-D transitions
US11436783B2 (en) 2019-10-16 2022-09-06 Oxide Interactive, Inc. Method and system of decoupled object space shading
US11403729B2 (en) * 2020-02-28 2022-08-02 Advanced Micro Devices, Inc. Dynamic transparent reconfiguration of a multi-tenant graphics processing unit
US11340942B2 (en) * 2020-03-19 2022-05-24 Raytheon Company Cooperative work-stealing scheduler
US11941723B2 (en) 2021-12-29 2024-03-26 Advanced Micro Devices, Inc. Dynamic dispatch for workgroup distribution

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2050658C (fr) 1990-09-14 1997-01-28 John M. Peaslee Channel and context switching in a graphics processor
US6252600B1 (en) * 1998-10-02 2001-06-26 International Business Machines Corporation Computer graphics system with dual FIFO interface
US6943800B2 (en) * 2001-08-13 2005-09-13 Ati Technologies, Inc. Method and apparatus for updating state data
US7659898B2 (en) * 2005-08-08 2010-02-09 Via Technologies, Inc. Multi-execution resource graphics processor
US8884972B2 (en) * 2006-05-25 2014-11-11 Qualcomm Incorporated Graphics processor with arithmetic and elementary function units
US8345053B2 (en) * 2006-09-21 2013-01-01 Qualcomm Incorporated Graphics processors with parallel scheduling and execution of threads
US7830387B2 (en) * 2006-11-07 2010-11-09 Microsoft Corporation Parallel engine support in display driver model
US8284205B2 (en) * 2007-10-24 2012-10-09 Apple Inc. Methods and apparatuses for load balancing between multiple processing units
US20090160867 (en) * 2007-12-19 2009-06-25 Advanced Micro Devices, Inc. Autonomous Context Scheduler For Graphics Processing Units
US8629878B2 (en) * 2009-08-26 2014-01-14 Red Hat, Inc. Extension to a hypervisor that utilizes graphics hardware on a host

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013025823A (ja) * 2011-07-18 2013-02-04 Apple Inc Virtual GPU
US10120728B2 (en) 2011-07-18 2018-11-06 Apple Inc. Graphical processing unit (GPU) implementing a plurality of virtual GPUs
US9727385B2 (en) 2011-07-18 2017-08-08 Apple Inc. Graphical processing unit (GPU) implementing a plurality of virtual GPUs
WO2013090773A3 (fr) * 2011-12-14 2013-08-08 Advanced Micro Devices, Inc. Policies for shader resource allocation in a shader core
WO2013090605A3 (fr) * 2011-12-14 2014-05-08 Advanced Micro Devices, Inc. Saving and restoring shader context state and restoring a faulted APD wavefront
US10579388B2 (en) 2011-12-14 2020-03-03 Advanced Micro Devices, Inc. Policies for shader resource allocation in a shader core
JP2015502618A (ja) * 2011-12-14 2015-01-22 Advanced Micro Devices Incorporated Policies for shader resource allocation in a shader core
KR101563098B1 (ko) * 2011-12-15 2015-10-23 Qualcomm Incorporated Graphics processing unit with command processor
US10535185B2 (en) 2012-04-04 2020-01-14 Qualcomm Incorporated Patched shading in graphics processing
US9412197B2 (en) 2012-04-04 2016-08-09 Qualcomm Incorporated Patched shading in graphics processing
WO2013151751A3 (fr) * 2012-04-04 2015-09-24 Qualcomm Incorporated Patched shading in graphics processing
KR101784671B1 (ko) 2012-04-04 2017-10-12 Qualcomm Incorporated Patched shading in graphics processing
CN104813367A (zh) * 2012-04-04 2015-07-29 Qualcomm Inc. Patched shading in graphics processing
CN104813367B (zh) * 2012-04-04 2018-11-30 Qualcomm Inc. Patched shading in graphics processing
CN104205174B (zh) * 2012-04-04 2019-03-01 Qualcomm Inc. Patched shading in graphics processing
US10559123B2 (en) 2012-04-04 2020-02-11 Qualcomm Incorporated Patched shading in graphics processing
CN104205174A (zh) * 2012-04-04 2014-12-10 Qualcomm Inc. Patched shading in graphics processing
US11200733B2 (en) 2012-04-04 2021-12-14 Qualcomm Incorporated Patched shading in graphics processing
EP2834793B1 (fr) * 2012-04-04 2023-09-06 Qualcomm Incorporated Patched shading in graphics processing
US11769294B2 (en) 2012-04-04 2023-09-26 Qualcomm Incorporated Patched shading in graphics processing
CN104662531A (zh) * 2012-04-23 2015-05-27 Hewlett-Packard Development Company, L.P. Statistical analysis using a graphics processing unit

Also Published As

Publication number Publication date
EP2473920B1 (fr) 2018-03-14
IN2012DN02726A (fr) 2015-09-11
KR101642105B1 (ko) 2016-07-22
CN102640115B (zh) 2015-03-25
WO2011028986A3 (fr) 2011-11-24
JP5791608B2 (ja) 2015-10-07
JP2013504131A (ja) 2013-02-04
CN102640115A (zh) 2012-08-15
US8854381B2 (en) 2014-10-07
EP2473920A2 (fr) 2012-07-11
US20110115802A1 (en) 2011-05-19
EP2473920B8 (fr) 2018-05-16
KR20120064097A (ko) 2012-06-18

Similar Documents

Publication Publication Date Title
EP2473920B1 (fr) Processing unit comprising a command processor with multiple buffers to allow asynchronous parallel dispatch of tasks of different type to a shader core
US9142057B2 (en) Processing unit with a plurality of shader engines
US10217183B2 (en) System, method, and computer program product for simultaneous execution of compute and graphics workloads
CN106575431B (zh) Method and apparatus for a highly efficient graphics processing unit (GPU) execution model
US11010858B2 (en) Mechanism to accelerate graphics workloads in a multi-core computing architecture
US20140184617A1 (en) Mid-primitive graphics execution preemption
US7747842B1 (en) Configurable output buffer ganging for a parallel processor
US9471307B2 (en) System and processor that include an implementation of decoupled pipelines
US10002455B2 (en) Optimized depth buffer cache apparatus and method
CN106662999B (zh) Method and apparatus for SIMD structured branching
US10565670B2 (en) Graphics processor register renaming mechanism
US10410311B2 (en) Method and apparatus for efficient submission of workload to a high performance graphics sub-system
US20180095785A1 (en) Thread Priority Mechanism
US8675003B2 (en) Efficient data access for unified pixel interpolation
US10580108B2 (en) Method and apparatus for best effort quality of service (QoS) scheduling in a graphics processing architecture
CN113342485A (zh) Task scheduling method and apparatus, graphics processor, computer system, and storage medium
US10909037B2 (en) Optimizing memory address compression
US20180075650A1 (en) Load-balanced tessellation distribution for parallel architectures
CN110352403B (zh) Graphics processor register renaming mechanism

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 201080049174.7; Country of ref document: CN)
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10779865; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2012528081; Country of ref document: JP)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 2010779865; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2726/DELNP/2012; Country of ref document: IN)
ENP Entry into the national phase (Ref document number: 20127008414; Country of ref document: KR; Kind code of ref document: A)