CN103927150A - Parallel Runtime Execution On Multiple Processors - Google Patents

Parallel Runtime Execution On Multiple Processors

Info

Publication number
CN103927150A
Authority
CN
China
Prior art keywords
processing unit
execution
application program
API
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410187203.6A
Other languages
Chinese (zh)
Other versions
CN103927150B (en)
Inventor
Aaftab Munshi
Jeremy Sandmel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Computer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 11/800,319 (granted as US8286196B2)
Application filed by Apple Computer Inc filed Critical Apple Computer Inc
Publication of CN103927150A
Application granted
Publication of CN103927150B
Legal status: Active


Landscapes

  • Stored Programmes (AREA)

Abstract

The present invention relates to parallel runtime execution on multiple processors. A method and an apparatus that schedule a plurality of executables in a schedule queue for concurrent execution in one or more physical compute devices, such as CPUs or GPUs, are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables in a plurality of physical compute devices, including the existing executables and online compiled executables from the sources.

Description

Parallel Runtime Execution On Multiple Processors
This application is a divisional of the Chinese patent application No. 200880011684.8, entitled "Parallel Runtime Execution On Multiple Processors", filed on April 9, 2008.
Cross-Reference to Related Applications
This application is related to, and claims the benefit of, U.S. Provisional Patent Application No. 60/923,030, entitled "DATA PARALLEL COMPUTING ON MULTIPLE PROCESSORS", filed on April 11, 2007 by Aaftab Munshi et al., and U.S. Provisional Patent Application No. 60/925,620, entitled "PARALLEL RUNTIME EXECUTION ON MULTIPLE PROCESSORS", filed on April 20, 2007 by Aaftab Munshi, both of which are hereby incorporated by reference.
Technical field
The present invention relates generally to data parallel computing, and more specifically to data parallel runtime execution across both CPUs (central processing units) and GPUs (graphics processing units).
Background
As GPUs continue to evolve into high-performance parallel compute devices, more and more applications are written to perform data parallel computations in GPUs, similar to general-purpose compute devices. Today, these applications are designed to run on specific GPUs using vendor-specific interfaces. Thus, they are not able to leverage the CPUs when a data processing system has both GPUs and CPUs, nor can they be leveraged when such an application is running on GPUs from different vendors.
However, as more and more CPUs include multiple cores to perform data parallel model computations, more and more processing tasks can be supported by either CPUs and/or GPUs, whichever are available. Traditionally, GPUs and CPUs are configured through separate programming environments that are not compatible with each other. Most GPUs require vendor-specific programs. As a result, it is very difficult for an application to leverage both CPUs and GPUs as processing resources, for example, GPUs with data parallel computing capabilities together with multi-core CPUs.
Therefore, there is a need in modern data processing systems to overcome the above problems to allow an application to perform a task in any available processing resources capable of performing the task, such as CPUs or one or more GPUs.
Summary of the invention
One embodiment of the present invention includes a method and apparatus that load one or more executables for a data processing task of an application in response to an API request from the application running in a host processing unit. One of the loaded executables is selected to be executed in another processing unit, such as a CPU or a GPU, attached to the host processing unit, in response to another API request from the application.
In an alternative embodiment, an application program running in a host processing unit generates an API request to load one or more executables for a data processing task. A second API request is then generated by the application program to select one of the loaded executables for execution in another processing unit, such as a CPU or a GPU, attached to the host processing unit.
In an alternative embodiment, a source for a target processing unit is compiled at runtime based on an executable loaded to a processing unit. The processing unit and the target processing unit may be central processing units (CPUs) or graphics processing units (GPUs). A difference between the processing unit and the target processing unit is detected to retrieve the source from the loaded executable.
In an alternative embodiment, a task queue associated with a plurality of processing units, such as CPUs or GPUs, is updated with a new task including a plurality of executables in response to an API request from an application. A condition for scheduling the new task from the queue for execution in the plurality of processing units is determined. One of the plurality of executables associated with the new task is selected for execution based on the determined condition.
In an alternative embodiment, a source for performing a data processing function is loaded from an application in response to an API request from the application, to execute executables in one or more of a plurality of target data processing units, such as CPUs or GPUs. The types of the plurality of target data processing units are automatically determined. An executable is compiled based on the determined types of the one or more target processing units to be executed.
In an alternative embodiment, sources and one or more corresponding executables compiled for a plurality of processing units are stored in an API library to implement an API function. In response to a request to the API library from an application running in a host processor, an additional executable is compiled online from the retrieved source for an attached processing unit not included among the plurality of processing units. The additional executable and one or more retrieved executables are executed concurrently in the attached processing unit together with one or more of the plurality of processing units according to the API function.
In an alternative embodiment, an API call is received on a host processor to execute an application having a plurality of threads for execution. The host processor is coupled to a CPU and a GPU. The plurality of threads are scheduled asynchronously for parallel execution on the CPU and the GPU. A thread scheduled to be executed on the GPU may be executed in the CPU if the GPU is busy with graphics processing threads.
In an alternative embodiment, an API call is received on a host processor to execute an application having a plurality of threads for execution. The host processor is coupled to a CPU and a GPU. The plurality of threads are initialized asynchronously for parallel execution on the CPU and the GPU. A thread initialized to be executed on the GPU may be executed in the CPU if the GPU is busy with graphics processing threads.
Other features of the present invention will be apparent from the accompanying drawings and from the detailed description that follows.
Brief Description of the Drawings
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:
Fig. 1 is a block diagram illustrating one embodiment of a system to configure compute devices including CPUs and/or GPUs to perform data parallel computing for an application;
Fig. 2 is a block diagram illustrating an example of a compute device with multiple compute processors operating in parallel to execute multiple threads concurrently;
Fig. 3 is a block diagram illustrating one embodiment of a plurality of physical compute devices configured as a logical compute device via a compute device identifier;
Fig. 4 is a flow diagram illustrating one embodiment of a process to configure a plurality of physical compute devices with a compute device identifier by matching a capability requirement received from an application;
Fig. 5 is a flow diagram illustrating one embodiment of a process to execute a compute executable in a logical compute device;
Fig. 6 is a flow diagram illustrating one embodiment of a runtime process to load an executable, including compiling a source for one or more physical compute devices determined to execute the executable;
Fig. 7 is a flow diagram illustrating one embodiment of a process to select a compute kernel execution instance from an execution queue to execute in one or more physical compute devices corresponding to a logical compute device associated with the execution instance;
Fig. 8A is a flow diagram illustrating one embodiment of a process to build an API (application programming interface) library storing a source and a plurality of executables for one or more APIs in the library according to a plurality of physical compute devices;
Fig. 8B is a flow diagram illustrating one embodiment of a process for an application to execute one of a plurality of executables together with a corresponding source retrieved from an API library based on an API request;
Fig. 9 is sample source code illustrating an example of a compute kernel source for a compute kernel executable to be executed in a plurality of physical compute devices;
Fig. 10 is sample source code illustrating an example of configuring a logical compute device for executing one of a plurality of executables in a plurality of physical compute devices by calling APIs;
Fig. 11 illustrates one example of a typical computer system with a plurality of CPUs and GPUs (graphics processing units) which may be used in conjunction with the embodiments described herein.
Detailed Description
A method and an apparatus for data parallel computing on multiple processors are described herein. In the following description, numerous specific details are set forth to provide a thorough explanation of embodiments of the present invention. It will be apparent, however, to one skilled in the art, that embodiments of the present invention may be practiced without these specific details. In other instances, well-known components, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification do not necessarily all refer to the same embodiment.
The processes depicted in the figures that follow are performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system or a dedicated machine), or a combination of both. Although the processes are described below in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially.
A graphics processing unit (GPU) may be a dedicated graphics processor implementing highly efficient graphics operations, such as 2D and 3D graphics operations and/or digital video related functions. A GPU may include special (programmable) hardware to perform graphics operations, e.g. blitter operations, texture mapping, polygon rendering, pixel shading, and vertex shading. GPUs are known to fetch data from a frame buffer and blend pixels together to render an image back into the frame buffer for display. GPUs may also control the frame buffer and allow the frame buffer to be used to refresh a display, such as a CRT or LCD display, which is a short-persistence display that requires refresh at a rate of at least 20 Hz (e.g., every 1/30 of a second, the display is refreshed with data from the frame buffer). Usually, GPUs may take graphics processing tasks from CPUs coupled with the GPUs to output raster graphics images to display devices through display controllers. References in the specification to "GPU" may be an image processor or a programmable graphics processor as described in "Method and Apparatus for Multithreaded Processing of Data In a Programmable Graphics Processor", U.S. Patent No. 7015913 of Lindholm et al., and "Method for Deinterlacing Interlaced Video by a Graphics Processor", U.S. Patent No. 6970206 of Swan et al., which are hereby incorporated by reference.
In one embodiment, a plurality of different types of processors, such as CPUs or GPUs, may perform data parallel processing tasks for one or more applications concurrently to increase the usage efficiency of available processing resources in a data processing system. Processing resources of a data processing system may be based on a plurality of physical compute devices. A physical compute device may be a CPU or a GPU. In one embodiment, data parallel processing tasks may be delegated to a plurality of types of processors, for example, CPUs or GPUs capable of performing the tasks. A data processing task may require certain specific processing capabilities from a processor. Processing capabilities may be, for example, dedicated texturing hardware support, double precision floating point arithmetic, dedicated local memory, stream data cache, or synchronization primitives. Separate types of processors may provide different yet overlapping sets of processing capabilities. For example, both a CPU and a GPU may be capable of performing double precision floating point computation. In one embodiment, an application is capable of leveraging either a CPU or a GPU, whichever is available, to perform a data parallel processing task.
In another embodiment, selecting and allocating a plurality of different types of processing resources for a data parallel processing task may be performed automatically at runtime. An application may send a hint including a list of desired capability requirements for a data processing task through an API (application programming interface) to a runtime platform of a data processing system. In response, the runtime platform may determine a plurality of currently available CPUs and/or GPUs with capabilities matching the received hint to delegate the data processing task of the application. In one embodiment, the list of capability requirements may depend on the underlying data processing task. A capability requirement list may apply across different sets of processors, for example, including GPUs of different versions from different vendors and multi-core CPUs. Thus, an application may be insulated from providing programs that target a particular type of CPU or GPU.
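As a rough illustration of this matching, the sketch below represents each attached device's capabilities as a bitmask and checks an application's hint against it. The flag names, the struct, and the encoding are assumptions for illustration only; the patent does not define concrete data structures.

```c
#include <stdio.h>

/* Hypothetical capability flags a physical compute device may report. */
enum {
    CAP_DEDICATED_TEXTURE = 1 << 0,  /* dedicated texturing hardware    */
    CAP_DOUBLE_PRECISION  = 1 << 1,  /* double precision floating point */
    CAP_LOCAL_MEMORY      = 1 << 2,  /* dedicated local memory          */
    CAP_STREAM_DATA_CACHE = 1 << 3,  /* stream data cache               */
    CAP_SYNC_PRIMITIVES   = 1 << 4   /* synchronization primitives      */
};

typedef struct {
    const char *name;  /* e.g. a vendor/model string */
    unsigned    caps;  /* capability bitmask         */
} compute_device;

/* A device satisfies a hint when every required capability is present. */
static int device_matches(const compute_device *dev, unsigned required)
{
    return (dev->caps & required) == required;
}

int main(void)
{
    compute_device attached[] = {
        { "multi-core CPU", CAP_DOUBLE_PRECISION | CAP_SYNC_PRIMITIVES },
        { "GPU (vendor A)", CAP_DEDICATED_TEXTURE | CAP_LOCAL_MEMORY |
                            CAP_STREAM_DATA_CACHE },
        { "GPU (vendor B)", CAP_DEDICATED_TEXTURE | CAP_DOUBLE_PRECISION |
                            CAP_LOCAL_MEMORY },
    };
    /* Hint sent by the application for a data parallel task. */
    unsigned hint = CAP_DOUBLE_PRECISION;

    for (size_t i = 0; i < sizeof attached / sizeof attached[0]; i++)
        if (device_matches(&attached[i], hint))
            printf("delegate task to: %s\n", attached[i].name);
    return 0;
}
```

Because capability sets overlap (here both the CPU and one GPU report double precision), a single hint can yield several candidate devices, which is what lets the runtime pick whichever is available.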
Fig. 1 is a block diagram illustrating one embodiment of a system to configure compute devices including CPUs and/or GPUs to perform data parallel computing for an application. System 100 may implement a parallel computing architecture. In one embodiment, system 100 may be a graphics system including one or more host processors coupled with one or more central processing units 117 and one or more other processors, such as media processors 115, through a data bus 113. The plurality of host processors may be networked together in a hosting system 101. The plurality of central processing units 117 may include multi-core CPUs from different vendors. A media processor may be a GPU with dedicated texture rendering hardware. Another media processor may be a GPU supporting both dedicated texture rendering hardware and double precision floating point arithmetic. Multiple GPUs may be connected together in a Scalable Link Interface (SLI) or CrossFire configuration.
In one embodiment, the hosting system 101 may support a software stack including stack components such as applications 103, a compute platform layer 111, a compute runtime layer 109, a compute compiler 107, and compute application libraries 105. An application 103 may interface with the other stack components through API (application programming interface) calls. One or more threads may run concurrently for an application 103 in the hosting system 101. The compute platform layer 111 may maintain a data structure, or a compute device data structure, storing processing capabilities for each attached physical compute device. In one embodiment, an application may retrieve information about available processing resources of the hosting system 101 through the compute platform layer 111. An application may select and specify capability requirements for performing a processing task through the compute platform layer 111. Accordingly, the compute platform layer 111 may determine a configuration of physical compute devices to allocate and initialize processing resources from the attached CPUs 117 and/or GPUs 115 for the processing task. In one embodiment, the compute platform layer 111 may generate one or more logical compute devices for the application, corresponding to the one or more actual physical compute devices configured.
The compute runtime layer 109 may manage the execution of a processing task according to the processing resources configured for an application 103, for example, one or more logical compute devices. In one embodiment, executing a processing task may include creating a compute kernel object representing the processing task and allocating memory resources, e.g. for holding executables, input/output data, etc. An executable loaded for a compute kernel object may be a compute kernel executable. A compute kernel executable may be included in a compute kernel object to be executed in a compute processor, such as a CPU or a GPU. The compute runtime layer 109 may interact with the allocated physical devices to carry out the actual execution of the processing task. In one embodiment, the compute runtime layer 109 may coordinate executing multiple processing tasks from different applications according to runtime states of each processor, such as a CPU or a GPU, configured for the processing tasks. The compute runtime layer 109 may select, based on the runtime states, one or more processors from the physical devices configured to perform the processing tasks. Performing a processing task may include executing multiple threads of one or more executables concurrently in a plurality of physical processing devices. In one embodiment, the compute runtime layer 109 may track the status of each executed processing task by monitoring the runtime execution states of each processor.
The runtime layer may load one or more executables corresponding to a processing task from the application 103. In one embodiment, the compute runtime layer 109 automatically loads additional executables required to perform a processing task from the compute application library 105. The compute runtime layer 109 may load both an executable and its corresponding source program for a compute kernel object from the application 103 or the compute application library 105. A source program for a compute kernel object may be a compute kernel program. A plurality of executables based on a single source program may be loaded according to a logical compute device configured to include multiple types and/or different versions of physical compute devices. In one embodiment, the compute runtime layer 109 may activate the compute compiler 107 to online compile a loaded source program into an executable optimized for a target processor, e.g. a CPU or a GPU, configured to execute the executable.
An online compiled executable may be stored for future invocation, in addition to the existing executables according to the corresponding source program. Additionally, executables may be compiled offline and loaded to the compute runtime 109 via API calls. The compute application library 105 and/or applications 103 may load an associated executable in response to library API requests from an application. Newly compiled executables may be dynamically updated for the compute application library 105 or for applications 103. In one embodiment, the compute runtime 109 may replace an existing executable in an application with a new executable online compiled through the compute compiler 107 for a newly upgraded version of a compute device. The compute runtime 109 may insert the new online compiled executable to update the compute application library 105. In one embodiment, the compute runtime 109 may invoke the compute compiler 107 when loading an executable for a processing task. In another embodiment, the compute compiler 107 may be invoked offline to build executables for the compute application library 105. The compute compiler 107 may compile and link a compute kernel program to generate a compute kernel executable. In one embodiment, the compute application library 105 may include a plurality of functions to support, for example, development toolkits and/or image processing. Each library function may correspond to a compute source program and one or more executables stored in the compute application library 105 for a plurality of physical compute devices.
Fig. 2 is a block diagram illustrating an example of a compute device with multiple compute processors operating in parallel to execute multiple threads concurrently. Each compute processor may execute a plurality of threads in parallel (or concurrently). Threads that can be executed in parallel may be referred to as a thread block. A compute device may have multiple thread blocks that can be executed in parallel. For example, as illustrated for compute processor_1 205, M threads may execute together as one thread block. Threads in multiple thread blocks, e.g. thread 1 of compute processor_1 205 and thread N of compute processor_L 203, may execute in parallel on separate compute processors in one compute device or across multiple compute devices. A plurality of thread blocks across multiple compute processors may execute a compute kernel executable in parallel. More than one compute processor may be based on a single chip, such as an ASIC (application specific integrated circuit) device. In one embodiment, multiple threads from an application may be executed concurrently in more than one compute processor across multiple chips.
A compute device may include one or more compute processors, such as compute processor_1 205 and compute processor_L 203. A local memory may be coupled with a compute processor. Local memory coupled with a compute processor may be shared by threads in a single thread block running in the compute processor. Multiple threads across different thread blocks, such as thread 1 213 and thread N 209, may share a stream stored in a stream memory 217 coupled to the compute device 201. A stream may be a collection of elements that a compute kernel executable can operate on, such as an image stream or a variable stream. A variable stream may be allocated to store global variables operated on during a processing task. An image stream may be a buffer which may be used for an image, texture, or frame buffer.
In one embodiment, a local memory for a compute processor may be implemented as a dedicated local storage, such as local shared memory 219 for processor_1 and local shared memory 211 for processor_L. In another embodiment, a local memory for a compute processor may be implemented as a stream read-write cache of the stream memory for one or more compute processors of a compute device, such as stream data cache 215 for compute processors 205, 203 of the compute device 201. In another embodiment, a local memory may implement a dedicated local storage shared among threads in a thread block running in the compute processor coupled with the local memory, such as local shared memory 219 coupled with compute processor_1 205. A dedicated local storage may not be shared by threads across different thread blocks. If the local memory of a compute processor, such as processor_1 205, is implemented as a stream read-write cache, e.g. stream data cache 215, a variable declared to be in the local memory may be allocated from the stream memory 217 and cached in the stream read-write cache, e.g. stream data cache 215, that implements the local memory. Threads within a thread block may share local variables allocated in the stream memory 217 when, for example, neither a stream read-write cache nor a dedicated local storage is available for the corresponding compute device. In one embodiment, each thread is associated with a private memory to store thread private variables used by functions called in the thread. For example, private memory 1 211 may be accessed only by thread 1 213.
Fig. 3 is a block diagram illustrating one embodiment of a plurality of physical compute devices configured as a logical compute device via a compute device identifier. In one embodiment, an application 303 and a platform layer 305 may be running in a host CPU 301. The application 303 may be one of the applications 103 of Fig. 1. The hosting system 101 may include the host CPU 301. Each of the physical compute devices Physical_Compute_Device-1 305 ... Physical_Compute_Device-N 311 may be one of the CPUs 117 or GPUs 115 of Fig. 1. In one embodiment, the compute platform layer 111 may generate a compute device identifier 307 in response to API requests from the application 303 for configuring data parallel processing resources according to a list of capability requirements included in the API request. The compute device identifier 307 may refer to the selection of actual physical compute devices Physical_Compute_Device-1 305 ... Physical_Compute_Device-N 311 according to the configuration performed by the compute platform layer 111. In one embodiment, a logical compute device 309 may represent the group of selected actual physical compute devices separate from the host CPU 301.
Fig. 4 is a flow diagram illustrating one embodiment of a process to configure a plurality of physical compute devices with a compute device identifier by matching a capability requirement received from an application. Process 400 may be performed in a data processing system hosted by the hosting system 101 in accordance with the system 100 of Fig. 1. The data processing system may include a host processor hosting a platform layer, e.g. the compute platform layer 111 of Fig. 1, and a plurality of physical compute devices attached to the host processor, e.g. the CPUs 117 and GPUs 115 of Fig. 1.
At block 401, in one embodiment, process 400 may build a data structure (or a compute device data structure) representing a plurality of physical compute devices associated with one or more corresponding capabilities. Each physical compute device may be attached to the processing system performing process 400. Capabilities, or compute capabilities, of a physical compute device, such as a CPU or a GPU, may include whether the physical compute device supports a processing feature, a memory accessing mechanism, or a named extension. A processing feature may be related to, for example, dedicated texturing hardware support, double precision floating point arithmetic, or synchronization support (e.g. mutex). A memory accessing mechanism for a physical processing device may be related to a type of variable stream cache, a type of image stream cache, or a dedicated local memory support. A system application of the data processing system may update the data structure in response to attaching a new physical compute device to the data processing system. In one embodiment, the capabilities of a physical compute device may be predetermined. In another embodiment, a system application of the data processing system may discover a newly attached physical processing device at runtime. The system application may retrieve the capabilities of the newly discovered physical compute device to update the data structure representing the attached physical compute devices and their corresponding capabilities.
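A sketch of the block 401 bookkeeping, reusing the hypothetical compute_device record from the earlier sketch; the registry layout and the discovery hook are invented for illustration.

```c
/* Hypothetical registry updated when a physical compute device is
 * attached or discovered at runtime (block 401). */
#define MAX_DEVICES 16

static compute_device registry[MAX_DEVICES];
static int            registry_count;

/* Called by the system application on device attach/discovery;
 * `caps` is retrieved from the device itself. */
int register_device(const char *name, unsigned caps)
{
    if (registry_count >= MAX_DEVICES)
        return -1;                        /* no room in the data structure */
    registry[registry_count].name = name;
    registry[registry_count].caps = caps;
    return registry_count++;
}
```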
According to one embodiment, at block 403, process 400 may receive a capability requirement from an application. The application may send the capability requirement to a system application by calling an API. The system application may correspond to the platform layer of a software stack in the hosting system for the application. In one embodiment, a capability requirement may identify a list of required capabilities for requesting processing resources to perform a task for the application. In one embodiment, the application may require the requested resources to perform the task concurrently in multiple threads. In response, at block 405, process 400 may select a group of physical compute devices from the attached physical compute devices. The selection may be determined based on a matching between the capability requirement and the compute capabilities stored in the capability data structure. In one embodiment, process 400 may perform the matching according to a hint provided by the capability requirement.
Process 400 may determine a matching score according to the number of compute capabilities matched between a physical compute device and the capability requirement. In one embodiment, process 400 may select multiple physical compute devices with the highest matching scores. In another embodiment, process 400 may select a physical compute device if each capability in the capability requirement is matched. Process 400 may determine, at block 405, multiple groups of matching physical compute devices. In one embodiment, each group of matching physical devices is selected according to a load balancing capability. In one embodiment, at block 407, process 400 may generate a compute device identifier for each group of physical compute devices selected at block 405. Process 400 may return the one or more generated compute device identifiers back to the application through the calling API. An application may choose which processing resources to employ for performing a task according to the compute device identifiers. In one embodiment, process 400 may generate at most one compute device identifier at block 407 for each capability requirement received.
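A minimal sketch of such scoring, again reusing the hypothetical compute_device record from above; the patent describes counting matched capabilities but does not give a formula, so this is one plausible reading.

```c
/* Count how many required capabilities a device provides: popcount
 * over the intersection of the two bitmasks. */
static int match_score(const compute_device *dev, unsigned required)
{
    unsigned matched = dev->caps & required;
    int score = 0;
    while (matched) {          /* count set bits */
        score += matched & 1u;
        matched >>= 1;
    }
    return score;
}

/* Return the index of the best-scoring device among n attached devices;
 * the caller may then generate a compute device identifier for it. */
static int select_best_device(const compute_device *devs, int n,
                              unsigned required)
{
    int best = -1, best_score = -1;
    for (int i = 0; i < n; i++) {
        int s = match_score(&devs[i], required);
        if (s > best_score) {
            best_score = s;
            best = i;
        }
    }
    return best;
}
```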
In one embodiment, at block 409, process 400 may allocate resources to initialize a logical compute device for the group of physical compute devices selected at block 405 according to the corresponding compute device identifier. Process 400 may perform initializing a logical compute device in response to API requests from an application that has received one or more compute device identifiers according to the selection at block 405. Process 400 may create a context object on the logical compute device for the application. In one embodiment, a context object is associated with one application thread in the hosting system running the application. Multiple threads performing processing tasks concurrently in one logical compute device or across different logical compute devices may be based on separate context objects.
In one embodiment, process 400 may be based on a plurality of APIs including cuCreateContext, cuRetainContext, and cuReleaseContext. The API cuCreateContext creates a compute context. A compute context may correspond to a compute context object. The API cuRetainContext increments the number of instances using a particular compute context identified by a context as an input argument to cuRetainContext. The API cuCreateContext does an implicit retain. This is useful for third-party libraries, which typically get a context passed to them by the application. However, it is possible that the application may delete the context without informing the library. Allowing multiple instances to attach to and release from a context solves the problem of a compute context used by a library no longer being valid. If an input argument to cuRetainContext does not correspond to a valid compute context object, cuRetainContext returns CU_INVALID_CONTEXT. The API cuReleaseContext releases an instance from a valid compute context. If an input argument to cuReleaseContext does not correspond to a valid compute context object, cuReleaseContext returns CU_INVALID_CONTEXT.
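A hypothetical usage sketch of the retain/release discipline just described. Only the three API names and the CU_INVALID_CONTEXT result come from the text; the handle type, prototypes, and library functions are assumptions.

```c
/* Assumed handle type and prototypes reconstructed from the text. */
typedef struct _cu_context *cu_context;
#define CU_SUCCESS          0
#define CU_INVALID_CONTEXT  (-1)

int cuCreateContext(cu_context *ctx);  /* creates and implicitly retains */
int cuRetainContext(cu_context ctx);   /* increments the instance count  */
int cuReleaseContext(cu_context ctx);  /* releases one instance          */

/* A third-party library retains the context handed to it, so the
 * application deleting the context cannot invalidate it underneath
 * the library. */
static cu_context lib_ctx;

int library_attach(cu_context app_ctx)
{
    if (cuRetainContext(app_ctx) == CU_INVALID_CONTEXT)
        return CU_INVALID_CONTEXT;     /* not a valid compute context */
    lib_ctx = app_ctx;
    return CU_SUCCESS;
}

void library_detach(void)
{
    if (lib_ctx) {
        cuReleaseContext(lib_ctx);     /* drop the library's instance */
        lib_ctx = 0;
    }
}
```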
Fig. 5 is a flow diagram illustrating one embodiment of a process to execute a compute executable in a logical compute device. In one embodiment, process 500 may be performed by a runtime layer in a data processing system, such as the compute runtime layer 109 of Fig. 1. At block 501, process 500 may allocate one or more streams for a compute executable to be run on a logical compute device. A processing task may be performed by a compute executable operating on streams. In one embodiment, a processing task may include input streams and output streams. Process 500 may map an allocated stream memory to or from a logical address space of an application. In one embodiment, process 500 may perform the operations at block 501 based on API requests from an application.
At block 503, according to one embodiment, process 500 may create a compute kernel object for the logical compute device. A compute kernel object may be an object created for the associated streams and executables of the corresponding processing task to perform a function. Process 500 may set up function arguments for the compute kernel object at block 505. Function arguments may include streams allocated for function inputs or outputs, such as the streams allocated at block 501. Process 500 may load a compute kernel executable and/or a compute kernel source into the compute kernel object at block 507. A compute kernel executable may be an executable to be executed according to a logical compute device to perform the corresponding processing task associated with the kernel object. In one embodiment, a compute kernel executable may include description data associated with, for example, the type, version, and/or compilation options of a target physical compute device. A compute kernel source may be the source code from which a compute kernel executable is compiled. Process 500 may load, at block 507, a plurality of compute kernel executables corresponding to one compute kernel source. Process 500 may load a compute kernel executable from an application or through a compute library, such as the compute application library 105 of Fig. 1. A compute kernel executable may be loaded together with its corresponding compute kernel source. In one embodiment, process 500 may perform the operations at blocks 503, 505, and 507 according to API requests from an application.
At block 511, process 500 may update an execution queue to execute the compute kernel object with a logical compute device. Process 500 may execute a compute kernel with appropriate arguments using the compute runtime, e.g. the compute runtime 109 of Fig. 1, in response to API calls from an application or a compute application library, such as the application 103 or the compute application library 105 of Fig. 1. In one embodiment, process 500 may generate a compute kernel execution instance to execute a compute kernel. API calls to the compute runtime, such as the compute runtime 109 of Fig. 1, for executing a compute kernel may be asynchronous in nature. An execution instance may be identified by a compute event object that may be returned by the compute runtime, such as the compute runtime 109 of Fig. 1. A compute kernel execution instance may be added to an execution queue to execute a compute kernel instance. In one embodiment, API calls to the execution queue for executing a compute kernel instance may include the number of threads to execute simultaneously in parallel on a compute processor and the number of compute processors to use. A compute kernel execution instance may include a priority value indicating a desired priority for executing the corresponding compute kernel object. A compute kernel execution instance may also include an event object identifying a previous execution instance and/or expected numbers of threads and thread blocks to perform the execution. The number of thread blocks and the number of threads may be specified in the API call. In one embodiment, an event object may indicate an execution order relationship between the execution instance that includes the event object and another execution instance identified by the event object. An execution instance including an event object may be required to be executed after the other execution instance identified by the event object finishes execution. Such an event object may be referred to as a queue_after_event_object. In one embodiment, an execution queue may include a plurality of compute kernel execution instances for executing corresponding compute kernel objects. One or more compute kernel execution instances for one compute kernel object may be scheduled for execution in an execution queue. In one embodiment, process 500 may update the execution queue in response to API requests from an application. The execution queue may be hosted by the hosting data system in which the application is running.
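The text names the ingredients of an execution instance (thread and thread block counts, processors to use, a priority, and a queue_after_event_object) without giving a concrete enqueue API, so the following sketch invents one for illustration; every identifier here is hypothetical.

```c
/* Hypothetical enqueue of a compute kernel execution instance. */
typedef struct _cu_event  *cu_event;
typedef struct _cu_kernel *cu_kernel;

typedef struct {
    unsigned num_threads;        /* threads executing in parallel      */
    unsigned num_thread_blocks;  /* thread blocks for this execution   */
    unsigned num_processors;     /* compute processors to use          */
    int      priority;           /* desired scheduling priority        */
    cu_event queue_after;        /* run only after this instance ends  */
} cu_exec_params;

/* Asynchronous: returns immediately with a compute event object that
 * identifies the queued execution instance. */
int cuEnqueueKernel(cu_kernel kernel, const cu_exec_params *params,
                    cu_event *out_event);

static int run_two_dependent_kernels(cu_kernel producer, cu_kernel consumer)
{
    cu_event first, second;
    cu_exec_params p = { 256, 16, 2, /*priority*/ 0, /*queue_after*/ 0 };

    int err = cuEnqueueKernel(producer, &p, &first);
    if (err) return err;

    p.queue_after = first;       /* consumer waits on producer's output */
    return cuEnqueueKernel(consumer, &p, &second);
}
```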
At block 513, process 500 may select a compute kernel execution instance from the execution queue for execution. In one embodiment, process 500 may select more than one compute kernel execution instance to be executed concurrently according to the corresponding logical compute devices. Process 500 may determine whether a compute kernel execution instance is selected from the execution queue based on its priority and its dependency relationships with other execution instances in the execution queue. A compute kernel execution instance may be executed by executing its corresponding compute kernel object according to an executable loaded to the compute kernel object.
At block 517, in one embodiment, process 500 may select one of the plurality of executables loaded to the compute kernel object corresponding to the selected compute kernel instance, for execution in a physical compute device associated with the logical compute device for the compute kernel object. Process 500 may select more than one executable to be executed in parallel in more than one physical compute device for one compute kernel execution instance. The selection may be based on the current execution statuses of the physical compute devices corresponding to the logical compute device associated with the selected compute kernel instance. An execution status of a physical compute device may include the number of threads running, the local memory usage level, and the processor usage level (e.g. peak number of operations per unit time), etc. In one embodiment, the selection may be based on predetermined usage levels. In another embodiment, the selection may be based on the numbers of threads and thread blocks associated with the compute kernel execution instance. Process 500 may retrieve an execution status from a physical compute device. In one embodiment, process 500 may perform the operations to select a compute kernel execution instance from the execution queue at blocks 513 and 517 asynchronously with the applications running in the hosting system.
At block 519, process 500 may check the execution status of a compute kernel execution instance scheduled for execution in the execution queue. Each execution instance may be identified by a unique compute event object. An event object may be returned to the application or the compute application library, e.g. the application 103 or the compute application library 105 of Fig. 1, which calls APIs to execute the execution instance, when the corresponding compute kernel execution instance is queued according to the compute runtime, e.g. the runtime 109 of Fig. 1. In one embodiment, process 500 may perform the execution status checking in response to API requests from an application. Process 500 may determine the completion of executing a compute kernel execution instance by querying the status of the compute event object identifying the compute kernel execution instance. Process 500 may wait until the execution of the compute kernel execution instance is complete before returning to the API call from the application. Process 500 may control processing execution instances reading from and/or writing to various streams based on event objects.
At block 521, according to one embodiment, process 500 may retrieve the results of executing a compute kernel execution instance. Subsequently, process 500 may clean up the processing resources allocated for executing the compute kernel execution instance. In one embodiment, process 500 may copy a stream memory holding the results of executing a compute kernel executable into a local memory. Process 500 may delete the variable streams or image streams allocated at block 501. Process 500 may delete the kernel event object used to detect when a compute kernel execution is completed. If each compute kernel execution instance associated with a specific compute kernel object has been completely executed, process 500 may delete the specific compute kernel object. In one embodiment, process 500 may perform the operations at block 521 based on API requests initiated by an application.
Fig. 6 is a flow diagram illustrating one embodiment of a runtime process to load an executable, including compiling a source for one or more physical compute devices determined to execute the executable. Process 600 may be performed as part of process 500 at block 507 of Fig. 5. In one embodiment, process 600 may select, at block 601 for each physical compute device associated with a logical compute device, one or more existing compute kernel executables compatible with the physical compute device. A compute kernel executable may be executed in a compatible physical compute device. The existing compute kernel executables may be obtained from an application or through a compute library, such as the compute application library 105 of Fig. 1. Each of the selected compute kernel executables may be executed by at least one physical compute device. In one embodiment, the selection may be based on the description data associated with the existing compute kernel executables.
If there are existing compute kernel objects selected, process 600 may determine, at block 603, whether any of the selected compute kernel executables is optimized for a physical compute device. The determination may be based on, for example, the version of the physical compute device. In one embodiment, process 600 may determine that an existing compute kernel executable is optimized for a physical compute device if the version of the target physical compute device in the description data matches the version of the physical compute device.
At block 605, in one embodiment, process 600 may build a new compute kernel executable optimized for a physical compute device from the corresponding compute kernel source using an online compiler, such as the compute compiler 107 of Fig. 1. Process 600 may perform the online build if none of the selected compute kernel executables is found at block 603 to be optimized for the physical compute device. In one embodiment, process 600 may perform the online build if none of the existing compute kernel executables is found at block 601 to be compatible with the physical compute device. The compute kernel source may be obtained from an application or through a compute library, such as the compute application library 105 of Fig. 1.
If the build at block 605 is successful, in one embodiment, process 600 may load the newly built compute kernel executable into the corresponding compute kernel object at block 607. Otherwise, process 600 may load the selected compute kernel executables to the kernel object at block 609. In one embodiment, process 600 may load a compute kernel executable to a compute kernel object if the compute kernel executable has not yet been loaded. In another embodiment, process 600 may generate an error message if none of the existing compute kernel executables for a compute kernel object is compatible with a physical compute device and the corresponding compute kernel source is not available.
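In outline, the Fig. 6 decision flow for a single physical compute device reduces to the sketch below; all helper functions are hypothetical stand-ins for the decisions at blocks 601 through 609, and the selection is simplified to one candidate executable.

```c
/* Sketch of the Fig. 6 load path for one physical compute device. */
typedef struct device device;
typedef struct kernel_object kernel_object;
typedef struct executable executable;

executable *find_compatible_executable(const device *dev);          /* 601 */
int  is_optimized_for(const executable *e, const device *dev);      /* 603 */
executable *online_compile(const char *src, const device *dev);     /* 605 */
void load_executable(kernel_object *k, executable *e);          /* 607/609 */

int load_for_device(kernel_object *k, const device *dev,
                    const char *kernel_source)
{
    executable *e = find_compatible_executable(dev);

    if (e && is_optimized_for(e, dev)) {
        load_executable(k, e);             /* best case: reuse existing */
        return 0;
    }
    if (kernel_source) {
        executable *built = online_compile(kernel_source, dev);
        if (built) {
            load_executable(k, built);     /* block 607: new optimized  */
            return 0;
        }
    }
    if (e) {
        load_executable(k, e);             /* block 609: compatible but */
        return 0;                          /* not optimized             */
    }
    return -1;  /* no compatible executable, no source: error condition */
}
```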
Fig. 7 is a flow diagram illustrating one embodiment of a process to select a compute kernel execution instance from an execution queue to execute in one or more physical compute devices corresponding to the logical compute device associated with the execution instance. Process 700 may be performed as part of process 500 at block 513 of Fig. 5. In one embodiment, process 700 may identify, at block 701, dependency conditions among the compute kernel execution instances currently scheduled in an execution queue. A dependency condition of a compute kernel execution instance may prevent execution of the compute kernel execution instance if the condition is outstanding. In one embodiment, a dependency may be based on relationships between input streams fed by output streams. In one embodiment, process 700 may detect dependency conditions among execution instances according to the input streams and output streams of the corresponding functions of the execution instances. In another embodiment, an execution instance with a lower priority may have a dependency relationship with another execution instance with a higher priority.
At block 703, in one embodiment, process 700 may select, from the multiple scheduled compute kernel execution instances, a compute kernel execution instance without any outstanding dependency conditions for execution. The selection may be based on the priority levels assigned to the execution instances. In one embodiment, the selected compute kernel execution instance may be associated with the highest priority among the multiple compute kernel execution instances without outstanding dependency conditions. At block 705, process 700 may retrieve the current execution statuses of the physical compute devices corresponding to the selected compute kernel execution instance. In one embodiment, the execution status of a physical compute device may be retrieved from a predetermined memory location. In another embodiment, process 700 may send a status request to a physical compute device to receive an execution status report. Process 700 may designate, at block 707, one or more of the physical compute devices to execute the selected compute kernel execution instance based on the retrieved execution statuses. In one embodiment, a physical compute device may be designated for execution according to load balancing considerations with the other physical compute devices. A selected physical compute device may be associated with an execution status satisfying a predetermined criterion, e.g. below predetermined processor usage and/or memory usage levels. In one embodiment, the predetermined criterion may depend on the numbers of threads and thread blocks associated with the selected compute kernel execution instance. Process 700 may load separate compute kernel executables for the same execution instance, or for multiple instances, to one or more designated physical compute devices to execute in parallel in multiple threads.
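A simplified, self-contained sketch of this selection policy: pick the highest-priority instance whose dependency has completed (block 703), then designate the least-loaded device (block 707). The structures and the load metric are illustrative assumptions, not the patent's scheduler.

```c
typedef struct {
    int priority;        /* higher value = higher priority               */
    int dep_done;        /* 1 when the queue_after instance has finished */
} exec_instance;

typedef struct {
    int threads_running; /* one element of the device's execution status */
} device_status;

/* Block 703: highest-priority instance with no outstanding dependency. */
static int pick_instance(const exec_instance *q, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (q[i].dep_done &&
            (best < 0 || q[i].priority > q[best].priority))
            best = i;
    return best;         /* -1 if every scheduled instance is blocked */
}

/* Block 707: designate the least-loaded physical compute device. */
static int pick_device(const device_status *devs, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (devs[i].threads_running < devs[best].threads_running)
            best = i;
    return best;
}
```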
Fig. 8A is a flow diagram illustrating one embodiment of a process to build an API (application programming interface) library storing a source and a plurality of executables for one or more APIs in the library according to a plurality of physical compute devices. Process 800A may be performed offline at block 801 to load the source code of an API function into a data processing system. The source code may be a compute kernel source to be executed in one or more physical compute devices. In one embodiment, process 800A may designate, at block 803, a plurality of target physical compute devices for the API function. A target physical compute device may be designated according to type, e.g. CPU or GPU, version, or vendor. Process 800A may compile the source code into an executable, e.g. a compute kernel executable, for each designated target physical compute device at block 805. In one embodiment, process 800A may perform the offline compilation based on an online compiler, such as the compute compiler 107 of Fig. 1. At block 807, process 800A may store the source code of the API function, together with the corresponding executables compiled for the designated target physical compute devices, into an API library. Each executable may be stored with description data including, for example, the type, version, and vendor of the target physical compute device and/or compilation options. The description data may be retrieved by a process at runtime, e.g. process 500 of Fig. 5.
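One way to picture the library layout built by blocks 801 through 807 is one record per API function, pairing the source with per-target executables and their description data. The structs below are a hypothetical illustration, not the patent's storage format.

```c
#include <stddef.h>

/* Hypothetical record layout for one API function in the library. */
typedef enum { TARGET_CPU, TARGET_GPU } target_type;

typedef struct {
    target_type type;            /* CPU or GPU                        */
    int         version;         /* target device version             */
    const char *vendor;          /* target device vendor              */
    const char *compile_opts;    /* compilation options used          */
    const void *binary;          /* offline-compiled executable image */
    size_t      binary_size;
} stored_executable;

typedef struct {
    const char              *api_name;      /* e.g. an image filter    */
    const char              *kernel_source; /* kept for online builds  */
    const stored_executable *executables;   /* one per target device   */
    size_t                   num_executables;
} api_library_entry;
```

At retrieval time (Fig. 8B below), the description data fields are what allow the runtime to match a stored executable against an attached device, or to fall back to online compiling the kernel_source for a device no stored executable covers.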
Fig. 8B is a flow diagram illustrating one embodiment of a process for an application to execute one of a plurality of executables together with a corresponding source retrieved from an API library based on an API request. In one embodiment, process 800B runs an application, e.g. the application 103 of Fig. 1, in a data processing system including an API library, such as the compute application library 105 of Fig. 1 (e.g., in the hosting system 101 of Fig. 1). At block 811, process 800B may retrieve a source, e.g. a compute kernel source, and one or more corresponding executables, e.g. compute kernel executables, from the API library based on API requests, e.g. process 500 at block 507 of Fig. 5. Each executable may be associated with one or more target physical compute devices. In one embodiment, a compute kernel executable may be backward compatible with multiple versions of physical compute devices. At block 813, process 800B may execute one of the retrieved executables in a plurality of physical compute devices to perform the associated API function based on an API request, e.g. process 500 at block 517 of Fig. 5. Process 800B may run the application at block 809 asynchronously with performing the API function at block 813.
Fig. 9 is sample source code illustrating an example of a compute kernel source for a compute kernel executable to be executed in a plurality of physical compute devices. Example 900 may represent an API function with arguments including variables 901 and streams 903. Example 900 may be based on a programming language for a parallel computing environment, such as the system 101 of Fig. 1. In one embodiment, the parallel programming language may be specified according to the ANSI (American National Standards Institute) C standard with additional extensions and restrictions designed to implement one or more of the embodiments described herein. The extensions may include a function qualifier, such as qualifier 905, to specify a compute kernel function to be executed in a compute device. A compute kernel function may not be called by other compute kernel functions. In one embodiment, a compute kernel function may be called by a host function in the parallel programming language. A host function may be a regular ANSI C function. A host function may be executed in a host processor separate from the compute device executing a compute kernel function. In one embodiment, the extensions may include a local qualifier to describe variables that need to be allocated in a local memory associated with a compute device to be shared by all threads of a thread block. The local qualifier may be declared inside a compute kernel function. Restrictions of the parallel programming language may be enforced during compile time or runtime to generate error conditions, such as outputting error messages or exiting an execution, when the restrictions are violated.
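Figure 9 itself is not reproduced in this text, but a compute kernel in the extended ANSI C described above might look like the following sketch. The __kernel and __local spellings of the function and local qualifiers, and the function itself, are assumptions for illustration.

```c
/* Hypothetical compute kernel in the ANSI C-based parallel language.
 * "__kernel" stands in for the function qualifier (cf. 905) marking a
 * compute kernel function; "__local" stands in for the local qualifier. */
__kernel void scale_stream(float scale,    /* variable argument, cf. 901 */
                           float *input,   /* input stream,  cf. 903     */
                           float *output,  /* output stream, cf. 903     */
                           int count)
{
    /* Allocated in the compute device's local memory and shared by all
     * threads of the thread block; declared inside the kernel, as the
     * local qualifier requires. */
    __local float tile[64];

    for (int i = 0; i < count; i++) {
        if (i < 64)
            tile[i] = input[i];  /* stage a portion through local memory */
        output[i] = scale * input[i];
    }
}
```

A host function, being ordinary ANSI C, would queue this kernel through the runtime rather than call scale_stream directly, since compute kernel functions cannot be called by other compute kernel functions.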
Fig. 10 illustrates a sample of source code for an example of a host program which calls APIs to configure a logical compute device for executing one of a plurality of executables in a plurality of physical compute devices. Example 1000 may be executed by an application running in a host system attached with a plurality of physical compute devices (e.g., hosting systems 101 of Fig. 1). Example 1000 may specify a host function of a parallel programming language. Processing operations in example 1000 may be performed as API calls by a process such as process 500 of Fig. 5. The processing operations of allocating streams 1001 and loading stream image 1003 may be performed by process 500 at block 501 of Fig. 5. The processing operation of creating a compute kernel object 1005 may be performed by process 500 at block 503 of Fig. 5. Processing operation 1007 may load a compute kernel source, such as example 900 of Fig. 9, into the created compute kernel object. Processing operation 1009 may explicitly build a compute kernel executable from the loaded compute kernel source. In one embodiment, processing operation 1009 may load the built compute kernel executable into the created compute kernel object. Subsequently, processing operation 1011 may explicitly select the built compute kernel executable for executing the created compute kernel object. (A consolidated sketch of operations 1001 through 1021 is given after the next paragraph.)
In one embodiment, processing operation 1013 may attach variables and streams as function arguments for the created compute kernel object. Processing operation 1013 may be performed by process 500 at block 505 of Fig. 5. Processing operation 1015 may execute the created compute kernel object. In one embodiment, processing operation 1015 may be performed by process 500 at block 511 of Fig. 5. Processing operation 1015 may update an execution queue with a compute kernel execution instance corresponding to the created compute kernel object. Processing operation 1017 may synchronously wait for completion of the execution of the created compute kernel object. In one embodiment, processing operation 1019 may retrieve results from the execution of the compute kernel object. Subsequently, processing operation 1021 may clean up resources allocated for executing the compute kernel object, such as an event object, the created compute kernel object and the allocated memories. In one embodiment, processing operation 1017 may be based on whether a kernel event object is set. Processing operation 1017 may be performed by process 500 at block 519 of Fig. 5.
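The host-side sketch below mirrors the sequence of operations 1001 through 1021. Every cu_-prefixed name is an invented placeholder standing in for the patent's host API; none of these calls belong to a real library, and error handling is omitted for brevity.

```c
#include "cu_api.h"   /* hypothetical header declaring cu_* types and calls */

/* Hypothetical host function following Fig. 10 (operations noted inline). */
void run_scale_array(const char *kernel_source,
                     const float *host_in, float *host_out, size_t n)
{
    size_t bytes = n * sizeof(float);

    cu_stream in  = cu_allocate_stream(bytes);             /* op 1001 */
    cu_stream out = cu_allocate_stream(bytes);
    cu_load_stream_image(in, host_in, bytes);              /* op 1003 */

    cu_kernel k = cu_create_kernel_object("scale_array");  /* op 1005 */
    cu_load_kernel_source(k, kernel_source);               /* op 1007 */
    cu_build_executable(k);                                /* op 1009 */
    cu_select_executable(k, CU_DEVICE_GPU);                /* op 1011 */

    cu_set_args(k, in, out, 2.0f, (int)n);                 /* op 1013 */
    cu_event e = cu_execute(k);   /* op 1015: queue an execution instance */
    cu_wait(e);                   /* op 1017: wait on the kernel event    */
    cu_read_stream(out, host_out, bytes);                  /* op 1019 */

    cu_release_event(e);                                   /* op 1021 */
    cu_release_kernel(k);
    cu_free_stream(in);
    cu_free_stream(out);
}
```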
Fig. 11 illustrates one example of a computer system that may be used with one embodiment of the present invention. For example, system 1100 may be implemented as part of the systems shown in Fig. 1. Note that while Fig. 11 illustrates various components of a computer system, it is not intended to represent any particular architecture or manner of interconnecting the components, as such details are not germane to the present invention. It will also be appreciated that network computers and other data handling systems which have fewer components or perhaps more components (e.g., handheld computers, personal digital assistants (PDAs), cellular telephones, entertainment systems, consumer electronic devices, etc.) may also be used to implement one or more embodiments of the present invention.
As shown in Fig. 11, the computer system 1101, which is a form of a data handling system, includes a bus 1103 which is coupled to microprocessor(s) 1105, such as CPUs and/or GPUs, ROM (read-only memory) 1107, volatile RAM 1109 and non-volatile memory 1111. The microprocessor 1105 may retrieve instructions from the memories 1107, 1109, 1111 and execute the instructions to perform the operations described above. The bus 1103 interconnects these various components together and also interconnects the components 1105, 1107, 1109 and 1111 to a display controller and display device 1113 and to peripheral devices such as input/output (I/O) devices, which may be mice, keyboards, modems, network interfaces, printers and other devices well known in the art. Typically, the input/output devices 1115 are coupled to the system through input/output controllers 1117. The volatile RAM (random access memory) 1109 is typically implemented as dynamic RAM (DRAM), which requires power continually in order to refresh or maintain the data in the memory. The display controller coupled with a display device 1108 may optionally include one or more GPUs to process display data. Optionally, GPU memory 1111 may be provided to support GPUs included in the display device 1108.
The mass storage 1111 is typically a magnetic hard drive, a magneto-optical drive, an optical drive, a DVD RAM, a flash memory or another type of memory system which maintains data (e.g., large amounts of data) even after power is removed from the system. Typically, the mass storage 1111 will also be a random access memory, although this is not required. While Fig. 11 shows the mass storage 1111 as a local device coupled directly to the rest of the components in the data handling system, it will be appreciated that the present invention may utilize a non-volatile memory which is remote from the system, such as a network storage device coupled to the data handling system through a network interface such as a modem, an Ethernet interface or a wireless networking interface. The bus 1103 may include one or more buses connected to each other through various bridges, controllers and/or adapters as is well known in the art.
Portions of what was described above may be implemented with logic circuitry, such as a dedicated logic circuit, or with a microcontroller or another form of processing core that executes program code instructions. Thus, processes taught by the discussion above may be performed with program code, such as machine-executable instructions, that causes a machine executing these instructions to perform certain functions. In this context, a "machine" may be a machine that converts intermediate form (or "abstract") instructions into processor-specific instructions (e.g., an abstract execution environment such as a "virtual machine" (e.g., a Java Virtual Machine), an interpreter, a Common Language Runtime, a high-level language virtual machine, etc.), and/or electronic circuitry disposed on a semiconductor chip (e.g., "logic circuitry" implemented with transistors) designed to execute instructions, such as a special-purpose processor and/or a general-purpose processor. Processes taught by the discussion above may also be performed (in the alternative to a machine or in combination with a machine) by electronic circuitry designed to perform the processes (or a portion thereof) without the execution of program code.
An article of manufacture may be used to store program code. An article of manufacture that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic or other)), optical disks, CD-ROMs, DVD-ROMs, EPROMs, EEPROMs, magnetic or optical cards or another type of machine-readable media suitable for storing electronic instructions. Program code may also be downloaded from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a propagation medium (e.g., via a communication link such as a network connection).
The preceding detailed descriptions were presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the tools used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like.
It should be kept in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, or as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like refer to the actions and processes of a computer system, or a similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system's memories or registers or other such information storage, transmission or display devices.
The present invention also relates to an apparatus for performing the operations described herein. The apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk (including floppy disks, optical disks, CD-ROMs and magneto-optical disks), read-only memories (ROMs), RAMs, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the described operations. The required structure for a variety of such systems will be evident from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
The foregoing discussion merely describes some exemplary embodiments of the present invention. One skilled in the art will readily recognize from such discussion, the accompanying drawings and the claims that various modifications can be made without departing from the spirit and scope of the invention.

Claims (22)

1. A computer implemented method, comprising:
during runtime of an application program in a first processing unit, loading, in response to a second API request received from the application program, one or more executables for a data processing task of the application program, wherein the one or more executables are compatible with a second processing unit identified by a compute device identifier specified by the application program in the second API request, the compute device identifier being associated with a processing unit matching one or more requirements previously specified by the application program via a first API request during the runtime; and
in response to a third API request received from the application program during the runtime, selecting one of the one or more executables for the second processing unit.
2. The computer implemented method of claim 1, wherein the first processing unit and the second processing unit are central processing units (CPUs) or graphics processing units (GPUs).
3. The computer implemented method of claim 1, wherein the selected one of the one or more executables is associated with the third API request.
4. The computer implemented method of claim 1, wherein the one or more executables include description data for at least one of the one or more executables, the description data including a type and a version of a supported processing unit.
5. The computer implemented method of claim 1, wherein the one or more executables include a source which is compiled to generate the one or more executables.
6. The computer implemented method of claim 5, wherein the source is loaded from the application program via the second API.
7. The computer implemented method of claim 5, wherein the source is loaded from a library associated with the one or more executables.
8. The computer implemented method of claim 5, wherein the loading comprises:
comparing the description data with information of the second processing unit; and
compiling online, from the source, the one of the one or more executables for the second processing unit.
9. The computer implemented method of claim 8, wherein the one of the one or more executables is associated with the second API request.
10. The computer implemented method of claim 8, wherein the compiling is based on the comparison indicating that at least one of the one or more executables is not optimal for the second processing unit.
11. The computer implemented method of claim 8, wherein the compiling is based on the comparison indicating that at least one of the one or more executables does not support the second processing unit.
12. The computer implemented method of claim 8, wherein the compiling comprises:
generating updated description data for the one of the one or more executables; and
storing the one of the one or more executables, the one of the one or more executables including the updated description data.
13. The computer implemented method of claim 12, wherein the one of the one or more executables is stored to replace at least one of the one or more executables.
14. The computer implemented method of claim 1, wherein the one or more executables include description data for the one or more executables, and wherein the selecting is based on the description data.
15. The computer implemented method of claim 14, wherein the selected one of the one or more executables is associated, based on the description data, with the latest version of the one or more executables for the second processing unit.
16. The computer implemented method of claim 14, wherein the selected one of the one or more executables is associated based on an execution order relationship indicated by the description data.
17. A computer implemented method, comprising:
generating, by an application program during runtime in a first processing unit, a first API request specifying one or more requirements of a second processing unit;
generating, by the application program during the runtime, a second API request to load one or more executables for a data processing task of the application program, wherein the one or more executables are compatible with the second processing unit identified by a compute device identifier specified by the application program in the second API request, the compute device identifier being associated with a processing unit matching the one or more requirements previously specified by the application program in the first API request; and
generating, by the application program during the runtime, a third API request to select, from the one or more executables, an executable to be executed in the second processing unit.
18. The computer implemented method of claim 17, wherein the first processing unit and the second processing unit are central processing units (CPUs) or graphics processing units (GPUs).
19. The computer implemented method of claim 17, wherein the second API request is associated with a source from which the one or more executables are compiled.
20. The computer implemented method of claim 19, wherein the selected executable is compiled offline from the source.
21. A data handling system, comprising:
means for loading, during runtime of an application program in a first processing unit and in response to a second API request received from the application program, one or more executables for a data processing task of the application program, wherein the one or more executables are compatible with a second processing unit identified by a compute device identifier specified by the application program in the second API request, the compute device identifier being associated with a processing unit matching one or more requirements previously specified by the application program via a first API request during the runtime; and
means for selecting, in response to a third API request received from the application program during the runtime, one of the one or more executables for the second processing unit.
22. A data handling system, comprising:
means for generating, by an application program during runtime in a first processing unit, a first API request specifying one or more requirements of a second processing unit;
means for generating, by the application program during the runtime, a second API request to load one or more executables for a data processing task of the application program, wherein the one or more executables are compatible with the second processing unit identified by a compute device identifier specified by the application program in the second API request, the compute device identifier being associated with a processing unit matching the one or more requirements previously specified by the application program in the first API request; and
means for generating, by the application program during the runtime, a third API request to select, from the one or more executables, an executable to be executed in the second processing unit.
CN201410187203.6A 2007-04-11 2008-04-09 Parallel runtime execution on multiple processors Active CN103927150B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US92303007P 2007-04-11 2007-04-11
US60/923,030 2007-04-11
US92562007P 2007-04-20 2007-04-20
US60/925,620 2007-04-20
US11/800,319 US8286196B2 (en) 2007-05-03 2007-05-03 Parallel runtime execution on multiple processors
US11/800,319 2007-05-03
CN200880011684.8A CN101802789B (en) 2007-04-11 2008-04-09 Parallel runtime execution on multiple processors

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN200880011684.8A Division CN101802789B (en) 2007-04-11 2008-04-09 Parallel runtime execution on multiple processors

Publications (2)

Publication Number Publication Date
CN103927150A true CN103927150A (en) 2014-07-16
CN103927150B CN103927150B (en) 2016-09-07

Family

ID=51145382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410187203.6A Active CN103927150B (en) Parallel runtime execution on multiple processors

Country Status (1)

Country Link
CN (1) CN103927150B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5301324A (en) * 1992-11-19 1994-04-05 International Business Machines Corp. Method and apparatus for dynamic work reassignment among asymmetric, coupled processors
WO1998019238A1 (en) * 1996-10-28 1998-05-07 Unisys Corporation Heterogeneous symmetric multi-processing system
WO2006055342A2 (en) * 2004-11-19 2006-05-26 Motorola, Inc. Energy efficient inter-processor management method and system
US20060143615A1 (en) * 2004-12-28 2006-06-29 Seiko Epson Corporation Multimedia processing system and multimedia processing method
CN1877490A (en) * 2006-07-04 2006-12-13 浙江大学 Method for saving energy by optimizing running frequency through combination of static compiler and dynamic frequency modulation techniques

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110324381A (zh) * 2018-03-30 2019-10-11 北京忆芯科技有限公司 KV storage device in cloud computing and fog computing systems

Also Published As

Publication number Publication date
CN103927150B (en) 2016-09-07

Similar Documents

Publication Publication Date Title
CN101802789B (en) Parallel runtime execution on multiple processors
CN101657795B (en) Data parallel computing on multiple processors
US20200285521A1 (en) Application interface on multiple processors
US10552226B2 (en) Data parallel computing on multiple processors
CN102870096B (zh) Subbuffer objects
US8108633B2 (en) Shared stream memory on multiple processors
CN104823215A (en) Sprite graphics rendering system
CN103838669A (en) System, method, and computer program product for debugging graphics programs locally
KR20090061177A (en) Multi-threading framework supporting dynamic load-balancing and multi-thread processing method using by it
CN103870242A (en) System, method, and computer program product for optimizing the management of thread stack memory
US20150145871A1 (en) System, method, and computer program product to enable the yielding of threads in a graphics processing unit to transfer control to a host processor
CN103927150A (en) Parallel Runtime Execution On Multiple Processors
US20200264781A1 (en) Location aware memory with variable latency for accelerating serialized algorithm
US11836506B2 (en) Parallel runtime execution on multiple processors
AU2018226440A1 (en) Data parallel computing on multiple processors

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant