CN105930210A - Method and device for calling MPI function - Google Patents

Method and device for calling MPI function

Info

Publication number
CN105930210A
CN105930210A (application CN201610229179.7A)
Authority
CN
China
Prior art keywords
mpi
function
call request
request
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610229179.7A
Other languages
Chinese (zh)
Other versions
CN105930210B (en)
Inventor
何锐邦
唐会军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd, Qizhi Software Beijing Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201610229179.7A priority Critical patent/CN105930210B/en
Publication of CN105930210A publication Critical patent/CN105930210A/en
Application granted granted Critical
Publication of CN105930210B publication Critical patent/CN105930210B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/546: Message passing systems or structures, e.g. queues

Abstract

The application provides a method and device for calling MPI functions, and relates to the field of testing technology. The device comprises a sorting-operation module, a call-execution module and a first preset module. The sorting-operation module performs a unified sorting operation on all MPI call requests when a computation server executes a task; the call-execution module calls, one by one and in the sorted order, the MPI functions corresponding to the MPI call requests; the first preset module constructs a queue and, for each MPI function in the OpenMPI library, a corresponding wrapper function. When the computation server executes a task and threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library, the queue and the wrapper functions are used to perform the unified sorting operation on all MPI call requests. The beneficial effect of the method and device is that MPI functions in the MPI library can be called from multiple threads, so that in a cloud computing environment the fine-grained, highly parallel advantages of cloud computing can be fully exploited.

Description

MPI function calling method and device
Technical field
The application relates to the field of testing technology, and in particular to an MPI function calling method and device.
Background art
Cloud computing (a computing paradigm that provides dynamically scalable virtualized resources as a service over a computer network) distinguishes two kinds of parallel programs: those without inter-process communication and those with it. For a parallel program without communication, each process of a computation (commonly called a job), i.e. each subtask, can be started at any time; it need not start simultaneously with the other subtasks, because the subtasks do not communicate with one another, and each subtask can simply terminate once it has finished its own computation. A cloud computing platform scheduler can therefore schedule different subtasks at different times; the scheduling mode that handles this requirement is called task-level scheduling. For a parallel program with communication, the subtasks of a computation (i.e. a job) must communicate with one another, so the computation cannot begin until all subtasks have started. That is, the cloud computing platform must hold enough resources to start all subtasks of the job before the job can be launched; otherwise it must wait for other running computations to finish and release sufficient resources before the job can start. The scheduling mode that handles this requirement is called job-level scheduling.
MPI (Message Passing Interface; a programmatic message-passing interface, with function libraries in multiple languages implementing the same set of interfaces) is one of the message-passing mechanisms commonly used on cloud computing clusters; cloud computing programs can use MPI to send and receive data. The MPI standard defines a set of functions that let an application pass a message from one MPI process to another.
For parallel programs with communication, the prior art generally uses OpenMPI for communication between the processes on the compute nodes. OpenMPI is a high-performance message-passing library. It began as a merger of technology and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) and is an open-source implementation of the MPI-2 standard, developed and maintained by a number of research institutions together with companies. OpenMPI can therefore draw on professional and industrial expertise and resources from the high-performance computing community to build the best possible MPI library. It offers considerable convenience to system and software vendors, application developers, and researchers: it is easy to use and runs on a variety of operating systems, network interconnects, and batch/scheduling systems.
At present, however, OpenMPI supports only single-threaded calls, because its design did not anticipate use in multi-threaded environments: some of its key data structures are not protected against concurrent access. If different threads call OpenMPI functions simultaneously, these key data structures can be corrupted, causing errors inside OpenMPI and making the program behave abnormally.
In the prior art, a program that uses OpenMPI must therefore either run each compute node in single-threaded mode, or follow a strict phase-by-phase pattern. In the phase-by-phase pattern, the program's life cycle is divided into several phases, each doing different work. For example, it may alternate between a computation phase and a network send/receive phase, with no overlap between the two: during the computation phase no network calls are made and all computation is completed (possibly with multiple threads); during the network send/receive phase, all computational threads block and wait while a single network send/receive thread calls the OpenMPI functions, sending the just-computed results to other machines and receiving their results from other machines. The program then enters the next round of alternating computation and network traffic.
Of these two modes, single-threaded mode fails to exploit the compute node's ability to process with multiple threads;
and the strict phase-by-phase pattern is only suited to traditional parallel computing environments. In a cloud computing environment it has the following drawbacks:
1. In a cloud computing environment it is usually necessary to partition the data at a finer granularity using multiple threads, and to parallelize the computation as much as possible, so that computation and network traffic can proceed simultaneously and program performance improves. Under the strict phase-by-phase pattern, computation and network traffic cannot overlap, so the fine-grained, highly parallel advantages of cloud computing cannot be fully realized.
2. The programmer must constantly remember at which points in a multi-threaded program OpenMPI functions may be used. For a cloud computing programmer this is error-prone and can lead to subtle bugs.
Summary of the invention
In view of the above problems, the present invention is proposed to provide an MPI function call device, and a corresponding MPI function calling method, that overcome the above problems or at least partially solve them.
According to one aspect of the present invention, an MPI function calling method is provided, comprising:
for a computation server in a cloud computing environment, when the computation server executes a task, if threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library, performing a unified sorting operation on the MPI call requests;
calling one by one, in the order resulting from the sorting operation, the MPI function corresponding to each MPI call request;
constructing a queue for the OpenMPI library, and constructing a corresponding wrapper function for each MPI function in the OpenMPI library; the queue and the wrapper functions are used so that, when the computation server executes a task, if threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library, a unified sorting operation is performed on the MPI call requests, wherein each wrapper function turns an MPI call request for its wrapped MPI function into a first request and puts it into the aforementioned queue.
Optionally, performing a unified sorting operation on the MPI call requests when threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library comprises:
for each MPI call request that a thread sends for an MPI function, intercepting the MPI call request by the wrapper function corresponding to that MPI function, turning it into a first request, and putting the first request into the queue.
Optionally, intercepting the MPI call request by the wrapper function corresponding to the MPI function, turning it into a first request, and putting the first request into the queue comprises:
intercepting the MPI call request with the MPI wrapper function;
packaging the parameters of the MPI call request together with the corresponding MPI function into a structure;
putting the structure into the queue as the first request.
Optionally, the method further comprises:
constructing a send/receive thread whose life cycle is equal to or longer than the life cycle of the task; the send/receive thread calls, one by one and in the order resulting from the sorting operation, the MPI function corresponding to each MPI call request.
Optionally, calling one by one, in the order resulting from the sorting operation, the MPI function corresponding to each MPI call request comprises:
reading MPI call requests sequentially from the queue by the send/receive thread, and calling the corresponding MPI function according to the request content.
Optionally, the parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.
According to another aspect of the present invention, an MPI function call device is provided, comprising:
a sorting-operation module, adapted to, for a computation server in a cloud computing environment, perform a unified sorting operation on the MPI call requests when the computation server executes a task and threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library;
a call-execution module, adapted to call one by one, in the order resulting from the sorting operation, the MPI function corresponding to each MPI call request;
a first preset module, adapted to construct a queue for the OpenMPI library and a corresponding wrapper function for each MPI function in the OpenMPI library; the queue and the wrapper functions are used so that, when the computation server executes a task, if threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library, a unified sorting operation is performed on the MPI call requests, wherein each wrapper function turns an MPI call request for its wrapped MPI function into a first request and puts it into the aforementioned queue.
Optionally, the sorting-operation module comprises:
a first sorting-operation module, adapted so that, for each MPI call request a thread sends for an MPI function, the wrapper function corresponding to that MPI function intercepts the MPI call request, turns it into a first request, and puts the first request into the queue.
Optionally, the first sorting-operation module comprises:
an interception module, adapted to intercept the MPI call request with the MPI wrapper function;
a structure-packaging module, adapted to package the parameters of the MPI call request together with the corresponding MPI function into a structure;
a structure-placement module, adapted to put the structure into the queue as the first request.
Optionally, the device further comprises:
a second preset module, adapted to construct a send/receive thread whose life cycle is equal to or longer than the life cycle of the task; the send/receive thread calls, one by one and in the order resulting from the sorting operation, the MPI function corresponding to each MPI call request.
Optionally, the call-execution module comprises:
a first call-execution module, adapted to read MPI call requests sequentially from the queue by the send/receive thread, and call the corresponding MPI function according to the request content.
Optionally, the parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.
The MPI function calling method according to the present invention performs a unified sorting operation on the MPI call requests of the individual threads in a multi-threaded program. It thereby solves the prior art problem that, because OpenMPI supports only a single-threaded calling mechanism, multiple threads calling MPI functions in the OpenMPI library in a multi-threaded application environment corrupt the key data structures of the MPI functions, causing errors inside OpenMPI and abnormal program behavior, so that the fine-grained, highly parallel advantages of cloud computing cannot be fully realized. The beneficial effect achieved is that multiple threads can call the MPI library and, in a cloud computing environment, the fine-grained, highly parallel advantages of cloud computing can be fully exploited.
Brief description of the drawings
By reading the detailed description of the preferred embodiments below, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings serve only to illustrate the preferred embodiments and are not to be considered limiting of the present invention. Throughout the drawings, identical reference numerals denote identical parts. In the drawings:
Fig. 1 shows a schematic flow chart of embodiment one of an MPI function calling method according to the present invention;
Fig. 2 shows a schematic flow chart of embodiment two of an MPI function calling method according to the present invention;
Fig. 3 shows a schematic structural diagram of embodiment one of an MPI function call device according to the present invention; and
Fig. 4 shows a schematic structural diagram of embodiment two of an MPI function call device according to the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the disclosure will be understood more thoroughly and its full scope can be conveyed to those skilled in the art.
With reference to Fig. 1, which shows a schematic flow chart of embodiment one of an MPI function calling method according to the present invention, the method may specifically comprise:
Step 110: for a computation server in a cloud computing environment, when the computation server executes a task, if threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library, perform a unified sorting operation on the MPI call requests.
The MPI functions in the prior art OpenMPI library support only single-threaded calls, because the design did not anticipate multi-threaded environments and some key data structures are not protected against concurrent access. For example, the MPI functions store their running parameters in a shared memory array; if multiple threads request multiple MPI functions at the same time, their parameters are written into this shared memory array simultaneously, which can conflict and corrupt the key data structures. In the embodiment of the present invention, for each computation server in the cloud that uses multiple threads to execute a job (i.e. a task) with communication, whenever threads send MPI call requests to call MPI functions in the OpenMPI library, those MPI call requests are put through a sorting operation. For example, a synchronization locking mechanism can be applied to the MPI call requests sent by the threads so that, when multiple requests attempt to call functions in the OpenMPI library simultaneously, only one thread at a time is allowed to write its parameters into the shared key data structure (the memory array).
Step 120: in the order resulting from the sorting operation, call one by one the MPI function corresponding to each MPI call request.
After the MPI call requests have been put through the unified sorting operation, each MPI call request has a definite position in the sequence, and the sorted MPI call requests can then be executed strictly one by one. In this way, the synchronized working mode of multiple threads calling MPI functions in the OpenMPI library conforms to the single-threaded calling mechanism of the OpenMPI library.
With reference to Fig. 2, which shows a schematic flow chart of preferred embodiment two of an MPI function calling method according to the present invention, the method may specifically comprise:
Step 200: construct a queue for the OpenMPI library, and construct a corresponding wrapper function for each MPI function in the OpenMPI library. The queue and the wrapper functions are used so that, when the computation server executes a task, if threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library, a unified sorting operation is performed on the MPI call requests.
In the embodiment of the present invention, because the MPI functions in the OpenMPI library share key data structures for actions such as storing parameters, and in order to prevent the situation where threads call MPI functions concurrently, a single queue is built for the whole OpenMPI library. So that the multiple MPI call requests which multiple threads may send for any MPI function are all placed into this queue, a wrapper function is added for each MPI function; the wrapper function turns an MPI call request for its wrapped MPI function into a first request and puts it into the aforementioned queue.
The parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.
Specifically, each OpenMPI function is given a wrapper function. For example, the MPI_Send function is given a wrapper named MPI_ThreadSend, whose parameter list is kept identical to that of MPI_Send. The user does not call MPI_Send directly, but uses OpenMPI's send functionality through MPI_ThreadSend. MPI_ThreadSend turns the call request for the corresponding MPI_Send function into a first request and puts it into the queue.
Step 210: construct a send/receive thread whose life cycle is equal to or longer than the life cycle of the task. The send/receive thread calls, one by one and in the order resulting from the sorting operation, the MPI function corresponding to each MPI call request.
In the embodiment of the present invention, to process the first requests in the aforementioned queue, a send/receive thread is created whose life cycle is greater than or equal to the life cycle of the task (i.e. of the aforementioned job), dedicated to reading the first requests in the queue. This thread reads the requests one by one from the queue in order and calls the corresponding OpenMPI function according to the request content. For example, for a first request for MPI_Send placed into the queue as described above, when the send/receive thread reads an MPI_Send request from the queue, it directly invokes the MPI_Send function contained in that first request.
In the embodiment of the present invention, the optimal life cycle of the send/receive thread is equal to the life cycle of the task.
Thus, with a queue built for the OpenMPI library and a corresponding wrapper function built for each MPI function in the OpenMPI library as described above, performing a unified sorting operation on the MPI call requests, when threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library, proceeds as follows:
Step 220: for each MPI call request that a thread sends for an MPI function, the wrapper function corresponding to that MPI function intercepts the MPI call request, turns it into a first request, and puts the first request into the queue.
As stated above, the function of the wrapper for each MPI function is to turn an MPI call request for the wrapped MPI function into a first request and put it into the aforementioned queue. In this step, when a thread sends an MPI call request asking to call some MPI function, the wrapper function for that MPI function intercepts the MPI call request, turns it into a first request, and puts the first request into the queue.
Further, intercepting, for each MPI call request that a thread sends for an MPI function, the MPI call request by the wrapper function corresponding to that MPI function, turning it into a first request, and putting the first request into the queue comprises:
Step S221: intercept the MPI call request with the MPI wrapper function.
As stated above, the parameter list of the MPI wrapper function for each MPI function is kept identical to that of the actual MPI function. A call request that a thread sends for an MPI function is thus intercepted by that function's wrapper; the thread does not call the actual MPI function directly, and therefore never directly uses the key data structures shared by the MPI functions.
For example, for the aforementioned call request for MPI_Send, the wrapper function MPI_ThreadSend intercepts the call request for MPI_Send.
Step S222: package the parameters of the MPI call request together with the corresponding MPI function into a structure.
The wrapper function extracts the parameters from the MPI call request and packages them, together with the corresponding MPI function, into a structure; this structure serves as the first request.
Step S223: put the structure into the queue as the first request.
The structure serving as the first request is put into the queue.
With the send/receive thread constructed as described above, calling one by one, in the order resulting from the sorting operation, the MPI function corresponding to each MPI call request comprises:
Step 230: read MPI call requests sequentially from the queue by the send/receive thread, and call the corresponding MPI function according to the request content.
The aforementioned send/receive thread handles network traffic exclusively. It reads the requests one by one from the queue in order and calls the corresponding OpenMPI function according to the request content. For example, when this thread reads a first request for MPI_Send from the queue, it directly invokes the MPI_Send function (the parameters contained in the first request are used as the parameters of MPI_Send).
In addition, in the embodiment of the present invention, the target MPI functions to be wrapped are preferably non-blocking MPI functions.
This way of using OpenMPI's MPI functions in the embodiment of the present invention provides multi-threaded semantics for OpenMPI. Because the corresponding requests are lined up in a queue, no two OpenMPI functions are ever called simultaneously; the user therefore only needs to substitute the wrapper functions for the real OpenMPI functions in order to use OpenMPI in a multi-threaded environment. OpenMPI functions can then be called at any point during computation, making full use of CPU and network card resources.
In addition, for the multiple threads of a computation server, the present invention can also use a condition-variable mechanism to synchronize the threads' MPI call requests. For example, each thread is given a condition variable; when a thread's variable becomes true, that thread's MPI call request is allowed to proceed and call the MPI function, and while a thread's variable is false, its MPI call request is refused, i.e. the thread is blocked. Thus, when multiple threads send MPI call requests simultaneously, only one thread's variable is first set to true while the other threads queue up to send their MPI call requests; when that thread finishes, its variable is changed to false, one of the waiting threads is notified and its variable is changed to true, and that thread can then send its MPI call request and perform the access. The present invention can also use a semaphore mechanism to synchronize the MPI call requests of multiple threads: a count i = n is set for the number of threads that may call into the OpenMPI library; each time one of the n threads finishes, i is decremented by 1 until it reaches 0, whereupon the semaphore is set to i = 1, so that at each moment only one thread is admitted to send its MPI call request and access the MPI function while the other MPI call requests wait; when i drops to 0, one thread is again selected from the waiting threads and admitted, i becomes 1, and that thread then executes. This likewise provides multi-threaded semantics for using OpenMPI: OpenMPI functions can be called at any point during computation, making full use of CPU and network card resources.
With reference to Fig. 3, which shows a schematic structural diagram of embodiment one of an MPI function call device according to the present invention, the device may specifically comprise:
a sorting-operation module 310, adapted to, for a computation server in a cloud computing environment, perform a unified sorting operation on the MPI call requests when the computation server executes a task and threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library;
a call-execution module 320, adapted to call one by one, in the order resulting from the sorting operation, the MPI function corresponding to each MPI call request.
In the embodiment of the present invention, multiple computation servers (compute nodes) may run in parallel in multiple clouds, and each computation server may include an MPI function call device.
Optionally, the device further comprises:
a first preset module, adapted to construct a queue for the OpenMPI library and a corresponding wrapper function for each MPI function in the OpenMPI library; the queue and the wrapper functions are used so that, when the computation server executes a task, if threads in the computation server send their respective MPI call requests to call MPI functions in the OpenMPI library, a unified sorting operation is performed on the MPI call requests.
Optionally, the sorting-operation module comprises:
a first sorting-operation module, adapted so that, for each MPI call request a thread sends for an MPI function, the wrapper function corresponding to that MPI function intercepts the MPI call request, turns it into a first request, and puts the first request into the queue.
Optionally, it is characterised in that described first sequence operation module includes:
Blocking module, is suitable to by MPI call request described in described MPI encapsulation intercepting api calls;
Structure package module, is suitable to the MPI letter of the parameter by described MPI call request and correspondence Number is encapsulated in a structure;
Structure placement module, is suitable to as the first request, described structure is put into described queue.
Optionally, the device further includes:
a second presetting module, adapted to create a send/receive thread whose life cycle is equal to or longer than the life cycle of the task; the send/receive thread is used to call, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests.
Optionally, the call execution module includes:
a first call execution module, adapted to have the send/receive thread read MPI call requests from the queue in order and call the corresponding MPI functions according to the request contents.
The parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.
With reference to Fig. 4, a structural diagram of embodiment two of an MPI function calling device of the present invention is shown; it may specifically include:
a first presetting module 400, adapted to build a queue for the OpenMPI library and to build a corresponding wrapper function for each MPI function in the OpenMPI library; the queue and the wrapper functions are used to perform the unified sorting operation on the respective MPI call requests when, while the computation server executes a task, threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library;
a second presetting module 410, adapted to create a send/receive thread whose life cycle is equal to or longer than the life cycle of the task; the send/receive thread is used to call, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests;
a first sorting operation module 420, adapted, for each MPI call request that a thread issues for an MPI function, to intercept the MPI call request with the wrapper function corresponding to that MPI function, process it into a first request, and then put the first request into the queue; and
a first call execution module 430, adapted to have the send/receive thread read MPI call requests from the queue in order and call the corresponding MPI functions according to the request contents.
Optionally, the first sorting operation module includes:
an interception module, adapted to intercept the MPI call request with the MPI wrapper function;
a structure packaging module, adapted to package the parameters of the MPI call request together with the corresponding MPI function into a structure; and
a structure placement module, adapted to put the structure into the queue as the first request.
The parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other apparatus. Various general-purpose systems may also be used with the teachings herein. From the description above, the structure required to construct such a system is apparent. Moreover, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the content of the invention described herein, and the description given above for a specific language is intended to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be appreciated that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in the foregoing description of exemplary embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that the modules in the device of an embodiment may be changed adaptively and arranged in one or more devices different from the embodiment. The modules or units or components of an embodiment may be combined into one module or unit or component, and they may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include certain features included in other embodiments rather than others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the MPI function calling device according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, and third does not indicate any ordering; these words may be interpreted as names.
The invention discloses A1, an MPI function calling method, including:
for a computation server in a cloud computing system, when the computation server executes a task, if threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library, performing a unified sorting operation on the respective MPI call requests;
calling, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests; and
building a queue for the OpenMPI library, and building a corresponding wrapper function for each MPI function in the OpenMPI library; the queue and the wrapper functions are used to perform the unified sorting operation on the respective MPI call requests when, while the computation server executes a task, threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library, wherein a wrapper function is used to process the MPI call request for its corresponding MPI function into a first request and put the first request into the aforementioned queue.
A2. The method according to A1, wherein performing the unified sorting operation on the respective MPI call requests when the threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library includes:
for each MPI call request that a thread issues for an MPI function, intercepting the MPI call request with the wrapper function corresponding to that MPI function, processing it into a first request, and then putting the first request into the queue.
A3. The method according to A2, wherein intercepting the MPI call request with the wrapper function corresponding to the MPI function, processing the MPI call request into a first request, and then putting the first request into the queue includes:
intercepting the MPI call request with the MPI wrapper function;
packaging the parameters of the MPI call request together with the corresponding MPI function into a structure; and
putting the structure into the queue as the first request.
A4. The method according to A2, further including:
creating a send/receive thread whose life cycle is equal to or longer than the life cycle of the task; the send/receive thread is used to call, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests.
A5. The method according to A4, wherein calling, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests includes:
reading MPI call requests from the queue in order by the send/receive thread, and calling the corresponding MPI functions according to the request contents.
A6. The method according to A1 or A2, wherein
the parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.
The invention also discloses B7, an MPI function calling device, including:
a sorting operation module, adapted, for a computation server in a cloud computing system, to perform a unified sorting operation on the respective MPI call requests when, while the computation server executes a task, threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library;
a call execution module, adapted to call, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests; and
a first presetting module, adapted to build a queue for the OpenMPI library and to build a corresponding wrapper function for each MPI function in the OpenMPI library; the queue and the wrapper functions are used to perform the unified sorting operation on the respective MPI call requests when, while the computation server executes a task, threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library, wherein a wrapper function is used to process the MPI call request for its corresponding MPI function into a first request and put the first request into the aforementioned queue.
B8. The device according to B7, wherein the sorting operation module includes:
a first sorting operation module, adapted, for each MPI call request that a thread issues for an MPI function, to intercept the MPI call request with the wrapper function corresponding to that MPI function, process it into a first request, and then put the first request into the queue.
B9. The device according to B8, wherein the first sorting operation module includes:
an interception module, adapted to intercept the MPI call request with the MPI wrapper function;
a structure packaging module, adapted to package the parameters of the MPI call request together with the corresponding MPI function into a structure; and
a structure placement module, adapted to put the structure into the queue as the first request.
B10. The device according to B8, further including:
a second presetting module, adapted to create a send/receive thread whose life cycle is equal to or longer than the life cycle of the task; the send/receive thread is used to call, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests.
B11. The device according to B8, wherein the call execution module includes:
a first call execution module, adapted to have the send/receive thread read MPI call requests from the queue in order and call the corresponding MPI functions according to the request contents.
B12. The device according to B7, wherein
the parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.

Claims (10)

1. An MPI function calling method, characterized by including:
for a computation server in a cloud computing system, when the computation server executes a task, if threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library, performing a unified sorting operation on the respective MPI call requests;
calling, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests; and
building a queue for the OpenMPI library, and building a corresponding wrapper function for each MPI function in the OpenMPI library; the queue and the wrapper functions are used to perform the unified sorting operation on the respective MPI call requests when, while the computation server executes a task, threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library, wherein a wrapper function is used to process the MPI call request for its corresponding MPI function into a first request and put the first request into the aforementioned queue.
2. The method according to claim 1, characterized in that performing the unified sorting operation on the respective MPI call requests when the threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library includes:
for each MPI call request that a thread issues for an MPI function, intercepting the MPI call request with the wrapper function corresponding to that MPI function, processing it into a first request, and then putting the first request into the queue.
3. The method according to claim 2, characterized in that intercepting the MPI call request with the wrapper function corresponding to the MPI function, processing the MPI call request into a first request, and then putting the first request into the queue includes:
intercepting the MPI call request with the MPI wrapper function;
packaging the parameters of the MPI call request together with the corresponding MPI function into a structure; and
putting the structure into the queue as the first request.
4. The method according to claim 2, characterized by further including:
creating a send/receive thread whose life cycle is equal to or longer than the life cycle of the task; the send/receive thread is used to call, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests.
5. The method according to claim 1 or 2, characterized in that
the parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.
6. An MPI function calling device, characterized by including:
a sorting operation module, adapted, for a computation server in a cloud computing system, to perform a unified sorting operation on the respective MPI call requests when, while the computation server executes a task, threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library;
a call execution module, adapted to call, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests; and
a first presetting module, adapted to build a queue for the OpenMPI library and to build a corresponding wrapper function for each MPI function in the OpenMPI library; the queue and the wrapper functions are used to perform the unified sorting operation on the respective MPI call requests when, while the computation server executes a task, threads in the computation server issue respective MPI call requests to call MPI functions in the OpenMPI library, wherein a wrapper function is used to process the MPI call request for its corresponding MPI function into a first request and put the first request into the aforementioned queue.
7. The device according to claim 6, characterized in that the sorting operation module includes:
a first sorting operation module, adapted, for each MPI call request that a thread issues for an MPI function, to intercept the MPI call request with the wrapper function corresponding to that MPI function, process it into a first request, and then put the first request into the queue.
8. The device according to claim 7, characterized in that the first sorting operation module includes:
an interception module, adapted to intercept the MPI call request with the MPI wrapper function;
a structure packaging module, adapted to package the parameters of the MPI call request together with the corresponding MPI function into a structure; and
a structure placement module, adapted to put the structure into the queue as the first request.
9. The device according to claim 7, characterized by further including:
a second presetting module, adapted to create a send/receive thread whose life cycle is equal to or longer than the life cycle of the task; the send/receive thread is used to call, one by one in the order produced by the sorting operation, the MPI functions corresponding to the MPI call requests.
10. The device according to claim 6, characterized in that
the parameter list of each wrapper function is identical to the parameter list of the corresponding MPI function.
CN201610229179.7A 2012-12-05 2012-12-05 MPI function calling method and device Expired - Fee Related CN105930210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610229179.7A CN105930210B (en) 2012-12-05 2012-12-05 MPI function calling method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210518379.6A CN103019843B (en) 2012-12-05 2012-12-05 MPI function calling method and device
CN201610229179.7A CN105930210B (en) 2012-12-05 2012-12-05 MPI function calling method and device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201210518379.6A Division CN103019843B (en) 2012-12-05 2012-12-05 MPI function calling method and device

Publications (2)

Publication Number Publication Date
CN105930210A true CN105930210A (en) 2016-09-07
CN105930210B CN105930210B (en) 2019-02-26

Family

ID=47968474

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201210518379.6A Active CN103019843B (en) 2012-12-05 2012-12-05 MPI function calling method and device
CN201610229179.7A Expired - Fee Related CN105930210B (en) 2012-12-05 2012-12-05 MPI function calling method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201210518379.6A Active CN103019843B (en) 2012-12-05 2012-12-05 MPI function calling method and device

Country Status (1)

Country Link
CN (2) CN103019843B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104281489B * 2013-07-12 2017-11-21 Shanghai Ctrip Business Co., Ltd. Multithreaded request method and system under an SOA architecture
CN113395358B * 2021-08-16 2021-11-05 Beike Zhaofang (Beijing) Technology Co., Ltd. Network request execution method and execution system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1289962A * 1999-09-23 2001-04-04 International Business Machines Corp Establishing a communication program spanning multiple processes in a multithreaded runtime environment
CN101196827A * 2007-12-28 2008-06-11 Institute of Computing Technology, Chinese Academy of Sciences Parallel simulator and method
CN101833479A * 2010-04-16 2010-09-15 National University of Defense Technology, PLA MPI (Message Passing Interface) message scheduling method based on reinforcement learning in a multi-network environment
CN101937367A * 2009-06-30 2011-01-05 Intel Corporation Automatic conversion of MPI source code programs into MPI thread-based programs
US20120260261A1 (en) * 2011-04-07 2012-10-11 Microsoft Corporation Asynchronous callback driven messaging request completion notification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102707955A * 2012-05-18 2012-10-03 Tianjin University Method for realizing a support vector machine by MPI programming and OpenMP programming

Also Published As

Publication number Publication date
CN105930210B (en) 2019-02-26
CN103019843B (en) 2016-05-11
CN103019843A (en) 2013-04-03

Similar Documents

Publication Publication Date Title
US11113782B2 (en) Dynamic kernel slicing for VGPU sharing in serverless computing systems
CN103069390B (en) Method and system for re-scheduling workload in a hybrid computing environment
CN106933669B (en) Apparatus and method for data processing
US9251103B2 (en) Memory-access-resource management
CN103999051B Policies for shader resource allocation among shader cores
CN104978228B Scheduling method and device for a distributed computing system
Iordache et al. Resilin: Elastic mapreduce over multiple clouds
CN105045658A Method for realizing dynamic task scheduling and distribution on a multi-core embedded DSP (digital signal processor)
CN103595770B (en) Method and device for achieving file downloading through SDK
US20230393879A1 (en) Coordinated Container Scheduling For Improved Resource Allocation In Virtual Computing Environment
CN108196946A A partitioned multi-core method for Mach
CN103744716A (en) Dynamic interrupt balanced mapping method based on current virtual central processing unit (VCPU) scheduling state
CN116541134B (en) Method and device for deploying containers in multi-architecture cluster
CN101751288A (en) Method, device and system applying process scheduler
US8935699B1 (en) CPU sharing techniques
CN106250217A Synchronous scheduling method among multiple virtual processors and scheduling system thereof
KR102052964B1 (en) Method and system for scheduling computing
US11886898B2 (en) GPU-remoting latency aware virtual machine migration
CN106383747A (en) Method and device for scheduling computing resources
CN103019844B Method and apparatus supporting multithreaded calling of MPI functions
CN115686805A (en) GPU resource sharing method and device, and GPU resource sharing scheduling method and device
US9158601B2 (en) Multithreaded event handling using partitioned event de-multiplexers
CN105930210A (en) Method and device for calling MPI function
Leonenkov et al. Introducing new backfill-based scheduler for slurm resource manager
Shen et al. Serpens: A high-performance serverless platform for nfv

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant (granted publication date: 20190226)
CF01 Termination of patent right due to non-payment of annual fee (termination date: 20211205)