CN101097527B - Flow scheduling method and system for application processes - Google Patents

Flow scheduling method and system for application processes

Info

Publication number
CN101097527B
CN101097527B · CN200610028504XA
Authority
CN
China
Prior art keywords
service
flow
message queue
call
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN200610028504XA
Other languages
Chinese (zh)
Other versions
CN101097527A (en)
Inventor
陈逢源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN200610028504XA priority Critical patent/CN101097527B/en
Publication of CN101097527A publication Critical patent/CN101097527A/en
Application granted granted Critical
Publication of CN101097527B publication Critical patent/CN101097527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a flow scheduling method for application processes, to solve the problem that flow scheduling implemented with command scripts under Unix (or Linux) is inflexible and cannot schedule multiple application processes according to predefined flows. The method provides a flow scheduling service program that: reads predefined service process configuration information; creates an output message queue and input message queues corresponding to the service process configuration; reads the control information of each single-step predefined flow in turn, writes to the input message queue of the corresponding service when a call request is issued, and reads the output message queue when the call returns. The method also provides interface functions between the flow scheduling service program and the application programs, which greatly improves development efficiency. The invention achieves combined single-process/multi-process and synchronous/asynchronous scheduling of multiple application processes under Unix (or Linux), and can also provide load balancing and flow control.

Description

Flow scheduling method and system for application processes
Technical field
The present invention relates to program execution in computer systems, and in particular to a method and system for scheduling multiple application processes according to a predefined flow.
Background art
Unix (or Linux) is a multi-user, multi-tasking operating system in which several processes can run at the same time. A process is a single run of a system program or application program in memory; it is what the operating system actually executes. Processes and programs do not correspond one to one: one program may run as several processes. Under existing Unix systems, process scheduling is implemented through system calls. The operating system's process scheduler mainly performs time-slice scheduling of the processes according to their priority, so that resources (chiefly the CPU) are shared.
Flow scheduling of application processes (hereinafter, application flow) means running multiple application processes as scheduled according to a predefined flow. Here an application process is a process that executes an application program, and a predefined flow is a predefined set of execution steps of application processes within the workflow that realizes a particular application service.
The operating system's process scheduling governs how processes use computer resources; it does not directly provide a way to control calls to application processes according to a predefined flow. The usual way to implement flow scheduling directly on the operating system is to write command scripts (shell scripts), but this approach has the following shortcomings.
Because a command script is written for a specific application flow, each shell script corresponds to one scheduling flow and can only handle application processes that use that flow; as soon as the flow changes, the script has to be modified. The user therefore must be familiar with shell scripting and write a different script for every application flow in order to manage the flow scheduling of multiple application processes.
Furthermore, for execution steps that have a certain logical relationship, that relationship cannot be controlled. The logical relationship here refers to ordering constraints between steps. For example, suppose the execution order of steps A and B of a particular application service is that A and B start executing at the same time, and A is executed again on its own after B finishes. With command scripts this predefined flow cannot be executed, because the scheduling of the application programs is serial.
In addition, application programs are usually stored on a hardware storage medium (such as a disk); when the operating system executes such a program, it is fetched from disk and loaded into memory. With command scripts an application program runs as soon as it has been loaded into memory, so for applications that must be processed serially, say A followed by B, application program A has to be started and loaded from disk first, and application program B is loaded and run only after A has finished. While application program B is being loaded, the CPU must wait for the load to complete before it can run, so the CPU sits idle during the I/O, which has a considerable impact on system performance.
In short, under Unix (or Linux) it is very difficult to use command scripts to implement the many kinds of flow scheduling required by different application services, so the prior-art flow scheduling methods have significant limitations.
Summary of the invention
The technical problem to be solved by the present invention is to provide a flow scheduling method and system for application processes, so as to solve the problem that the command-script approach used under Unix (or Linux) is inflexible and cannot schedule multiple application processes according to predefined flows.
To solve the above technical problem, the invention provides a flow scheduling method for application processes in which a flow scheduling service program is provided, the method comprising:
reading predefined service process configuration information;
creating an output message queue and input message queues corresponding to the service process configuration;
reading the control information of each single-step predefined flow one by one, writing to the input message queue of the corresponding service when a call request is issued, and reading the output message queue when the call returns, wherein the output message queue is a single shared output message queue and the input message queues correspond respectively to different services.
The different services corresponding to the input message queues may be assigned different numbers of application processes.
The method further comprises: reading the input message queue, scheduling the assigned application processes for execution, and writing the execution result to the shared output message queue.
The process that executes the flow scheduling service program proceeds to the next single-step predefined flow only after it has received asynchronous call returns equal in number to the call requests.
The process that executes the flow scheduling service program proceeds to the next single-step predefined flow only after a synchronous call has returned.
The predefined service process configuration information and the predefined flow control information are configured in database tables. The service process configuration information comprises a service identifier, a program file name, a message queue reference file and a configured number of processes; the predefined flow control information comprises a request identifier, a service identifier, a step number, call parameters, a multi-process work-division flag, a send mode and a timeout in seconds, the send mode being either synchronous or asynchronous.
The method further comprises: listening for service requests; if a service request is received, performing flow scheduling; otherwise remaining in a waiting state.
Preferably, the listening is performed over TCP/IP.
The method further comprises: setting a separate timeout for each single-step predefined flow step.
The method further comprises: providing interface functions between the flow scheduling service program and the application programs.
The interface functions comprise a scheduling initialization function which:
checks whether the input message queue to be used is ready, and if so records the message queue number, otherwise returns with an error;
creates a memory-resident application process;
polls the input message queue and, when a call request from the flow scheduling service process is received, calls the corresponding application function;
writes the execution result to the shared output message queue in response to the call from the flow scheduling service process.
The interface functions may further comprise a service termination function.
The present invention also provides a flow scheduling system for application processes, comprising:
a storage unit for storing predefined flow configuration parameters;
a control unit for reading the predefined service process configuration information stored in the storage unit, and for creating the shared output message queue and the input message queues corresponding to the service process configuration;
an execution unit for reading, one by one, the single-step predefined flow control information stored in the storage unit, writing to the input message queue of the corresponding service when a call request is issued, and reading the shared output message queue when the call returns;
a listening unit for listening for service requests, such that if a service request is received the execution unit performs flow scheduling, and otherwise the system remains in a waiting state.
The predefined flow configuration parameters comprise the predefined service process configuration information and the single-step predefined flow control information.
Compared with the prior art, the present invention has the following advantages.
First, by providing a flow scheduling service program, pre-configuring flow control parameters in database tables and combining them with a message queue mechanism, the invention achieves combined single-process/multi-process and synchronous/asynchronous scheduling of multiple processes under Unix (or Linux) according to predefined flows. These combined calls let the flow scheduling service process execute different scheduling flows from predefined flow configuration information, so a variety of call flows can be served. When several processes are scheduled, the message queue control mechanism allows execution steps that have logical relationships to be controlled through combinations of single-process/multi-process and synchronous/asynchronous calls. Moreover, because the predefined flow and the combined scheduling control when processes execute, multiple application programs can be started from disk and loaded into memory in advance and then dispatched directly from memory when needed, which avoids idle CPU time during I/O and improves the execution efficiency of the system.
Second, several processes can be deployed for an application program with the same function, and the message queues allow these processes to be scheduled concurrently, improving throughput. Because the service processing time is much longer than the time needed to scan the message queue, load balancing is achieved.
Third, because the number of processes is specified by a parameter when the flow is predefined, the number of deployed processes can be configured according to the usage of system resources, which provides flow control.
Finally, the invention provides application programming interface (API) functions between the flow scheduling service program and the application service programs. Through the API functions an application service program does not need to be aware of the scheduling procedure, i.e. the scheduling is transparent to the application service program. Only the scheduling initialization function is mandatory in the API, which makes application programming convenient and greatly improves development efficiency.
Brief description of the drawings
Fig. 1 is a schematic diagram of the process scheduling principle of the scheduling method of the present invention;
Fig. 2 is a flow chart of a combination of synchronous and asynchronous calls according to the present invention;
Fig. 3 is a flow chart of the steps of the flow scheduling service of the present invention;
Fig. 4 is a schematic flow chart of the scheduling initialization API function of the present invention;
Fig. 5 is a block diagram of the flow scheduling system for application processes of the present invention.
Detailed description of the embodiments
To make the above objects, features and advantages of the present invention clearer and easier to understand, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
The core idea of the present invention is that the technique is implemented in two parts: the flow scheduling service (with its configuration parameters) and the application programming interface (API) functions. Flow predefinition is realized by configuring parameter data in database tables, communication between application service processes uses message queues, and all services are scheduled uniformly by the flow scheduling service process according to the predefined flow. Interface (API) functions connect the application service programs to the flow scheduling service program: an application service program only needs to pass the relevant parameters to the API functions to have the predefined flow scheduled and executed, without having to care about the scheduling procedure itself.
The present invention achieves combined single-process/multi-process and synchronous/asynchronous scheduling of multiple application processes according to predefined flows. The flow predefinition is implemented by configuring parameter data in database tables and comprises service process configuration and flow control information. The service process configuration includes the service identifier, program file name, message queue reference file, configured number of processes, and so on; the flow control information includes the request identifier, service identifier, step number, call parameters, multi-process work-division flag, send mode (synchronous/asynchronous), timeout in seconds, and so on. The flow predefinition can be configured by the end user or by the service program developer.
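As a purely illustrative aid, since the patent specifies which fields the two kinds of configuration records contain but not their types or table layout, the records might be represented in C roughly as follows; all struct names, field names and sizes below are assumptions, not taken from the patent.

/* Hypothetical sketch of the predefined configuration records; field names,
 * sizes and types are assumptions, not taken from the patent text. */

/* Service process configuration: one record per service. */
struct svc_config {
    char service_id[16];      /* service identifier, e.g. "1001"                         */
    char prog_file[256];      /* program file name of the application service            */
    char queue_ref_file[256]; /* message queue reference file (used to derive the key)   */
    int  num_procs;           /* configured number of processes for this service         */
};

/* Flow control information: one record per single-step predefined flow. */
struct flow_step {
    char request_id[16];      /* request identifier                                      */
    char service_id[16];      /* service to call in this step                            */
    int  step_no;             /* step number within the flow                             */
    char call_params[256];    /* call parameters passed to the service                   */
    int  division_flag;       /* multi-process work-division flag                        */
    int  send_mode;           /* 0 = synchronous, 1 = asynchronous (assumed encoding)    */
    int  timeout_sec;         /* per-step timeout in seconds                             */
};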
Single-process/multi-process calling is one of the functions the invention implements. In a single-process call, only one process is deployed for a service and only that process responds to it. In a multi-process call, several processes are deployed for the same service, the service is split into several requests that are sent at the same time, and several processes respond to it. The single-process/multi-process rule is part of the predefined flow strategy; the multi-process strategy is generally adopted, where system resources allow, to schedule concurrently and improve throughput and performance as much as possible. Because several processes jointly handle the same service, the work of the service must be divided among them. How the work is divided depends on the service, but whatever the method, a "multi-process work-division flag" parameter must be specified in the call so that the processes divide the work correctly.
Synchronous/asynchronous calling is also a function the invention implements. A synchronous call means only one call can be outstanding at a time; the next call can only be made after the current one has returned. An asynchronous call does not wait for the return before issuing another call, so several calls can be outstanding at the same time. The synchronous/asynchronous rule is part of the predefined flow strategy: in general, services with strict ordering relationships use synchronous calls, while steps without ordering constraints can be called asynchronously. Asynchronous calls can be initiated several at a time, and only asynchronous calls can be used in multi-process mode. The system also supports a "receive asynchronous returns" function, which is used to make the step following a group of asynchronous calls synchronous.
In the present invention, several application processes with the same function can be deployed for concurrent scheduling to improve throughput. When there are no ordering constraints between steps, asynchronous calls can be used; otherwise synchronous calls are used. Whether a call is synchronous or asynchronous, each step can be given its own timeout; if any step times out, the whole flow is aborted and the error code is set to "flow execution timed out". Because the timeout is set per step, the granularity of timeout control is fine enough to satisfy the demands of most cases.
Referring to Fig. 1, which is a schematic diagram of the process scheduling principle of the scheduling method: procMan (process manager) 101 in the figure is the flow scheduling service process, that is, the process that executes the flow scheduling service program. It is the core process provided by the invention; starting this one process automatically starts all the application processes defined in the flow.
The processes 102 are denoted by the letter P. In the figure, P11 and P12 are two copies of application program 1, which implement the same service function; P21 and Pn1 are processes of application program 2 and application program n, which implement different service functions. Copies whose first subscript is the same belong to the same application, and different first subscripts denote processes of different applications. In the figure, application program 1 is deployed with two processes, so the two can process requests concurrently and improve single-step performance; application program 2 and application program n are each deployed with one process, because they do not need, or cannot use, concurrent processing. The number of processes configured for each service function is specified by a parameter when the flow is predefined, according to the usage of system resources, which provides flow control.
For each service other than the flow scheduling service 101, the message queue used to notify the flow scheduler is called the output message queue, and the message queue used to receive requests from the flow scheduler is called the input message queue. In the present invention, the system sets up one input message queue per group of services other than the flow scheduling service 101; different groups of services have their own input message queues, while all services share one common output message queue. In the figure, Qi1, Qi2, ..., Qin are the input message queues 103 of the different application services. Processes of the same application service (for example P11 and P12) use the same input message queue, and processes of different application services use different input message queues. In multi-process mode, procMan obtains the multi-process flag from the predefined configuration parameters and forwards it to the invoked service processes; the flag is used to divide the work among the processes of the same service. Qo is the common output message queue 104 shared by all application processes other than the flow scheduling service procMan. Only one common output message queue is needed in the system: all processes write their output messages to it, and the flow scheduling service reads the service call returns from it.
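A minimal sketch of how the queue topology of Fig. 1 could be set up with System V message queues, assuming the message queue reference file from the service configuration is fed to ftok() to derive the queue keys, and reusing the hypothetical svc_config struct from the sketch above; the path and project IDs are illustrative only.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* One shared output queue Qo plus one input queue per service group. */
int create_queues(const struct svc_config *svcs, int n, int *in_qids, int *out_qid)
{
    /* shared output queue Qo: all services write results here
     * (the reference file must exist for ftok to succeed; path is assumed) */
    key_t out_key = ftok("/etc/procman/queues.ref", 1);
    *out_qid = msgget(out_key, IPC_CREAT | 0660);
    if (*out_qid < 0)
        return -1;

    /* one input queue Qi per configured service */
    for (int i = 0; i < n; i++) {
        key_t in_key = ftok(svcs[i].queue_ref_file, i + 2);
        in_qids[i] = msgget(in_key, IPC_CREAT | 0660);
        if (in_qids[i] < 0)
            return -1;
    }
    return 0;
}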
The scheduling flow shown in Fig. 1 is as follows. When the flow scheduling service procMan needs to call application service 1, it writes a message to the corresponding input message queue Qi1. The service is deployed with two processes, P11 and P12; whichever of the two is idle, say P12 (it could equally be P11), picks up the message and handles it. Because the service processing time is much longer than the time needed to scan the message queue, load balancing is achieved. After P12 (or P11) finishes processing, it writes a completion message to the common output message queue Qo. The flow scheduling service learns the result of the application service by scanning this queue.
In the same way, if application service 2 is called, the request message is written to the input message queue used by process P21 (the input queue of service 2), but the result is still read from Qo. Each process returns its own process ID as the message type. Because a process ID is unique within the operating system for a period of time (how long depends on how processes come and go, but it is generally much longer than 24 hours, which satisfies most service requirements), the message type uniquely identifies the responder. So although Qo is shared, the returns of different processes can still be told apart.
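The use of the process ID as the message type can be sketched as follows with System V message queues: a service process tags its reply with mtype = getpid(), and procMan, reading the shared queue Qo, uses that field to tell the responders apart. The message layout below is an assumption.

#include <string.h>
#include <unistd.h>
#include <sys/msg.h>

struct out_msg {
    long mtype;          /* set to the replying process's PID               */
    char service_id[16]; /* which service produced this result (assumed)    */
    char result[240];    /* service result payload (assumed layout)         */
};

/* Service side: publish a result on the shared output queue Qo. */
int send_result(int out_qid, const char *service_id, const char *result)
{
    struct out_msg m;
    m.mtype = getpid();                       /* unique per responding process */
    strncpy(m.service_id, service_id, sizeof m.service_id - 1);
    m.service_id[sizeof m.service_id - 1] = '\0';
    strncpy(m.result, result, sizeof m.result - 1);
    m.result[sizeof m.result - 1] = '\0';
    return msgsnd(out_qid, &m, sizeof m - sizeof m.mtype, 0);
}

/* procMan side: take the next return, whoever sent it; mtype gives the PID. */
long read_any_return(int out_qid, struct out_msg *m)
{
    if (msgrcv(out_qid, m, sizeof *m - sizeof m->mtype, 0 /* any type */, 0) < 0)
        return -1;
    return m->mtype;    /* PID of the process that replied */
}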
Writing a call message and reading the return message are two completely independent operations. Therefore, if the call message of step two is written only after the return message of step one has been read, step one is a "synchronous call"; conversely, if several call messages for step one are written, or the call message of step two is written without first reading the return of step one, an "asynchronous call" is realized. In other words, by controlling when call messages are written, the flow scheduler procMan can implement synchronous calls, asynchronous calls and "receive asynchronous returns" towards the other services; by reading and parsing the output queue, procMan can implement request flow control and load balancing. In this way the combined scheduling of application processes under Unix (or Linux) is achieved.
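In code, the synchronous/asynchronous distinction reduces to when procMan reads Qo relative to writing call messages. The hedged sketch below reuses the hypothetical out_msg struct and read_any_return() helper from the previous sketch; send_request() stands for an msgsnd() of the call message to the service's input queue and is likewise an assumption.

/* Hypothetical helper: msgsnd() the call message of `step` to input queue `in_qid`. */
int send_request(int in_qid, const struct flow_step *step);

/* Synchronous call: write one request, then block until its return arrives. */
int call_sync(int in_qid, int out_qid, const struct flow_step *step)
{
    struct out_msg ret;
    if (send_request(in_qid, step) < 0)
        return -1;
    return read_any_return(out_qid, &ret) < 0 ? -1 : 0;
}

/* Asynchronous calls: write n requests first, then collect exactly n returns
 * ("receive asynchronous returns") before the flow moves to the next step. */
int call_async(int in_qid, int out_qid, const struct flow_step *step, int n)
{
    struct out_msg ret;
    for (int i = 0; i < n; i++)
        if (send_request(in_qid, step) < 0)
            return -1;
    for (int got = 0; got < n; got++)
        if (read_any_return(out_qid, &ret) < 0)
            return -1;
    return 0;
}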
Referring to Fig. 2, which is a flow chart of a combination of synchronous and asynchronous calls and shows a fairly common combination, the steps are as follows.
Step 201, asynchronous calls: the flow scheduler procMan sends several (say 10) requests for service 1. The two application processes configured for service 1, P11 and P12, respond; after P11 finishes one request it reads the next from the message queue, and likewise for P12. Load balancing between P11 and P12 is thereby achieved. In addition, the number of processes for service 1 is adjustable and is configured according to the usage of system resources, which provides flow control.
Step 202, receive asynchronous returns: the flow scheduler procMan waits until it has received as many asynchronous returns as it sent requests before moving to the next step, converting the asynchronous calls back into a synchronous step. For example, procMan reads the call returns from the common output message queue Qo and sends the call request of the next step only after it has received all 10 returns.
Step 203, synchronous call: because of the functional characteristics of service 2, the steps of this service have logical ordering constraints, so service 2 cannot be processed in parallel; only the single process P21 is deployed and it is called synchronously. The next step is entered only after P21 returns.
Step 204, synchronous call: given the usage of system resources, service 3 does not need parallel processing; only the single process P31 is deployed and it is called synchronously. After P31 returns there are no further steps and the flow ends.
The complete implementation of the invention is based on two parts: the flow scheduling service (with its configuration parameters) and the application service API (application programming interface) functions.
Referring to Fig. 3, which is a flow chart of the steps of the flow scheduling service, the overall flow of the flow scheduling service is as follows.
Step 301, process initialization: environment preparation such as initializing the address of the transaction flow control table in shared memory.
Step 302, read the service process configuration from the database, including the service identifier, program file name, message queue reference file, configured number of processes, and so on. The message queue reference file is the file used to generate the unique key that identifies a message queue. The service process configuration can be configured in advance by the end user or by the application service developer.
Step 303, create the message queues according to the service process configuration obtained in step 302. Different services use different input message queues but share one common output message queue.
Step 304, start all service processes. All service programs to be executed in the predefined flow are loaded from disk into memory, so that they are dispatched directly from memory when called; I/O and CPU computation thus proceed in parallel, which improves execution efficiency.
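Step 304 can be pictured as a fork-and-exec loop over the configured services; the following is only a sketch of how such preloading might look, reusing the hypothetical svc_config struct from the earlier sketch (each service binary is started once per configured process and then waits on its input queue).

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Start every configured service process in advance so that, when a flow step
 * is dispatched, the program is already resident in memory and only a message
 * needs to be written to its input queue. Illustrative only. */
int start_services(const struct svc_config *svcs, int n)
{
    for (int i = 0; i < n; i++) {
        for (int p = 0; p < svcs[i].num_procs; p++) {
            pid_t pid = fork();
            if (pid < 0)
                return -1;                  /* fork failed */
            if (pid == 0) {                 /* child: become one copy of the service */
                execl(svcs[i].prog_file, svcs[i].prog_file, (char *)NULL);
                perror("execl");            /* only reached if exec fails */
                _exit(127);
            }
            /* parent (procMan) continues; it may record pid if needed */
        }
    }
    return 0;
}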
Step 305, listen for TCP/IP service requests. If a service request is received, the corresponding flow scheduling is entered; otherwise the service stays in the state of waiting for TCP/IP requests. The current implementation of the flow scheduling service receives service requests over TCP/IP, but it can be changed to other mechanisms such as database polling or message queue polling as required.
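Purely for illustration, the TCP/IP listening of step 305 might look like the sketch below; the port and request format are assumptions, a real service would keep the listening socket open and accept in a loop, and, as noted above, the transport could equally be database or message queue polling.

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Wait for one service request over TCP/IP; returns a connected socket, or -1. */
int wait_for_request(int port)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    if (lsock < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(lsock, (struct sockaddr *)&addr, sizeof addr) < 0 || listen(lsock, 8) < 0) {
        close(lsock);
        return -1;
    }
    /* block here until a caller submits a service request */
    int csock = accept(lsock, NULL, NULL);
    close(lsock);   /* a long-running scheduler would keep lsock and loop instead */
    return csock;
}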
Step 306, after a service request is received, the corresponding flow scheduling is entered.
Step 307, after the flow scheduler receives the request, it returns a "request accepted" message to the caller, indicating that processing of the request has begun.
Step 308, obtain the predefined flow control information from the database, including the request identifier, service identifier, step number, call parameters, multi-process work-division flag, send mode (synchronous/asynchronous), timeout in seconds, and so on. The flow predefinition can be configured in advance by the end user or by the application service developer.
Steps 309 and 310, read one piece of flow control information and perform the message queue read or write according to whether it is a synchronous call, an asynchronous call, or a return. For a call request, write to the input message queue of the corresponding service to notify it to start processing; for a return, read the return message queue and apply timeout control. If any process fails, error handling is entered and the current flow is aborted.
Step 311, record or update the status log in the database.
Step 312, loop back to step 309 to read the control information of the next single-step flow. Because the number of steps is preset in the predefined flow, processing of the service request ends once all preset steps have been processed.
In the above steps, the service process configuration and the flow control information are the predefined flow configuration parameters, and the relevant configuration information is obtained at the different steps. The number of single-step flows executed also differs according to the service identifier, so the invention can satisfy a variety of call flows.
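Tying steps 308 to 312 together, the main dispatch loop of procMan might be sketched as follows. The per-step timeout is implemented here by polling Qo with IPC_NOWAIT until the configured number of seconds has elapsed; this mechanism, and the lookup_service(), calls_for_step() and send_request() helpers, are assumptions layered on the hypothetical structs from the earlier sketches.

#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/msg.h>

int lookup_service(const char *service_id);                 /* hypothetical: index of the service's input queue */
int calls_for_step(const struct flow_step *step);           /* hypothetical: number of async requests to send   */
int send_request(int in_qid, const struct flow_step *step); /* hypothetical: msgsnd to the input queue          */

/* Collect `expected` returns from Qo, giving up after `timeout_sec` seconds. */
int collect_returns(int out_qid, int expected, int timeout_sec)
{
    struct out_msg ret;
    time_t deadline = time(NULL) + timeout_sec;
    int received = 0;

    while (received < expected) {
        ssize_t r = msgrcv(out_qid, &ret, sizeof ret - sizeof ret.mtype, 0, IPC_NOWAIT);
        if (r >= 0)
            received++;
        else if (time(NULL) >= deadline)
            return -1;                        /* "flow execution timed out" */
        else
            sleep(1);                         /* nothing yet; poll again */
    }
    return 0;
}

/* Execute every predefined step of one flow (steps 309-312). */
int run_flow(const struct flow_step *steps, int n_steps, int *in_qids, int out_qid)
{
    for (int s = 0; s < n_steps; s++) {
        const struct flow_step *st = &steps[s];
        int qid = in_qids[lookup_service(st->service_id)];
        int calls = (st->send_mode == 1) ? calls_for_step(st) : 1;

        for (int i = 0; i < calls; i++)
            if (send_request(qid, st) < 0)
                return -1;
        if (collect_returns(out_qid, calls, st->timeout_sec) < 0)
            return -1;                        /* abort the whole flow */
        /* step 311: record or update the status log in the database (omitted) */
    }
    return 0;
}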
The invention also provides interface functions between the application programs and the flow scheduling service program. Through these API functions an application service program does not need to be aware of the scheduling procedure; the scheduling is transparent to it. The API interface towards the application service program comprises two functions: the scheduling initialization function (glbQueueInit) and the service termination function. The scheduling initialization function is the only API function an application service program must call. If the application service program also specifies a service termination function (through a function pointer parameter) when it calls the scheduling initialization function, the system calls that termination function when the application service process is terminated; if none is specified, the default termination behaviour is used, i.e. the process is terminated through the exit system call.
Referring to Fig. 4, which is a schematic flow chart of the scheduling initialization API function, the processing flow of the scheduling initialization API function is as follows.
Steps 401-404 check whether the message queues to be used are ready, including whether they have been created and are readable or writable. If they are not ready, the function returns with an error and the process terminates; if they are ready, the message queue numbers are obtained and recorded. Step 401 obtains the configuration parameters of the common output message queue, including the message queue number; step 402 checks that the common output message queue is writable, so that the application process can write its results to it after execution. Step 403 obtains the configuration parameters of the input message queue, including the message queue number; step 404 checks that the input message queue is readable, so that the application process can obtain requests from its input message queue and execute them.
Step 405, create a child process through the fork system call; the parent process exits after the child has been created successfully. The application service process thereby becomes a daemon process that stays resident in memory, continuously polling the message queue and responding to requests. The fork system call duplicates a process: when a process calls it, two almost identical processes exist afterwards; the duplicated one is called the child process and the original one the parent process. A daemon is a long-lived process that is detached from any controlling terminal and either performs a task periodically or waits for certain events to happen. Daemons are usually started when the system boots and stopped when it shuts down. There are many daemons in a Unix system, and most server functions based on Unix (or Linux), such as the network service inetd and the web service httpd, are implemented as daemons; many system tasks, such as the job scheduler crond and the print daemon lpd, are also performed by daemons.
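Step 405 corresponds to the classic Unix idiom of forking and letting the parent exit so that the child keeps running detached; a minimal sketch is shown below (the setsid() call is common practice but is an addition beyond what the patent states).

#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Turn the calling application service process into a memory-resident daemon. */
static void become_daemon(void)
{
    pid_t pid = fork();
    if (pid < 0)
        exit(EXIT_FAILURE);   /* fork failed */
    if (pid > 0)
        exit(EXIT_SUCCESS);   /* parent exits; the child continues as the service */

    setsid();                 /* detach from the controlling terminal
                                 (common practice, assumed here) */
}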
Steps 406-407, poll the message queue. When a message is received, that is, procMan has sent a call request by writing a request message to the corresponding input message queue, step 407 is executed: the message is read from the input message queue and the application function designated by the function pointer entry parameter is called. Otherwise polling of the message queue continues.
Step 408, after the application function returns, the return message is written to the common output message queue; procMan reads the common output message queue to obtain the call return. The flow then goes back to step 406 and polling of the message queue continues.
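Steps 406 to 408 amount to the loop below inside the scheduling initialization function. The in_msg layout is an assumption consistent with the earlier sketches, send_result() is the hypothetical helper from the sketch above, and myFunc is shown with the simplified char * signature used in the usage example that follows.

#include <stdio.h>
#include <unistd.h>
#include <sys/msg.h>

struct in_msg {
    long mtype;              /* request type written by procMan (assumed)            */
    char payload[240];       /* call parameters / work-division flag (assumed layout) */
};

int send_result(int out_qid, const char *service_id, const char *result); /* from the earlier sketch */

/* Resident service loop: read a request from this service's input queue,
 * hand it to the application function, publish the result on Qo. */
static void service_loop(int in_qid, int out_qid, const char *service_id,
                         int (*myFunc)(char *buf))
{
    struct in_msg req;

    for (;;) {
        /* steps 406/407: block until procMan writes a call request */
        if (msgrcv(in_qid, &req, sizeof req - sizeof req.mtype, 0, 0) < 0)
            continue;                       /* interrupted; poll again */

        int rc = myFunc(req.payload);       /* run the application service code */

        /* step 408: return the result, tagged with this process's PID */
        char result[16];
        snprintf(result, sizeof result, "%d", rc);
        send_result(out_qid, service_id, result);
    }
}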
Only the scheduling initialization function is mandatory in the API, which makes programming of application processes convenient and greatly improves development efficiency. The following embodiment is a usage example of the API functions of the present invention.
The C-language prototype of the scheduling initialization function glbQueueInit is:

int glbQueueInit(char *myId, int (*myFunc)(SVC_MSG_IN_DEF *svcMsgIn), void (*termFunc)());
The application service program myApp calls the scheduling initialization function glbQueueInit in its main function and implements the function designated by *myFunc to handle service requests. If other work has to be done when the service process is terminated, *termFunc can be specified; otherwise that parameter is NULL. A complete calling example is as follows:

/* application service program myApp example */

/* service identifier of myApp, used by the flow scheduler procMan to dispatch this service */
#define MYAPP_ID "1001"

/* application service code */
int myApp(char *buf)
{
    /* service processing code */
}

/* called when the process is terminated */
int mySvcTerm(void)
{
    /* clean-up on service termination, e.g. closing database connections */
}

/* main entry */
int main(int argc, char *argv[])
{
    /* set-up code, e.g. connecting to the database */
    glbQueueInit(MYAPP_ID, myApp, mySvcTerm);
}
In this example the scheduling initialization function glbQueueInit takes three parameters. The first, MYAPP_ID, is the service identifier of myApp, used by the flow scheduling service procMan to identify and call this service. The second is the function pointer myApp, which points to the function, written by the application developer, that implements the service. The third is the function pointer mySvcTerm, which points to the function, also written by the application developer, that performs whatever processing is needed when the process is terminated, that is, the service termination function; if it is not needed, NULL can be passed instead.
In the application service program myApp above, the flow scheduling of the application process is achieved through the glbQueueInit function, yet the myApp program itself does not know how the concrete scheduling flow is executed; it only needs to pass its parameters through the glbQueueInit interface function, and the flow scheduling service process carries out the flow and completes the corresponding service processing.
Corresponding to the flow scheduling method for application processes described above, the present invention also provides a flow scheduling system for application processes. Referring to Fig. 5, which is a block diagram of the flow scheduling system for application processes of the present invention, the system comprises:
a storage unit 501 for storing predefined flow configuration parameters, the predefined flow configuration parameters comprising the predefined service process configuration information and the single-step predefined flow control information;
a control unit 502 for reading the predefined service process configuration information stored in the storage unit 501, and for creating the common output message queue and the input message queues corresponding to the service process configuration;
an execution unit 503 for reading, one by one, the single-step predefined flow control information stored in the storage unit 501, writing to the input message queue of the corresponding service when a call request is issued, and reading the common output message queue when the call returns;
a listening unit 504 for listening for service requests, such that if a service request is received the execution unit 503 performs flow scheduling, and otherwise the system remains in a waiting state.
In summary, the present invention achieves, under Unix, combined single-process/multi-process and synchronous/asynchronous scheduling of multiple application processes according to predefined flows. Message queues and multi-process scheduling provide load balancing; setting the number of processes in the flow predefinition provides flow control; and through the API functions an application service program does not need to be aware of the scheduling procedure, which is transparent to it. Only the scheduling initialization function is mandatory in the API, which makes application programming convenient and greatly improves development efficiency. With suitable adaptation of the code, the method and system of the invention are applicable to other operating systems that provide message queues and process system calls, such as Unix systems and Unix-like systems (Linux).
The flow scheduling method and system for application processes provided by the present invention have been described above in detail. Specific examples have been used herein to explain the principle and embodiments of the invention; the description of the above embodiments is only intended to help understand the method of the invention and its core idea. At the same time, for a person of ordinary skill in the art, both the specific embodiments and the scope of application may be varied in accordance with the idea of the invention. In summary, the contents of this description should not be construed as limiting the invention.

Claims (14)

1. A flow scheduling method for application processes, characterized in that a flow scheduling service program is provided, the method comprising:
reading predefined service process configuration information;
creating an output message queue and input message queues corresponding to the service process configuration;
reading the control information of each single-step predefined flow one by one, writing to the input message queue of the corresponding service when a call request is issued, and reading the output message queue when the call returns, wherein the output message queue is a single shared output message queue and the input message queues correspond respectively to different services.
2. The scheduling method according to claim 1, characterized in that the different services corresponding to the input message queues are assigned different numbers of application processes.
3. The scheduling method according to claim 1 or 2, characterized in that it further comprises: reading the input message queue, scheduling the assigned application processes for execution, and writing the execution result to the shared output message queue.
4. The scheduling method according to claim 1, characterized in that the process executing the flow scheduling service program proceeds to the next single-step predefined flow only after it has received asynchronous call returns equal in number to the call requests.
5. The scheduling method according to claim 1, characterized in that the process executing the flow scheduling service program proceeds to the next single-step predefined flow only after a synchronous call has returned.
6. The scheduling method according to claim 1, characterized in that the predefined service process configuration information and the predefined flow control information are configured in database tables, the predefined service process configuration information comprising a service identifier, a program file name, a message queue reference file and a configured number of processes, the predefined flow control information comprising a request identifier, a service identifier, a step number, call parameters, a multi-process work-division flag, a send mode and a timeout in seconds, and the send mode comprising synchronous sending and asynchronous sending.
7. The scheduling method according to claim 1, characterized in that it further comprises: listening for service requests; if a service request is received, performing flow scheduling; otherwise remaining in a waiting state.
8. The scheduling method according to claim 7, characterized in that the listening is performed over TCP/IP.
9. The scheduling method according to claim 1, characterized in that it further comprises: setting a separate timeout for each single-step predefined flow step.
10. The scheduling method according to claim 1, characterized in that it further comprises: providing interface functions between the flow scheduling service program and the application programs.
11. The scheduling method according to claim 10, characterized in that the interface functions comprise a scheduling initialization function which:
checks whether the input message queue to be used is ready, and if so records the message queue number, otherwise returns with an error;
creates a memory-resident application process;
polls the input message queue and, when a call request from the flow scheduling service process is received, calls the corresponding application function;
writes the execution result to the shared output message queue in response to the call from the flow scheduling service process.
12. The scheduling method according to claim 10, characterized in that the interface functions further comprise a service termination function.
13. A flow scheduling system for application processes, characterized by comprising:
a storage unit for storing predefined flow configuration parameters;
a control unit for reading the predefined service process configuration information stored in the storage unit, and for creating a shared output message queue and input message queues corresponding to the service process configuration;
an execution unit for reading, one by one, the single-step predefined flow control information stored in the storage unit, writing to the input message queue of the corresponding service when a call request is issued, and reading the shared output message queue when the call returns;
a listening unit for listening for service requests, such that if a service request is received the execution unit performs flow scheduling, and otherwise the system remains in a waiting state.
14. The scheduling system according to claim 13, characterized in that the predefined flow configuration parameters comprise the predefined service process configuration information and the single-step predefined flow control information.
CN200610028504XA 2006-06-27 2006-06-27 Flowpath scheduling method and system of application progress Active CN101097527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200610028504XA CN101097527B (en) 2006-06-27 2006-06-27 Flowpath scheduling method and system of application progress

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200610028504XA CN101097527B (en) 2006-06-27 2006-06-27 Flowpath scheduling method and system of application progress

Publications (2)

Publication Number Publication Date
CN101097527A CN101097527A (en) 2008-01-02
CN101097527B true CN101097527B (en) 2011-11-30

Family

ID=39011374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200610028504XA Active CN101097527B (en) 2006-06-27 2006-06-27 Flowpath scheduling method and system of application progress

Country Status (1)

Country Link
CN (1) CN101097527B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102148848B (en) * 2010-02-10 2014-07-16 中兴通讯股份有限公司 Data management method and system
CN102279774A (en) * 2011-08-22 2011-12-14 中兴通讯股份有限公司 Method and device for realizing multi-thread message interaction by using synchronous function calling mechanism
CN102663543A (en) * 2012-03-22 2012-09-12 北京英孚斯迈特信息技术有限公司 Scheduling system used for enterprise data unification platform
CN103514036B (en) * 2012-06-20 2017-07-25 中国银联股份有限公司 A kind of scheduling system and method triggered for event with batch processing
CN105700950B (en) * 2014-11-25 2019-11-22 深圳市腾讯计算机系统有限公司 A kind of data communications method and device
CN105183854B (en) * 2015-09-08 2018-07-13 浪潮(北京)电子信息产业有限公司 A kind of dispatching method of database unloading data
CN106055322A (en) * 2016-05-26 2016-10-26 中国银联股份有限公司 Flow scheduling method and device
CN107493312B (en) * 2016-06-12 2020-09-04 中国移动通信集团安徽有限公司 Service calling method and device
CN108958903B (en) * 2017-05-25 2024-04-05 北京忆恒创源科技股份有限公司 Embedded multi-core central processor task scheduling method and device
CN107277022B (en) * 2017-06-27 2020-03-13 中国联合网络通信集团有限公司 Process marking method and device
CN107948051B (en) * 2017-11-14 2018-10-12 北京知行锐景科技有限公司 A kind of real-time messages method for pushing and system based on Socket technologies
CN109842651B (en) * 2017-11-27 2021-11-26 中国移动通信集团上海有限公司 Uninterrupted service load balancing method and system
CN108228880B (en) * 2018-01-24 2020-07-14 上海达梦数据库有限公司 Method, device, equipment and medium for database management system to call external function
CN108536544B (en) * 2018-03-21 2021-06-25 微梦创科网络科技(中国)有限公司 Consumption method, device, server and medium based on database message queue
CN108932284B (en) * 2018-05-22 2020-11-24 中国银行股份有限公司 General logic scheduling method, electronic device and readable storage medium
CN109376015A (en) * 2018-10-23 2019-02-22 苏州思必驰信息科技有限公司 Solution and system are blocked in log for task scheduling system
CN109725984B (en) * 2018-12-24 2023-03-24 中电福富信息科技有限公司 Method for remotely stopping executing Shell command
CN110311974A (en) * 2019-06-28 2019-10-08 东北大学 A kind of cloud storage service method based on asynchronous message
CN111404930A (en) * 2020-03-13 2020-07-10 北京思特奇信息技术股份有限公司 Composite instruction processing method and system
CN111752725A (en) * 2020-06-29 2020-10-09 上海通联金融服务有限公司 Method and system for improving performance of financial credit system
CN112860449B (en) * 2021-01-08 2023-01-10 苏州浪潮智能科技有限公司 Method, system, equipment and medium for preventing restart caused by message overtime
CN113254240B (en) * 2021-06-21 2021-10-15 苏州浪潮智能科技有限公司 Method, system, device and medium for managing control device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1790270A (en) * 2005-12-14 2006-06-21 浙江大学 Java virtual machine implementation method supporting multi-process

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tan Qi (谭琦). Research on the communication and synchronization mechanisms of embedded operating systems. China Master's Theses Full-text Database, 2005, pp. 36-43. *

Also Published As

Publication number Publication date
CN101097527A (en) 2008-01-02

Similar Documents

Publication Publication Date Title
CN101097527B (en) Flowpath scheduling method and system of application progress
US6434594B1 (en) Virtual processing network enabler
CN101248405B (en) Multithreading with concurrency domains
US6138168A (en) Support for application programs in a distributed environment
CN101567013B (en) Method and apparatus for implementing ETL scheduling
US6625638B1 (en) Management of a logical partition that supports different types of processors
TW406242B (en) System and method for maximizing usage of computer resources in scheduling of application tasks
US5925098A (en) Apparatus and method for dispatching client method calls within a server computer system
US7058950B2 (en) Callback event listener mechanism for resource adapter work executions performed by an application server thread
US7581225B2 (en) Multithreading with concurrency domains
US20020016809A1 (en) System and method for scheduling execution of cross-platform computer processes
JP2009522647A (en) Workflow object model
US20050044173A1 (en) System and method for implementing business processes in a portal
US8743387B2 (en) Grid computing system with virtual printer
Vinoski Chain of responsibility
US7206843B1 (en) Thread-safe portable management interface
Bagrodia Parallel languages for discrete-event simulation models
GB2456201A (en) Notification of a background processing event in an enterprise resource planning system
CN115480904B (en) Concurrent calling method for system service in microkernel
Doeppner et al. C++ on a parallel machine
CN115220887A (en) Processing method of scheduling information, task processing system, processor and electronic equipment
CN113485812B (en) Partition parallel processing method and system based on large-data-volume task
Guo et al. Decomposing and executing serverless applications as resource graphs
US20090217290A1 (en) Method and System for Task Switching with Inline Execution
Kimbleton et al. A perspective on network operating systems

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant