CN102012840A - Batch data scheduling method and system - Google Patents

Batch data scheduling method and system

Info

Publication number
CN102012840A
CN102012840A · CN2010106025269A · CN201010602526A
Authority
CN
China
Prior art keywords
intermediate server
data
corresponding task
control end
described intermediate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010106025269A
Other languages
Chinese (zh)
Inventor
牛志嘉
刘旭
温良
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Bank of China filed Critical Agricultural Bank of China
Priority to CN2010106025269A priority Critical patent/CN102012840A/en
Publication of CN102012840A publication Critical patent/CN102012840A/en
Pending legal-status Critical Current

Abstract

The invention discloses a batch data scheduling method and system. The method comprises the following steps: a master control end acquires batch task data to be processed and generates corresponding task scheduling instructions; the master control end sends the task scheduling instructions and the corresponding task data to an intermediate server, which saves all task scheduling instructions; and each application end in an application cluster accesses the intermediate server, obtains a corresponding task scheduling instruction, and processes the corresponding task data. In the invention, the data requiring batch processing is shared among multiple application ends, thereby greatly improving the efficiency of batch data processing.

Description

Batch data scheduling method and system
Technical field
The present invention relates to the field of data processing, and more particularly to a batch scheduling method and system for data processing.
Background technology
Batch jobs (or batch programs) generally refer to large-volume batch programs that run in the background and require no interaction with the user. Batch scheduling is the collective name for organizing such a series of batch jobs according to a predefined execution order, handing them to hardware resources for execution, and managing the entire job run.
In many large and medium-sized enterprises, and especially in information systems in fields such as finance and telecommunications, there is a large amount of complex demand for automated batch processing. Greatly multiplied and diversified tasks have replaced the earlier batch jobs, and the scale and complexity of jobs continue to rise. Facing the rapid development of enterprise informatization, batch systems urgently need new technical solutions in performance, development frameworks, standardization, and operation and maintenance (O&M) monitoring.
At present, most mainstream batch systems adopt the mainframe model: all batch jobs are deployed on one machine, with only simple sequencing in the scheduling, so adjusting the execution order requires modifying code. To improve efficiency, some batch systems deploy the database server and file server separately from the batch system itself. Others introduce multithreading: on a single host with outstanding computing power (a mainframe), a large number of threads are opened so that jobs which would otherwise execute sequentially run concurrently, which can effectively improve batch efficiency within the limits of the host's CPU and I/O capacity.
Traditional mainframe-class batch systems mostly suffer from the following problems and shortcomings:
1. Most systems still run mainly in host mode and scale poorly. Time cost and hardware cost are difficult to control flexibly, and the two mutually conditioned problems of time bottlenecks and low resource utilization cannot be resolved;
2. There is no unified basic framework. Batch systems of different projects are built independently, so the construction of this already complex batch infrastructure is seriously duplicated; meanwhile, batch jobs across projects have almost nothing in common, and new functional modules must be designed and developed each time;
3. O&M is difficult. A single O&M engineer can only be competent for the O&M of one batch system, and every newly added system often requires relearning, so time and labor costs are too high.
A minority of systems deploy the batch system on a server cluster of their own, but this approach still has obvious defects:
When the resources of the mainframe are insufficient to meet the demands of one or many batch systems, the deployment model as a whole cannot be extended. The only options are to spend heavily on more powerful physical equipment, or to spend more manpower migrating part of the batch jobs to other machines, and such migration is powerless when facing systems with a high degree of data coupling;
Even where a cluster-mode batch system has been realized in a specific project, that system usually lacks the generality of a framework, and its development cost is very high. As soon as jobs need to interact across systems, new programs must be developed; at a slightly larger scale, this amounts to developing a new system, and the risk and cost this brings are immeasurable.
Summary of the invention
In view of this, the embodiments of the present invention provide a batch scheduling method and system for data processing, so as to improve the efficiency of batch data processing.
An embodiment of the invention provides a batch scheduling method for data processing, the method comprising:
a master control end acquiring batch task data to be processed and generating corresponding task scheduling instructions;
the master control end sending the task scheduling instructions and the corresponding task data to an intermediate server, the intermediate server saving all task scheduling instructions;
each application end in an application cluster accessing the intermediate server, obtaining a corresponding task scheduling instruction, and processing the corresponding task data.
Preferably, the method further comprises:
after the application end finishes processing the corresponding task data, the application end continuing to access the intermediate server, obtaining another task scheduling instruction, and processing the corresponding task data.
Preferably, after the application end finishes processing the corresponding task data, the method further comprises:
feeding back result information to the intermediate server;
the intermediate server uploading the result information to the master control end, which keeps a real-time record.
Preferably, the method further comprises:
a monitoring client initiating a task monitoring request and sending it to the intermediate server, the intermediate server forwarding the task monitoring request to the master control end;
the master control end, according to the task monitoring request, directing the intermediate server to feed back corresponding task status information to the monitoring client.
Preferably, the intermediate server is realized as any one of a message server, a polled database server, or a file distribution server.
A batch scheduling system for data processing, the system comprising a master control end, an intermediate server, and an application cluster, wherein:
the master control end is configured to acquire batch task data to be processed, generate corresponding task scheduling instructions, and send the task scheduling instructions and the corresponding task data to the intermediate server;
the intermediate server is configured to save all task scheduling instructions;
the application cluster is configured so that each application end in it accesses the intermediate server, obtains a corresponding task scheduling instruction, and processes the corresponding task data.
Preferably, after finishing the processing of the corresponding task data, the application end feeds back result information to the intermediate server;
the intermediate server then uploads the result information to the master control end, which keeps a real-time record.
Preferably, after finishing the processing of the corresponding task data, the application end continues to access the intermediate server, obtains another task scheduling instruction, and processes the corresponding task data.
Preferably, the system further comprises:
a monitoring client configured to initiate a task monitoring request and send it to the intermediate server, the intermediate server forwarding the task monitoring request to the master control end;
the master control end then, according to the task monitoring request, directing the intermediate server to feed back corresponding task status information to the monitoring client.
Preferably, the intermediate server is realized as any one of a message server, a polled database server, or a file distribution server.
Compared with the prior art, the embodiments of the invention provide a batch scheduling framework for data processing: the application ends are deployed in cluster mode, a master control end is set up, and a message service carries the task data and task scheduling instructions between the master control end and each application end, so that the data to be batch-processed is shared among multiple application ends, greatly improving batch processing efficiency. In addition, deploying the master control end separately from the application ends enables flexible deployment and expansion of hardware resources, trading dynamic hardware expansion for higher batch execution efficiency.
Description of drawings
To explain the technical solutions of the embodiments of the invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic step diagram of a batch scheduling method for data processing provided by Embodiment 1 of the invention;
Fig. 2 is a schematic step diagram of another batch scheduling method for data processing provided by Embodiment 2 of the invention;
Fig. 3 is a schematic step diagram of yet another batch scheduling method for data processing provided by Embodiment 3 of the invention;
Fig. 4 is a schematic diagram of the network deployment provided by Embodiment 4 of the invention;
Fig. 5 is a schematic architecture diagram of a batch scheduling system for data processing provided by Embodiment 5 of the invention;
Fig. 6 is a schematic architecture diagram of another batch scheduling system for data processing provided by Embodiment 6 of the invention.
Detailed description
The technical solutions in the embodiments of the invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the invention.
The embodiments of the invention provide a batch scheduling method and system for data processing that improve the efficiency of batch data processing. To make the purpose, technical solutions, and advantages of the invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the accompanying drawings.
Embodiment one
An embodiment of the invention provides a batch scheduling method for data processing. As shown in Fig. 1, the method may comprise the following steps:
Step 101: the master control end acquires batch task data to be processed and generates corresponding task scheduling instructions;
Step 102: the master control end sends the task scheduling instructions and the corresponding task data to an intermediate server, which saves all task scheduling instructions;
Step 103: each application end in the application cluster accesses the intermediate server, obtains a corresponding task scheduling instruction, and processes the corresponding task data.
Prior-art batch data processing is mainly based on the host model: all batch jobs run on a single high-performance host or mainframe. In contrast, the batch scheduling method for data processing provided by the invention is based on a cluster model, physically separating job distribution from job execution so that the job-executing equipment can be expanded dynamically.
The master control end, short for Center Schedule Framework ("CSF"), is the controller and initiator of batch data processing. It completes the job scheduling of all pending batch task data, for example: configuring the schedule graph, and splitting, assembling, and distributing node tasks.
An application end is an Application Schedule Framework ("ASF"). It receives action commands from the CSF end and is the receiver and executor of node tasks or asynchronous tasks.
The invention deploys an intermediate server between the master control end and the application ends for job-information transmission between the two. By introducing the intermediate server, the functions held by the host in the host model are divided between the master control end and the application ends.
The master control end obtains the processing order of the batch task data from a scheduling model library, distributes jobs accordingly, and generates corresponding task scheduling instructions. The scheduling model library stores the execution-order information of the batch task data and can be generated along with the batch task data. If the current task needs to be split into several subtasks for execution, the task scheduling instructions usually map one-to-one to the subtasks: each subtask that is split out produces one task scheduling instruction. The master control end sends the generated task scheduling instructions to the intermediate server, which saves all of them; the intermediate server does not push the received task scheduling instructions to the connected application cluster. Within the intermediate server, all task scheduling instructions can form a batch service graph.
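The one-to-one mapping between subtasks and task scheduling instructions described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; the names `ScheduleInstruction` and `split_task` are assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ScheduleInstruction:
    """One task scheduling instruction, produced for exactly one subtask."""
    task_id: str
    subtask_index: int
    payload: List[dict]


def split_task(task_id: str, records: List[dict], chunk_size: int) -> List[ScheduleInstruction]:
    """Split a batch task into subtasks, emitting one instruction per subtask."""
    return [
        ScheduleInstruction(task_id, i // chunk_size, records[i:i + chunk_size])
        for i in range(0, len(records), chunk_size)
    ]
```

Each instruction carries enough context (task id, subtask index, data slice) for any application end to execute it without further coordination with the master control end.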
The application side in this embodiment is in cluster mode and contains multiple application ends; each application end is responsible for accessing the intermediate server, obtaining a corresponding task scheduling instruction, and processing the corresponding task data. It should be noted that an application end obtains the task scheduling instruction and the corresponding task data from the intermediate server by active request, a pull model, rather than having the intermediate server actively push work for the application end to execute passively.
Thus, once task scheduling instructions and corresponding task data are stored in the intermediate server, the application ends can actively request them and process the corresponding task data according to the instructions. When multiple application ends access the intermediate server, they can process task data simultaneously.
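The pull model described above can be sketched in a few lines of Python. This is a minimal illustrative sketch under assumed names (`IntermediateServer`, `save`, `fetch`), not the patent's actual implementation: the master control end saves instructions, and the server never pushes; it only answers active fetch requests from application ends.

```python
import queue


class IntermediateServer:
    """Saves every instruction the master control end sends; never pushes."""

    def __init__(self):
        self._instructions = queue.Queue()

    def save(self, instruction):
        # Called by the master control end (CSF) to store an instruction.
        self._instructions.put(instruction)

    def fetch(self):
        # Called by an application end (ASF): an active pull, returning one
        # pending instruction, or None when nothing is waiting.
        try:
            return self._instructions.get_nowait()
        except queue.Empty:
            return None


# The master control end stores work; application ends then request it.
server = IntermediateServer()
for i in range(3):
    server.save({"task": "T1", "subtask": i})
```

Because `fetch` is driven by the application ends, a newly added application end starts contributing simply by pulling, with no reconfiguration of the master control end.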
It can thus be seen that this embodiment of the invention provides a batch scheduling framework for data processing: the application ends are deployed in cluster mode, a master control end is set up, and a message service carries the task data and task scheduling instructions between the master control end and each application end, so that the data to be batch-processed is shared among multiple application ends, greatly improving batch processing efficiency. In addition, deploying the master control end separately from the application ends enables flexible deployment and expansion of hardware resources, trading dynamic hardware expansion for higher batch execution efficiency.
Embodiment two
Because each application end of the application cluster actively requests task scheduling instructions and processes the corresponding task data, in order to further improve batch processing efficiency, in another embodiment of the invention, as shown in Fig. 2, the method may comprise the following steps:
Step 201: the master control end acquires batch task data to be processed and generates corresponding task scheduling instructions;
Step 202: the master control end sends the task scheduling instructions and the corresponding task data to an intermediate server, which saves all task scheduling instructions;
Step 203: each application end in the application cluster accesses the intermediate server, obtains a corresponding task scheduling instruction, and processes the corresponding task data;
Step 204: after the application end finishes processing the corresponding task data, the application end continues to access the intermediate server, obtains another task scheduling instruction, and processes the corresponding task data.
In this embodiment, steps 201–203 are executed as in the previous embodiment; the difference is the addition of step 204.
In step 204, after the application end finishes processing the corresponding task data, it continues to access the intermediate server, obtains another task scheduling instruction, and processes the corresponding task data.
Usually, the application ends in the cluster are independent of one another, each executing its jobs according to its own running conditions. With this arrangement, each application end continues to access the intermediate server after finishing its current task data, obtains another task scheduling instruction, and processes the corresponding task data. Therefore an application end with ample spare resources can take on more job tasks, while a resource-tight application end can stop taking new ones. This keeps the application cluster concurrently busy near saturation: no application end is overloaded while another sits idle, the classic performance-bottleneck situation, which further improves the efficiency of batch data processing.
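The keep-pulling behavior of step 204 can be sketched as a loop that each application end runs until no instructions remain. This is an illustrative Python sketch under assumptions (two application ends named "A" and "B", a shared in-process queue standing in for the intermediate server), not the patent's code; each end naturally takes work at its own pace, which is the self-balancing property described above.

```python
import queue
import threading

# The intermediate server's stored instructions, pre-filled by the master.
work = queue.Queue()
for i in range(20):
    work.put(i)

processed = {"A": [], "B": []}
lock = threading.Lock()


def application_end(name):
    # Keep pulling as long as instructions remain. An end with spare
    # capacity simply loops again; a saturated end would stop fetching.
    while True:
        try:
            item = work.get_nowait()
        except queue.Empty:
            return
        with lock:
            processed[name].append(item)


threads = [threading.Thread(target=application_end, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The split of work between the two ends depends on their relative speed, but every instruction is processed exactly once.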
Embodiment three
To facilitate the master control end's control over all batch tasks, in another embodiment of the invention, as shown in steps 303 and 304 of Fig. 3, after the application end finishes processing the corresponding task data in steps 203 and 204 of the above embodiments, it feeds back result information to the intermediate server; the intermediate server uploads the result information to the master control end, which keeps a real-time record.
In this embodiment, after finishing the processing of its task data, every application end feeds back result information to the intermediate server. The result information may include the outcome of the application end's execution of the corresponding task, such as feedback of correct execution or feedback of erroneous execution. The embodiment does not limit the specific content of the feedback result information; those skilled in the art can set it according to the actual application scenario.
Because each application end feeds back result information to the intermediate server after finishing its task-data processing, the master control end can learn how the application ends are handling the batch data, which facilitates its scheduling control over the batch data.
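The feedback path described above, application end to intermediate server to master control end, can be sketched as a simple relay. This is an illustrative Python sketch; the class and field names (`MasterControlEnd`, `feedback`, `ok`, `detail`) are assumptions, and the result-information fields are examples of the kind of content left open by the embodiment.

```python
class MasterControlEnd:
    """Keeps a real-time record of result information relayed upward."""

    def __init__(self):
        self.record = []

    def log_result(self, result):
        self.record.append(result)


class IntermediateServer:
    """Relays each application end's result feedback to the master control end."""

    def __init__(self, master):
        self._master = master

    def feedback(self, app_id, task_id, ok, detail=""):
        # Application ends call this after processing; the server uploads
        # the result information so the master can record it in real time.
        self._master.log_result(
            {"app": app_id, "task": task_id, "ok": ok, "detail": detail}
        )


master = MasterControlEnd()
server = IntermediateServer(master)
server.feedback("ASF-1", "T1.0", True)
server.feedback("ASF-2", "T1.1", False, "parse error")
```

The master's record then serves as the basis for its scheduling control, e.g. re-dispatching the failed subtask.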
Embodiment four
In the prior art, in host-model batch processing, the host's monitoring capability for batch processing is usually poor: the running situation of the batch can generally be grasped only by analyzing text in the logs, and visual real-time monitoring is impossible, so it is difficult to directly locate batch jobs that go wrong.
To avoid this defect in the technical solution of the invention, this embodiment adds a monitoring client connected to the intermediate server, for example a Schedule Monitor Console ("SMC"). This console becomes the supervisor of the whole scheduling system, performing visual monitoring of the running status of the entire scheduling environment. The network deployment of this embodiment is shown in Fig. 4.
In specific monitoring, the monitoring client initiates a task monitoring request and sends it to the intermediate server, which forwards the request to the master control end. After receiving the request and where permitted, the master control end, according to the task monitoring request, directs the intermediate server to feed back corresponding task status information to the monitoring client. In this way, the monitoring client can monitor the tasks approved by the master control end.
The monitoring client is not a mandatory part of batch task scheduling; it can simply be plugged in when monitoring is needed.
When the monitoring client needs to monitor, it sends, according to the current batch service graph obtained from the intermediate server, a monitoring request to the intermediate server's task transmission queue; the intermediate server forwards the task monitoring request to the master control end, which receives it in real time and directs the intermediate server to feed back corresponding task status information to the monitoring client, handing the corresponding task data to the monitoring client for monitoring. Monitoring requests for tasks already in a listening state can be discarded by the intermediate server.
During monitoring, as soon as the running status of a monitored task changes, the status-change information is immediately sent to the intermediate server. If the monitoring request type is "single request", the master control end deletes the monitoring request after the current execution-state change has been fed back to the monitoring client; if it is "continuous monitoring", changes of the task status continue to be monitored.
After the monitoring client receives the status-change information, it can render the monitoring page, achieving the effect of real-time listening.
When the monitoring client leaves the current monitored object, it sends a cancel-monitoring request to the intermediate server; upon receiving this cancel-monitoring request, the master control end simply deletes the earlier monitoring request.
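The lifecycle of monitoring requests, single requests deleted after one feedback, continuous ones kept until cancelled, can be sketched as follows. This is an illustrative Python sketch of the bookkeeping at the master control end; `MonitorRegistry` and the mode strings are names assumed here, not taken from the patent.

```python
class MonitorRegistry:
    """Tracks active task-monitoring requests at the master control end.

    "single" requests are deleted after one status feedback; "continuous"
    requests keep receiving status changes until a cancel request arrives.
    """

    def __init__(self):
        self._requests = {}   # task_id -> "single" | "continuous"
        self.sent = []        # status feedback delivered to the monitoring client

    def register(self, task_id, mode):
        self._requests[task_id] = mode

    def cancel(self, task_id):
        # Triggered by the monitoring client's cancel-monitoring request.
        self._requests.pop(task_id, None)

    def on_status_change(self, task_id, status):
        mode = self._requests.get(task_id)
        if mode is None:
            return            # no active monitor: the change is dropped
        self.sent.append((task_id, status))
        if mode == "single":
            del self._requests[task_id]   # one feedback, then deleted


reg = MonitorRegistry()
reg.register("T1", "single")
reg.register("T2", "continuous")
for status in ("running", "done"):
    reg.on_status_change("T1", status)
    reg.on_status_change("T2", status)
```

After these calls, "T1" has produced exactly one feedback event while "T2" has produced one per status change.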
In each of the above embodiments, the intermediate server can be realized as any one of a message server, a polled database server, or a file distribution server.
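Of these realizations, the file distribution variant is the simplest to sketch concretely. The following Python sketch is illustrative only (the class name, file naming, and claim-by-rename scheme are assumptions of this sketch, not details from the patent): each instruction is a JSON file in a shared directory, and an application end claims a file with an atomic rename so that no two ends take the same task.

```python
import json
import os
import tempfile


class FileDistributionServer:
    """File-distribution realization of the intermediate server."""

    def __init__(self, directory):
        self._dir = directory
        self._seq = 0

    def save(self, instruction):
        # Master control end: write one file per task scheduling instruction.
        path = os.path.join(self._dir, "%06d.task" % self._seq)
        self._seq += 1
        with open(path, "w") as f:
            json.dump(instruction, f)

    def fetch(self):
        # Application end: claim the oldest unclaimed instruction file.
        for name in sorted(os.listdir(self._dir)):
            if not name.endswith(".task"):
                continue
            path = os.path.join(self._dir, name)
            claimed = path + ".claimed"
            try:
                os.rename(path, claimed)  # atomic claim on a local filesystem
            except OSError:
                continue                  # another application end got it first
            with open(claimed) as f:
                return json.load(f)
        return None


tmp = tempfile.mkdtemp()
server = FileDistributionServer(tmp)
server.save({"task": "T1", "subtask": 0})
server.save({"task": "T1", "subtask": 1})
```

A message-server realization would replace the directory with a queue, and a polled-database realization would replace it with a table that application ends poll; the save/fetch interface stays the same.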
In addition, the scheduling model library can use various data customization and storage formats, for example an XML file with a schema, or serialized files; a web system or a graphical application can also be developed to customize the scheduling model library visually. The monitoring client can be developed in C/S mode as needed. The master control end and the message server themselves can also be deployed in cluster mode.
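As a purely illustrative example of such a schema-backed XML scheduling model (every element and attribute name below is invented for this sketch; the patent does not specify a format), a model file might encode task order and split granularity like this:

```xml
<schedule-model version="1.0">
  <!-- Execution order is given by dependencies between tasks. -->
  <task id="T1" name="daily-settlement">
    <depends-on/>
    <split chunk-size="1000"/>
  </task>
  <task id="T2" name="report-generation">
    <depends-on task="T1"/>
  </task>
</schedule-model>
```

The master control end would read such a file to derive the processing order and the subtask split before generating task scheduling instructions.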
It can be seen from the above embodiments that the invention deploys in cluster mode and transmits information through a message service. Deploying the master control end separately from the application ends enables flexible deployment and expansion of hardware resources, trading dynamic hardware expansion for higher batch execution efficiency and letting every application end in the framework run near saturation at the same time, resolving the awkward coexistence of time bottlenecks and idle resources. The invention also provides a unified development framework and deployment mode: the framework shields a large amount of low-level technology, accelerates the construction of new systems, spares the pain and risk of redevelopment, and lets jobs of different systems integrate directly without developing new modules. It reduces O&M cost and lightens the burden on O&M staff: since all projects are developed and deployed under the unified framework, O&M staff need to learn only one set of O&M methods to be competent for every batch system built with the invention, greatly reducing learning and labor costs. Furthermore, the framework provides a browser-based monitoring client that supports full visual real-time monitoring, convenient location of problem jobs, and unified cross-system monitoring, along with manual-intervention capabilities such as rerunning or skipping active jobs, which greatly improves developers' work efficiency.
Embodiment five
Corresponding to the above method embodiments, an embodiment of the invention also provides a batch scheduling system for data processing. As shown in Fig. 5, the system comprises a master control end 501, an intermediate server 502, and an application cluster 503, wherein:
the master control end 501 is configured to acquire batch task data to be processed, generate corresponding task scheduling instructions, and send the task scheduling instructions and the corresponding task data to the intermediate server 502;
the intermediate server 502 is configured to save all task scheduling instructions;
the application cluster 503 is configured so that each application end 504 in it accesses the intermediate server 502, obtains a corresponding task scheduling instruction, and processes the corresponding task data.
The invention deploys the intermediate server between the master control end and the application ends for job-information transmission between the two. By introducing the intermediate server, the functions held by the host in the host model are divided between the master control end and the application ends.
The application side in this embodiment is in cluster mode and contains multiple application ends; each application end accesses the intermediate server, obtains a corresponding task scheduling instruction, and processes the corresponding task data. It should be noted that an application end obtains the task scheduling instruction and the corresponding task data from the intermediate server by active request, a pull model, rather than having the intermediate server actively push work for the application end to execute passively.
Thus, once task scheduling instructions and corresponding task data are stored in the intermediate server, the application ends can actively request them and process the corresponding task data according to the instructions. When multiple application ends access the intermediate server, they can process task data simultaneously.
It can thus be seen that this embodiment provides a batch scheduling framework for data processing: the application ends are deployed in cluster mode, a master control end is set up, and a message service carries the task data and task scheduling instructions between the master control end and each application end, so that the data to be batch-processed is shared among multiple application ends, greatly improving batch processing efficiency. In addition, deploying the master control end separately from the application ends enables flexible deployment and expansion of hardware resources, trading dynamic hardware expansion for higher batch execution efficiency.
In the above system, after an application end finishes processing its corresponding task data, it continues to access the intermediate server, obtains another task scheduling instruction, and processes the corresponding task data.
Generally, the application ends in the application cluster are independent of one another, and each runs its tasks according to its own operating conditions. With this arrangement, each application end that finishes processing its task data continues to access the intermediate server, obtains another task scheduling instruction, and processes the corresponding task data. Consequently, an application end with ample spare resources can take on more task jobs, while a resource-constrained application end simply stops fetching new ones. The application cluster can therefore maintain highly saturated concurrent operation: no application end is overloaded while another sits idle, and avoiding this idle-node bottleneck further improves the efficiency of batch data processing.
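The self-balancing behaviour above can be illustrated with a short sketch. The capacity check is an assumption standing in for whatever resource measure a real node would use; the function and variable names are hypothetical.

```python
import queue

# Each application end keeps requesting tasks from the intermediate
# queue only while it has spare capacity; a loaded node simply stops
# fetching, so work naturally flows to the nodes with free resources.
def run_application_end(task_queue, capacity):
    """Fetch tasks until this node's capacity is reached or work runs out."""
    taken = []
    while len(taken) < capacity:
        try:
            taken.append(task_queue.get_nowait())
        except queue.Empty:
            break  # no work left on the intermediate server
    return taken

q = queue.Queue()
for task_id in range(5):
    q.put(task_id)

idle_node = run_application_end(q, capacity=4)  # ample spare resources
busy_node = run_application_end(q, capacity=1)  # resource-constrained
print(len(idle_node), len(busy_node))  # 4 1
```

No central balancer is needed: the split of work falls out of each node deciding for itself whether to request another task.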
After an application end finishes processing its corresponding task data, it feeds result information back to the intermediate server; the intermediate server uploads the result information to the master control end, which records it in real time.
In this system embodiment, every application end feeds result information back to the intermediate server after finishing its corresponding task data. The result information here may comprise the outcome of the application end executing the corresponding task, including a correct-execution feedback or an erroneous-execution feedback. The embodiment of the invention does not specifically limit the content of the result information; those skilled in the art can configure it according to the actual application scenario.
Because each application end feeds result information back to the intermediate server after finishing its task data, the master control end can learn how the application ends are handling the batch data, which facilitates the master control end's scheduling control of the batch data.
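The feedback path above can be sketched as two small roles. The class and field names (`MasterControlEnd`, `IntermediateServer`, `status`) are assumptions for illustration; the point is only that results travel upward through the intermediate server to a real-time record at the master end.

```python
# Result information flows: application end -> intermediate server -> master.
class MasterControlEnd:
    def __init__(self):
        self.record_log = []

    def record(self, result):
        self.record_log.append(result)  # real-time record of each result

class IntermediateServer:
    def __init__(self, master):
        self.master = master

    def feedback(self, result):
        self.master.record(result)  # relay the result upward

master = MasterControlEnd()
server = IntermediateServer(master)
# An application end feeds back both success and error outcomes:
server.feedback({"task_id": 7, "status": "success"})
server.feedback({"task_id": 8, "status": "error", "detail": "bad record"})
print(len(master.record_log))  # 2
```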
Embodiment six
In the prior art, in batch data processing based on the host mode, the host's ability to monitor batch data processing is usually poor: the batch running situation can generally be grasped only by analyzing log text, visual real-time monitoring is impossible, and it is therefore difficult to directly locate batch jobs in which errors occur.
To avoid the above defect in the technical solution of the present invention, in this system embodiment, as shown in Figure 6, a monitoring client 505 is added, and this monitoring client is connected to the intermediate server 502.
In the specific monitoring process, the monitoring client initiates a task monitoring request and sends it to the intermediate server, which forwards the request to the master control end. After receiving the task monitoring request, the master control end, if it permits the request, controls the intermediate server according to the request to feed corresponding task status information back to the monitoring client. In this way, the monitoring client can monitor the tasks approved by the master control end.
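The request-forwarding and permission check described above can be sketched as follows. All names are assumptions (the patent does not specify an API); the sketch shows only that status is released when, and only when, the master control end permits the request.

```python
# Monitoring flow: monitoring client -> intermediate server -> master end,
# with the master end gating whether task status is returned.
class MasterControlEnd:
    def __init__(self, task_status):
        self.task_status = task_status

    def handle_monitor_request(self, task_id, permitted=True):
        # Status is fed back only for requests the master end approves.
        return self.task_status.get(task_id) if permitted else None

class IntermediateServer:
    def __init__(self, master):
        self.master = master

    def forward_monitor_request(self, task_id):
        return self.master.handle_monitor_request(task_id)

master = MasterControlEnd({"job-1": "RUNNING", "job-2": "DONE"})
server = IntermediateServer(master)
status = server.forward_monitor_request("job-1")  # monitoring client's call
print(status)  # RUNNING
```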
The system embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the embodiments of the invention. Therefore, the embodiments of the invention are not limited to the embodiments shown herein but are to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A batch scheduling method for data processing, characterized in that the method comprises:
obtaining, by a master control end, batch task data to be processed, and generating corresponding task scheduling instructions;
sending, by the master control end, the task scheduling instructions and the corresponding task data to an intermediate server, all task scheduling instructions being saved by the intermediate server; and
accessing the intermediate server by each application end in an application cluster, obtaining a corresponding task scheduling instruction, and processing the corresponding task data.
2. The batch scheduling method for data processing according to claim 1, characterized in that the method further comprises:
after the application end finishes processing the corresponding task data, continuing, by the application end, to access the intermediate server, obtaining another task scheduling instruction, and processing the corresponding task data.
3. The batch scheduling method for data processing according to claim 2, characterized in that, after the application end finishes processing the corresponding task data, the method further comprises:
feeding result information back to the intermediate server; and
uploading, by the intermediate server, the result information to the master control end, the result information being recorded in real time by the master control end.
4. The batch scheduling method for data processing according to claim 1, characterized in that the method further comprises:
initiating, by a monitoring client, a task monitoring request and sending the task monitoring request to the intermediate server, the task monitoring request being forwarded to the master control end by the intermediate server; and
controlling, by the master control end according to the task monitoring request, the intermediate server to feed corresponding task status information back to the monitoring client.
5. The batch scheduling method for data processing according to any one of claims 1 to 4, characterized in that the intermediate server is implemented as any one of a message server, a polling database server, or a file distribution server.
6. A batch scheduling system for data processing, characterized in that the system comprises a master control end, an intermediate server, and an application cluster, wherein:
the master control end is configured to obtain batch task data to be processed, generate corresponding task scheduling instructions, and send the task scheduling instructions and the corresponding task data to the intermediate server;
the intermediate server is configured to save all task scheduling instructions; and
each application end in the application cluster is configured to access the intermediate server, obtain a corresponding task scheduling instruction, and process the corresponding task data.
7. The batch scheduling system for data processing according to claim 6, characterized in that the application end feeds result information back to the intermediate server after finishing processing the corresponding task data; and
the intermediate server uploads the result information to the master control end, the result information being recorded in real time by the master control end.
8. The batch scheduling system for data processing according to claim 7, characterized in that, after finishing processing the corresponding task data, the application end continues to access the intermediate server, obtains another task scheduling instruction, and processes the corresponding task data.
9. The batch scheduling system for data processing according to claim 6, characterized in that the system further comprises:
a monitoring client configured to initiate a task monitoring request and send the task monitoring request to the intermediate server, the task monitoring request being forwarded to the master control end by the intermediate server;
wherein the master control end controls the intermediate server, according to the task monitoring request, to feed corresponding task status information back to the monitoring client.
10. The batch scheduling system for data processing according to any one of claims 6 to 9, characterized in that the intermediate server is implemented as any one of a message server, a polling database server, or a file distribution server.
CN2010106025269A 2010-12-23 2010-12-23 Batch data scheduling method and system Pending CN102012840A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010106025269A CN102012840A (en) 2010-12-23 2010-12-23 Batch data scheduling method and system


Publications (1)

Publication Number Publication Date
CN102012840A true CN102012840A (en) 2011-04-13

Family

ID=43843016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010106025269A Pending CN102012840A (en) 2010-12-23 2010-12-23 Batch data scheduling method and system

Country Status (1)

Country Link
CN (1) CN102012840A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1874576A (en) * 2005-04-28 2006-12-06 索尼爱立信移动通信日本株式会社 Software update system and software update management apparatus
CN101114937A (en) * 2007-08-02 2008-01-30 上海交通大学 Electric power computation gridding application system
CN101291337A (en) * 2008-05-30 2008-10-22 同济大学 Grid resource management system and method
CN101808121A (en) * 2010-02-24 2010-08-18 深圳市五巨科技有限公司 Method and device for writing server log of mobile terminal into database


Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102226890A (en) * 2011-06-10 2011-10-26 中国工商银行股份有限公司 Monitoring method and device for host batch job data
CN102495838A (en) * 2011-11-03 2012-06-13 成都市华为赛门铁克科技有限公司 Data processing method and data processing device
CN102495838B (en) * 2011-11-03 2014-09-17 华为数字技术(成都)有限公司 Data processing method and data processing device
CN102523124B (en) * 2011-12-26 2014-09-10 北京蓝汛通信技术有限责任公司 Method and apparatus for carrying out batch processing on lots of hosts in CDN network
CN102523124A (en) * 2011-12-26 2012-06-27 北京蓝汛通信技术有限责任公司 Method and apparatus for carrying out batch processing on lots of hosts in CDN network
CN103186418A (en) * 2011-12-30 2013-07-03 北大方正集团有限公司 Method and system for distributing tasks
CN103853713B (en) * 2012-11-28 2018-04-24 勤智数码科技股份有限公司 The efficient storage method of mass data
CN103853719A (en) * 2012-11-28 2014-06-11 成都勤智数码科技股份有限公司 Extensible mass data collection system
CN103853713A (en) * 2012-11-28 2014-06-11 成都勤智数码科技股份有限公司 Efficient storage method of mass data
CN103853719B (en) * 2012-11-28 2018-05-22 勤智数码科技股份有限公司 Easily extension mass data collection system
CN103150218A (en) * 2013-03-28 2013-06-12 广州供电局有限公司 Resource scheduling server, intelligent terminals and scheduling method thereof
CN103197960A (en) * 2013-04-12 2013-07-10 中国银行股份有限公司 Scheduling method and scheduling system for batch job system
CN103197960B (en) * 2013-04-12 2016-06-22 中国银行股份有限公司 Dispatching method and system for batch job system
CN103336720A (en) * 2013-06-17 2013-10-02 湖南大学 SLURM-based job execution method with data dependency
CN104793994A (en) * 2015-04-27 2015-07-22 中国农业银行股份有限公司 Batch job processing method, device and system
CN104991821B (en) * 2015-06-29 2019-12-06 北京奇虎科技有限公司 method and device for processing monitoring tasks in batches
CN107563942A (en) * 2016-06-30 2018-01-09 阿里巴巴集团控股有限公司 A kind of logistics data batch processing method, logistics processing system and processing unit
CN107563942B (en) * 2016-06-30 2021-06-18 菜鸟智能物流控股有限公司 Logistics data batch processing method, logistics processing system and processing device
CN108924214A (en) * 2018-06-27 2018-11-30 中国建设银行股份有限公司 A kind of load-balancing method of computing cluster, apparatus and system
CN109582451A (en) * 2018-11-21 2019-04-05 金色熊猫有限公司 Method for scheduling task, system, equipment and readable medium
CN109491779A (en) * 2018-11-23 2019-03-19 南京云帐房网络科技有限公司 A kind of batch is declared dutiable goods method and apparatus
CN110704210A (en) * 2019-09-20 2020-01-17 天翼电子商务有限公司 Script task calling method, system, medium and device
CN110704210B (en) * 2019-09-20 2023-10-10 天翼电子商务有限公司 Script task calling method, system, medium and device
CN111010313A (en) * 2019-12-05 2020-04-14 深圳联想懂的通信有限公司 Batch processing state monitoring method, server and storage medium
CN111010313B (en) * 2019-12-05 2021-03-19 深圳联想懂的通信有限公司 Batch processing state monitoring method, server and storage medium
CN111414198A (en) * 2020-03-18 2020-07-14 北京字节跳动网络技术有限公司 Request processing method and device
CN111679920A (en) * 2020-06-08 2020-09-18 中国银行股份有限公司 Method and device for processing batch equity data
CN112328408A (en) * 2020-10-21 2021-02-05 中国建设银行股份有限公司 Data processing method, device, system, equipment and storage medium
CN113485815A (en) * 2021-07-27 2021-10-08 中国银行股份有限公司 Job batch processing system, method, device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN102012840A (en) Batch data scheduling method and system
Bui et al. Work queue+ python: A framework for scalable scientific ensemble applications
CN103092698B (en) Cloud computing application automatic deployment system and method
CN105095327A (en) Distributed ELT system and scheduling method
CN107943577A (en) Method and apparatus for scheduler task
CN107729139A (en) A kind of method and apparatus for concurrently obtaining resource
CN103279385A (en) Method and system for scheduling cluster tasks in cloud computing environment
CN103279390A (en) Parallel processing system for small operation optimizing
CN101652750A (en) Data processing device, distributed processing system, data processing method, and data processing program
CN107807815A (en) The method and apparatus of distributed treatment task
CN112579267A (en) Decentralized big data job flow scheduling method and device
De et al. Task management in the new ATLAS production system
CN105096181A (en) E-commerce transaction method and E-commerce transaction system for big data
CN107807854A (en) The method and rendering task processing method of a kind of Automatic dispatching Node station
CN108984496A (en) The method and apparatus for generating report
CN113010598A (en) Dynamic self-adaptive distributed cooperative workflow system for remote sensing big data processing
CN101356503B (en) Data processing system and data processing method
CN103678488A (en) Distributed mass dynamic task engine and method for processing data with same
Liu et al. KubFBS: A fine‐grained and balance‐aware scheduling system for deep learning tasks based on kubernetes
CN109271238A (en) Support the task scheduling apparatus and method of a variety of programming languages
CN107766137A (en) A kind of task processing method and device
CN106445634A (en) Container monitoring method and device
CN102299820A (en) Federate node device and implementation method of high level architecture (HLA) system framework
CN103777593A (en) Automatic product control and production system and realizing method thereof
CN114896049A (en) Method, system, equipment and medium for scheduling operation tasks of electric power artificial intelligence platform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110413