CN107832146A - Thread pool task processing method in highly available cluster system - Google Patents

Thread pool task processing method in highly available cluster system

Info

Publication number
CN107832146A
CN107832146A (application number CN201711018504.6A)
Authority
CN
China
Prior art keywords
task
thread
idle
thread pool
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711018504.6A
Other languages
Chinese (zh)
Inventor
李世巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Computer Technology and Applications
Original Assignee
Beijing Institute of Computer Technology and Applications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Computer Technology and Applications filed Critical Beijing Institute of Computer Technology and Applications
Priority to CN201711018504.6A priority Critical patent/CN107832146A/en
Publication of CN107832146A publication Critical patent/CN107832146A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5019Workload prediction

Abstract

The invention discloses a thread pool task processing method in a highly available cluster system, comprising: pre-creating a certain number of idle worker threads, which are initially held in a condition-wait (blocked) state; forming a task queue; the thread pool main thread then sequentially enters a loop of looking up a work task, checking the thread pool state and assigning a thread to the task: a pending task is fetched from the head of the task queue, and on success the method proceeds to the next step while keeping hold of the fetched task; if the busy threads in the current thread pool exceed a certain proportion of the total, the current work task is not processed for the moment; the thread pool state is checked, and when the current number of idle threads falls below the idle minimum, a certain number of idle threads are created to maintain the balanced state of the pool; when the number of idle threads exceeds the idle maximum, a certain number of idle threads are released; finally, a worker thread is assigned to the pending task.

Description

Thread pool task processing method in highly available cluster system
Technical field
The present invention relates to a computer task processing method, and more particularly to a thread pool task processing method in a highly available cluster system.
Background technology
In an era of rapid development of computer technology and the Internet, big data analysis and supercomputing have become mainstream research directions for many research teams. Likewise, industries that depend on precise computation, such as the national defense science and industry sector, place ever higher demands on computing performance. To raise computing performance to the level that applications require, clustering technology is applied: multiple servers or PCs are connected over a network to jointly process a large and complex computing task.
The advantage of a cluster system is that a complex computing task can be distributed across individual PCs or servers. The specific distribution method, however, requires an algorithm designed around the practical problem to be solved. The core idea of load balancing is that every unit in the system is assigned an appropriate task to execute. There are many ways to realize load balancing, and the advantages and disadvantages of each must be weighed when building the model. What this patent describes in detail is the application, in the cluster field, of one load balancing algorithm known as thread pool technology.
As early as the 1970s and 1980s, Digital Equipment Corporation and Tandem Computers began research and development work on cluster systems. The operating systems used by cluster systems have mainly been VMS, UNIX, Windows NT and Linux.
In the late 1990s, the Linux operating system matured steadily and its robustness kept improving; it provided GNU software and the standardized PVM and MPI message passing mechanisms, and, most importantly, Linux supported high-performance networking on ordinary PCs, which greatly promoted the development of Linux-based cluster systems.
China began introducing analog trunked systems in 1989, which entered service in 1990. With the development of digital communication technology, trunked communication systems also began to evolve toward second-generation digital technology, whose most important feature is the use of the TDMA (time division multiple access) and CDMA (code division multiple access) communication modes. However, trunked communication applications in China still mainly remain at the level of analog technology, and applications of digital trunking are relatively few.
Today, thread pool technology is a research direction within load balancing, and research on it is still being explored. At present, several well-known major companies are particularly optimistic about this technology and have already applied it in their products, for example IBM's WebSphere, IONA's Orbix 2000, SUN's Jini, and Microsoft's MTS (Microsoft Transaction Server 2.0) and COM+.
Domestically, there is currently no commercialized case of applying thread pool technology, but many researchers are involved in related research that takes thread pool technology as its core.
Traditional multithreading adopts a create-immediately, destroy-immediately strategy: whenever the server receives a request, it creates a new thread, and that thread executes the task of the request; when the task finishes, the thread exits. Compared with processes in the operating system, threads significantly reduce the overhead of creation and destruction. Even so, when a new thread is created and destroyed, the system still has to allocate resources such as the thread's stack and application context. Although these consumed resources are small, in a real-time parallel processing system such overhead cannot be ignored. If the number of tasks is very large and each task finishes quickly, the server is kept endlessly creating and destroying threads. Threads also consume system resources while active: creating too many threads may exhaust memory, and endlessly switching threads reduces system efficiency.
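As a hypothetical illustration of this create-immediately, destroy-immediately strategy (the request type and handler are assumed names, not from the patent), each incoming request pays the full thread creation and destruction cost:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct { int id; /* request payload omitted */ } request_t;

/* Executes the task of one request, then the thread exits and its
 * stack and context are torn down. */
static void *handle_request(void *arg) {
    request_t *req = (request_t *)arg;
    /* ... perform the work for req ... */
    free(req);
    return NULL;
}

/* Called once per incoming request: a brand-new thread is created,
 * scheduled, and destroyed for every single task. */
void on_request(request_t *req) {
    pthread_t tid;
    pthread_create(&tid, NULL, handle_request, req);  /* per-request creation overhead */
    pthread_detach(tid);                               /* resources reclaimed only after the thread exits */
}
```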
Summary of the invention
It is an object of the present invention to provide a thread pool task processing method in a highly available cluster system, for solving the above problems in the prior art.
A thread pool task processing method in a highly available cluster system according to the present invention comprises: after the thread pool main thread starts with the system, pre-creating a certain number of idle worker threads, which are initially held in a condition-wait (blocked) state; receiving tasks input by the user to form a task queue; the thread pool main thread then sequentially enters a loop of looking up a work task, checking the thread pool state and assigning a worker thread to the task, comprising: at the start of the loop, a pending task is fetched from the head of the task queue; on success the method proceeds to the next step and keeps hold of the fetched task; if the busy threads in the current thread pool exceed a certain proportion of the total, the current work task is not processed for the moment, and an alarm and the current processing status are reported to the system; the thread pool state is checked, and when the current number of idle threads is below the idle minimum, a certain number of idle threads are created to maintain the balanced state of the pool; when the number of idle threads exceeds the idle maximum, a certain number of idle threads are released; a worker thread is assigned to the pending task, a condition signal is sent to the worker thread to activate it, and it begins executing the task; the task queue head pointer is moved down and the loop continues; after there are no more tasks in the task queue, the loop of the main thread stops and the requested system resources are released.
In one embodiment of the thread pool task processing method in the highly available cluster system of the present invention, the certain proportion is set to 80%.
In one embodiment of the thread pool task processing method in the highly available cluster system of the present invention, the number of idle worker threads created depends on the scale of the program and an estimate of the complexity of the tasks to be processed.
In one embodiment of the thread pool task processing method in the highly available cluster system of the present invention, the work task queue is the input of the user: according to the computing task proposed by the user, the large computing task is decomposed into small tasks that can be executed quickly, each small task is packaged into a structure, and the structures are stored in the task queue in first-in, first-out order. The head of the work queue is the foremost task structure pointer in the current work queue that has not yet been assigned. Fetching this pointer generally has two outcomes, success or failure: on success the pointer to the pending task is returned; otherwise the fetch fails and the method waits for a new, valid task pointer.
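A minimal sketch, under assumed names, of the task structure and first-in, first-out work queue described above (the task_t fields and helper functions are illustrative, not taken from the patent):

```c
#include <pthread.h>
#include <stddef.h>

/* One small task, packaged into a structure as described above. */
typedef struct task {
    void (*run)(void *arg);    /* the quick piece of work to perform */
    void *arg;                 /* task-specific data */
    struct task *next;
} task_t;

/* First-in, first-out work task queue with head and tail pointers. */
typedef struct {
    task_t *head;              /* foremost task not yet assigned */
    task_t *tail;              /* where newly arrived tasks are appended */
    pthread_mutex_t lock;
} task_queue_t;

/* Fetch the head of the queue: returns the pending task pointer on
 * success, or NULL on failure (e.g. the queue is empty); the task is
 * only removed later, once it has actually been assigned. */
task_t *queue_get_head(task_queue_t *q) {
    pthread_mutex_lock(&q->lock);
    task_t *t = q->head;
    pthread_mutex_unlock(&q->lock);
    return t;
}

/* Append a newly input task at the tail, first-in first-out. */
void queue_put_tail(task_queue_t *q, task_t *t) {
    pthread_mutex_lock(&q->lock);
    t->next = NULL;
    if (q->tail) q->tail->next = t; else q->head = t;
    q->tail = t;
    pthread_mutex_unlock(&q->lock);
}

/* After assignment completes, move the head pointer down to the next task. */
void queue_advance_head(task_queue_t *q) {
    pthread_mutex_lock(&q->lock);
    if (q->head) {
        q->head = q->head->next;
        if (q->head == NULL) q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
}
```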
In one embodiment of the thread pool task processing method in the highly available cluster system of the present invention, after a valid task structure pointer is obtained, the working state of all worker threads in the current thread pool is checked; if the busy threads of the current pool exceed 80% of the total, or fall below 20% of the total, the working state of the thread pool is abnormal, the currently fetched task is not processed for the moment, and the current thread pool state is reported to the system.
In one embodiment of the thread pool task processing method in the highly available cluster system of the present invention, when the busy threads of the current pool exceed a certain proportion of the total, the thread pool creates a certain number of idle threads; when the number of idle threads exceeds the idle maximum, a certain number of idle threads are released.
In one embodiment of the thread pool task processing method in the highly available cluster system of the present invention, assigning a worker thread to the pending task and sending a condition signal to the worker thread to activate it so that it begins executing the task comprises: assigning a worker thread to the fetched task and sending a condition signal to the worker thread to activate it; the condition signal to the worker thread consists of setting the worker thread's status flag to busy and passing in the work task pointer, so that the worker thread starts executing the task and is no longer in the condition-wait state but in the busy state.
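A minimal sketch, assuming POSIX condition variables and the hypothetical task_t type from the queue sketch above, of how such a condition signal might be implemented (an illustration, not the patent's actual code):

```c
#include <pthread.h>
#include <stdbool.h>

/* A worker thread and the condition it blocks on while idle. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;    /* the worker condition-waits on this while idle */
    bool            busy;    /* status flag: false = idle, true = busy */
    task_t         *task;    /* work task pointer passed in by the main thread */
} worker_t;

/* Worker thread body: condition-wait while idle (consuming almost no CPU),
 * execute the task once activated, then return to the idle state. */
void *worker_main(void *arg) {
    worker_t *w = (worker_t *)arg;
    pthread_mutex_lock(&w->lock);
    for (;;) {
        while (!w->busy)                       /* conditional blocking state */
            pthread_cond_wait(&w->cond, &w->lock);
        task_t *t = w->task;
        pthread_mutex_unlock(&w->lock);
        t->run(t->arg);                        /* execute the work task */
        pthread_mutex_lock(&w->lock);
        w->busy = false;                       /* back to the idle state */
        w->task = NULL;
    }
}

/* The main thread's "condition signal": set the status flag to busy,
 * pass the work task pointer in, and signal the condition to wake the worker. */
void activate_worker(worker_t *w, task_t *t) {
    pthread_mutex_lock(&w->lock);
    w->task = t;
    w->busy = true;
    pthread_cond_signal(&w->cond);
    pthread_mutex_unlock(&w->lock);
}
```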
In one embodiment of the thread pool task processing method in the highly available cluster system of the present invention, the fetched task is deleted from the task queue; if the task queue is not empty, the method returns to the start of the loop; otherwise it is judged whether all work tasks have been processed; after all tasks have been processed, the method ends; otherwise it continues waiting, and if a new task is input, the new task is appended to the tail of the task queue and the method returns to the start of the loop.
In one embodiment of the thread pool task processing method in the highly available cluster system of the present invention, before returning to the start of the loop, the task queue head pointer is moved down to the next task in the queue; this movement of the head pointer is performed each time a task assignment completes. There is also a task queue tail pointer, which is moved behind the new task each time a new task enters the queue. When the head pointer equals the tail pointer, reading of work tasks is paused and it is checked continuously, at a fixed time interval, whether a new task has arrived in the queue; conversely, when the head pointer is not equal to the tail pointer, the loop continues.
The present invention proposes improvements to existing high-availability cluster technology. On the basis of the NeoKylin high-availability system and software, the network communication protocol and the task distribution and processing scheme of the cluster network are improved: the TIPC network protocol stack is improved to raise real-time data transmission efficiency, and thread pool technology is used to realize load balancing, enabling the high-availability cluster processing system to process tasks more efficiently.
Brief description of the drawings
Fig. 1 shows a schematic diagram of a highly available cluster system;
Fig. 2 shows the structure of the system cluster;
Fig. 3 shows the overall framework of the access point server implementation.
Embodiment
To make the purpose, content and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples.
Fig. 1 shows a schematic diagram of a highly available cluster system. As shown in Fig. 1, a high-availability cluster comprises: 1. a cluster containing 2 to 16 servers; 2. the NeoKylin high-availability cluster software installed on every server in the cluster; 3. (optional) shared disks; 4. (optional) optical fibre storage; 5. at least two kinds of connection between the servers running the NeoKylin high-availability cluster software, including Ethernet, serial connection, shared disk links and so on.
As shown in Fig. 1, the user terminal provides the interactive interface for the user; the user operates their own program on it and submits computing tasks to the load-balancing device.
As shown in Fig. 1, the load-balancing device runs the load-balancing algorithm program responsible for task distribution and scheduling in the cluster. This device is the core control device of the cluster system, and the requirement on its reliability is very high; its reliability is usually guaranteed by a dual-machine mutual-standby or dual-machine hot-standby scheme.
As shown in Fig. 1, the application servers are the main body of the cluster, i.e. the cluster servers, and are mainly responsible for the computing tasks: they execute the computing tasks distributed by the load-balancing device and feed the execution results back upward, and the results are finally integrated and presented to the user.
As shown in Fig. 1, a simple highly available cluster system can be built in a laboratory environment: an ordinary PC (Windows XP) serves as the user terminal, two Phytium FT-2000 servers (NeoKylin 5.0) in the cluster serve as application servers, and one Phytium FT-2000 server serves as the load-balancing device; all machines are connected through a switched network, and inter-machine communication uses the TIPC protocol (an improvement over the original TCP/IP protocol). The cluster system is described below from two aspects: the network communication protocol and the load-balancing algorithm.
As shown in Fig. 1, in the thread pool task processing method in a highly available cluster system, the load-balancing method aims to spread the processing load as evenly as possible across the computers and servers of the multi-machine real-time parallel processing system. The load may be computation load produced by executing the application program, or data transmission load produced by network data transfer. Such a technique is well suited to a high-availability cluster processing system in which a large number of computers and servers run the same kind of program and each completes its assigned computing task. Each node handles part of the load, and the load can be redistributed dynamically among the nodes to achieve balance.
As shown in Fig. 1, the core of the thread pool task processing method in the highly available cluster system of the present invention is the load-balancing algorithm. Its main purpose is to use a certain strategy or mechanism, following certain principles, to assign tasks to the servers and computer nodes in a balanced way, in real time and according to the load of each node, so as to balance the system load and improve the overall processing capability and response speed of the system. When selecting the load-balancing algorithm, the different types of service request, the different processing capabilities of the servers, and the uneven distribution caused by random selection are all taken into account; the finally selected algorithm must correctly reflect the processing capability of each server and the data transmission capability of the network. This embodiment realizes load balancing mainly by applying a thread pool.
In the thread pool task processing method in a highly available cluster system of the present invention, if the time needed for a computing task is T, this time comprises:
T1: the time to create the thread;
T2: the time to execute the computing task;
T3: the time for thread scheduling and synchronization;
T4: the time to destroy the thread;
Of these, T2 is the execution time required by the task itself and is determined by the computing task. T1 and T4 are the time overhead of the thread itself, and T3 is the time the system spends switching threads; if there are too many threads in the system, this part of the overhead cannot be ignored. Therefore, as long as the time overhead of T1, T3 and T4 is minimized, system performance can be maximized.
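The decomposition above can be restated compactly; the inequality expresses the design goal implied by the text rather than a formula given in the patent:

```latex
T = T_1 + T_2 + T_3 + T_4, \qquad \text{the thread pool aims to make } T_1 + T_3 + T_4 \ll T_2 .
```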
The thread pool task processing method in a highly available cluster system comprises the following.
The main purpose of the thread pool is to manage and coordinate threads, so that idle threads can be called to complete the processing of real-time tasks. A thread pool must therefore contain two parts: the thread pool manager and the worker thread queue. The thread pool manager manages the threads in the pool and monitors their state, while the worker threads are mainly used to complete the computing tasks.
Implementing the thread pool task processing method in a highly available cluster system mainly involves the design of the thread pool main thread, the creation of idle threads, looking up pending request tasks in the task queue, assigning idle threads to tasks, releasing and destroying idle threads, and detecting the working state of the thread pool.
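As a rough illustration, a minimal sketch of the state such a thread pool manager might keep, reusing the hypothetical task_queue_t and worker_t types from the earlier sketches; all names and thresholds are illustrative assumptions, not taken from the patent:

```c
#include <pthread.h>

#define IDLE_MIN   4    /* idle minimum: create more idle threads below this */
#define IDLE_MAX  32    /* idle maximum: release idle threads above this */

typedef struct {
    pthread_t     main_thread;   /* thread pool main thread */
    worker_t     *workers;       /* worker thread queue (see the earlier worker sketch) */
    int           worker_count;  /* total worker threads currently in the pool */
    int           max_workers;   /* upper limit on pool growth */
    int           idle_count;    /* worker threads currently idle */
    task_queue_t  queue;         /* FIFO work task queue (see the earlier queue sketch) */
} thread_pool_t;
```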
The overall design of the thread pool uses a pattern of one main thread plus multiple worker threads. The main thread is created and run as the system starts; it obtains tasks, assigns worker threads, reclaims worker threads and maintains the state of the thread pool, while the worker threads are responsible for processing the task requests assigned by the main thread. The workflow of the thread pool task processing method in a highly available cluster system is as follows:
1. After the thread pool main thread starts with the system, a certain number of idle worker threads are pre-created first; the idle worker threads are initially in a condition-wait (blocked) state and consume essentially no system resources;
2. The thread pool main thread then enters, in sequence, the loop of looking up a task, checking the thread pool state and assigning a worker thread to the task;
3. After entering the loop, a pending task is first fetched from the head of the task queue; on success the pointer to the pending task is returned, otherwise the main thread blocks;
4. If the busy threads in the current thread pool exceed 80% of the total, the current work task is not processed for the moment, and an alarm and the current processing status are reported to the system;
5. The thread pool state is checked: when the current number of idle threads is below the idle minimum, a certain number of idle threads are created to maintain the balanced state of the pool; when the number of idle threads exceeds the idle maximum, a certain number of idle threads are released;
6. Finally, a worker thread is assigned to the pending task and a condition signal is sent to the worker thread to activate it, whereupon it begins executing the task;
7. The task queue head pointer is moved down; the method returns to step 3 and the loop continues;
8. The loop of the main thread stops, the requested system resources are released, and the program exits.
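A minimal sketch of the main-thread loop in steps 3 to 8, reusing the hypothetical thread_pool_t, task_queue_t, worker_t, IDLE_MIN and IDLE_MAX names from the earlier sketches; the helper functions are assumptions, not the patent's implementation:

```c
/* Prototypes of assumed helpers (illustrative names, not from the patent): */
task_t   *queue_get_head(task_queue_t *q);         /* fetch head of the work task queue */
void      queue_advance_head(task_queue_t *q);     /* drop the dispatched head task */
double    pool_busy_ratio(thread_pool_t *p);       /* fraction of worker threads that are busy */
void      pool_grow(thread_pool_t *p);             /* create a batch of idle worker threads */
void      pool_shrink(thread_pool_t *p);           /* release a batch of idle worker threads */
worker_t *pool_pick_idle_worker(thread_pool_t *p);
void      activate_worker(worker_t *w, task_t *t); /* set busy flag, pass task pointer, signal */
void      report_pool_state(thread_pool_t *p);     /* alarm and report current status */
void      pool_destroy(thread_pool_t *p);          /* release requested system resources */

void pool_main_loop(thread_pool_t *pool) {
    task_t *t;
    while ((t = queue_get_head(&pool->queue)) != NULL) {      /* step 3 */
        if (pool_busy_ratio(pool) > 0.8) {                    /* step 4: over 80% busy */
            report_pool_state(pool);
            continue;   /* keep the fetched task; a real system would wait briefly here */
        }
        if (pool->idle_count < IDLE_MIN)                      /* step 5 */
            pool_grow(pool);
        else if (pool->idle_count > IDLE_MAX)
            pool_shrink(pool);
        worker_t *w = pool_pick_idle_worker(pool);            /* step 6 */
        activate_worker(w, t);
        queue_advance_head(&pool->queue);                     /* step 7: head pointer moves down */
    }
    pool_destroy(pool);                                       /* step 8: loop stops, resources released */
}
```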
Specifically, the thread pool task processing method in a highly available cluster system comprises:
1. After the thread pool main thread starts with the system, a certain number of idle worker threads are pre-created first; at this point the worker threads are in a condition-wait (blocked) state, so that they consume no system resources while being able to wait for upcoming tasks. The worker threads exist independently of the main thread; when running they can interact with the main thread and execute concurrently with it. The number of worker threads created depends on the scale of the program and the programmer's estimate of the complexity of the tasks to be processed;
2. The tasks input by the user are first received to form a task queue; the thread pool main thread then enters, in sequence, the loop of looking up a work task, checking the thread pool state and assigning a worker thread to the task. The loop proceeds as follows (steps 3 to 7 below);
3. A pending task is fetched from the head of the task queue; on success the method enters step 4 and keeps hold of the fetched task;
Here the task queue is equivalent to the input of the user. From the viewpoint of the whole system, the user provides the task to be computed (the total work task); in theory this task is huge and highly complex, which is precisely what makes applying thread pool technology meaningful. According to the computing task proposed by the user, the programmer splits it reasonably, decomposing the large computing task into small tasks that can be executed quickly; each small task is then packaged into a structure and stored in the task queue in first-in, first-out order. The so-called head of the work queue is the foremost task structure pointer in the current work queue that has not yet been assigned. Fetching this pointer generally has two outcomes, success or failure: on success the pointer to the pending task is returned; otherwise the fetch fails. There are several possible reasons for failure, for example the work task queue may already be empty or the content of the task structure may be missing; in that case the program blocks and waits for a new, valid task pointer to arrive;
4. After a valid task structure pointer is obtained, the working state of all worker threads in the current thread pool is checked. If the busy threads of the current pool exceed 80% of the total, or fall below 20% of the total, the working state of the thread pool is abnormal; the currently fetched task is then not processed for the moment, the current thread pool state is reported to the system, and the operation of step 5 is performed afterwards. If the working state of the thread pool is normal, the operation of step 6 is performed;
5. The thread pool state is checked: when the current number of idle threads is below the idle minimum (busy threads reaching 80%), the thread pool creates a certain number of idle threads to maintain its balanced state; when the number of idle threads exceeds the idle maximum, a certain number of idle threads are released;
6. A worker thread is assigned to the fetched task, and a condition signal is sent to the worker thread to activate it so that it begins executing the task, i.e. its blocked state is cancelled. Here the condition signal to the worker thread consists of setting the worker thread's status flag to busy and passing in the work task pointer, so that the worker thread starts executing the task and is no longer in the condition-wait state but in the busy state;
7. The fetched task is deleted from the task queue. If the task queue is not empty, the method returns to step 3; otherwise it is judged whether all work tasks have been processed. After all tasks have been processed, step 8 is performed; otherwise the method continues waiting, and if a new task is input, the new task is appended to the tail of the task queue and step 3 is performed;
The task queue head pointer is moved down to the next task in the queue; this movement of the head pointer is performed each time a task assignment completes. There is also a task queue tail pointer, which is moved behind the new task each time a new task enters the queue. When the head pointer equals the tail pointer (indicating that the work queue is empty), reading of work tasks is paused and it is checked continuously, at a fixed time interval, whether a new task has arrived in the queue; conversely, when the head pointer is not equal to the tail pointer, the method returns to step 3 and the loop continues;
8. The loop of the main thread stops, the requested system resources are released, and the program exits.
The thread pool task processing method in a highly available cluster system of the present invention also includes a method for optimizing the thread pool, as follows.
A thread pool can effectively improve the resource utilization of a real-time system. Depending on the nature and type of the tasks to be processed, the thread pool can be optimized in different respects, such as optimizing the data storage structure, the number of threads in the pool, or the execution strategy. Some of these optimizations must take into account the characteristics of the actual tasks being processed, while others, such as optimizing the number of worker threads in the pool, need not.
Choosing the number of threads created initially and the maximum number of threads in the pool is an extremely important issue in thread pool design. The initial number is the basis for keeping optimum performance while the system runs, and the maximum number of threads is an important reference for keeping the system running normally. If these two values are too small, tasks cannot be processed in time; if they are too large, the overhead of inter-thread synchronization becomes excessive, the advantages of the thread pool cannot be exploited, and the performance and efficiency of the system drop.
The optimal number of worker threads to pre-create in the pool depends on the number of available processors and the nature of the tasks in the work queue. Not every request among the actual tasks can be executed immediately; there will always be task requests waiting for a thread to process them, so the number of threads affects how efficiently the whole system executes tasks. By estimation and statistics, suppose the ratio of the waiting time (WT) of requests of a certain task type to their service time (ST) is known; if this ratio is denoted WT/ST, then a system with N processors needs about N × (1 + WT/ST) threads to keep the processors fully utilized.
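As a hypothetical worked instance of this sizing rule (the numbers are illustrative, not taken from the patent): with N = 4 processors and tasks that wait twice as long as they are serviced (WT/ST = 2), the rule gives

```latex
N \times \left(1 + \frac{WT}{ST}\right) = 4 \times (1 + 2) = 12 \text{ threads.}
```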
A thread pool with a fixed initial number of threads keeps the number of threads in the pool constant during operation. Its benefit is that it helps keep system performance stable; its drawback is that it cannot truly reflect changes in system load: when a burst of tasks arrives, the load cannot be handled promptly and effectively, and when the system is lightly loaded for a long time it must still pay the cost of maintaining these threads. To adapt to bursty changes in service requests, optimizing the number of worker threads in the pool and dynamically creating and destroying threads in the pool is the inevitable choice.
The system also creates a certain number of threads at start-up, which can be set to N × (1 + WT/ST); the threads in the pool can then satisfy the service requests. When requests burst and the threads in the pool cannot meet the business demand, a thread-count scheduling optimization algorithm is used to increase the number of threads in batches to meet the demand; the upper limit of this growth depends on the performance of the system. The system also sets a threshold on the number of request tasks: when the threshold is exceeded, any newly arriving request is forced to wait until an available thread is obtained, so as to prevent resource exhaustion. When the system is idle, most threads remain suspended; the system records the idle time of each thread, and when the idle time reaches a predetermined threshold, threads are destroyed in batches to reclaim system resources and keep the system running stably and efficiently.
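A minimal sketch of this dynamic adjustment, reusing the hypothetical thread_pool_t and worker_t types from the earlier sketches and assuming an additional idle_since timestamp per worker; the thresholds and helper names are illustrative, not the patent's parameters:

```c
#include <time.h>

#define IDLE_DESTROY_SECONDS 30      /* assumed idle-time threshold per thread */
#define PENDING_TASK_LIMIT   1000    /* assumed threshold on queued request tasks */

void pool_grow(thread_pool_t *p);                          /* assumed: create a batch of idle threads */
void pool_release_worker(thread_pool_t *p, worker_t *w);   /* assumed: destroy one idle thread */

/* On a request burst: grow the pool in batches while below the upper limit;
 * beyond the request-count threshold, new requests simply wait for a thread. */
void pool_adjust_on_burst(thread_pool_t *pool, int pending_tasks) {
    if (pending_tasks > PENDING_TASK_LIMIT)
        return;                                            /* force new requests to wait */
    if (pending_tasks > pool->idle_count && pool->worker_count < pool->max_workers)
        pool_grow(pool);                                   /* batch-create worker threads */
}

/* When the system is idle: destroy threads whose idle time exceeds the
 * predetermined threshold, reclaiming system resources in batches. */
void pool_reclaim_idle(thread_pool_t *pool, time_t now) {
    for (int i = 0; i < pool->worker_count; i++) {
        worker_t *w = &pool->workers[i];
        if (!w->busy && difftime(now, w->idle_since) > IDLE_DESTROY_SECONDS)
            pool_release_worker(pool, w);                  /* reclaim this thread's resources */
    }
}
```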
As shown in Fig. 1, in an embodiment of the thread pool task processing method in a highly available cluster system of the present invention, the user terminal machine is a general-purpose PC connected by network to the load-balancing device; the load-balancing device is in turn connected through the cluster network to the application servers in the cluster, and the application servers are also interconnected with one another. The application servers and the load-balancing device are Phytium FT-2000 servers running the NeoKylin 5.0 operating system.
Fig. 2 shows the structure of the system cluster, and Fig. 3 shows the overall framework of the access point server implementation. As shown in Fig. 2 and Fig. 3, request tasks in the request queue are sent to the access point server (the load-balancing device in Fig. 1); after simple processing by the access point server they are submitted to the computing servers, the results are submitted back to the access point server after server-side processing, and the access point server then submits the results to the client. In this balanced cluster model, the servers of the computing cluster mainly complete the computing tasks, and their performance depends mainly on the task characteristics and the chosen algorithm; the front-end access point server is mainly responsible for receiving and sending task requests, session establishment, session state transitions, and message reception and transmission, and is therefore the core unit of the system implementation.
After the system starts, it creates, for example, a request receiving thread, a response sending thread, the thread pool main thread and its worker threads, a message sending thread, a result receiving thread, and so on. Initially all threads are in the blocked state and consume essentially no system resources. When a new task request arrives, the message receiving thread captures the request message and, after simple processing, the system judges whether the session is a new one. If it is a new session, a new session control block is allocated and the request message is placed on the message queue of that session control block; otherwise, if the session already exists, the message is placed on that session's message queue. If the current session state is the non-running state, it is changed to the waiting-to-run state. The thread pool obtains the session and selects a thread in the idle state from the worker thread queue to process the current session; the worker thread processes the messages in the session's message queue and then sends them to a computer that performs the computing task for data computation and processing; after processing, the result is returned to the current server, the server handles it and attaches it to the session queue, and the response processing thread returns the result to the client. The thread pool processing mainly involves distributing task requests, assigning and scheduling worker threads, load balancing, and thread state transitions; the worker threads mainly complete the processing of the session state machine, request message processing, computation, and so on.
Actual system performance testing was carried out as follows:
Test environment: the operating systems were Windows XP and NeoKylin 5.0, the network was Gigabit Ethernet, and the TIPC transport protocol was used. The cluster was tested both with and without thread pool technology.
When the system runs, the system cycle is used as the synchronization mechanism to coordinate work. In each cycle the control center gathers the information of the data computing nodes and the message center, dynamically adjusts the task table and broadcasts it. After receiving the table, a computing node saves it, records the relevant entries, and determines the data this node should process, the processing mode, the destination of the result messages, and other such information. A large number of tests of the real-time performance of this load algorithm were made with cluster performance evaluation software. The results are shown in the following table:
Table 1

Test item                     With thread pool    Without thread pool
Initial task distribution     419 us              485 us
Task scheduling               276 us              309 us
Network transmission delay    604 us              605 us
Computation result update     13.2 ms             22.7 ms
It can be seen that thread pool technology has little effect on the network transmission delay, but it reduces the time used for task distribution, task scheduling and computation result updates, thereby improving the efficiency of the system.
The advantage of a highly available cluster system is that, whereas the computing capability of a single server is limited, the whole computing task can be distributed to the individual servers, solved independently and the results gathered to complete the task, so that computing capability can be greatly increased without the expense of higher-performance servers. At the same time, a multi-machine real-time parallel system also has its own limitation: when executing a computing task, the processes on the individual servers must be synchronized and the load must be distributed evenly. The memory and time overhead of this work grows exponentially with the system scale, limiting the application of large-scale cluster systems. By applying thread pool technology to distribute the work and balance the load, system performance can be improved to a certain extent.
The thread pool task processing method in a highly available cluster system improves system resource utilization mainly through the following features:
(1) Using the pre-creation technique, the thread pool creates a certain number of threads when it starts and places them in an idle queue; the newly created threads are in the blocked state, in which a thread occupies almost no CPU resources.
(2) When a request arrives, the thread pool selects an idle thread according to a predetermined allocation strategy and lets that thread process the request; because the thread already exists when the request arrives, the time overhead of creating a thread is eliminated.
(3) After a computing task finishes executing, the thread is not destroyed but waits, in the blocked state in the thread pool, to be scheduled again, so the time overhead of destroying a thread is also eliminated. This makes the system respond faster and improves its real-time performance.
The above is only a preferred embodiment of the present invention. It should be noted that, for a person of ordinary skill in the art, several improvements and variations can be made without departing from the technical principles of the present invention, and these improvements and variations should also be regarded as falling within the protection scope of the present invention.

Claims (9)

  1. A thread pool task processing method in a highly available cluster system, characterized by comprising:
    after the thread pool main thread starts with the system, pre-creating a certain number of idle worker threads first, the idle worker threads being initially in a condition-wait (blocked) state;
    receiving tasks input by a user to form a task queue;
    the thread pool main thread sequentially entering a loop of looking up a work task, checking the thread pool state and assigning a worker thread to the task, the loop comprising:
    at the start of the loop, fetching a pending task from the head of the task queue; on success, entering the next step and keeping hold of the fetched task;
    if busy threads in the current thread pool exceed a certain proportion of the total, not processing the current work task for the moment, and alarming and reporting the current processing status to the system;
    checking the thread pool state: when the current number of idle threads is below an idle minimum, creating a certain number of idle threads to maintain the balanced state of the thread pool; when the number of idle threads exceeds an idle maximum, releasing a certain number of idle threads;
    assigning a worker thread to the pending task, and sending a condition signal to the worker thread to activate it and make it begin executing the task; moving the task queue head pointer down and continuing the loop;
    after there are no more tasks in the task queue, stopping the loop of the main thread and releasing the requested system resources.
  2. The thread pool task processing method in a highly available cluster system as claimed in claim 1, characterized in that the certain proportion is 80%.
  3. The thread pool task processing method in a highly available cluster system as claimed in claim 1, characterized in that the number of idle worker threads created depends on the scale of the program and an estimate of the complexity of the tasks to be processed.
  4. The thread pool task processing method in a highly available cluster system as claimed in claim 1, characterized in that the work task queue is the input of the user: according to a computing task proposed by the user, the large computing task is decomposed into small tasks that can be executed quickly, each small task is packaged into a structure, and the structures are stored in the task queue in first-in, first-out order; the head of the work queue is the foremost task structure pointer in the current work queue that has not yet been assigned; fetching the pointer generally has two outcomes, success or failure: on success the pointer to the pending task is returned, otherwise the fetch fails and the method waits for a new, valid task pointer.
  5. The thread pool task processing method in a highly available cluster system as claimed in claim 4, characterized in that after a valid task structure pointer is obtained, the working state of all worker threads in the current thread pool is checked; if busy threads of the current thread pool exceed 80% of the total or fall below 20% of the total, the working state of the thread pool is abnormal, the currently fetched task is not processed for the moment, and the current thread pool state is reported to the system.
  6. The thread pool task processing method in a highly available cluster system as claimed in claim 1, characterized in that when busy threads of the current thread pool exceed a certain proportion of the total, the thread pool creates a certain number of idle threads, and when the number of idle threads exceeds the idle maximum, a certain number of idle threads are released.
  7. The thread pool task processing method in a highly available cluster system as claimed in claim 1, characterized in that assigning a worker thread to the pending task and sending a condition signal to the worker thread to activate it and make it begin executing the task comprises:
    assigning a worker thread to the fetched task and sending a condition signal to the worker thread to activate it and make it begin executing the task, the condition signal to the worker thread consisting of setting the status flag of the worker thread to busy and passing in the work task pointer, so that the worker thread starts executing the task and is no longer in the condition-wait state but in the busy state.
  8. The thread pool task processing method in a highly available cluster system as claimed in claim 1, characterized in that the fetched task is deleted from the task queue; if the task queue is not empty, the method returns to the start of the loop; otherwise it is judged whether all work tasks have been processed; after all tasks have been processed, the method ends; otherwise it continues waiting, and if a new task is input, the new task is appended to the tail of the task queue and the method returns to the start of the loop.
  9. The thread pool task processing method in a highly available cluster system as claimed in claim 8, characterized in that before returning to the start of the loop, the task queue head pointer is moved down to the next task in the queue, this movement being performed each time a task assignment completes; there is also a task queue tail pointer, which is moved behind the new task each time a new task enters the queue; when the head pointer equals the tail pointer, reading of work tasks is paused and it is checked continuously, at a fixed time interval, whether a new task has arrived in the queue; conversely, when the head pointer is not equal to the tail pointer, the loop continues.
CN201711018504.6A 2017-10-27 2017-10-27 Thread pool task processing method in highly available cluster system Pending CN107832146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711018504.6A CN107832146A (en) 2017-10-27 2017-10-27 Thread pool task processing method in highly available cluster system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711018504.6A CN107832146A (en) 2017-10-27 2017-10-27 Thread pool task processing method in highly available cluster system

Publications (1)

Publication Number Publication Date
CN107832146A true CN107832146A (en) 2018-03-23

Family

ID=61649793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711018504.6A Pending CN107832146A (en) 2017-10-27 2017-10-27 Thread pool task processing method in highly available cluster system

Country Status (1)

Country Link
CN (1) CN107832146A (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920258A (en) * 2018-06-26 2018-11-30 北京中电普华信息技术有限公司 A kind of transaction methods and application service middleware
CN109165086A (en) * 2018-08-13 2019-01-08 深圳市特康生物工程有限公司 Task executing method and single-chip microcontroller
CN109491780A (en) * 2018-11-23 2019-03-19 鲍金龙 Multi-task scheduling method and device
CN109491895A (en) * 2018-10-26 2019-03-19 北京车和家信息技术有限公司 Server stress test method and device
CN109542605A (en) * 2018-11-27 2019-03-29 长沙智擎信息技术有限公司 A kind of container group life cycle management method based on Kubernetes system architecture
CN110187985A (en) * 2019-05-30 2019-08-30 苏州浪潮智能科技有限公司 A kind of communication means, system and device
CN110515672A (en) * 2018-05-21 2019-11-29 阿里巴巴集团控股有限公司 Business datum loading method, device and electronic equipment
CN110633133A (en) * 2018-06-21 2019-12-31 中兴通讯股份有限公司 Task processing method and device and computer readable storage medium
CN110784350A (en) * 2019-10-25 2020-02-11 北京计算机技术及应用研究所 Design method of real-time available cluster management system
CN110827125A (en) * 2019-11-06 2020-02-21 兰州领新网络信息科技有限公司 Periodic commodity transaction management method
CN110908794A (en) * 2019-10-09 2020-03-24 上海交通大学 Task stealing method and system based on task stealing algorithm
CN111069062A (en) * 2019-12-13 2020-04-28 中国科学院重庆绿色智能技术研究院 Multi-channel visual inspection control method, system software architecture and construction method
CN111124643A (en) * 2019-12-20 2020-05-08 浪潮电子信息产业股份有限公司 Task deletion scheduling method, system and related device in distributed storage
CN111240862A (en) * 2020-01-09 2020-06-05 软通动力信息技术(集团)有限公司 Universal interface platform and data conversion method
CN111240749A (en) * 2018-11-28 2020-06-05 中国移动通信集团浙江有限公司 Suspension control method and device for instance in cluster system
CN111459754A (en) * 2020-03-26 2020-07-28 平安普惠企业管理有限公司 Abnormal task processing method, device, medium and electronic equipment
CN111782293A (en) * 2020-06-28 2020-10-16 珠海豹趣科技有限公司 Task processing method and device, electronic equipment and readable storage medium
CN112114877A (en) * 2020-09-28 2020-12-22 西安芯瞳半导体技术有限公司 Method for dynamically compensating thread bundle warp, processor and computer storage medium
CN112395063A (en) * 2020-11-18 2021-02-23 云南电网有限责任公司电力科学研究院 Dynamic multithreading scheduling method and system
CN112416584A (en) * 2020-11-18 2021-02-26 捷开通讯(深圳)有限公司 Process communication method, device, storage medium and mobile terminal
CN113194040A (en) * 2021-04-28 2021-07-30 王程 Intelligent control method for instantaneous high-concurrency server thread pool congestion
CN113391927A (en) * 2021-07-08 2021-09-14 上海浦东发展银行股份有限公司 Method, device and system for processing business event and storage medium
CN113452554A (en) * 2021-06-18 2021-09-28 上海艾拉比智能科技有限公司 Online OTA differential packet making system and method based on queuing mechanism
CN113722078A (en) * 2021-11-02 2021-11-30 西安热工研究院有限公司 High-concurrency database access method, system and equipment based on thread pool
CN113886057A (en) * 2020-07-01 2022-01-04 西南科技大学 Dynamic resource scheduling method based on parsing technology and data flow information on heterogeneous many-core
CN114500159A (en) * 2022-02-17 2022-05-13 杭州老板电器股份有限公司 Wired communication method and device of central range hood system and electronic equipment
CN117453363A (en) * 2023-11-06 2024-01-26 北京明朝万达科技股份有限公司 Accessory processing method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753451A (en) * 2009-12-23 2010-06-23 卡斯柯信号有限公司 Network load balance track traffic signal equipment state collection method and device
CN102262564A (en) * 2011-08-16 2011-11-30 天津市天祥世联网络科技有限公司 Thread pool structure of video monitoring platform system and realizing method
CN102591721A (en) * 2011-12-30 2012-07-18 北京新媒传信科技有限公司 Method and system for distributing thread execution task
CN102752136A (en) * 2012-06-29 2012-10-24 广东东研网络科技有限公司 Method for operating and scheduling communication equipment
CN103218264A (en) * 2013-03-26 2013-07-24 广东威创视讯科技股份有限公司 Multi-thread finite state machine switching method and multi-thread finite state machine switching device based on thread pool

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753451A (en) * 2009-12-23 2010-06-23 卡斯柯信号有限公司 Network load balance track traffic signal equipment state collection method and device
CN102262564A (en) * 2011-08-16 2011-11-30 天津市天祥世联网络科技有限公司 Thread pool structure of video monitoring platform system and realizing method
CN102591721A (en) * 2011-12-30 2012-07-18 北京新媒传信科技有限公司 Method and system for distributing thread execution task
CN102752136A (en) * 2012-06-29 2012-10-24 广东东研网络科技有限公司 Method for operating and scheduling communication equipment
CN103218264A (en) * 2013-03-26 2013-07-24 广东威创视讯科技股份有限公司 Multi-thread finite state machine switching method and multi-thread finite state machine switching device based on thread pool

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110515672A (en) * 2018-05-21 2019-11-29 阿里巴巴集团控股有限公司 Business datum loading method, device and electronic equipment
CN110633133A (en) * 2018-06-21 2019-12-31 中兴通讯股份有限公司 Task processing method and device and computer readable storage medium
CN108920258A (en) * 2018-06-26 2018-11-30 北京中电普华信息技术有限公司 A kind of transaction methods and application service middleware
CN109165086A (en) * 2018-08-13 2019-01-08 深圳市特康生物工程有限公司 Task executing method and single-chip microcontroller
CN109491895A (en) * 2018-10-26 2019-03-19 北京车和家信息技术有限公司 Server stress test method and device
CN109491780A (en) * 2018-11-23 2019-03-19 鲍金龙 Multi-task scheduling method and device
CN109491780B (en) * 2018-11-23 2022-04-12 鲍金龙 Multi-task scheduling method and device
CN109542605A (en) * 2018-11-27 2019-03-29 长沙智擎信息技术有限公司 A kind of container group life cycle management method based on Kubernetes system architecture
CN111240749A (en) * 2018-11-28 2020-06-05 中国移动通信集团浙江有限公司 Suspension control method and device for instance in cluster system
CN111240749B (en) * 2018-11-28 2023-07-21 中国移动通信集团浙江有限公司 Suspending control method, device, equipment and storage medium of instance in cluster system
CN110187985A (en) * 2019-05-30 2019-08-30 苏州浪潮智能科技有限公司 A kind of communication means, system and device
CN110908794A (en) * 2019-10-09 2020-03-24 上海交通大学 Task stealing method and system based on task stealing algorithm
CN110908794B (en) * 2019-10-09 2023-04-28 上海交通大学 Task stealing method and system based on task stealing algorithm
CN110784350B (en) * 2019-10-25 2022-04-05 北京计算机技术及应用研究所 Design method of real-time high-availability cluster management system
CN110784350A (en) * 2019-10-25 2020-02-11 北京计算机技术及应用研究所 Design method of real-time available cluster management system
CN110827125A (en) * 2019-11-06 2020-02-21 兰州领新网络信息科技有限公司 Periodic commodity transaction management method
CN111069062A (en) * 2019-12-13 2020-04-28 中国科学院重庆绿色智能技术研究院 Multi-channel visual inspection control method, system software architecture and construction method
CN111124643A (en) * 2019-12-20 2020-05-08 浪潮电子信息产业股份有限公司 Task deletion scheduling method, system and related device in distributed storage
CN111240862A (en) * 2020-01-09 2020-06-05 软通动力信息技术(集团)有限公司 Universal interface platform and data conversion method
CN111459754A (en) * 2020-03-26 2020-07-28 平安普惠企业管理有限公司 Abnormal task processing method, device, medium and electronic equipment
CN111459754B (en) * 2020-03-26 2022-09-27 平安普惠企业管理有限公司 Abnormal task processing method, device, medium and electronic equipment
CN111782293A (en) * 2020-06-28 2020-10-16 珠海豹趣科技有限公司 Task processing method and device, electronic equipment and readable storage medium
CN113886057A (en) * 2020-07-01 2022-01-04 西南科技大学 Dynamic resource scheduling method based on parsing technology and data flow information on heterogeneous many-core
CN112114877A (en) * 2020-09-28 2020-12-22 西安芯瞳半导体技术有限公司 Method for dynamically compensating thread bundle warp, processor and computer storage medium
CN112416584A (en) * 2020-11-18 2021-02-26 捷开通讯(深圳)有限公司 Process communication method, device, storage medium and mobile terminal
CN112395063B (en) * 2020-11-18 2023-01-20 云南电网有限责任公司电力科学研究院 Dynamic multithreading scheduling method and system
CN112395063A (en) * 2020-11-18 2021-02-23 云南电网有限责任公司电力科学研究院 Dynamic multithreading scheduling method and system
CN112416584B (en) * 2020-11-18 2023-12-19 捷开通讯(深圳)有限公司 Process communication method and device, storage medium and mobile terminal
CN113194040A (en) * 2021-04-28 2021-07-30 王程 Intelligent control method for instantaneous high-concurrency server thread pool congestion
CN113452554A (en) * 2021-06-18 2021-09-28 上海艾拉比智能科技有限公司 Online OTA differential packet making system and method based on queuing mechanism
CN113391927A (en) * 2021-07-08 2021-09-14 上海浦东发展银行股份有限公司 Method, device and system for processing business event and storage medium
CN113722078A (en) * 2021-11-02 2021-11-30 西安热工研究院有限公司 High-concurrency database access method, system and equipment based on thread pool
CN114500159A (en) * 2022-02-17 2022-05-13 杭州老板电器股份有限公司 Wired communication method and device of central range hood system and electronic equipment
CN114500159B (en) * 2022-02-17 2023-12-15 杭州老板电器股份有限公司 Wired communication method and device of central range hood system and electronic equipment
CN117453363A (en) * 2023-11-06 2024-01-26 北京明朝万达科技股份有限公司 Accessory processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107832146A (en) Thread pool task processing method in highly available cluster system
Sharma et al. Performance analysis of load balancing algorithms
US10423451B2 (en) Opportunistically scheduling and adjusting time slices
Rajguru et al. A comparative performance analysis of load balancing algorithms in distributed system using qualitative parameters
US20180246755A1 (en) Task Scheduling for Highly Concurrent Analytical and Transaction Workloads
US8893145B2 (en) Method to reduce queue synchronization of multiple work items in a system with high memory latency between processing nodes
US20180210753A1 (en) System and method for supporting a scalable thread pool in a distributed data grid
US20190220319A1 (en) Usage instrumented workload scheduling
EP2466460B1 (en) Compiling apparatus and method for a multicore device
CN101986272A (en) Task scheduling method under cloud computing environment
CN103135943B (en) Self-adaptive IO (Input Output) scheduling method of multi-control storage system
CN103581313B (en) Connection establishment method for processing equipment and cluster server and processing equipment
CN103838621B (en) Method and system for scheduling routine work and scheduling nodes
CN106557369A (en) A kind of management method and system of multithreading
CN107291550B (en) A kind of Spark platform resource dynamic allocation method and system for iterated application
CN110795254A (en) Method for processing high-concurrency IO based on PHP
JP2011123881A (en) Performing workflow having a set of dependency-related predefined activities on a plurality of task servers
CN109257399B (en) Cloud platform application program management method, management platform and storage medium
CN105159769A (en) Distributed job scheduling method suitable for heterogeneous computational capability cluster
CN115840631B (en) RAFT-based high-availability distributed task scheduling method and equipment
CN109039929A (en) Business scheduling method and device
CN107977271A (en) A kind of data center's total management system load-balancing method
US8631086B2 (en) Preventing messaging queue deadlocks in a DMA environment
CN110442454B (en) Resource scheduling method and device and computer equipment
CN109117247B (en) Virtual resource management system and method based on heterogeneous multi-core topology perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180323)