Summary of the Invention
The technical problem to be solved by this application is to provide a method of processing messages in parallel, in order to solve the prior-art technical problem that single-threaded message fetching degrades the real-time performance of message processing and the throughput of the server.
This application further provides a device for processing messages in parallel, a node, and a server cluster, so as to ensure the implementation and application of the above method in practice.
To solve the above problem, this application discloses a method of processing messages in parallel, comprising:
Obtaining a thread allocation rule and a message allocation rule from a preset configuration database, wherein the thread allocation rule indicates the number of threads each processing node in a cluster has, and the message allocation rule indicates which thread processes a pending message;
Creating, according to the thread allocation rule, multiple threads corresponding to each processing node; and
Triggering the created multiple threads to fetch messages from a message source according to the message allocation rule and to process them.
Preferably, the step of a thread fetching its corresponding message from the message source according to the message allocation rule comprises:
Performing a modulo operation on the user identification number of the message requesting processing, using the number of threads as the modulus; and
The thread whose thread number matches the operation result fetching, from the message source, the message triggered by that user identification.
Preferably, the method also comprises:
Updating the thread allocation rule and the message allocation rule.
Preferably, the method also comprises:
Setting the thread allocation rule according to the CPU count and/or memory parameters of the processing node.
This application also discloses a device for processing messages in parallel, comprising:
An acquisition module, configured to obtain a thread allocation rule and a message allocation rule from a preset configuration database, wherein the thread allocation rule indicates the number of threads each processing node in a cluster has, and the message allocation rule indicates which thread processes a pending message;
A creation module, configured to create, according to the thread allocation rule, multiple threads corresponding to each processing node; and
A trigger module, configured to trigger the created multiple threads to fetch messages from a message source according to the message allocation rule and to process them.
Preferably, the trigger module comprises an operation submodule and a triggering submodule, wherein:
The operation submodule is configured to perform a modulo operation on the user identification number of the message requesting processing, using the number of threads as the modulus; and
The triggering submodule is configured to trigger the thread whose thread number matches the operation result to fetch, from the message source, the message triggered by that user identification.
Preferably, the device also comprises:
An update module, configured to update the thread allocation rule and the message allocation rule.
Preferably, the device also comprises:
A setting module, configured to set the thread allocation rule according to the CPU count and/or memory parameters of the processing node.
This application also discloses a node for processing messages in parallel, comprising the device for processing messages in parallel of any one of the foregoing.
This application also discloses a server cluster, comprising at least two of the foregoing nodes for processing messages in parallel.
Compared with the prior art, this application has the following advantages:
In this application, by establishing an association between a thread and the ordered subset of messages it needs to process, threads can fetch and process messages concurrently, which improves the real-time performance of message processing at each node and thereby improves the throughput and real-time performance of the server cluster. Further, expansion of the server cluster can be achieved more conveniently and efficiently, which better suits current network application scenarios. Of course, a product implementing this application need not achieve all of the above advantages simultaneously.
Embodiments
The technical solutions in the embodiments of this application will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application without creative effort fall within the protection scope of this application.
This application can be used in numerous general-purpose or special-purpose computing environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multi-processor devices, distributed computing environments comprising any of the above devices or equipment, and the like.
This application can be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. This application can also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media including storage devices.
One of the main ideas of this application is that, by establishing an association between a thread and the ordered subset of messages it needs to process, threads can fetch and process messages concurrently, improving the real-time performance of message processing at each node and thereby improving the throughput and real-time performance of the server cluster.
Referring to Fig. 1, a flowchart of Embodiment 1 of the method of processing messages in parallel of this application is shown; the method may comprise the following steps:
Step 101: obtain a thread allocation rule and a message allocation rule from a preset configuration database, wherein the thread allocation rule indicates the number of threads each processing node in the cluster has, and the message allocation rule indicates which thread processes a pending message.
In this embodiment, the preset configuration database is dedicated to storing the pre-configured thread allocation rule and message allocation rule. The thread allocation rule indicates the number of threads each processing node in the cluster has; for example, if the thread allocation rule specifies that each processing node in the cluster has 80 threads, then 80 threads need to be configured for each processing node in the server cluster for parallel message processing. The message allocation rule indicates which thread processes a pending message; for example, the message that user A requests to be processed is handled by the thread whose thread number is 38.
The thread allocation rule needs to take the node's overall performance indicators into account when configured, including the software environment and the hardware environment. For computation-intensive scenarios, the number of threads per node can be allocated by the number of CPU cores, for example 80 threads for an 8-core node and 160 threads for a 16-core node; for memory-intensive scenarios, allocation can instead be based on memory parameters (such as memory performance and capacity). Actual conditions such as memory, CPU, input/output (IO), and network can also be considered together.
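As a rough illustration of the allocation heuristic above (10 threads per CPU core, so 80 threads for 8 cores and 160 for 16), the following sketch can be used; the ratio constant and the function name are assumptions made for illustration only:

```python
# Hypothetical sizing helper for the thread allocation rule in a
# computation-intensive scenario: threads scale with CPU core count.
# THREADS_PER_CORE = 10 is inferred from the 8-core/80-thread example.

THREADS_PER_CORE = 10

def threads_for_node(cpu_cores: int) -> int:
    """Return the number of threads to configure for a node."""
    return cpu_cores * THREADS_PER_CORE

# threads_for_node(8)  -> 80
# threads_for_node(16) -> 160
```

A memory-intensive deployment would replace this heuristic with one based on memory parameters, as the text notes.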
The message allocation rule can be implemented by hash-modulo calculation when configured, for example by the following formula:
message_lable_id % total_thread_count == thread_id, where "message_lable_id" represents the identifier of an ordered message subset. Messages of the same user are assigned to the same ordered subset, to ensure that the messages of the same user are processed in order, while messages of different users need not be processed in order. "total_thread_count" is the number of threads, and "thread_id" is a thread number. That is, if the identifier of an ordered message subset, taken modulo the total thread count, equals a thread number, the messages of that subset are fetched and processed by the thread corresponding to that thread number. In a specific implementation, the message allocation rule can take various forms; any approach that distributes messages uniformly can be chosen independently.
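The formula above can be sketched as follows; the function name is an assumption, and the numbers in the comments are only an example:

```python
# Minimal sketch of the hash-modulo message allocation rule:
# message_lable_id % total_thread_count == thread_id.

def assign_thread(message_label_id: int, total_thread_count: int) -> int:
    """Return the thread number responsible for an ordered message subset."""
    return message_label_id % total_thread_count

# Example: with 80 threads, subset 163 maps to thread 3 (163 % 80 == 3),
# so every message of that subset is processed in order by thread 3.
```

Because the mapping is deterministic, no two threads ever claim the same ordered subset, which is what makes lock-free parallel fetching possible.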
Step 102: create, according to the thread allocation rule, multiple threads corresponding to each processing node.
Each processing node in the cluster creates a number of threads according to the thread allocation rule at initialization, for example 50 threads per processing node, so that the created threads can subsequently fetch and process messages from the message source according to the message allocation rule.
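Node initialization as described in step 102 might look like the following minimal sketch; the per-thread queue, the worker loop, and the shutdown sentinel are illustrative assumptions, not part of this application:

```python
import threading
import queue

# Hypothetical sketch of step 102: a node creates the number of worker
# threads given by the thread allocation rule, each with its own queue
# so that the messages of one ordered subset stay in order.

def create_workers(thread_count):
    queues = [queue.Queue() for _ in range(thread_count)]

    def worker(q):
        while True:
            msg = q.get()
            if msg is None:        # sentinel: shut the worker down
                break
            # ... process msg in arrival order ...
            q.task_done()

    threads = [threading.Thread(target=worker, args=(q,), daemon=True)
               for q in queues]
    for t in threads:
        t.start()
    return threads, queues
```

A real node would feed each queue from the message source according to the message allocation rule of step 103.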
Step 103: trigger the created multiple threads to fetch messages from the message source according to the message allocation rule and to process them.
The message source mentioned in this embodiment is the set of all messages that users have requested the server cluster to process; every thread fetches its messages from this data source.
The step of a thread fetching its corresponding message from the message source according to the message allocation rule, shown in Fig. 2, may specifically comprise:
Step 201: perform a modulo operation on the user identification number of the message requesting processing, using the number of threads as the modulus.
In this embodiment, because messages requested by the same user belong to the same ordered subset and messages requested by different users belong to different ordered subsets, the modulo can be taken from the user identification number of the message requesting processing over the number of threads. The user identification number can increase naturally (1, 2, 3, and so on), which is generally achievable with a database auto-increment primary key, and can be used to identify the different ordered subsets.
Step 202: the thread whose thread number matches the operation result fetches, from the message source, the message triggered by that user identification.
If the operation result of step 201 equals some thread number, the thread corresponding to that thread number is responsible for processing the messages requested by the user with that user identification number. It can be seen that, by associating each user's own message order with the modulo operation, message fetching becomes parallel: different threads can each fetch messages from the message source according to their own modulo results.
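Steps 201 and 202 can be sketched as follows; the shape of the message source (a list of (user ID, message) pairs) is an assumption made for illustration:

```python
# Hypothetical sketch of steps 201-202: each thread fetches only the
# messages whose user ID, taken modulo the thread count, matches its
# own thread number, preserving per-user order.

def fetch_for_thread(message_source, thread_id, total_thread_count):
    """Return, in order, the messages this thread is responsible for."""
    return [msg for cust_id, msg in message_source
            if cust_id % total_thread_count == thread_id]

# With 4 threads, user 6's messages all land on thread 2 (6 % 4 == 2),
# so they are processed in their original order by a single thread.
```

Because each thread applies a disjoint filter, the threads can fetch from the same source concurrently without contending for the same messages.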
By adopting this embodiment, an association is established between a thread and the ordered subset of messages it needs to process, so that threads can fetch and process messages concurrently, improving the real-time performance of message processing at each node and thereby improving the throughput and real-time performance of the server cluster.
Referring to Fig. 3, a flowchart of Embodiment 2 of the method of processing messages in parallel of this application is shown; the method may comprise the following steps:
Step 301: set the thread allocation rule according to the CPU count and/or memory parameters of the processing nodes in the server cluster.
In this embodiment, the thread allocation rule is first set according to actual conditions such as the CPU count and/or memory parameters of the processing nodes. For the specific setting manner, refer to the description in Embodiment 1, which is not repeated here.
Step 302: set the message allocation rule, and store the thread allocation rule and the message allocation rule in the configuration database.
At the same time, the message allocation rule that each thread needs to follow is set, and the configured thread allocation rule and message allocation rule are stored in the configuration database.
Step 303: obtain the thread allocation rule and the message allocation rule from the configuration database, wherein the thread allocation rule indicates the number of threads each processing node in the cluster has, and the message allocation rule indicates which thread processes a pending message.
Step 304: create, according to the thread allocation rule, multiple threads corresponding to each processing node.
Suppose the thread allocation rule allocates 80 threads to each node; then 80 threads are configured for each node in the server cluster, and if there are 10 nodes in total, the thread numbers can be 1 to 800.
Step 305: the created multiple threads fetch messages from the message source according to the message allocation rule and process them.
When the multiple threads fetch messages, each user identification number is taken modulo the number of threads; the messages whose result equals a given thread number form an ordered subset, and the messages in that ordered subset are fetched and processed in order by the thread corresponding to that thread number. For example, the thread numbered 5 on node A will fetch the messages in the ordered subset of user identification numbers satisfying Cust_id % 800 == 5, where "Cust_id" represents the user identification number.
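The 10-node, 80-threads-per-node example above can be sketched as follows; the numbering scheme (threads 0 to 799 grouped 80 per node) is an assumption made for illustration:

```python
# Hypothetical sketch of step 305 under the 10-node / 80-threads-per-node
# example: 800 thread numbers across the cluster, and Cust_id % 800
# selects the responsible thread (and therefore the responsible node).

THREADS_PER_NODE = 80
NODE_COUNT = 10
TOTAL_THREADS = THREADS_PER_NODE * NODE_COUNT   # 800

def route(cust_id):
    """Map a user ID to (node index, cluster-wide thread number)."""
    thread_id = cust_id % TOTAL_THREADS
    node = thread_id // THREADS_PER_NODE
    return node, thread_id

# route(805) -> (0, 5): thread 5, hosted on node 0, handles user 805.
```

Note that a single modulo over the cluster-wide thread count both balances load across nodes and keeps each user's messages on one thread.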
Step 306: update the thread allocation rule and the message allocation rule.
It should be noted that, in subsequent application, the pre-configured thread allocation rule and message allocation rule can also be updated. For example, when the message volume grows too fast and the server cluster needs to be expanded, the thread allocation rule and message allocation rule in the preset configuration database can be modified. A specific implementation of this dynamic update can be: calling instructions deployed on each node in advance, triggering each node in the cluster to stop fetching messages after it finishes processing its current messages, and then reinitializing each node in turn with the new thread allocation rule and message allocation rule. In practice, each node may stop serving for a period of time (1 to 5 minutes), but this brief pause has no impact on applications whose real-time requirement is at the minute level (1 to 5 minutes).
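The dynamic-update procedure described above (stop fetching, finish current messages, reinitialize with the new rules) might be sketched as follows; the Node class and the shape of the configuration database are hypothetical:

```python
# Hypothetical sketch of step 306: a node drains its in-flight work and
# reinitializes itself from the (now modified) configuration database.

class Node:
    def __init__(self, config_db):
        self.config_db = config_db
        self.fetching = True
        self.pending = []
        self.thread_count = config_db["threads_per_node"]

    def drain_and_reinit(self):
        self.fetching = False          # stop fetching new messages
        self.pending.clear()           # finish current messages (stubbed)
        # reload both rules from the preset configuration database
        self.thread_count = self.config_db["threads_per_node"]
        self.fetching = True

config_db = {"threads_per_node": 80}
node = Node(config_db)
config_db["threads_per_node"] = 160    # cluster expansion: more threads
node.drain_and_reinit()                # node now runs with the new rule
```

In a real cluster the nodes would be cycled one at a time so the service pause stays within the minute-level window described above.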
This embodiment not only solves the technical problem that message fetching degrades the real-time performance of message processing and the throughput of the server, but also allows the server cluster to be expanded more conveniently and efficiently, which better suits current network application scenarios.
Referring to Fig. 4, a schematic diagram of the method of processing messages in parallel disclosed in the embodiments of this application in practical use is shown: a server cluster includes n processing nodes, each processing node creates M threads according to the thread allocation rule in the preset configuration database, and each thread fetches messages from the message source according to the message allocation rule and processes them.
For simplicity of description, each of the foregoing method embodiments is expressed as a series of action combinations, but those skilled in the art should know that this application is not limited by the described action sequence, because according to this application some steps can be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
Corresponding to the method provided by Embodiment 1 of the method of processing messages in parallel of this application, and referring to Fig. 5, this application also provides Embodiment 1 of a device for processing messages in parallel. In this embodiment, the device may comprise:
An acquisition module 501, configured to obtain a thread allocation rule and a message allocation rule from a preset configuration database, wherein the thread allocation rule indicates the number of threads each processing node in the cluster has, and the message allocation rule indicates which thread processes a pending message.
A creation module 502, configured to create, according to the thread allocation rule, multiple threads corresponding to each processing node.
A trigger module 503, configured to trigger the created multiple threads to fetch messages from the message source according to the message allocation rule and to process them.
As shown in Fig. 6, the trigger module 503 may specifically comprise an operation submodule 601 and a triggering submodule 602. The operation submodule 601 may be used to perform a modulo operation on the user identification number of the message requesting processing, using the number of threads as the modulus; the triggering submodule 602 may be used to trigger the thread whose thread number matches the operation result to fetch, from the message source, the message triggered by that user identification. With the device described in this embodiment, an association can be established between a thread and the ordered subset of messages it needs to process, so that threads can fetch and process messages concurrently, improving the real-time performance of message processing at each node and thereby improving the throughput and real-time performance of the server cluster.
Corresponding to the method provided by Embodiment 2 of the method of processing messages in parallel of this application, and referring to Fig. 7, this application also provides Embodiment 2 of a device for processing messages in parallel. In this embodiment, the device may comprise:
A setting module 701, configured to set the thread allocation rule according to the CPU count and/or memory parameters of the processing node.
An acquisition module 501, configured to obtain the thread allocation rule and the message allocation rule from the preset configuration database, wherein the thread allocation rule indicates the number of threads each processing node in the cluster has, and the message allocation rule indicates which thread processes a pending message.
A creation module 502, configured to create, according to the thread allocation rule, multiple threads corresponding to each processing node.
A trigger module 503, configured to trigger the created multiple threads to fetch messages from the message source according to the message allocation rule and to process them.
An update module 702, configured to update the thread allocation rule and the message allocation rule.
It should be noted that, when the method described in this application is implemented in software, it can be added as a new function of a node, or a corresponding program can be written separately; this application does not limit the implementation manner of the above device.
The device disclosed in this embodiment not only solves the technical problem that message fetching degrades the real-time performance of message processing and the throughput of the server, but also allows the server cluster to be expanded more conveniently and efficiently, which better suits current network application scenarios.
In addition, the embodiments of this application also disclose a node for processing messages in parallel; the node may specifically comprise the device described in Device Embodiment 1 or Device Embodiment 2 above. For related descriptions of the device, refer to the foregoing method and device embodiments, which are not repeated here.
The embodiments of this application also disclose a server cluster; the server cluster may specifically comprise at least two of the nodes for processing messages in parallel disclosed in the embodiments of this application. For related descriptions of the node, refer to the foregoing method and device embodiments, which are not repeated here.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts among the embodiments, reference can be made to one another. Since the device-class embodiments are basically similar to the method embodiments, their description is relatively simple, and for the relevant parts reference can be made to the description of the method embodiments.
Finally, it should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further restrictions, an element defined by the statement "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that comprises that element.
The method, device, node, and server cluster for processing messages in parallel provided by this application have been described in detail above. Specific examples are applied herein to explain the principles and implementations of this application; the above description of the embodiments is only intended to help in understanding the method of this application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the application scope according to the idea of this application. In summary, the content of this specification should not be construed as a limitation of this application.