Detailed Description of the Embodiments
In the embodiments of the present invention, when a message is obtained from a reader, the processing operation corresponding to the message is first determined according to the content of the message; the processing operation is then cached in a thread pool to wait for a processing thread to be allocated; when a processing thread is allocated, the corresponding processing operation is performed on the message, and the information obtained by the processing is cached in a message queue to wait for processing by a dispatch thread; and the information obtained by the processing is distributed to one or more modules by executing one or more dispatch threads.
The preferred embodiments of the present invention are described below in conjunction with the accompanying drawings. It should be understood that the preferred embodiments described herein are intended only to describe and explain the present invention, and are not intended to limit the present invention.
Fig. 1A is a block diagram of a distribution apparatus for RFID tags according to an embodiment of the present invention, and Fig. 1B is a block diagram of a distribution apparatus for RFID tags according to a preferred embodiment of the present invention.
As shown in Fig. 1A, the distribution apparatus for RFID tags according to the embodiment of the present invention comprises: a determination module 10, a first processing module 12, a second processing module 14 and a distribution module 16. Each of these modules is described below in further conjunction with the accompanying drawings.
(1) The determination module 10 is configured to obtain, from a reader, a message carrying RFID tag information, and to determine, according to the content of the message, the processing operation corresponding to the message.
Specifically, as shown in Fig. 1B, the determination module 10 may comprise an obtaining submodule 100 and a determining submodule 102, wherein:
The obtaining submodule 100 is configured to obtain the content of the message from the reader.
The determining submodule 102, connected to the obtaining submodule 100, is configured to determine, according to a pre-established correspondence between content and processing operations, the processing operation corresponding to the content of the message obtained by the obtaining submodule 100.
(2) The first processing module 12, connected to the determination module 10, is configured to cache, in a thread pool, the processing operation determined by the determination module 10, to allocate a processing thread to the processing operation, and to perform the processing operation (including codec processing) on the message.
Specifically, as shown in Fig. 1B, the first processing module 12 may comprise a judging submodule 120, an allocation submodule 122 and a storage submodule 124, wherein:
The judging submodule 120 is configured to judge, after the processing operation determined by the determination module 10 has been cached, whether there is currently an idle processing thread.
The allocation submodule 122, connected to the judging submodule 120, is configured to allocate a processing thread to the processing operation when the judging submodule 120 determines that there is currently an idle processing thread.
The storage submodule 124, connected to the judging submodule 120, is configured to put the processing operation into a processing queue of the thread pool to wait for thread allocation when the judging submodule 120 determines that there is currently no idle processing thread.
Further, the storage submodule 124 may in turn comprise a judging unit 1240 and an expansion unit 1242, wherein:
The judging unit 1240 is configured to judge, before the processing operation is put into the processing queue of the thread pool, whether the processing queue is full.
The expansion unit 1242, connected to the judging unit 1240, is configured to, when the judging unit 1240 determines that the processing queue is full, expand the capacity of the processing queue according to preset parameters, increase the number of processing threads, and increase the retention time of processing threads in the idle state.
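The behaviour of the judging unit 1240 and the expansion unit 1242 can be sketched with Java's ThreadPoolExecutor. This is only an illustration under stated assumptions: ThreadPoolExecutor cannot grow its work queue in place, so only the thread quantity and idle retention time are expanded here, and all sizes and times are invented defaults rather than values taken from the embodiment.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class ExpandingPool {
    static ThreadPoolExecutor create() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2,                         // initial processing-thread quantity (illustrative)
                30, TimeUnit.SECONDS,         // retention time in the idle state (illustrative)
                new ArrayBlockingQueue<>(4)); // the processing queue (illustrative capacity)
        pool.setRejectedExecutionHandler((operation, p) -> {
            // The processing queue is full: increase the number of processing
            // threads and their idle retention time, then resubmit the operation.
            p.setMaximumPoolSize(p.getMaximumPoolSize() + 2);
            p.setKeepAliveTime(p.getKeepAliveTime(TimeUnit.SECONDS) + 30, TimeUnit.SECONDS);
            p.execute(operation);
        });
        return pool;
    }
}
```

Resubmitting inside the rejection handler means a burst of operations simply ratchets the pool limits upward instead of being dropped, which matches the non-blocking intent of the embodiment.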
(3) The second processing module 14, connected to the first processing module 12, is configured to cache, in a message queue, the information obtained by the processing of the first processing module 12 (including the RFID tag information after codec processing).
Further, as shown in Fig. 1B, the distribution apparatus for RFID tags according to the embodiment of the present invention may also comprise a recycling module 18, connected to the second processing module 14, configured to recycle the processing thread that performed the processing operation after the second processing module 14 has cached the information obtained by the processing in the message queue.
(4) The distribution module 16, connected to the second processing module 14, is configured to distribute, by executing one or more dispatch threads, the information obtained by the processing and cached by the second processing module 14 to one or more modules, wherein each dispatch thread is identified by the number of its corresponding module.
Specifically, as shown in Fig. 1B, the distribution module 16 may comprise one of the following or a combination thereof: an expansion submodule 160 and a recycling submodule 162, wherein:
The expansion submodule 160 is configured to increase the number of dispatch threads when the information contained in the message queue exceeds a predetermined threshold, or when the distribution processing time of one of the pieces of information exceeds a predetermined time threshold.
The recycling submodule 162 is configured to recycle or merge idle dispatch threads when the idle time of a dispatch thread exceeds a predetermined time threshold.
In a specific implementation, the apparatus may be realized by the apparatus 20 in Fig. 2, which comprises: a network element communication module 200, a generic adaptation engine 202, a task processing thread pool 204, a non-blocking data queue module 206 and a message processing module 208, wherein:
The network element communication module 200, equivalent to the obtaining submodule 100 of the determination module 10 in Fig. 1B, is configured to communicate with the reader 22 in a manner independent of the communication mode, and to obtain the messages sent by the reader 22.
The generic adaptation engine 202, connected to the network element communication module 200, is equivalent to the determining submodule 102 of the determination module 10 in Fig. 1B, and is configured to obtain, according to the content of the message obtained by the network element communication module 200, the type and version of the reader that sent the message and the command code of the processing operation to be performed, and then to perform adaptation to obtain the processing operation corresponding to the message.
The task processing thread pool 204, connected to the generic adaptation engine 202, is equivalent to the first processing module 12 in Fig. 1A and Fig. 1B, and is configured to put the processing operation obtained by the generic adaptation engine 202 into the thread pool, to allocate a processing thread to the processing operation, and to perform the processing operation on the message; the task processing thread pool 204 can dynamically adjust the number of processing threads and the size of the waiting processing queue according to the current processing load.
The non-blocking data queue module 206, connected to the task processing thread pool 204, is equivalent to the second processing module and the distribution module in Fig. 1A and Fig. 1B, and is configured to cache, in the message queue, the information obtained after the task processing thread pool 204 performs the processing operation, and to distribute the obtained information, by executing one or more dispatch threads, to the modules that process the information.
The message processing module 208, connected to the non-blocking data queue module 206, is configured to process the information obtained by the task processing thread pool 204, and may specifically comprise: a checking module, configured to group and filter the information according to rules; and a writing module, configured to perform write operations on the data fields of a tag. Besides these two modules, the message processing module may also comprise other modules.
The distribution method for RFID tags provided by the embodiment of the present invention is described in detail below in conjunction with the above apparatus. As shown in Fig. 3, the main flow of the distribution method for RFID tags according to the embodiment of the present invention comprises the following processing (steps S300 to S304):
Step S300: a message carrying RFID tag information is obtained from a reader, and the processing operation corresponding to the message is determined according to the content of the message;
Step S302: the processing operation determined in step S300 is cached in a thread pool; when the processing operation is allocated to a processing thread, the processing operation is performed on the obtained message, and the information obtained by the processing is cached in a message queue; wherein the message queue is a non-blocking queue realized with a compare-and-set (Compare And Set, abbreviated as CAS) algorithm, with which the failure or suspension of one thread does not cause the failure or suspension of other threads; and, in the message queue, each piece of stored information corresponds to the unique identifier of the reader from which it came;
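The non-blocking message queue of step S302 can be sketched with Java's ConcurrentLinkedQueue, which is a CAS-based queue in which a failed or suspended thread never blocks the others. Each cached entry pairs the processed information with the unique identifier of its source reader; the class and field names are illustrative, not part of the embodiment.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

class MessageQueue {
    static class TagInfo {
        final String readerId;  // unique identifier of the source reader
        final byte[] decoded;   // RFID tag information after codec processing
        TagInfo(String readerId, byte[] decoded) {
            this.readerId = readerId;
            this.decoded = decoded;
        }
    }

    private final ConcurrentLinkedQueue<TagInfo> queue = new ConcurrentLinkedQueue<>();

    // Lock-free CAS enqueue: never blocks the producing processing thread.
    void cache(String readerId, byte[] decoded) {
        queue.offer(new TagInfo(readerId, decoded));
    }

    // Lock-free CAS dequeue for a dispatch thread; returns null when empty.
    TagInfo poll() {
        return queue.poll();
    }
}
```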
Step S304: the information obtained by the processing is distributed to one or more modules by executing one or more dispatch threads.
The details of the above processing are further described below.
(1) Step S300
Before the processing of step S300 is performed, the reader sends a radio-frequency signal, reads the information of a tag, and then sends a message encapsulating the tag information to the RFID middleware, wherein the content of the message includes, but is not limited to, the following information: the type of the reader, the version number of the reader, and the command code of the processing operation for processing the message.
In a specific implementation, in step S300, the step of determining the processing operation of the message may comprise the following processing:
(1) The content of the message is obtained. When the message sent by the reader is obtained, the content of the message is first parsed from the message, thereby obtaining the type and version number of the reader that sent the message, and/or the command code, wherein the command code is a code indicating the processing operation to be performed.
(2) The processing operation corresponding to the content of the message is determined according to a pre-established correspondence between content and processing operations, wherein the correspondence between content and processing operations is established according to a set pattern, and comprises determining the corresponding processing operation according to the type of the reader, the version of the reader, and/or the command code.
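The pre-established correspondence of item (2) can be sketched as a plain lookup table keyed by reader type, reader version and command code. The key pattern and all names here are illustrative assumptions, not part of the embodiment.

```java
import java.util.HashMap;
import java.util.Map;

class OperationTable {
    private final Map<String, Runnable> operations = new HashMap<>();

    private static String key(String readerType, String version, int commandCode) {
        return readerType + "/" + version + "/" + commandCode;
    }

    // Establish one entry of the content-to-operation correspondence.
    void register(String readerType, String version, int commandCode, Runnable op) {
        operations.put(key(readerType, version, commandCode), op);
    }

    // Determine the processing operation for the parsed message content;
    // returns null when no correspondence exists, in which case the
    // message is discarded and the next message is awaited.
    Runnable lookup(String readerType, String version, int commandCode) {
        return operations.get(key(readerType, version, commandCode));
    }
}
```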
(2) Step S302
In this step, the thread pool is realized according to a predefined strategy. It provides a caching mechanism for processing operations, comprising active threads (i.e. threads that process the processing operations put into the thread pool, hereinafter referred to as processing threads) and a queue storing operations waiting for scheduling (i.e. a queue storing the processing operations waiting to be scheduled, hereinafter referred to as the processing queue); and the thread pool can, according to its configuration, change the number of active processing threads therein and the size of the processing queue.
Thus, after the processing operation of the message is successfully obtained (determined) in step S300, the processing operation is first put into the thread pool in step S302, and the main processing thread waits to receive the next message.
Upon receiving a processing operation, the thread pool first judges whether there is currently an idle processing thread, and takes appropriate measures according to the result of the judgment. On this basis, step S302 may specifically comprise the following processing:
Step 1: judge whether there is currently an idle processing thread; if so, go to step 6; otherwise, continue with step 2.
Step 2: judge whether the current processing queue is full; if so, continue with step 3; otherwise, go to step 5.
Step 3: expand the capacity of the processing queue according to preset parameters. When the processing queue is full, the upper limit of the queue needs to be raised according to the preset parameters so as to expand the capacity of the processing queue; otherwise, the processing operation cannot be put into the processing queue.
Step 4: increase the number of processing threads, and the retention time of processing threads in the idle state. If the current processing queue is full, the number of processing operations currently waiting to be processed has exceeded the predetermined threshold, and the number of current processing threads must be increased so as to process the pending tasks in time and reduce the waiting time of the tasks.
Step 5: put the processing operation into the processing queue to wait for scheduling.
The processing threads schedule and remove the tasks stored in the processing queue one by one according to the first-in-first-out (First In First Out, abbreviated as FIFO) principle; after all the tasks preceding this processing operation have been scheduled and executed, this processing operation will be allocated to a processing thread.
Step 6: the processing thread performs the corresponding processing operation on the message according to the type of the processing operation, and puts the information obtained by the processing into the message queue; wherein the processing operation performed on the message comprises codec processing, and correspondingly, the obtained information comprises the RFID tag information after codec processing.
In a specific implementation, in order to guarantee reliable transmission with the reader, the processing thread may first perform a pre action according to the type of the processing operation, adding a generic message header to the obtained information, and after the processing, perform a post action, adding information that guarantees reliable transmission.
When the processing thread has completed the corresponding processing operation, the processing thread is recycled in order to process the next processing operation.
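The pre and post actions described above can be sketched as follows: a generic message header is prepended before the codec result (pre action), and reliability information is appended afterwards (post action). The header byte and the XOR checksum are illustrative assumptions; the embodiment does not specify the concrete formats.

```java
class ReliableFraming {
    static final byte GENERIC_HEADER = 0x7E; // hypothetical generic header value

    static byte[] wrap(byte[] payload) {
        byte[] framed = new byte[payload.length + 2];
        framed[0] = GENERIC_HEADER;                    // pre action: generic message header
        System.arraycopy(payload, 0, framed, 1, payload.length);
        byte checksum = 0;                             // post action: reliability information
        for (byte b : payload) checksum ^= b;
        framed[framed.length - 1] = checksum;
        return framed;
    }
}
```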
(3) Step S304
After all the information preceding the information obtained in step S302 in the message queue has been processed, the information is distributed by one or more dispatch threads identified by the numbers of the internal modules of the RFID middleware, thereby guaranteeing that the information needed by each module is not blocked by other modules.
In a specific implementation, when the amount of information contained in the message queue exceeds a predetermined threshold, or the distribution processing time of one of the pieces of information exceeds a predetermined time threshold, the number of dispatch threads can be increased so as to reduce the blocking of information processing; and when there is an idle thread among the allocated dispatch threads and its idle time exceeds a predetermined time threshold, the dispatch threads can be merged or recycled.
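The dispatch-thread scaling rule described above can be sketched as a pure decision function. The three thresholds are configuration values assumed for illustration; a scheduler would apply the returned delta to its set of numbered dispatch threads.

```java
class DispatchScaler {
    final int queueThreshold;        // predetermined threshold on cached information
    final long distributionLimitMs;  // predetermined time threshold for distribution
    final long idleLimitMs;          // predetermined time threshold for idleness

    DispatchScaler(int queueThreshold, long distributionLimitMs, long idleLimitMs) {
        this.queueThreshold = queueThreshold;
        this.distributionLimitMs = distributionLimitMs;
        this.idleLimitMs = idleLimitMs;
    }

    // +1: add a dispatch thread; -1: recycle or merge an idle one; 0: keep as is.
    int decide(int queuedItems, long slowestDistributionMs, long longestIdleMs) {
        if (queuedItems > queueThreshold || slowestDistributionMs > distributionLimitMs) {
            return +1;  // reduce the blocking of information processing
        }
        if (longestIdleMs > idleLimitMs) {
            return -1;  // an idle dispatch thread has exceeded its idle-time threshold
        }
        return 0;
    }
}
```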
Fig. 4 is a flowchart of a preferred embodiment of the distribution method for RFID tags provided by the embodiment of the present invention. In conjunction with Fig. 2, as shown in Fig. 4, the method mainly comprises the following steps:
Step S400: the network element communication module obtains a message sent by the reader and, after checking it, sends the message to the generic adaptation engine; wherein the message contains the RFID tag information read by the reader.
Step S402: the generic adaptation engine reads the content of the message from the message, and determines the processing operation corresponding to the content of the message according to a pre-established correspondence between content and processing operations.
In a specific operation process, there may be a situation in which no processing operation corresponding to the message content can be obtained; in this case, the flow ends directly, the message is not processed, and the engine waits to receive the next message.
Step S404: the generic adaptation engine puts the determined processing operation into the thread pool of the task processing thread pool. After the processing operation is put into the thread pool, the main processing thread begins to wait to receive the next message, which guarantees that the receiving thread is non-blocking.
Step S406: the task processing thread pool judges whether there is currently an idle thread; if so, go to step S416; otherwise, continue with step S408.
Step S408: judge whether the current processing queue is full; if so, continue with step S410; otherwise, go to step S414.
Step S410: expand the capacity of the processing queue according to set parameters; wherein what is placed in the processing queue are the tasks waiting to be processed, i.e. the processing operations.
Step S412: increase the number of processing threads, and the survival time of processing threads in the idle state; wherein the survival time of a processing thread refers to how long a processing thread remains in the idle state before being recycled, i.e. the retention time of the processing thread in the idle state.
Since the processing queue is full, there are currently many tasks waiting to be processed; therefore, the number of processing threads needs to be increased so as to reduce the number of waiting tasks.
Step S414: put the processing operation into the processing queue to wait for processing. When all the tasks preceding this processing operation in the processing queue have been executed, this processing operation will be allocated to a processing thread.
Step S416: the processing thread performs the processing operation on the message according to the type of the processing operation. Specifically, the processing thread first performs a pre action according to the operation type (forward or reverse message), and performs a post action after the processing; for example, for a forward message, a generic message header can be added in the pre action.
Step S418: the information obtained by the processing is cached in the message queue, and the processing thread that performed this processing is recycled.
Step S420: the information obtained by the processing is distributed to one or more modules by executing one or more dispatch threads, wherein each dispatch thread is identified by the number of its corresponding module.
In a specific implementation, according to the specific circumstances, one dispatch thread can be allocated for each processing module, multiple processing modules can share one dispatch thread, or multiple dispatch threads can serve one processing module.
Moreover, as described above, when the amount of information contained in the message queue exceeds a predetermined threshold, or the distribution processing time of one of the pieces of information exceeds a predetermined time threshold, the number of dispatch threads can be increased so as to reduce the blocking of information processing; and when there is an idle thread among the allocated dispatch threads and its idle time exceeds a predetermined time threshold, the dispatch threads can be merged or recycled.
In the embodiments of the present invention, when a message is obtained from a reader, the processing operation corresponding to the message is first determined according to the content of the message; the processing operation is then cached in a thread pool to wait for a processing thread to be allocated; when a processing thread is allocated, the processing operation is performed on the message, and the information obtained by the processing is cached in a message queue to wait for processing by a dispatch thread; and the information obtained by the processing is distributed to one or more modules by executing one or more dispatch threads. With the present invention, tag information can be processed and distributed quickly, effectively and in real time, thereby reducing the blocking of RFID tag distribution, reducing the time needed for distribution, improving the efficiency of distribution, and achieving the purpose of non-blocking, high-performance processing and distribution of tags, thus improving the overall performance of the RFID middleware. Moreover, the embodiments of the present invention can dynamically adjust the numbers of processing threads and dispatch threads according to the current processing load, and can therefore adapt to the greatest extent to environments with large data volumes and high concurrency, guaranteeing non-blocking forwarding of data.
The above are only the preferred embodiments of the present invention, and are not intended to limit the present invention; for a person skilled in the art, the present invention may have various changes and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.