CN101282303A - Method and apparatus for processing service packet - Google Patents


Publication number
CN101282303A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CNA2008101118857A
Other languages
Chinese (zh)
Other versions
CN101282303B (en)
Inventor
卢胜文
Current Assignee
New H3C Technologies Co Ltd
Original Assignee
Hangzhou H3C Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou H3C Technologies Co Ltd
Priority to CN2008101118857A
Publication of CN101282303A
Application granted
Publication of CN101282303B
Legal status: Active

Abstract

The invention discloses a service packet processing method. The method buffers service packets awaiting forwarding in a subsequent-flow queue, and dispatches the pending packets in the subsequent-flow queue and in a preset new-flow queue to processing engines according to a preset scheduling-count ratio, the scheduling-count ratio being greater than 1. When a processing engine determines that a pending packet from the subsequent-flow queue misses the session table, it buffers that packet in the new-flow queue; when it determines that a pending packet from the subsequent-flow queue hits the session table, it performs the subsequent processing operations according to the hit session entry. For pending packets from the new-flow queue, the processing engine performs the subsequent processing operations directly. An embodiment of the invention also discloses a service packet processing apparatus. The invention improves the forwarding performance of subsequent packets when a large number of new service flows arrive.

Description

Service packet processing method and apparatus
Technical field
The present invention relates to packet processing technology, and in particular to a service packet processing method and a service packet processing apparatus applied where multiple processing engines process service packets concurrently.
Background technology
During packet forwarding, performing business processing on service packets consumes substantial execution resources. To improve processing capacity, multi-core CPU technology or network processor (NP, Network Processor) technology with a multi-stage pipeline is commonly used to carry out the business processing and forwarding of service packets; both rely on concurrent processing across multiple processing engines. With multi-core CPUs, each processing engine completes all four stages of forwarding a service packet: element extraction, session-table lookup, handling of the lookup result, and packet encapsulation and transmission. An NP instead distributes these four stages across four pipeline stages, each stage containing multiple processing engines that perform the operations that stage is responsible for.
Take an NP working by pipeline as an example. Fig. 1 shows the multi-stage pipeline of an NP. As shown in Fig. 1, the forwarding of service packets by the NP is divided into four pipeline stages, which respectively complete the four phases of element extraction, session-table lookup, handling of the lookup result, and packet encapsulation and transmission. Each pipeline stage has multiple processing engines participating in the work; the NP hardware schedules the processing engines automatically, and after scheduling, packets of the same service flow are normally handled by the same processing engine. The small squares in the figure represent processing engines.
When the pipeline shown in Fig. 1 is used to forward service packets for session-based services such as firewalling, network address translation (NAT, Network Address Translation), or session statistics, the packets awaiting forwarding are first buffered in a message queue (not shown in Fig. 1) and then dispatched, in buffering order, into the NP for forwarding. The processing comprises: a processing engine in the first pipeline stage extracts the IP 5-tuple from the packet awaiting forwarding; a processing engine in the second stage performs a session-table lookup using the 5-tuple extracted by the first stage as the index; a processing engine in the third stage performs the corresponding business processing according to the lookup result; and a processing engine in the fourth stage encapsulates and sends the processed packet.
In the third pipeline stage, if the processing engine determines from the lookup result that the session table was hit, the current packet is a subsequent packet of an existing service flow; the engine performs the business processing indicated by the session entry and passes the processed packet to the fourth stage. If the engine determines that the session table was missed, the current packet is the first packet of a new service flow; a session entry for the new flow must then be created according to the configured conditions, business processing is performed according to the created entry, and the processed packet is passed to the fourth stage.
The session-entry creation operation is realized by modifying the session table. Usually, table modification is completed by a dedicated table-modification engine among the processing engines (see Fig. 1). If there is no dedicated table-modification engine, the processing engines of the third stage must take a lock so that only one engine modifies the table at any moment. When a large number of new service flows arrive, most processing engines of the first three pipeline stages are occupied by first packets waiting for a table-modification engine to become idle.
Fig. 2 shows, for the prior art, the utilization of the processing engines of each pipeline stage of an NP that has a multi-stage pipeline and table-modification engines, when a large number of new service flows arrive. In Fig. 2, small rectangles filled with vertical stripes represent processing engines occupied by first packets, small rectangles filled with diagonal stripes represent engines occupied by subsequent packets, and blank rectangles represent idle engines. As shown in Fig. 2, when a large number of new service flows arrive, every first packet reaching the third pipeline stage must have its table modification performed by a table-modification engine. Because the table-modification engines are few and table modification is slow, as soon as the number of new service flows exceeds the number of table-modification engines, the new flows occupy most processing engines of the first three stages while waiting for a table-modification engine to become free, leaving only a small fraction of engines to handle subsequent packets, so subsequent packets cannot be forwarded in time. Meanwhile, with packets stuck in the third stage and slow to reach the fourth, processing engines in the fourth stage sit idle. Ultimately the forwarding performance of subsequent packets degrades toward that of first packets, whereas under normal conditions first packets are forwarded more than ten times more slowly than subsequent packets.
It can be seen that with the current service packet forwarding scheme, when a large number of new service flows arrive, the forwarding performance of subsequent packets drops sharply, making the overall packet forwarding performance unstable.
Summary of the invention
The present invention provides a service packet processing method to improve the forwarding performance of subsequent packets when a large number of new service flows arrive.
The method is applied where multiple processing engines process service packets concurrently, and comprises:

buffering service packets awaiting forwarding in a subsequent-flow queue;

dispatching pending service packets from the subsequent-flow queue and from a preset new-flow queue to the processing engines according to a set scheduling-count ratio, the scheduling-count ratio being greater than 1;

when a processing engine determines that a pending packet from the subsequent-flow queue misses the session table, buffering that packet in the new-flow queue; when it determines that a pending packet from the subsequent-flow queue hits the session table, performing the subsequent processing operations according to the hit session entry;

for pending packets from the new-flow queue, having the processing engine perform the subsequent processing operations.
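As a hedged illustration only (the patent defines no code; the queue names, the dict-based session table, and the RATIO value are all hypothetical), the steps above can be sketched as:

```python
from collections import deque

RATIO = 20  # scheduling-count ratio (> 1): subsequent-flow vs. new-flow dispatches

def dispatch_round(subsequent_q, new_flow_q, session_table, results):
    """One scheduling round: up to RATIO packets from the subsequent-flow
    queue, then one packet from the new-flow queue."""
    for _ in range(RATIO):
        if not subsequent_q:
            break
        pkt = subsequent_q.popleft()
        if pkt["flow"] in session_table:
            # Session-table hit: the packet is a subsequent packet; forward it.
            results.append(("forwarded", pkt["flow"]))
        else:
            # Miss: treat it as a new-service-flow packet and defer it.
            new_flow_q.append(pkt)
    if new_flow_q:
        pkt = new_flow_q.popleft()
        # Create the session entry (if still absent), then forward.
        session_table.setdefault(pkt["flow"], {"policy": "default"})
        results.append(("forwarded", pkt["flow"]))
```

A deferred first packet is thus forwarded on a later new-flow-queue turn, while session-table hits from the subsequent-flow queue are never delayed behind session-entry creation.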
The scheduling-count ratio is the forwarding-performance ratio of forwarding only subsequent packets of service flows to forwarding only first packets of service flows.

The forwarding-performance ratio is the ratio of the number of subsequent packets forwarded per unit time to the number of first packets forwarded in the same unit time.
In a processor with a multi-stage pipeline, buffering a pending service packet in the new-flow queue upon determining that it misses the session table may be performed as follows:

a processing engine in the second pipeline stage, which performs the session-table lookup, buffers the packet in the new-flow queue and withdraws it from the pipeline upon determining that a pending packet from the subsequent-flow queue misses the session table;

or, a processing engine in the third pipeline stage, which processes the session-table lookup result, buffers the packet in the new-flow queue and withdraws it from the pipeline upon determining, from the lookup result of the second stage, that a pending packet from the subsequent-flow queue misses the session table.
For pending service packets in the new-flow queue, the processing engine performs the subsequent processing operations specifically by:

performing packet-element extraction and the session-table lookup on each pending packet from the new-flow queue; and, when a pending packet from the new-flow queue misses the session table, creating a session entry according to that packet and performing the subsequent processing operations according to the created entry.
The present invention also provides a service packet processing apparatus to improve the forwarding performance of subsequent packets when a large number of new service flows arrive.

The apparatus comprises:
a subsequent-flow queue unit, a new-flow queue unit, a scheduling unit, and a processing engine unit capable of processing multiple service packets concurrently; wherein,

the subsequent-flow queue unit is used to buffer service packets awaiting forwarding that arrive from outside;

the new-flow queue unit is used to buffer the packets that the processing engine unit determines to belong to new service flows;

the scheduling unit is used to dispatch pending packets from the subsequent-flow queue unit and the new-flow queue unit to the processing engine unit according to the set scheduling-count ratio;

the processing engine unit is used to determine a pending packet to be a new-service-flow packet and buffer it in the new-flow queue unit when the packet, coming from the subsequent-flow queue unit, misses the session table; to perform the subsequent processing operations according to the hit session entry when a pending packet from the subsequent-flow queue unit hits the session table; and to perform the subsequent processing operations for pending packets from the new-flow queue unit.
Preferably, the scheduling unit is further used to receive from outside the forwarding-performance ratio of the hosting device when forwarding only subsequent packets versus forwarding only first packets, and to set the received ratio as the scheduling-count ratio.
The processing engine unit comprises a first module, a second module, a third module, and a fourth module;

the first module is used to extract the IP 5-tuple from a pending service packet delivered by the scheduling unit, and to send the packet together with its 5-tuple to the second module;

the second module is used to look up the session table according to the IP 5-tuple of the received pending packet, and to send the lookup result to the third module;

the third module is used to buffer a pending packet in the new-flow queue unit when the lookup result shows that the packet, which came from the subsequent-flow queue unit, misses the session table; when the lookup result shows that a pending packet from the subsequent-flow queue unit hits the session table, it performs the processing operations according to the hit session entry and sends the processed packet to the fourth module; for pending packets from the new-flow queue unit, it performs the corresponding processing according to the lookup result and sends the processed packets to the fourth module;

the fourth module is used to encapsulate and send the service packets received from the third module.
Alternatively, the processing engine unit comprises a first module, a second module, a third module, and a fourth module;

the first module is used to extract the IP 5-tuple from a pending service packet delivered by the scheduling unit, and to send the packet together with its 5-tuple to the second module;

the second module is used to look up the session table according to the IP 5-tuple of the received pending packet; when it determines that a pending packet from the subsequent-flow queue unit misses the session table, it buffers that packet in the new-flow queue unit; when it determines that a pending packet from the subsequent-flow queue unit hits the session table, or when the pending packet came from the new-flow queue unit, it sends the packet and the lookup result to the third module;

the third module is used to perform the follow-up business processing on the received pending packets according to the lookup result, and to send the processed packets to the fourth module;

the fourth module is used to encapsulate and send the processed service packets.

The processing engine unit may have a multi-stage pipeline, in which case the first module is the processing engines of the first pipeline stage, which perform packet-element extraction; the second module is the processing engines of the second stage, which perform the session-table lookup; the third module is the processing engines of the third stage, which process the lookup result; and the fourth module is the processing engines of the fourth stage, which perform packet encapsulation and transmission.
From the above technical scheme it can be seen that embodiments of the invention dispatch pending service packets from the subsequent-flow queue and the new-flow queue to the processing engines according to a scheduling-count ratio greater than 1. This increases the probability that subsequent packets are dispatched to a processing engine and limits the probability that first packets are. When a large number of new service flows arrive, limiting the dispatch probability of first packets reduces their occupation of the processing engines, so subsequent packets obtain more processing engines and are forwarded in time. Compared with the prior art, the forwarding performance of subsequent packets under a large influx of new service flows is therefore improved.

Preferably, the scheduling-count ratio is set to the forwarding-performance ratio of forwarding only subsequent packets to forwarding only first packets. Then, while processing engine A handles a first packet from the new-flow queue, all the other processing engines are forwarding subsequent packets; by the time it is processing engine B's turn to handle another first packet from the new-flow queue, engine A has finished and is again available for subsequent packets. Thus the vast majority of processing engines are occupied by subsequent packets, and when a large number of new service flows arrive the forwarding performance of subsequent packets is not affected, which guarantees the stability of the service packet forwarding performance.
Description of drawings
Fig. 1 is a schematic diagram of the multi-stage pipeline of an NP in the prior art.
Fig. 2 shows the utilization of each pipeline stage's processing engines in the prior art when a large number of new service flows arrive.
Fig. 3 is a flow chart of the service packet processing method in an embodiment of the invention.
Fig. 4 shows the utilization of each pipeline stage's processing engines in an embodiment of the invention when a large number of new service flows arrive.
Fig. 5 is a schematic structural diagram of the service packet processing apparatus in an embodiment of the invention.
Embodiment
An embodiment of the invention provides a service packet processing method applied where multiple processing engines are available. Its basic idea is: first add a new-flow queue, and buffer the service packets awaiting forwarding in a message queue called the subsequent-flow queue; dispatch pending packets from the subsequent-flow queue and the new-flow queue to the processing engines according to a configured scheduling-count ratio greater than 1; when a processing engine determines that a pending packet from the subsequent-flow queue misses the session table, buffer that packet in the new-flow queue to await rescheduling; when it determines that a pending packet from the subsequent-flow queue hits the session table, perform the subsequent processing operations according to the hit session entry; and for pending packets from the new-flow queue, perform the subsequent processing operations.
Which processing engine a pending packet is dispatched to can be determined as in the prior art: for example, a hash operation is performed on the packet's destination and source addresses and the processing engine is selected according to the result, so that packets of the same service flow are handled by the same processing engine.
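For illustration, a minimal sketch of such address-based engine selection (the function name and the CRC-based hash are assumptions; a real NP does this in hardware, typically over the full 5-tuple):

```python
import zlib

def select_engine(src_ip: str, dst_ip: str, num_engines: int) -> int:
    """Map a packet's source/destination addresses to one engine index,
    so that every packet of the same flow lands on the same engine."""
    key = f"{src_ip}->{dst_ip}".encode()
    return zlib.crc32(key) % num_engines
```

Because the mapping is a pure function of the addresses, no per-flow state is needed to keep a flow pinned to one engine.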
It can be seen that this embodiment dispatches pending packets from the subsequent-flow queue and the new-flow queue to the processing engines according to a scheduling-count ratio greater than 1, which increases the probability that subsequent packets are dispatched to a processing engine and limits the probability that first packets are. When a large number of new service flows arrive, limiting the dispatch probability of first packets reduces their occupation of the processing engines, so subsequent packets obtain more processing engines and are forwarded in time. Compared with the prior art, the forwarding performance of subsequent packets under a large influx of new service flows is therefore improved.
Preferably, the scheduling-count ratio is set to the forwarding-performance ratio of the forwarding device when forwarding only subsequent packets versus forwarding only first packets. The forwarding-performance ratio may specifically be the ratio of the number of subsequent packets forwarded per unit time to the number of first packets forwarded in the same time. For example, if the forwarding-performance ratio is 10:1, then for every 10 pending packets processed from the subsequent-flow queue, 1 pending packet is processed from the new-flow queue. Then, while processing engine A handles a first packet from the new-flow queue, all the other processing engines are forwarding subsequent packets; by the time it is processing engine B's turn to handle another first packet from the new-flow queue, engine A has finished and is again available for subsequent packets. With the scheduling-count ratio set to 10:1, at most about one-tenth of the processing engines are occupied by first packets at any time, so when a large number of new service flows arrive the forwarding performance of subsequent packets is not affected, which guarantees the stability of the service packet forwarding performance.
The service packet processing scheme of the embodiments applies wherever multiple processing engines process service packets concurrently, for example on multi-core CPUs and multi-pipeline NPs. The table modification performed when a first packet undergoes business processing may be completed by processing engines dedicated to table modification, i.e. table-modification engines, or handled as a side task by the processing engines responsible for processing the session-table lookup result.

The invention is described below through an embodiment, with reference to the accompanying drawings, taking service packet processing by an NP with a multi-stage pipeline as an example.
Fig. 3 shows the flow chart of the service packet processing method in this embodiment. As shown in Fig. 3, the method comprises the following steps:
Step 300: set up the subsequent-flow queue used to buffer service packets awaiting forwarding, and the new-flow queue used to buffer packets determined to belong to new service flows.

Step 301: buffer the service packets awaiting forwarding that arrive from outside in the subsequent-flow queue. These packets include both first packets and subsequent packets.
Step 302: according to the scheduling-count ratio of the subsequent-flow queue to the new-flow queue, determine which queue should currently be scheduled. If it is the subsequent-flow queue, execute step 303; if it is the new-flow queue, execute step 306.

Here the scheduling-count ratio is the forwarding-performance ratio of subsequent packets to first packets, taken as 20:1. Of course, this ratio can be adjusted as needed, for example by experimenting in advance to determine the most suitable value.
The operation of determining the currently scheduled queue from the scheduling-count ratio of the subsequent-flow queue and the new-flow queue can be performed in several ways, one of which is:

according to the scheduling-count ratio, set a scheduling-count upper limit for the subsequent-flow queue and for the new-flow queue respectively; each time the subsequent-flow queue is scheduled, add one to its scheduling count; when its count reaches its upper limit, clear the scheduling count of the new-flow queue and schedule the new-flow queue;

each time the new-flow queue is scheduled, add one to its scheduling count; when its count reaches its upper limit, clear the scheduling count of the subsequent-flow queue and schedule the subsequent-flow queue.

Therefore, in this step it suffices to check which of the two current scheduling counts has not yet reached its upper limit; the queue whose count has not reached its limit is the one to schedule.
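The counter scheme described above can be sketched as follows (class and attribute names are hypothetical; the upper limits follow the scheduling-count ratio, e.g. 20 and 1):

```python
class RatioScheduler:
    """Pick the next queue to schedule from per-queue scheduling counts:
    the subsequent-flow queue is scheduled up to `ratio` times for each
    scheduling of the new-flow queue."""

    def __init__(self, ratio: int = 20):
        self.limits = {"subsequent": ratio, "new_flow": 1}
        self.counts = {"subsequent": 0, "new_flow": 0}

    def next_queue(self) -> str:
        # Schedule whichever queue has not yet reached its upper limit,
        # preferring the subsequent-flow queue.
        if self.counts["subsequent"] < self.limits["subsequent"]:
            queue = "subsequent"
        else:
            queue = "new_flow"
        self.counts[queue] += 1
        # After the new-flow queue has had its turn, clear both counts
        # so the next cycle begins with the subsequent-flow queue.
        if self.counts["new_flow"] >= self.limits["new_flow"]:
            self.counts = {"subsequent": 0, "new_flow": 0}
        return queue
```

With a ratio of 20 this yields twenty subsequent-flow schedulings, then one new-flow scheduling, repeating.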
In practice, the subsequent-flow queue may be a group of queues of different priorities, each queue corresponding to one port that receives service packets. When the subsequent-flow queue is determined to be the one to schedule, a pending packet can be taken from one of the subsequent-flow queues according to priority.
Step 303: dispatch a pending service packet from the subsequent-flow queue to a processing engine; the engine queries the session table with the packet's IP 5-tuple. If the session table is hit, execute step 304; otherwise execute step 305.

In this step, the pending packet enters the first and second pipeline stages of the NP, whose processing engines complete the IP 5-tuple extraction and the session-table lookup.
Step 304: determine that the pending packet is a subsequent packet, perform the subsequent processing on it directly, and return to step 302.

In this step, a processing engine of the third pipeline stage of the NP determines from the lookup result that the current packet is a subsequent packet and performs business processing on it directly according to the hit session entry; a processing engine of the fourth stage then encapsulates and forwards the processed packet.
Step 305: determine that the pending packet is a new-service-flow packet. Instead of immediately creating the session entry for the corresponding new service flow, put the packet into the new-flow queue to await scheduling, withdraw it from the current processing engine and pipeline, and return to step 302.

In this step, either the second pipeline stage of the NP, after determining that the pending packet misses the session table, puts the packet into the new-flow queue and withdraws it from that stage; or the second stage sends the lookup result to the third stage, which, after determining from that result that the packet misses the session table, puts the packet into the new-flow queue and withdraws it from that stage.

A packet determined to be a new-service-flow packet may be the first packet of a new service flow; but while the first packet of a new flow is still waiting in the new-flow queue, a packet determined in this step to be a new-service-flow packet may also be a subsequent packet of that new flow.
Step 306: dispatch a pending service packet from the new-flow queue to a processing engine; the engine queries the session table with the packet's IP 5-tuple, performs the existing subsequent processing according to the query result, and then returns to step 302.

In this step, the pending packet from the new-flow queue passes through each pipeline stage of the NP in turn: a processing engine of the first stage extracts the packet's IP 5-tuple; an engine of the second stage queries the session table with the 5-tuple; an engine of the third stage performs the prior-art business processing according to the lookup result; and an engine of the fourth stage encapsulates and forwards the processed packet. The operation of the third-stage engine is specifically: if the lookup result shows a session-table miss, create the session entry for the new service flow and then perform business processing according to the created entry; if the lookup result shows a hit, perform business processing according to the hit session entry.
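The third-stage behaviour of step 306 — create the session entry on a miss, then process — can be sketched as follows (the 5-tuple keys and the entry fields are hypothetical):

```python
def process_new_flow_packet(pkt: dict, session_table: dict) -> dict:
    """Handle one packet dispatched from the new-flow queue: look up the
    session table by IP 5-tuple, create the entry on a miss, then hand the
    packet and its session entry on for encapsulation and sending."""
    five_tuple = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
    entry = session_table.get(five_tuple)
    if entry is None:
        # Miss: this is the first packet of a new service flow.
        entry = {"flow": five_tuple, "policy": "default"}
        session_table[five_tuple] = entry
    return {"packet": pkt, "session": entry}
```

A later packet of the same flow (for example one that was queued behind the first packet in the new-flow queue) hits the entry just created and is processed without another creation.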
This completes the flow.
As can be seen from the flow shown in Fig. 3, the key point of this embodiment is that when a service packet is determined to be a new-service-flow packet, the new-flow processing is not performed as in the prior art; instead the packet is put into the new-flow queue to await rescheduling and is withdrawn from the engine or pipeline. Rescheduling through the queues guarantees that subsequent packets are dispatched to the engines preferentially and forwarded quickly. Because the scheduling ratio is determined with reference to the forwarding-performance ratio of subsequent packets to first packets, the business processing of first packets is completed during the forwarding of subsequent packets, so the forwarding performance of subsequent packets is greatly improved while that of first packets declines little. When a large number of new service flows arrive, the forwarding ratio of subsequent packets to first packets does not change abruptly, so forwarding performance is not affected and its stability is guaranteed. This stable forwarding performance also resists attacks: even a large number of new service flows cannot significantly affect packet forwarding. Fig. 4 shows the utilization of each pipeline stage's processing engines in this embodiment when a large number of new service flows arrive. As shown in Fig. 4, with the service packet processing method of this embodiment most processing engines are used to handle subsequent packets, and no processing engine in the fourth pipeline stage is idle.
The present invention also provides a service packet processing apparatus. Fig. 5 is a schematic structural diagram of the service packet processing apparatus in an embodiment of the invention. As shown in Fig. 5, the apparatus comprises a new-flow queue unit 51, a subsequent-flow queue unit 52, a scheduling unit 53 and a processing engine unit 54 capable of processing a plurality of pending service packets concurrently, wherein:
the subsequent-flow queue unit 52 is configured to buffer service packets to be forwarded that arrive from outside;
the new-flow queue unit 51 is configured to buffer the new-service-flow packets identified by the processing engine unit 54;
the scheduling unit 53 is configured to dispatch the pending service packets in the subsequent-flow queue unit 52 and the new-flow queue unit 51 to the processing engine unit 54 according to the configured scheduling-times ratio. The scheduling unit 53 also receives, from outside the device in which it resides, the forwarding performance ratio of business-flow subsequent packets to business-flow first packets, and sets this forwarding performance ratio as the scheduling-times ratio;
the processing engine unit 54 is configured to: when judging that a pending service packet from the subsequent-flow queue unit 52 misses the session table, identify the packet as a new-service-flow packet and buffer it in the new-flow queue unit 51; when judging that a pending service packet from the subsequent-flow queue unit 52 hits the session table, perform the subsequent processing operations according to the matched session entry; and for a pending service packet from the new-flow queue unit 51, perform the subsequent processing operations according to the session-table lookup result.
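As a minimal illustration of how a scheduling unit such as unit 53 could derive the scheduling-times ratio from measured forwarding performance (the per-unit-time definition used in claim 3): the function name and the packet rates are hypothetical, introduced only for this sketch.

```python
def scheduling_times_ratio(subsequent_pkts_per_sec, first_pkts_per_sec):
    """Ratio of subsequent packets forwarded per unit time to first packets
    forwarded in the same unit time; the method requires it to exceed 1."""
    ratio = subsequent_pkts_per_sec / first_pkts_per_sec
    if ratio <= 1:
        raise ValueError("scheduling-times ratio must be greater than 1")
    return ratio
```

For example, an engine that forwards subsequent packets four times as fast as it processes first packets would yield a 4:1 scheduling preference.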
The processing engine unit 54 comprises a first module 541, a second module 542, a third module 543 and a fourth module 544, wherein:
the first module 541 is configured to extract the IP five-tuple of the pending service packet dispatched by the scheduling unit 53, and send the pending service packet and its IP five-tuple to the second module 542;
the second module 542 is configured to look up the session table according to the IP five-tuple of the received pending service packet, and send the lookup result to the third module 543;
the third module 543 is configured to perform service processing according to the lookup result: when the lookup result shows that a pending service packet from the subsequent-flow queue unit 52 misses the session table, buffer the packet in the new-flow queue unit 51; when the lookup result shows that a pending service packet from the subsequent-flow queue unit 52 hits the session table, perform the processing operations according to the matched session entry and send the processed packet to the fourth module 544; and for a pending service packet from the new-flow queue unit 51, perform the existing service processing according to the lookup result and send the packet after service processing to the fourth module 544;
the fourth module 544 is configured to encapsulate and send the service packets from the third module 543.
Alternatively, the second module 542 may also be configured to perform the following operations after looking up the session table: when judging that a pending service packet from the subsequent-flow queue unit 52 misses the session table, buffer the packet in the new-flow queue unit 51; when judging that a pending service packet from the subsequent-flow queue unit 52 hits the session table, or when the pending service packet comes from the new-flow queue unit 51, send the lookup result to the third module 543. The third module 543 can then simply process the service packets from the second module 542 according to the processing operations of the existing third-stage pipeline.
The processing engine unit 54 is a set of processing engines. When the service packet processing apparatus of the invention is a multi-stage pipelined NP, the processing engine unit 54 corresponds to the pipeline stages of the NP: the first module 541 is the processing engine of the first-stage pipeline, the second module 542 is the processing engine of the second-stage pipeline, the third module 543 is the processing engine of the third-stage pipeline, and the fourth module 544 is the processing engine of the fourth-stage pipeline. When the service packet processing apparatus of the invention is a multi-core CPU, the processing engine unit 54 comprises each processing engine in the multi-core CPU, and each processing engine comprises all four of the above modules.
In summary, the above are merely preferred embodiments of the present invention and are not intended to limit its protection scope. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A service packet processing method, applied to concurrent processing of service packets by multiple processing engines, characterized in that the method comprises:
buffering service packets to be forwarded in a subsequent-flow queue;
dispatching the pending service packets in the subsequent-flow queue and a preset new-flow queue to the processing engines according to a set scheduling-times ratio, the scheduling-times ratio being greater than 1;
the processing engine, when judging that a pending service packet from the subsequent-flow queue misses the session table, buffering the packet in the new-flow queue, and when judging that a pending service packet from the subsequent-flow queue hits the session table, performing the subsequent processing operations according to the matched session entry; and
the processing engine performing the subsequent processing operations on the pending service packets in the new-flow queue.
2. The method of claim 1, characterized in that the scheduling-times ratio is the ratio of the forwarding performance of forwarding business-flow subsequent packets alone to that of forwarding business-flow first packets alone.
3. The method of claim 2, characterized in that the forwarding performance ratio is the ratio of the number of subsequent packets forwarded per unit time to the number of first packets forwarded in the same unit time.
4. The method of claim 1, characterized in that, in a processor with a multi-stage pipeline, said buffering a pending service packet in the new-flow queue when judging that the packet from the subsequent-flow queue misses the session table is:
the processing engine in the second-stage pipeline, which performs the session-table lookup operation, buffering a pending service packet from the subsequent-flow queue in the new-flow queue and withdrawing it from the pipeline when judging that the packet misses the session table;
or, the processing engine in the third-stage pipeline, which processes the session-table lookup result, buffering a pending service packet from the subsequent-flow queue in the new-flow queue and withdrawing it from the pipeline when judging, according to the lookup result of the second-stage pipeline, that the packet misses the session table.
5. The method of any one of claims 1 to 4, characterized in that the processing engine performing the subsequent processing operations on the pending service packets in the new-flow queue is specifically:
performing packet-element extraction and the session-table lookup operation on a pending service packet from the new-flow queue; and when the pending service packet from the new-flow queue misses the session table, creating a session entry according to the packet and performing the subsequent processing operations according to the created session entry.
6. A service packet processing apparatus, characterized in that the apparatus comprises a subsequent-flow queue unit, a new-flow queue unit, a scheduling unit, and a processing engine unit capable of processing a plurality of service packets concurrently; wherein,
the subsequent-flow queue unit is configured to buffer service packets to be forwarded from outside;
the new-flow queue unit is configured to buffer the new-service-flow packets identified by the processing engine unit;
the scheduling unit is configured to dispatch the pending service packets in the subsequent-flow queue unit and the new-flow queue unit to the processing engine unit according to a set scheduling-times ratio;
the processing engine unit is configured to: when judging that a pending service packet from the subsequent-flow queue unit misses the session table, identify the packet as a new-service-flow packet and buffer it in the new-flow queue unit; when judging that a pending service packet from the subsequent-flow queue unit hits the session table, perform the subsequent processing operations according to the matched session entry; and perform the subsequent processing operations on pending service packets from the new-flow queue unit.
7. The apparatus of claim 6, characterized in that the scheduling unit is further configured to receive, from outside the device in which it resides, the forwarding performance ratio of forwarding business-flow subsequent packets alone to forwarding business-flow first packets alone, and to set the received forwarding performance ratio as the scheduling-times ratio.
8. The apparatus of claim 7, characterized in that the processing engine unit comprises a first module, a second module, a third module and a fourth module;
the first module is configured to extract the IP five-tuple of a pending service packet from the scheduling unit, and send the pending service packet and its IP five-tuple to the second module;
the second module is configured to look up the session table according to the IP five-tuple of the received pending service packet, and send the lookup result to the third module;
the third module is configured to: when the lookup result shows that a pending service packet from the subsequent-flow queue unit misses the session table, buffer the packet in the new-flow queue unit; when the lookup result shows that a pending service packet from the subsequent-flow queue unit hits the session table, perform the processing operations according to the matched session entry and send the processed packet to the fourth module; and perform the corresponding processing operations on pending service packets from the new-flow queue unit according to the lookup result, sending the processed packets to the fourth module;
the fourth module is configured to encapsulate and send the service packets from the third module.
9. The apparatus of claim 7, characterized in that the processing engine unit comprises a first module, a second module, a third module and a fourth module;
the first module is configured to extract the IP five-tuple of a pending service packet from the scheduling unit, and send the pending service packet and its IP five-tuple to the second module;
the second module is configured to look up the session table according to the IP five-tuple of the received pending service packet; when judging that a pending service packet from the subsequent-flow queue unit misses the session table, buffer the packet in the new-flow queue unit; and when judging that a pending service packet from the subsequent-flow queue unit hits the session table, or when the pending service packet comes from the new-flow queue unit, send the pending service packet and the lookup result to the third module;
the third module is configured to perform the follow-up service processing on the received pending service packet according to the lookup result, and send the service packet after service processing to the fourth module;
the fourth module is configured to encapsulate and send the service packet after service processing.
10. The apparatus of any one of claims 6 to 9, characterized in that the processing engine unit has a multi-stage pipeline: the first module is the processing engine of the first-stage pipeline, which performs packet-element extraction; the second module is the processing engine of the second-stage pipeline, which performs the session-table lookup; the third module is the processing engine of the third-stage pipeline, which processes the session-table lookup result; and the fourth module is the processing engine of the fourth-stage pipeline, which performs packet encapsulation and sending.
CN2008101118857A 2008-05-19 2008-05-19 Method and apparatus for processing service packet Active CN101282303B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101118857A CN101282303B (en) 2008-05-19 2008-05-19 Method and apparatus for processing service packet


Publications (2)

Publication Number Publication Date
CN101282303A true CN101282303A (en) 2008-10-08
CN101282303B CN101282303B (en) 2010-09-22

Family

ID=40014586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101118857A Active CN101282303B (en) 2008-05-19 2008-05-19 Method and apparatus for processing service packet

Country Status (1)

Country Link
CN (1) CN101282303B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101447943B (en) * 2008-12-26 2011-05-11 杭州华三通信技术有限公司 Queue scheduling system and method
CN102316022A (en) * 2011-07-05 2012-01-11 杭州华三通信技术有限公司 Protocol message forwarding method and communication equipment
CN102025607B (en) * 2009-09-19 2013-04-17 华为技术有限公司 Data processing method, network processor and network equipment
CN103166845A (en) * 2013-03-01 2013-06-19 华为技术有限公司 Data processing method and device
CN103179109A (en) * 2013-02-04 2013-06-26 上海恒为信息科技有限公司 Secondary session query function based filtering and distribution device and method thereof
CN105760402A (en) * 2014-12-16 2016-07-13 中兴通讯股份有限公司 End-to-end service performance query method and end-to-end service performance query device
WO2016206520A1 (en) * 2015-06-26 2016-12-29 中兴通讯股份有限公司 Method and apparatus for implementing flow table traversal service
CN106648929A (en) * 2016-12-02 2017-05-10 武汉斗鱼网络科技有限公司 Switch system and switch mode implementation method
CN109257280A (en) * 2017-07-14 2019-01-22 深圳市中兴微电子技术有限公司 A kind of micro engine and its method for handling message

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1677952A (en) * 2004-03-30 2005-10-05 武汉烽火网络有限责任公司 Method and apparatus for wire speed parallel forwarding of packets
US7941585B2 (en) * 2004-09-10 2011-05-10 Cavium Networks, Inc. Local scratchpad and data caching system

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101447943B (en) * 2008-12-26 2011-05-11 杭州华三通信技术有限公司 Queue scheduling system and method
CN102025607B (en) * 2009-09-19 2013-04-17 华为技术有限公司 Data processing method, network processor and network equipment
CN102316022A (en) * 2011-07-05 2012-01-11 杭州华三通信技术有限公司 Protocol message forwarding method and communication equipment
CN103179109B (en) * 2013-02-04 2016-12-28 恒为科技(上海)股份有限公司 Filter bypass devices and methods therefors based on two grades of session query functions
CN103179109A (en) * 2013-02-04 2013-06-26 上海恒为信息科技有限公司 Secondary session query function based filtering and distribution device and method thereof
CN103166845A (en) * 2013-03-01 2013-06-19 华为技术有限公司 Data processing method and device
CN105760402A (en) * 2014-12-16 2016-07-13 中兴通讯股份有限公司 End-to-end service performance query method and end-to-end service performance query device
WO2016206520A1 (en) * 2015-06-26 2016-12-29 中兴通讯股份有限公司 Method and apparatus for implementing flow table traversal service
CN106330694A (en) * 2015-06-26 2017-01-11 中兴通讯股份有限公司 Method and device for realizing flow table traversal business
CN106648929A (en) * 2016-12-02 2017-05-10 武汉斗鱼网络科技有限公司 Switch system and switch mode implementation method
CN106648929B (en) * 2016-12-02 2019-06-04 武汉斗鱼网络科技有限公司 A kind of switching system and switching mode implementation method
CN109257280A (en) * 2017-07-14 2019-01-22 深圳市中兴微电子技术有限公司 A kind of micro engine and its method for handling message
CN109257280B (en) * 2017-07-14 2022-05-27 深圳市中兴微电子技术有限公司 Micro-engine and message processing method thereof

Also Published As

Publication number Publication date
CN101282303B (en) 2010-09-22

Similar Documents

Publication Publication Date Title
CN101282303B (en) Method and apparatus for processing service packet
US6779084B2 (en) Enqueue operations for multi-buffer packets
CN101069170B (en) Network service processor and method for processing data packet
US9385957B1 (en) Flow key lookup involving multiple simultaneous cam operations to identify hash values in a hash bucket
CN106130985B (en) A kind of message processing method and device
CN104641616B (en) The low delay networked devices predicted using header
CN1846409B (en) Apparatus and method for carrying out ultraspeed buffer search based on transmission control protocol traffic flow characteristic
CN101267437B (en) Packet access control method and system for network devices
CN103401783A (en) Method and device for realizing Openflow multistage flow table
CN102299843B (en) Network data processing method based on graphic processing unit (GPU) and buffer area, and system thereof
CN101729402A (en) Flow consistent dynamic load balancing
CN1396748A (en) Block processing device
CN101567852B (en) Method and device for switching the network address of IP message
CN102882810A (en) Rapid message transmitting method and device
CN101573927A (en) Path MTU discovery in network system
US20030093566A1 (en) System and method for network and application transparent database acceleration
CN102480430A (en) Method and device for realizing message order preservation
US20140115263A1 (en) CHILD STATE PRE-FETCH IN NFAs
CN1863158B (en) IP message fragment cache memory and forwarding method
CN1781293A (en) System and method for modifying data transferred from a source to a destination
CN101217486B (en) A mobile Internet data load allocation method based on network processor
US20190124184A1 (en) Data Processing Method and Apparatus
CN104618152A (en) Session table aging method and system
CN110046286A (en) Method and apparatus for search engine caching
CN102761608A (en) UDP (User Datagram Protocol) conversation multiplexing method and load balancing equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP03 Change of name, title or address

Address after: No. 466 Changhe Road, Binjiang District, Zhejiang, China, 310052

Patentee after: New H3C Technologies Co., Ltd.

Address before: 310053 HUAWEI-3Com Hangzhou production base, No. 310, Liuhe Road, Science and Technology Industrial Park, Hangzhou High-tech Industrial Development Zone, Zhejiang

Patentee before: Hangzhou H3C Technologies Co., Ltd.

CP03 Change of name, title or address