CN100570551C - Method for reducing buffer capacity in pipeline processor - Google Patents

Method for reducing buffer capacity in a pipeline processor

Info

Publication number
CN100570551C
CN100570551C · CNB2005800445729A · CN200580044572A
Authority
CN
China
Prior art keywords
packet
cost information
interface
pipeline
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2005800445729A
Other languages
Chinese (zh)
Other versions
CN101088065A (en)
Inventor
汤玛斯·柏顿
贾克柏·卡斯崔姆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
A Strategic Position Lelateniuke Co
Original Assignee
Xelerated AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xelerated AB filed Critical Xelerated AB
Publication of CN101088065A publication Critical patent/CN101088065A/en
Application granted granted Critical
Publication of CN100570551C publication Critical patent/CN100570551C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method for a processor (1), and a processor, comprising a processing pipeline (2) and at least one interface (3) for data packets. The method is characterized in that a second data packet (D2) is admitted into the pipeline (2) in dependence on cost information (c1) determined by an expected residence time of a first data packet (D1) in at least a part (P1, ..., PK) of the pipeline (2). The first data packet (D1) may be the same as the second data packet, but preferably the first data packet (D1) enters the pipeline (2) before the second data packet (D2).

Description

Method for reducing buffer capacity in a pipeline processor
Technical field
The present invention relates to a method for a processor comprising a processing pipeline and at least one interface for data packets.
Background art
In a pipelined network processor, data packets are processed in processing units, or processing elements, by passing the packets through these units in sequence. In an asynchronous pipeline, admission to one processing element is independent of admission to the other processing elements. In known pipeline processors, input traffic is admitted to the sequence of processing elements either so that a constant bit rate is achieved, or as fast as possible with no controlled admission limit, the limit instead being given by the processing capacity. Where packets have different processing requirements or widely differing sizes, a relatively large buffer capacity is needed between the processing elements, since a packet may have to queue for a long time while waiting for processing to finish in a subsequent processing element. In other words, since the maximum packet rate is given by the pipeline bandwidth [bits/s] and the minimum packet size, different packets may be waiting in different processing-element FIFOs, so that the sum of the PE FIFO sizes becomes large.
Summary of the invention
The object of the present invention is to reduce the buffer capacity required in a pipeline processor.
This object is achieved by a method for a processor comprising a processing pipeline and at least one interface for data packets, the method comprising the step of admitting a second data packet into the pipeline in dependence on cost information determined by the expected residence time of a first data packet in at least a part of the pipeline.
The invention is particularly suited for use in combination with network processors. The cost information can depend on the longest time the first data packet stays in any processing element of the processing pipeline. The cost can be defined in other ways, but preferably as follows: assume that a data packet D1 enters an empty processing pipeline at time TM_ENTRY_1 and exits the pipeline at time TM_EXIT_1. A subsequent data packet D2 enters the pipeline at time TM_ENTRY_2 and exits it at time TM_EXIT_2. The cost C1 of packet D1 is based on the minimum time difference C1 = TM_ENTRY_2 - TM_ENTRY_1 that will prevent packet D2 from having to wait for any busy processing element in the pipeline, or on an approximation greater than or equal to this time difference. Thus, the cost C1 is based on the longest time for which packet D1 occupies any element of the processing pipeline so that it cannot accept a new packet; see the further discussion below.
The invention avoids situations where packets queue for a long time during processing. Since the invention reduces the risk of packets waiting in any part of the pipeline, it provides a reduced pipeline storage capacity compared with pipelines according to the known art.
In one embodiment, the first data packet is identical to the second data packet. In other words, the admission of a packet depends on the cost of that packet itself. A so-called strict token bucket algorithm can then be used to admit packets into the pipeline, whereby the bucket level is periodically incremented by a fixed credit amount, and when the credit is at least as large as the cost of the next packet in turn, that packet is admitted into the pipeline, whereupon the token bucket level is decreased by an amount corresponding to the cost of said packet.
In a preferred embodiment, however, the first and second data packets are not the same, and the first packet enters the pipeline before the second. Admission to the pipeline can then be determined by a so-called loose token bucket algorithm, whereby the bucket level is periodically incremented by a fixed credit amount. Preferably, whether a strict or a loose token bucket algorithm is used, each time the credit level of the token bucket is incremented, it is incremented by a predetermined amount, for example one bit. When the credit reaches a predetermined value, for example when the credit is zero or positive, the next packet in turn, here the first packet, is admitted into the pipeline, whereupon the token bucket level is decreased by an amount corresponding to the cost of the first packet. Since the credit value after admission of the first packet depends on the latter's cost, and the next packet, here the second packet, is not admitted until the credit again reaches the predetermined value, the admission of the second packet depends on the cost of the first packet. This is an advantage, since a direct relation between the admission of a packet and the cost of the preceding packet further reduces the risk of any packet waiting for a processing element to finish processing a preceding packet.
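The loose token-bucket admission described above can be sketched as follows. This is an illustrative sketch only; the class and function names are assumptions, not taken from the patent.

```python
from collections import deque

class LooseTokenBucket:
    """Admit the next packet whenever the credit is non-negative; then
    charge the packet's cost, possibly driving the credit negative."""

    def __init__(self, increment=1):
        self.credit = 0
        self.increment = increment  # fixed credit added per tick (e.g. per clock cycle)

    def tick(self):
        self.credit += self.increment

    def try_admit(self, queue):
        # The only admission condition is a non-negative credit level;
        # the next packet's cost is not inspected before admission.
        if queue and self.credit >= 0:
            cost = queue.popleft()   # cost of the admitted packet
            self.credit -= cost      # bucket decreased by that cost
            return cost
        return None

# A packet with cost 5 is admitted at credit 0; the following packet
# (cost 1) must then wait 5 ticks until the credit is back at zero.
bucket = LooseTokenBucket()
packets = deque([5, 1])
admitted_at = []
for t in range(10):
    if bucket.try_admit(packets) is not None:
        admitted_at.append(t)
    bucket.tick()
print(admitted_at)  # → [0, 5]
```

The key property shown here is that the admission of the second packet depends on the cost charged for the first: a costly first packet delays the second in proportion to its cost.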
Preferably, whether a loose or a strict token bucket algorithm is used, the cost information is determined as C = N*T/D, where C is the cost of a packet, N is the number of packet input ports, or interfaces, of the processor pipeline, T is the maximum blocking time of any processing element in the pipeline caused by the processing of the packet, and D is the period between two consecutive increments of the credit level of the token bucket of each interface.
Thus, as an example, if there is only one input port for packets entering the pipeline, i.e. N=1, and T=1 and D=1, then C=1. If there is more than one input port, the pipeline is preferably shared using round-robin scheduling. Thus, as an example, if there are two input ports for packets entering the pipeline, i.e. N=2, and T=1 and D=1, then C=2.
Alternatively, when more than one input port is provided to the processor, the processing capacity can be shared asymmetrically between the input ports. Each input port K = 1, 2, 3, ..., N can then have an associated weight M_K, such that M_1 + M_2 + ... + M_N = N, and the cost of a packet received on input port K is C_K = M_K*N*T/D.
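The cost formulas above can be condensed into one helper; the function name and its keyword defaults are assumptions made for this sketch. With the default weight of 1 it reduces to the symmetric case C = N*T/D.

```python
def packet_cost(N, T, D, weight=1.0):
    """Cost of a packet received on one input port.

    N      -- number of input ports (interfaces) of the pipeline
    T      -- maximum blocking time of any processing element caused
              by processing the packet
    D      -- period between two consecutive credit increments of the
              token bucket of each interface
    weight -- per-port weight M_K (weights sum to N); 1 for symmetric sharing
    """
    return weight * N * T / D

# Symmetric cases from the text: one port gives C = 1, two ports give C = 2.
print(packet_cost(N=1, T=1, D=1))              # → 1.0
print(packet_cost(N=2, T=1, D=1))              # → 2.0
# Asymmetric case: two ports with weights 1.5 and 0.5 (summing to N = 2).
print(packet_cost(N=2, T=1, D=1, weight=1.5))  # → 3.0
```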
As a further alternative, the cost information is determined as C ≥ N*T/D, i.e. the cost is set to at least N*T/D. This is useful when the user does not want, or is not able, to utilize the full capacity of the pipeline.
According to one embodiment, the cost information depending on the expected residence time of the second data packet in at least a part of the pipeline differs from said cost information depending on the expected residence time of the first data packet in at least a part of the pipeline. In other words, the cost information is "packet specific". As a result, a high accuracy of the packet admission control can be achieved, since cost information is available for each individual packet. Preferably, the cost information is then stored in the header of each data packet. Alternatively, other ways of storing the cost information are possible, for example in the tail of each packet, or in a separate control channel parallel to the packet input buffer.
Alternatively, or additionally, the cost information depends on an assumption that the costs of a plurality of packets correspond to predetermined "interface-specific" information. Where packets are known in advance to have similar costs, for example packets from one or more interfaces, such a general assumption makes it unnecessary to read cost information for each individual packet, which simplifies the packet admission process. More specifically, at an interface, a plurality of packets, or all packets, are assumed to have an identical cost corresponding to predetermined information, so that interface-specific cost information is formed. Preferably, the interface-specific information is based on an estimate of the maximum cost of a plurality of packets. When the processor comprises at least two interfaces for data packets, the cost information for packets at at least one interface can differ from the cost information for packets at at least one other interface, so that costly packets can be allocated to a special interface with a special interface-specific cost assumption, and low-cost packets can be allocated to another interface with another interface-specific cost assumption.
According to an alternative embodiment, the pipeline processor comprises at least two interfaces for data packets, whereby the cost information for data packets at a first interface is interface specific and differs from the interface-specific cost information for data packets at a second interface.
As an alternative, the method comprises the steps of inspecting a plurality of packets, determining packet costs, and, for packets with a cost exceeding a predetermined value, storing the cost information as packet-specific cost information.
Preferably, the step of admitting the second data packet into the pipeline is performed by using a token bucket algorithm at a first interface, whereby the credit of the token bucket can be adjusted based on said cost information, and whereby overflow credits from the token bucket of the first interface are transferred to an overflow token bucket used for packet admission at a second interface.
The overflow bucket makes it possible to admit a more costly packet into the pipeline via the second interface. Thus, ordinary traffic packets, here also referred to as forwarding-plane packets, can be introduced through the first interface, and costly packets through the second interface. Preferably, the FIFO buffer sizes are designed for the queues that build up in the pipeline after one such costly packet. Alternatively, the FIFO buffer sizes are designed for the queues that build up after more than one such costly packet. Preferably, the overflow bucket must be refilled before a new costly packet can be sent into the pipeline. This guarantees that the buffers that built up in the pipeline have returned to normal levels before the next costly packet arrives and makes the queues grow again. An example of such costly packets are packets used for control and management, here also referred to as control and management packets, sent to the processor by an external CPU.
Preferably, at the second interface, both packets whose cost information corresponds to an interface-specific predetermined value, as described above, and packets with individual, packet-specific cost information, as described above, are admitted.
The object of the invention is also achieved by a processor, and by a router or computer unit.
A processor (1) comprises a processing pipeline (2) and at least one interface (3) for data packets, and is characterized in that the processor comprises a shaper (5) adapted to admit a second data packet (D2) into the pipeline (2) in dependence on cost information (c1) determined by the expected residence time of a first data packet (D1) in at least a part (P1, ..., PK) of the pipeline (2).
A router or computer unit comprises a processor as described above.
Brief description of the drawings
In the following detailed description, further advantages of the invention are described with reference to the accompanying drawings, in which:
Fig. 1 is a block diagram of a processor according to an embodiment of the invention,
Figs. 1a and 1b are block diagrams of parts of the processor of Fig. 1,
Fig. 2 is a block diagram of a part of a processor according to a further embodiment of the invention,
Fig. 3 is a block diagram of a processor according to another embodiment of the invention, and
Fig. 4 is a block diagram of a part of a processor according to another embodiment of the invention.
Detailed description of embodiments
Fig. 1 shows a first embodiment of the invention. A network processor 1 comprises an asynchronous processing pipeline 2, which in turn comprises a plurality of processing elements P1, P2, ..., PK. Any of the processing elements P1, P2, ..., PK can be an access point for access to a processing device or engine, as described in WO 2004/010288, incorporated herein by reference. In the figures, data traffic moves from left to right. In the direction of the data traffic, before each processing element P1, P2, ..., PK, a processing element buffer B1, B2, ..., BK is provided, in the form of a FIFO buffer. In each of the buffers B1, B2, ..., BK, a data packet can be stored during the processing of a preceding packet in the subsequent processing element P1, P2, ..., PK, the next packet being admitted into the subsequent processing element P1, P2, ..., PK once said processing is finished.
Data packets D1, D2, D3 enter the processor through an interface comprising an input port 3, and are stored in an input buffer 4. Each data packet D1, D2, D3 comprises a header with information c1, c2, c3 about the cost of the respective packet. (The header can also comprise information about the size of the packet.) The cost information c1, c2, c3 depends on information about the longest time for which the respective data packet D1, D2, D3 occupies any of the processing elements P1, P2, ..., PK in the processing pipeline 2 so that it cannot accept a new data packet.
Preferably, the cost information c1, c2, c3 is determined in the form described in the "Summary of the invention" section above, for example c1 = N*T/D, where, in this example, N (the number of input ports) = 1, T is the maximum blocking time of any processing element P1, P2, ..., PK in the pipeline 2 caused by the processing of packet D1, and D is the period between two consecutive increments of the credit level of a token bucket at the input port 3, as described further below.
Cost information can be allocated to each data packet in a number of ways. For example, a user who knows the processing cost of a data packet in advance can send the cost information with the packet, for example in the header, as depicted in Fig. 1. Alternatively, a classifier of the processor can be adapted to inspect the packets and determine the packet costs ("pre-classification"). As a further alternative, described further below, the costs of all packets at an interface can be set to be identical, i.e. the costs of all packets correspond to a predetermined interface-specific value.
Admission to the pipeline 2 is determined by a token bucket algorithm, performed by a packet rate shaper 5, here also referred to as the shaper 5. The shaper 5 is adapted to read the cost information c1, c2, c3 of the incoming packets D1, D2, D3. Preferably, the shaper uses a so-called loose token bucket algorithm, i.e. if the credit is negative, the bucket level is periodically incremented by a fixed credit amount, for example every clock cycle of the processor 1. When the credit is zero or positive, the next packet in turn, D1, in the input buffer 4 is admitted into the pipeline 2, whereupon the token bucket is decreased by an amount corresponding to the cost c1 of the packet D1. Figs. 1a and 1b depict this mechanism schematically. The cost c1 of the next packet in turn, D1, in the input buffer 4 is X, and since the bucket level of the shaper 5 is 0, packet D1 is admitted into the pipeline 2. The bucket level is thereby decreased by X, and the next packet D2 in the input buffer has to wait until the bucket level again reaches zero before it can be admitted into the pipeline.
Alternatively, a strict token bucket algorithm is used, whereby the bucket level is periodically incremented by a fixed credit amount, and when the credit is at least as large as the cost c1 of the next packet in turn, D1, this packet D1 is admitted into the pipeline 2, whereupon the token bucket level is decreased by an amount corresponding to the cost c1 of the packet D1. Preferably, however, the loose token bucket algorithm is used, since it involves fewer operational steps than the strict version: in the loose version, the only condition for packet admission is a non-negative credit level, so no comparison between the token bucket credit and the packet cost needs to be made.
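The strict variant can be sketched as follows, under the same illustrative assumptions as before (names invented for this sketch). The extra step, compared with the loose version, is the comparison of the credit with the cost of the next packet in turn; in return, the credit never goes negative.

```python
from collections import deque

class StrictTokenBucket:
    """Admit the next packet only when the credit covers its cost."""

    def __init__(self, increment=1):
        self.credit = 0
        self.increment = increment  # fixed credit added per tick

    def tick(self):
        self.credit += self.increment

    def try_admit(self, queue):
        # Strict condition: credit must be at least the next packet's cost.
        if queue and self.credit >= queue[0]:
            cost = queue.popleft()
            self.credit -= cost
            return cost
        return None

# With the same packet costs as in the loose example (5, then 1), the
# first packet now has to wait until the credit has built up to 5.
bucket = StrictTokenBucket()
packets = deque([5, 1])
admitted_at = []
for t in range(10):
    if bucket.try_admit(packets) is not None:
        admitted_at.append(t)
    bucket.tick()
print(admitted_at)  # → [5, 6]
```

Note the contrast: the loose version admits the first packet immediately and lets the credit go negative, while the strict version defers admission but keeps the credit non-negative.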
Since the cost information is based on a prediction of the amount of operations performed on the packets D1, D2, D3 in the pipeline, and the amount of operations is proportional to the processing time of a packet, shaping the flow of data packets according to the invention adapts the packet flow to the processing requirements, which makes it possible to keep the processing element buffers B1, B2, ..., BK small.
After exiting the pipeline 2, packets are stored in an output buffer 6 before being transmitted through an output port 7.
As an alternative to reading the cost information c1, c2, c3 stored in the headers of the packets D1, D2, D3, the shaper 5 can adjust the level of the token bucket by an interface-specific predetermined value. Preferably, the interface-specific value corresponds to an estimate of the maximum cost of incoming packets.
It is also possible to mix packets whose cost information corresponds to an interface-specific predetermined value, as described above, with packets having individual, packet-specific cost information, as described above. A classifier, such as the one mentioned above, can then inspect the packets, determine the packet costs and, for packets with a cost exceeding a predetermined value, store the cost information as packet-specific cost information. This provides more flexibility and efficiency, since packets with very high costs can be handled without the interface-specific cost information having to assume an unrealistically high value.
Referring to Fig. 2, it should be noted that the pipeline can comprise at least one synchronous element 8, with elastic buffers 9, 10 before and after each synchronous element 8. However, this does not change the inventive concept presented here.
Referring to Fig. 3, a second embodiment of the invention is shown. The network processor 1 comprises an asynchronous processing pipeline 2 similar to the one described with reference to Fig. 1, but also comprising a synchronous element 8 with elastic buffers 9, 10, as described with reference to Fig. 2.
Data packets D11, ..., D1M enter the processor through interfaces, each comprising an input port 31, 32, ..., 3M, and are stored in respective input buffers 41, 42, ..., 4M. A pipeline arbiter 11, 51, 52, ..., 5M comprises a scheduler 11 and a plurality of shapers 51, 52, ..., 5M. In particular, for each pair of an input port 31, 32, ..., 3M and an input buffer 41, 42, ..., 4M, a shaper 51, 52, ..., 5M is provided, each shaper performing shaping according to a token bucket algorithm. In this description, what is said to be provided by an interface or input port may be provided by the interface or input port entity itself, or by a scheduler or token bucket functionally connected to the interface or input port. Admission to the pipeline 2 is determined by the shapers 51, 52, ..., 5M and the scheduler 11, which operates according to a round-robin algorithm, whereby the scheduler 11 grants the shapers 51, 52, ..., 5M access to the pipeline in a continuous polling sequence. Besides the round-robin algorithm, alternative scheduling disciplines can be used, for example weighted fair queuing, deficit round robin, deficit weighted round robin, strict priority queuing, and first come first served.
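The interaction between the round-robin scheduler and the per-interface loose token buckets can be sketched as below. This is a simplified model with invented names; it serves one interface per tick and charges each admitted packet its interface-specific cost.

```python
from collections import deque

def round_robin_arbiter(credits, queues, costs, ticks):
    """credits: token bucket level per interface; queues: packets per
    interface; costs: interface-specific cost per interface."""
    admitted = []
    m = len(queues)
    current = 0
    for _ in range(ticks):
        # Serve the polled shaper if it has a packet and non-negative credit.
        if queues[current] and credits[current] >= 0:
            admitted.append(queues[current].popleft())
            credits[current] -= costs[current]  # charge interface-specific cost
        current = (current + 1) % m             # continuous polling sequence
        for i in range(m):                      # periodic credit increment
            credits[i] += 1
    return admitted

queues = [deque(["D11", "D12"]), deque(["D21"])]
out = round_robin_arbiter(credits=[0, 0], queues=queues,
                          costs=[2, 2], ticks=4)
print(out)  # → ['D11', 'D21', 'D12']
```

The interleaving in the output reflects the polling sequence: each interface gets its turn, and an interface whose bucket has gone negative simply skips its turn until the credit recovers.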
When a packet D11, ..., D1M enters the pipeline 2, the credit of the respective token bucket is adjusted, i.e. decreased, according to a respective interface-specific predetermined cost value c1, c2, ..., cM. In this embodiment, the cost values c1, c2, ..., cM of the shapers 51, 52, ..., 5M differ from each other. This is useful where, due to overall system properties, the packets received at one interface have similar processing needs. For example, one interface may receive packets from a line, which are to be classified and switched, while another interface receives packets from a switching fabric, which typically need less processing before transmission. The embodiment shown in Fig. 3 thus provides a plurality of input buffers, each with a specific cost; for example one buffer per physical/logical port, where different interfaces have different processing demands. Of course, as an alternative, two or more shapers can operate with identical cost values.
In this embodiment, the interface-specific cost value c1, c2, ..., cM of a particular shaper 51, 52, ..., 5M is determined as N*T/D, where N is the number of input ports 31, 32, ..., 3M, T is the maximum blocking time at any processing element in the pipeline due to the processing of a packet admitted by the shaper 51, 52, ..., 5M, and D is the period between two consecutive increments of the credit level of the token bucket of the shaper 51, 52, ..., 5M.
The token bucket algorithms of the shapers 51, 52, ..., 5M in Fig. 3 are preferably loose, as described above. If the periodic token deposit rate for each shaper 51, 52, ..., 5M is higher than the polling rate of the scheduler 11 for each shaper 51, 52, ..., 5M, then, preferably, the bucket level of each shaper 51, 52, ..., 5M is incremented only as long as it is lower than a burst size B. Here, B is the maximum number of clock cycles between two consecutive polls of each shaper 51, 52, ..., 5M by the scheduler 11, or the maximum number of token deposits. As a result, no tokens are wasted in the situation where a shaper 51, 52, ..., 5M is ready to transmit a packet D11, ..., D1M, but the scheduler 11 is serving another shaper 51, 52, ..., 5M. Of course, the same strategy can be used in conjunction with a strict token bucket algorithm.
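The burst-size cap can be sketched as a refill rule (names are assumptions): credit earned while the scheduler is busy serving other shapers is kept, but cannot accumulate beyond B, so no more than one scheduling round's worth of tokens can build up.

```python
def refill(credit, increment, burst_size):
    """Increment the credit only while it is below the burst size B."""
    if credit < burst_size:
        credit += increment
    return credit

credit = 0
B = 3  # e.g. max clock cycles between two consecutive polls of this shaper
for _ in range(10):  # shaper not polled for 10 cycles...
    credit = refill(credit, increment=1, burst_size=B)
print(credit)  # → 3: capped at B, no unbounded token accumulation
```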
After exiting the pipeline 2, each packet D11, ..., D1M passes through a respective output buffer 61, 62, ..., 6M before being transmitted via one of a number of output ports 71, 72, ..., 7M.
As an alternative to interface-specific predetermined cost values c1, c2, ..., cM, the shapers 51, 52, ..., 5M can admit packets D11, ..., D1M based on cost information in the header of each packet, as described above with reference to Fig. 1. As a further alternative, a mixed strategy can be used, where one or more of the shapers 51, 52, ..., 5M use the strategy with interface-specific predetermined cost values, and one or more of the shapers 51, 52, ..., 5M use the strategy with cost information in the packet headers.
Some packets, for example packets for control and management of the processor, transmitted from an external CPU, give rise to operations at the processing elements P1, P2, ..., PK that consume considerably more clock cycles, and therefore have a higher cost, than the usual operations on normal traffic packets (for example forwarding-plane engine accesses). Writing a TCAM of a particular type is an example of such a control and management engine access. If the processing speeds of the pipeline elements P1, P2, ..., PK are adapted to the usual operations on normal traffic packets, and the packet rate shapers 51, 52, ..., 5M of the pipeline arbiter are configured to match these processing speeds, cycle-expensive processing, for example in the form of control and management engine accesses, may cause requests to accumulate in the request FIFO of an affected processing element. To avoid request FIFO overflow, the latter must be allowed to drain before a new cycle-expensive control and management packet can be inserted into the programmable pipeline 2.
Referring to Fig. 4, a pipeline arbiter PA is shown with five interfaces 31, 32, 33, 34, 3X for incoming data traffic, the pipeline arbiter being adapted to forward the data traffic to the processing pipeline, as indicated by arrow A. Of course, the pipeline arbiter can in principle comprise any number of interfaces. To solve the problem described above, the preferred embodiment of the invention includes in the pipeline arbiter PA a dedicated interface, the interface 3X, for lower-priority data packets, such as control and management packets, here also referred to as the control and management packet interface 3X.
In this example, four interfaces 31, 32, 33, 34 are intended for normal data traffic, here also referred to as forwarding-plane packets. The pipeline arbiter PA provides a rate shaper 81, 82, 83, 84 for each normal traffic interface 31, 32, 33, 34. Preferably, the rate shapers 81, 82, 83, 84 are also loose token buckets. They are useful for limiting data burst sizes, for limiting excess-rate bandwidth, and for per-interface bandwidth reservation in a customized system. Alternatively, one or more interfaces of the arbiter can be provided without such rate shapers, for example in the case of fixed packet sizes passing through one or more interfaces, for example in ATM or other cell-based systems.
As can be seen in Fig. 4, the control and management packet interface 3X is not provided with a packet rate shaper 51, 52, 53, 54, as the other interfaces are. Each packet rate shaper 51, 52, 53, 54 comprises a packet rate shaper token bucket T1, T2, T3, T4 for shaping the data flow by a token bucket algorithm, as described above with reference to Fig. 3. In addition, each packet rate shaper 51, 52, 53, 54 of each packet interface also comprises an overflow token bucket TX1, TX2, TX3, TX4. As indicated by the curved arrows in Fig. 4, each overflow token bucket TX1, TX2, TX3, TX4 starts receiving tokens overflowing from the respective packet rate shaper token bucket T1, T2, T3, T4 when the latter is full (more precisely, when it is at zero or some other predetermined level). Preferably, the packet rate shaper token buckets T1, T2, T3, T4 and the overflow token buckets TX1, TX2, TX3, TX4 are loose token buckets, as described above.
Similarly to the embodiment in Fig. 3, the embodiment in Fig. 4 comprises a scheduler 11, which operates according to a round-robin algorithm, whereby the scheduler 11 grants the shapers 51, 52, 53, 54 access to the pipeline in a continuous polling sequence.
According to an alternative, the scheduler 11 operates according to a strict priority queuing discipline, whereby different queues can have different priorities. When the scheduler 11 decides which queue to serve next, the rule is then: serve a queue that has a packet and does not have lower priority than any other queue with a packet.
The pipeline arbiter PA comprises a comparison function 12, which compares the levels of the overflow token buckets TX1, TX2, TX3, TX4 of the shaper 51, 52, 53, 54 currently served by the scheduler 11 with control and management packet header information.
Preferably, additional cost information is provided, which can be given in the header of a packet, in the tail of each packet, or in a separate control channel parallel to the packet input buffer. The additional cost information is based on the difference between forwarding-plane data packets and control and management packets with regard to the longest busy period caused at a pipeline element by their processing. Thus, a packet-specific additional cost can be defined for a control and management packet relative to forwarding-plane data packets. For example, if a particular control and management packet has a worst-case cost of 10 cycles, where forwarding-plane packets have a cost of 2 cycles, the additional cost information is set to 10 - 2 = 8. For control and management packets with no larger cost than forwarding-plane packets, the additional cost information is set to zero.
Preferably, a condition for admitting a control and management packet at the control and management packet interface 3X into the processing pipeline is that the interface 31, 32, 33, 34 currently being served by the scheduler 11 has no packet.
Preferably, the comparison function 12 compares the level of the overflow token bucket TX1, TX2, TX3, TX4 of the shaper 51, 52, 53, 54 currently served by the scheduler 11 with the additional cost information of the control and management packet. If the level of the overflow token bucket TX1, TX2, TX3, TX4 is zero or positive, the control and management packet is admitted into the pipeline. Thereby, the level of the overflow token bucket TX1, TX2, TX3, TX4 of the shaper 51, 52, 53, 54 served by the scheduler 11 is decreased by an amount corresponding to the additional cost information. At the same time, the corresponding packet rate shaper token bucket T1, T2, T3, T4 is decreased by an amount corresponding to the cost of a forwarding-plane packet. Thus, if the cost of a forwarding-plane packet is 2, and the additional cost of admitting a control and management packet into the pipeline is 8, the packet rate shaper token bucket T1, T2, T3, T4 is decreased by 2, and the corresponding overflow token bucket TX1, TX2, TX3, TX4 is decreased by 8.
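The double charge described above can be sketched as follows. This is an illustrative model only; the function and parameter names are assumptions, not taken from the patent.

```python
def admit_control_packet(t_level, tx_level, plane_cost, extra_cost):
    """Admit a control/management packet if the overflow bucket TX of the
    currently served shaper is non-negative; on admission, charge the
    forwarding-plane cost to the main bucket T and the additional cost
    to the overflow bucket TX."""
    if tx_level >= 0:
        return True, t_level - plane_cost, tx_level - extra_cost
    return False, t_level, tx_level

# Forwarding-plane cost 2 cycles, control packet worst case 10 cycles,
# so the additional cost is 10 - 2 = 8 (as in the example above).
ok, t, tx = admit_control_packet(t_level=0, tx_level=0,
                                 plane_cost=2, extra_cost=10 - 2)
print(ok, t, tx)  # → True -2 -8
```

After admission, the overflow bucket is deeply negative, so no further control/management packet is admitted through this shaper until the bucket has refilled, which is what lets the pipeline-element FIFOs drain first.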
Of course, alternatives to this admission procedure for control and management packets are possible. For example, the condition for admitting a control and management packet may be that the levels of at least two of the overflow token buckets TX1, TX2, TX3, TX4 are zero or positive, whereby, when the packet is admitted, each of these at least two overflow token buckets is decreased by said extra cost. Furthermore, different admission conditions may be specified for control and management packets, whereby the choice of condition is packet specific. For example, control and management packets may be assigned a higher priority than ordinary traffic packets, and such information may also be included in the control and management packet header for priority decisions on a per-packet basis.
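The stricter variant above, requiring at least two overflow token buckets at level zero or positive, could be sketched as follows (illustrative function and parameter names, not from the patent):

```python
# Sketch of the multi-bucket variant: admission requires at least
# `required` overflow buckets at level >= 0, and each of those buckets
# is charged the extra cost on admission.

def admit_multi(overflow_levels, extra_cost, required=2):
    """overflow_levels: list of TX bucket levels, mutated on admission."""
    eligible = [i for i, lvl in enumerate(overflow_levels) if lvl >= 0]
    if len(eligible) < required:
        return False
    for i in eligible[:required]:
        overflow_levels[i] -= extra_cost
    return True

tx = [0, 3, -1, 5]
assert admit_multi(tx, extra_cost=8)  # buckets at indices 0 and 1 are charged
print(tx)                             # [-8, -5, -1, 5]
```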
As a further alternative, a strict token bucket algorithm may be used for the admission of control and management packets.
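A strict token bucket of the kind mentioned could look like this minimal sketch (assumed parameter names; the patent does not specify the implementation): credit accumulates at a fixed rate up to the bucket depth, and a packet is admitted only if its full cost is already covered, so the credit never goes negative.

```python
# Illustrative strict token bucket for control/management packet admission.

class StrictTokenBucket:
    def __init__(self, rate: int, depth: int):
        self.rate = rate     # tokens added per tick
        self.depth = depth   # maximum credit
        self.credit = depth  # start with a full bucket

    def tick(self):
        self.credit = min(self.credit + self.rate, self.depth)

    def admit(self, cost: int) -> bool:
        if self.credit < cost:  # strict: never let credit go negative
            return False
        self.credit -= cost
        return True

b = StrictTokenBucket(rate=1, depth=10)
assert b.admit(10)     # full bucket covers cost 10
assert not b.admit(1)  # bucket empty; strictly refused
b.tick()
assert b.admit(1)      # one token accumulated per tick
```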
With the proposed embodiments, as described with reference to Fig. 4, it is guaranteed that the FIFO-type processing-element buffers B1, B2, ..., BK are drained before a control and management packet consuming more cycles is inserted. This prevents overflow in the pipeline-element FIFOs. Compared with known solutions, in which a control-plane CPU inserts dummy, inactive control and management packets, thereby increasing the control-plane load, the solution of the invention provides an efficient use of the processing capacity.
The processor 1 and the method described above are suitable for a router or a computer unit for firewall applications, and also for various other computer units that would benefit from pipelined operation. Examples of such computer units are network computer units, such as switches; gateways, i.e. computer units performing protocol conversion between different types of networks and applications; and load-balancing units for web servers.
The invention is also applicable to computer units used in digital signal processing, i.e. computer units involved in the analysis and/or modification of signals from e.g. sound, weather satellites and seismic monitoring devices. In this case, the data packets received by the processor 1 may be digital representations of analog signal sequences. Fields of application of digital signal processing relevant to the invention include biomedicine, sonar, radar, seismology, speech and music processing, imaging, multimedia applications and communications.

Claims (18)

1. A method for a processor (1), the processor (1) comprising a processing pipeline (2) and at least one interface (3) for data packets, the method being characterized in that:
a second data packet (D2) is admitted into the pipeline (2) in dependence on cost information (c1) determined by an expected residence time period of a first data packet (D1) in at least a part (P1, ..., PK) of the pipeline (2),
wherein the first data packet (D1) enters the pipeline (2) before the second data packet (D2), and
wherein the step of admitting the first data packet (D1) into the pipeline (2) is performed using a token bucket algorithm, whereby the first data packet (D1) is admitted when the credit of a token bucket reaches a predetermined value, and whereby the token bucket credit is decreased by an amount corresponding to the cost information (c1) of the first data packet (D1).
2. The method according to claim 1, wherein the first data packet (D1) is identical to the second data packet.
3. The method according to claim 1 or 2, wherein cost information (c2) determined by an expected residence time period of the second data packet (D2) in at least a part of the pipeline differs from the cost information (c1) determined by the expected residence time period of the first data packet (D1) in at least a part of the pipeline.
4. The method according to claim 3, wherein the cost information (c1, c2) is stored in the header of each data packet (D1, D2).
5. The method according to claim 1 or 2, wherein the cost information (c1) corresponds to interface-specific predetermined information, the predetermined information depending on an assumption about the cost information of a plurality of data packets.
6. The method according to claim 5, wherein the interface-specific predetermined information corresponds to an estimate of the maximum cost information of the plurality of packets.
7. The method according to claim 1, comprising: inspecting a plurality of packets, determining packet cost information, and, for packets having cost information exceeding a predetermined value, storing the cost information as packet-specific cost information.
8. The method according to claim 1, wherein the at least one interface (3) for data packets comprises a first interface (31, 32, 33, 34) and a second interface (3X), the step of admitting the second data packet (D2) into the pipeline (2) being performed at the first interface (31, 32, 33, 34) by using a token bucket algorithm, whereby the credit of a token bucket (T1, T2, T3, T4) is adjusted based on the cost information (c1), and whereby credit overflowing from the token bucket (T1, T2, T3, T4) of the first interface (31, 32, 33, 34) is transferred to an overflow token bucket (TX1, TX2, TX3, TX4) used for the admission of at least one packet at the second interface (3X).
9. A processor (1) comprising a processing pipeline (2) and at least one interface (3) for data packets, characterized in that:
the processor comprises a shaper (5), the shaper (5) being adapted to admit a second data packet (D2) into the pipeline (2) in dependence on cost information (c1) determined by an expected residence time period of a first data packet (D1) in at least a part (P1, ..., PK) of the pipeline (2),
wherein the first data packet (D1) enters the pipeline (2) before the second data packet (D2), and
wherein the shaper (5) is adapted to admit the first data packet (D1) into the pipeline (2) by using a token bucket algorithm, whereby the first data packet (D1) is admitted when the credit of a token bucket reaches a predetermined value, and whereby the token bucket credit is decreased by an amount corresponding to the cost information (c1) of the first data packet (D1).
10. The processor according to claim 9, wherein the first data packet (D1) is identical to the second data packet.
11. The processor according to claim 9 or 10, wherein cost information (c2) determined by an expected residence time period of the second data packet (D2) in at least a part of the pipeline differs from the cost information (c1) determined by the expected residence time period of the first data packet (D1) in at least a part of the pipeline.
12. The processor according to claim 11, wherein the cost information (c1, c2) is stored in the header of each data packet (D1, D2).
13. The processor according to claim 9 or 10, wherein the cost information (c1) corresponds to interface-specific predetermined information, the predetermined information depending on an assumption about the cost information of a plurality of data packets.
14. The processor according to claim 13, wherein the interface-specific predetermined information corresponds to an estimate of the maximum cost information of the plurality of packets.
15. The processor according to claim 9, comprising a classifier adapted to: inspect a plurality of packets, determine packet cost information, and, for packets having cost information exceeding a predetermined value, store the cost information as packet-specific cost information.
16. The processor according to claim 9, wherein the at least one interface (3) for data packets comprises a first interface (31, 32, 33, 34) and a second interface (3X), the shaper (5) being adapted to admit the second data packet (D2) into the pipeline (2) at the first interface (31, 32, 33, 34) by using a token bucket algorithm, whereby the credit of a token bucket (T1, T2, T3, T4) is adjusted based on the cost information (c1), and whereby credit overflowing from the token bucket (T1, T2, T3, T4) of the first interface (31, 32, 33, 34) is transferred to an overflow token bucket (TX1, TX2, TX3, TX4) used for the admission of at least one packet at the second interface (3X).
17. A router comprising a processor according to any one of claims 9-16.
18. A computer unit comprising a processor according to any one of claims 9-16.
CNB2005800445729A 2004-12-22 2005-12-20 Method for reducing buffer capacity in pipeline processor Expired - Fee Related CN100570551C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
SE0403128A SE0403128D0 (en) 2004-12-22 2004-12-22 A method for a processor, and a processor
SE04031282 2004-12-22
US60/643,580 2005-01-14

Publications (2)

Publication Number Publication Date
CN101088065A CN101088065A (en) 2007-12-12
CN100570551C true CN100570551C (en) 2009-12-16

Family

ID=34075257

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005800445729A Expired - Fee Related CN100570551C (en) 2004-12-22 2005-12-20 Method for reducing buffer capacity in pipeline processor

Country Status (3)

Country Link
CN (1) CN100570551C (en)
SE (1) SE0403128D0 (en)
TW (1) TWI394078B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016000170A1 (en) * 2014-06-30 2016-01-07 华为技术有限公司 Data processing method executed by network apparatus, and associated device
CN108628277B (en) * 2018-08-16 2020-07-24 珠海格力智能装备有限公司 Distribution processing method, device and system for workstations

Citations (4)

Publication number Priority date Publication date Assignee Title
CN85107692A (en) * 1985-10-19 1987-05-06 霍尼韦尔信息系统公司 Pipelined cache memory common to multiple processors
CN1232219A (en) * 1998-03-23 1999-10-20 日本电气株式会社 Pipeline-type multi-processor system
WO2002027483A2 (en) * 2000-09-29 2002-04-04 Intel Corporation Trace buffer for loop compression
US6757249B1 (en) * 1999-10-14 2004-06-29 Nokia Inc. Method and apparatus for output rate regulation and control associated with a packet pipeline

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
EP1236092A4 (en) * 1999-09-01 2006-07-26 Intel Corp Branch instruction for processor
KR100800958B1 (en) * 2001-10-04 2008-02-04 주식회사 케이티 Method for controlling traffic flow using token bucket


Also Published As

Publication number Publication date
TW200632741A (en) 2006-09-16
SE0403128D0 (en) 2004-12-22
TWI394078B (en) 2013-04-21
CN101088065A (en) 2007-12-12


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20120401

Address after: Stockholm, Sweden

Patentee after: A strategic position Lelateniuke company

Address before: Stockholm, Sweden

Patentee before: Xelerated AB

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091216

Termination date: 20191220