CN101473614B - Processor and method for processor - Google Patents

Processor and method for a processor

Info

Publication number
CN101473614B
CN101473614B
Authority
CN
China
Prior art keywords
value
credit parameter
processing unit
credit
packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2007800232609A
Other languages
Chinese (zh)
Other versions
CN101473614A (en)
Inventor
Jakob Carlström
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xelerated Newco AB
Original Assignee
Xelerated AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xelerated AB filed Critical Xelerated AB
Priority claimed from PCT/EP2007/055777 (published as WO2007147756A1)
Publication of CN101473614A
Application granted
Publication of CN101473614B

Abstract

The invention relates to a processor (1) and to a method for a processor comprising processing means (2), the method comprising the steps of: admitting a data packet (D1, D2, D3) to the processing means (2) based at least in part on the value of a first credit parameter (CS1) and a first limit (L1S1) of the first credit parameter; if the data packet (D1, D2, D3) is admitted to the processing means (2), reducing the value of the first credit parameter (CS1); and increasing the value (CS1) of the first credit parameter in dependence on the value (CS2) of a second credit parameter, data packets (D1, D2, D3) being admitted to the processing means (2) based on the value of the second credit parameter.

Description

Processor and method for a processor
Technical field
The present invention relates to a processor and to a method for a processor comprising processing means, the method comprising the steps of: admitting a data packet to the processing means based at least in part on the value of a first credit parameter and a first limit of the first credit parameter, and, if the data packet is admitted to the processing means, reducing the value of the first credit parameter.
Background art
In data processing it is desirable to reduce buffer capacity, i.e. the memory capacity provided for storing data while it is queued.
In some known processors, incoming data traffic is admitted as quickly as possible without any controlled admission restriction, the limitation thus being given by the processing capacity. This places very large demands on the buffer capacity of the processor. Alternatively, data shaping can be used, whereby incoming data traffic is admitted to the processing elements of the processor so as to achieve a constant bit rate and/or a constant packet rate.
In a processor, a shaper can be used to control incoming traffic based on a first resource of the processor, for example its bit rate capacity, and another shaper can be used to control incoming traffic based on a second resource of the processor, for example its data packet rate capacity. Such a shaper typically has a credit parameter in the form of, for example, a token bucket, and packets are admitted to the processing elements of the processor based on this credit parameter. The credit value is increased regularly by a predetermined amount unless it has reached a limit value; a packet is not admitted unless the credit value is sufficient, and the credit value is reduced when a packet is admitted. In such a processor, bursts may occur for the reason illustrated by the following example: after a sequence of packets consuming a relatively large amount of the first resource and a relatively small amount of the second resource, for example relatively long packets, the credit value of one of the shapers will reach a relatively high level. If such a sequence, consuming much of the first resource and little of the second resource, is followed by a sequence of packets consuming a relatively small amount of the first resource, for example relatively short packets, a burst of packets will be admitted until the relatively high credit value has dropped below the admission limit. The risk of such data bursts requires the shaping arrangement to be followed by a very large downstream buffer capacity.
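The conventional arrangement described above can be summarised by the following C sketch. It is illustrative only and not taken from the patent; all identifiers are assumed, and a loose bucket with a zero admission threshold is assumed. Because each bucket refills toward its own ceiling regardless of the state of the other, a run of long packets lets the packet-rate credit accumulate, enabling the burst of short packets described above.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int64_t credit;   /* current credit value                      */
    int64_t ceiling;  /* limit value at which regular refills stop */
    int64_t refill;   /* amount added at each regular refill       */
} bucket_t;

/* Regular refill: the credit grows until it reaches the limit value. */
static void refill(bucket_t *b) {
    if (b->credit < b->ceiling)
        b->credit += b->refill;
}

/* Admission: both buckets must have non-negative credit; an admitted packet
 * charges the bit-rate bucket by its length and the packet-rate bucket by one. */
static bool admit(bucket_t *bit_bucket, bucket_t *pkt_bucket, int64_t pkt_bits) {
    if (bit_bucket->credit < 0 || pkt_bucket->credit < 0)
        return false;
    bit_bucket->credit -= pkt_bits;
    pkt_bucket->credit -= 1;
    return true;
}
```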
Summary of the invention
The object of the invention is to reduce the buffer capacity in a processor.
This object is achieved with a method of the type mentioned initially, the method comprising the step of increasing the value of the first credit parameter in dependence on the value of a second credit parameter, data packets being admitted to the processing means based on the value of the second credit parameter.
As described further below, the data packets whose admission is based on the second credit parameter can be the same as, or different from, the data packets whose admission is based on the first credit parameter.
The invention is particularly advantageous where the value of the first credit parameter is compared with the first limit of the first credit parameter and no data packet is admitted to the processing means if the value of the first credit parameter is below the first limit. The invention makes it possible to compare the value of the first credit parameter with a second limit of the first credit parameter and, if the value of the second credit parameter is below a first limit of the second credit parameter, not to increase the value of the first credit parameter so that it becomes larger than the second limit of the first credit parameter.
In particular, no credit value is allowed to increase while any other credit value is below a predetermined threshold. This avoids the build-up of large credit values and significantly reduces burst sizes, which in turn allows a lower downstream buffer capacity requirement.
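Purely as an illustration of the rule just described, the following C sketch caps the regular increase of one credit value at its second limit whenever the other credit value is below its own first limit. It is a sketch under assumptions: the struct layout, the field names and the absolute maximum max are not taken from the disclosure.

```c
#include <stdint.h>

typedef struct {
    int64_t value;         /* credit value (CS1 or CS2)                  */
    int64_t first_limit;   /* admission threshold (L1S1 or L1S2)         */
    int64_t second_limit;  /* growth cap applied while the peer is starved */
    int64_t refill;        /* regular increase amount                    */
    int64_t max;           /* assumed absolute maximum of the credit     */
} credit_t;

/* Increase 'self' by its regular refill amount, but never beyond its
 * second limit while 'other' is below its own first limit. */
static void increase(credit_t *self, const credit_t *other) {
    int64_t cap = (other->value < other->first_limit) ? self->second_limit
                                                      : self->max;
    if (self->value < cap) {
        self->value += self->refill;
        if (self->value > cap)
            self->value = cap;
    }
}
```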
The first limit of the first credit parameter can be different from, or identical to, the second limit.
Preferably, the step of increasing the first credit parameter is based at least in part on a first resource or a second resource of the processing means. Thereby, the credit level, and therefore the admission of data, is adapted to a selected resource of the processing means, which reduces the buffer capacity requirement of the latter. As explained further below, a resource of the processing means can be any of a large number of different types of features of the processing means. For example, one or more resources can be performance parameters related to the processing means; for instance, the first resource can be the bit rate capacity of the processing means and the second resource can be the data packet rate capacity of the processing means. Alternatively, one or more resources can be processing elements adapted to process data. Alternatively or in addition, the steps of increasing the value of the first credit parameter and/or the value of the second credit parameter, and/or of reducing the values of the first and second credit parameters if a data packet is admitted to the processing means, can be based at least in part on the expected residence time of the data packet in the processing means in the form of a processing pipeline, as described in International Patent Application PCT/SE2005/001969 filed by the applicant, which is hereby incorporated into this specification by reference. Here, the term "credit parameter" means a parameter whose value is adjusted based on the admission of data packets. Accordingly, if a data packet is admitted to the processing means, the value of the second credit parameter is also reduced.
The object of the invention is also achieved with a method of the type mentioned initially, comprising the step of increasing the value of the first credit parameter in dependence on the storage level in a buffer in which data packets are stored before being admitted to the processing means. This prevents a large credit from building up at a data input interface of the processor during a period in which no traffic, or a relatively small traffic flow, is received, so that a data burst from that interface can be avoided when the period is over. Preferably, if the buffer is empty, the value of the first credit parameter is not increased so that it becomes larger than the second limit of the first credit parameter.
The object of the invention is also achieved with a processor according to any one of claims 9-16.
Description of drawings
Below, the invention is described by way of embodiments with reference to the accompanying drawings, in which:
Fig. 1 is a block diagram corresponding to a processor according to an embodiment of the invention;
Fig. 2 is a block diagram corresponding to a part of the processor to which the block diagram of Fig. 1 corresponds;
Fig. 3 is a block diagram corresponding to a processor according to another embodiment of the invention; and
Fig. 4 is a block diagram corresponding to a processor according to yet another embodiment of the invention.
Embodiments
Fig. 1 illustrates an embodiment of the invention. A network processor 1 comprises processing means 2. Two features of the processing means 2 are referred to in this description as first and second resources R1, R2. As will be understood from this description, a resource can be any of a large number of different types of features, and several examples are given here. In addition, the processing means can in general present more than two resources, see the description below with reference to Fig. 3. One or more of the resources R1, R2 can be processing elements adapted to process data. Alternatively, one or more of the resources R1, R2 can be performance parameters related to the processing means 2. In this embodiment of the invention both resources are performance parameters. More specifically, the first resource R1 is the bit rate capacity of the processing means 2 and the second resource R2 is the packet rate capacity of the processing means 2.
The processing means 2 can be of any of a multitude of known types, including an asynchronous processing pipeline as described in International Patent Application PCT/SE2005/001969, which is hereby incorporated into this specification by reference. Thereby, any or all of the resources R1, R2 can be performance parameters related to the processing means 2 or to processing elements of the processing pipeline, and their number can be much larger than two. Any such processing element can be an access point for access to a processing device or engine, as described in WO2004/010288, which is hereby incorporated into this specification by reference.
Alternatively, the processing means 2 can be a RISC (Reduced Instruction Set Computer) processor, a microcoded engine, a hard-coded engine, or a combination of a plurality of processing means of one or more of these types.
In Fig. 1, data traffic is transported from left to right. Data packets D1, D2, D3 enter the processor through a data input interface comprising an input port 3 and are stored in an input buffer 4 before being admitted, in the manner described below, to the processing means 2. After exiting the processing means 2, the packets are stored in an output buffer 6 before being transmitted through an output port 7.
Admission to the processing means 2 is determined by first and second shapers S1, S2, in the form of a bit rate shaper S1 and a packet rate shaper S2, respectively. The bit rate shaper S1 limits the bit rate towards the processing means 2; its limiting characteristic is chosen based on the first resource R1, i.e. the bit rate capacity of the processing means 2. The packet rate shaper S2 limits the data packet flow towards the processing means 2; its limiting characteristic is chosen based on the second resource R2, i.e. the packet rate capacity of the processing means 2.
The shapers S1, S2 can be provided in any suitable form, for example as a software program or a part thereof, or as digital or analog circuitry of electrical, optical, or mechanical components.
Reference is made to Fig. 2. The shapers S1, S2 each use a token bucket algorithm, so that admission of data is based on respective credit parameters CS1, CS2. Each of these values CS1, CS2, also referred to here as credit values CS1, CS2, is compared with a respective first limit L1S1, L1S2. If either credit value CS1, CS2 is below its respective first limit L1S1, L1S2, no data traffic is allowed to pass the respective shaper.
If neither credit value CS1, CS2 in the token buckets of the shapers S1, S2 is below its respective first limit L1S1, L1S2, the next packet D1 in the input buffer is admitted to the processing means 2. When the packet D1 is admitted, the credit value CS1 of the bit rate shaper S1 is reduced by an amount corresponding to the number of bits of the packet D1, and the credit value CS2 of the packet rate shaper S2 is reduced by an amount corresponding to the number of packets admitted, i.e. one packet.
As an alternative, the credit value CS2 of the packet rate shaper S2 can be adjusted as described in International Patent Application PCT/SE2005/001969, which is hereby incorporated into this specification by reference. Thus, each data packet D1, D2, D3 can comprise a header with information that the packet rate shaper S2 is adapted to read. This information can relate to the cost of the packet, i.e. the maximum time during which any processing element of the processing means 2 is kept busy with the respective data packet D1, D2, D3 and unable to accept a new data packet. Alternatively or in addition, such header information can be used to establish an identification of the resources, i.e. processing elements, involved in processing the respective packet D1, D2, D3. Further, the header can also comprise information about the size of the respective data packet. When a packet is admitted to the processing means, the credit value CS2 of the packet rate shaper S2 is reduced by an amount corresponding to the header information, for example the cost information.
As shown in Fig. 2, the second limits L2S1, L2S2 of the respective shapers S1, S2 are larger than the respective first limits L1S1, L1S2. Alternatively, the second limits L2S1, L2S2 of the respective shapers S1, S2 can be identical to the respective first limits L1S1, L1S2. If the credit value CS1 of the bit rate shaper S1 is below the second limit L2S1, the credit value CS1 is regularly (for example every clock cycle of the processor) increased by a fixed credit amount. The value of this fixed credit amount is based on the frequency of the regular increase (for example every clock cycle) and on the first resource R1, i.e. the bit rate capacity of the processing means 2. Similarly, if the credit value of the packet rate shaper S2 is below the second limit L2S2, the credit value CS2 is regularly increased by a fixed credit amount based on the frequency of the regular increase and on the second resource R2, i.e. the packet rate capacity of the processing means 2.
Preferably, the shapers S1, S2 use a so-called loose token bucket algorithm, i.e. the first limits L1S1, L1S2 are zero, and the next packet D1 in the input buffer 4 is admitted to the processing means 2 when the credit values CS1, CS2 are non-negative.
If the credit value of either shaper S1, S2 is below its first limit L1S1, L1S2, the credit value of the other shaper S1, S2 cannot be increased beyond its respective second limit L2S1, L2S2. Restricting the credit value of one shaper S1, S2 to its respective second limit L2S1, L2S2 whenever the credit value of the other shaper is below its first limit L1S1, L1S2 reduces the buffer capacity requirement of the processing means 2. This is explained by the following example:
Independent shapers that allow their credit levels to increase without restriction, regardless of the credit level in the other shaper, cannot prevent the following situation: after a sequence of packets consuming a relatively large amount of the first resource R1 and a relatively small amount of the second resource R2 (in this example relatively long packets), the credit value of the second shaper S2 will reach a relatively high level. If such a sequence, consuming much of the first resource R1 and little of the second resource R2, is followed by a sequence of packets consuming a relatively small amount of the first resource R1 (in this example relatively short packets), a burst of packets will be admitted until the credit value CS2 of the second shaper S2 has dropped below the first limit L1S2. Correspondingly, after a sequence of packets consuming a large amount of the second resource R2 and only a small amount of the first resource R1 (in this example relatively short packets), the credit value of the first shaper S1 will reach a high level, allowing a burst of a subsequent packet sequence consuming a relatively small amount of the second resource R2 (in this example relatively long packets), until the credit value CS1 of the first shaper S1 has dropped below the first limit L1S1.
The invention prevents the build-up of a large credit value while a sequence of data packets consumes a large amount of one resource of the processor relative to another resource. This significantly reduces burst sizes, which in turn allows a lower downstream buffer capacity requirement. Where the processing means 2 is an asynchronous processing pipeline, as described in International Patent Application PCT/SE2005/001969, the invention reduces the demand for buffering at the processing elements, which is provided by buffers in the form of first-in first-out (FIFO) buffers located before the processing elements.
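The following compact C sketch combines the rules of the preceding paragraphs for the Fig. 1/2 arrangement with loose token buckets: admission requires both credits to be non-negative, an admitted packet charges the bit-rate credit by its length in bits and the packet-rate credit by one packet, and each clock tick refills both credits, a credit only being allowed to grow beyond its second limit while the other credit is at or above its first limit. The sketch is illustrative only; the absolute maximum CREDIT_MAX and all identifiers are assumptions, not taken from the patent.

```c
#include <stdbool.h>
#include <stdint.h>

#define CREDIT_MAX (1 << 20)   /* assumed overall bucket depth */

typedef struct {
    int64_t credit;
    int64_t first_limit;   /* L1: admission threshold (0 for a loose bucket) */
    int64_t second_limit;  /* L2: growth cap while the peer is starved       */
    int64_t refill;        /* per-tick increment, derived from R1 or R2      */
} shaper_t;

/* Admit the head-of-line packet only if both shapers have sufficient credit. */
static bool admit_packet(shaper_t *bit_sh, shaper_t *pkt_sh, int64_t pkt_bits) {
    if (bit_sh->credit < bit_sh->first_limit ||
        pkt_sh->credit < pkt_sh->first_limit)
        return false;              /* packet must wait in the input buffer */
    bit_sh->credit -= pkt_bits;    /* charge resource R1 (bits)            */
    pkt_sh->credit -= 1;           /* charge resource R2 (one packet)      */
    return true;
}

/* Refill one shaper, capping it at its second limit while the peer is starved. */
static void refill_one(shaper_t *self, const shaper_t *peer) {
    int64_t cap = (peer->credit < peer->first_limit) ? self->second_limit
                                                     : CREDIT_MAX;
    if (self->credit < cap) {
        self->credit += self->refill;
        if (self->credit > cap)
            self->credit = cap;
    }
}

/* Called once per clock cycle. */
static void clock_tick(shaper_t *bit_sh, shaper_t *pkt_sh) {
    refill_one(bit_sh, pkt_sh);
    refill_one(pkt_sh, bit_sh);
}
```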
As mentioned above, the shapers S1, S2 preferably use a loose token bucket algorithm, but alternatively any other suitable admission algorithm can be used. Where a so-called strict token bucket algorithm is used, the first limits L1S1, L1S2 can be positive, and the packet D1 is admitted to the processing means 2 when the credit values CS1, CS2 are at least as large as the respective first limits L1S1, L1S2.
When a strict token bucket algorithm is used, the first limits L1S1, L1S2 of any or all of the shapers can be predetermined and can be equal for all packets passing the respective shaper S1, S2. Alternatively, the first limits L1S1, L1S2 can be individual for each packet, in which case the respective shaper S1, S2 is adapted to read, before admission, header information (for example of the type described above) of each data packet D1, D2, D3 and to set the first limit L1S1, L1S2 based on this header information. For example, the header information of the respective data packet D1, D2, D3 can comprise a cost C1, C2, C3 corresponding to the first limit value L1S1, L1S2 of one of the shapers S1, S2. Thus, the cost C1 is read from the header information of the first packet D1 in the input buffer 4, and the first limit value is set as L1S1 (or L1S2) = C1.
Further, when a strict token bucket algorithm is used, the second limits L2S1, L2S2 (beyond which the credit value of one shaper S1, S2 cannot be increased while the credit value of the other shaper is below its first limit L1S1, L1S2) can be equal to or larger than the first limits L1S1, L1S2. In the latter case, the second limit L2S1, L2S2 can, for each packet, be set to a value exceeding the respective first limit L1S1, L1S2 by a predetermined amount.
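A short C sketch of the strict-bucket, per-packet variant is given below. It is illustrative only: the header fields and identifiers are assumptions, and charging the credit by the same cost value that serves as the first limit combines the cost-based reduction mentioned earlier with the per-packet limit described here, which is an assumption rather than a statement of the disclosure.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int64_t size_bits;  /* packet length                                    */
    int64_t cost;       /* worst-case busy time of a processing element     */
} pkt_header_t;

typedef struct {
    int64_t credit;     /* credit value of the strict token bucket          */
} strict_shaper_t;

/* Strict admission with a per-packet first limit taken from the header:
 * the packet is admitted only once the credit is at least its cost. */
static bool strict_admit(strict_shaper_t *s, const pkt_header_t *h) {
    int64_t first_limit = h->cost;   /* L1 is set per packet from the header */
    if (s->credit < first_limit)
        return false;
    s->credit -= h->cost;            /* charge the credit by the packet cost */
    return true;
}
```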
Fig. 3 illustrates another embodiment of the invention. The processing means 2 presents more than two features in the form of resources R1, R2...RN, each of which can be any of a large number of different types of features. For example, the first and second resources R1, R2 can be the bit rate capacity and the packet rate capacity of the processing means 2, respectively, and further resources can be processing elements adapted to process data.
Admission to the processing means 2 is determined by shapers S1, S2...SN, the number of which equals the number of processing means resources R1, R2...RN. The limiting characteristic of the first shaper S1 is chosen based on the first resource R1, the limiting characteristic of the second shaper S2 is chosen based on the second resource R2, and so on.
Preferably, each shaper S1, S2...SN uses a token bucket algorithm, so that admission of data is based on the respective credit parameter value CS1, CS2...CSN. If a credit value CS1, CS2...CSN is below the respective first limit L1S1, L1S2...L1SN, no data traffic is allowed to pass the respective shaper. Data traffic is admitted in a manner corresponding to that described above with reference to Figs. 1 and 2. Thus, if the credit value CS1, CS2...CSN of any shaper S1, S2...SN is below the respective first limit L1S1, L1S2...L1SN, the respective credit value CS1, CS2...CSN is regularly (for example every clock cycle of the processor 1) increased by a respective fixed credit amount. The value of each fixed credit amount is based on the frequency of the regular increase (for example every clock cycle) and on the respective resource R1, R2...RN.
If the credit value of any shaper S1, S2...SN is below its respective first limit L1S1, L1S2...L1SN, the credit values of the other shapers S1, S2...SN cannot be increased beyond their respective second limits L2S1, L2S2...L2SN. The second limits L2S1, L2S2...L2SN can be larger than or equal to the respective first limits L1S1, L1S2...L1SN.
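One way to realise the coupling rule of the preceding paragraph for N shapers is sketched below in C. It is illustrative only; the array layout, identifiers and the overall maximum credit_max are assumptions.

```c
#include <stdint.h>

typedef struct {
    int64_t credit;
    int64_t first_limit;   /* L1Sn: admission threshold                    */
    int64_t second_limit;  /* L2Sn: growth cap while another shaper starves */
    int64_t refill;        /* derived from resource Rn and the tick rate    */
} shaper_t;

/* One refill tick over all N shapers: while any shaper is below its first
 * limit, no credit may be raised above its second limit. */
static void refill_all(shaper_t *s, int n, int64_t credit_max) {
    int someone_starved = 0;
    for (int i = 0; i < n; i++)
        if (s[i].credit < s[i].first_limit)
            someone_starved = 1;

    for (int i = 0; i < n; i++) {
        int64_t cap = someone_starved ? s[i].second_limit : credit_max;
        if (s[i].credit < cap) {
            s[i].credit += s[i].refill;
            if (s[i].credit > cap)
                s[i].credit = cap;
        }
    }
}
```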
In the embodiments described with reference to Figs. 1-3, the data packets D1, D2, D3 whose admission is based on the value CS1 of the first credit parameter are the same as those whose admission is based on the value CS2 of the second credit parameter. However, as described below with reference to Fig. 4, the invention can also be modified so that first data packets are admitted to the processing means 2 based on a first credit parameter which is increased in dependence on the value of a second credit parameter, while second data packets, different from the first data packets, are admitted to the processing means based on the second credit parameter. In the example of Fig. 4, the first and second data packets enter the processor through separate interfaces.
Reference is made to Fig. 4, which shows another embodiment of the invention. The network processor 1 comprises processing means 2 in the form of an asynchronous processing pipeline 2, described in closer detail in International Patent Application PCT/SE2005/001969, which is hereby incorporated into this specification by reference. The processing means comprises asynchronous processing elements P1...PK and a synchronous element 8 with elastic buffers 9, 10. As in the case of the embodiment described with reference to Figs. 1 and 2, the processing means 2 can alternatively be provided in another form, for example as a RISC processor.
As described in closer detail in International Patent Application PCT/SE2005/001969, which is hereby incorporated into this specification by reference, data packets D11...D1M enter the processor through interfaces comprising respective input ports 31, 32...3M and are stored in respective input buffers 41, 42...4M. In addition, a pipeline arbiter 11, S1, S2...SM comprises a scheduler 11 and a plurality of shapers S1, S2...SM. In particular, a shaper S1, S2...SM is provided for each pair of input port 31, 32...3M and input buffer 41, 42...4M. Admission to the pipeline 2 is determined by the shapers S1, S2...SM and the scheduler 11, the scheduler 11 working according to a round-robin algorithm and giving the shapers S1, S2...SM access to the pipeline in a continuous polling sequence. Besides the round-robin algorithm, alternative scheduling disciplines can be used, for example weighted fair queuing, deficit round robin, weighted deficit round robin, strict priority queuing, earliest deadline first, and first come first served.
Preferably, each shaper S1, S2...SM uses a token bucket algorithm, so that admission of data is based on the respective credit parameter value CS1, CS2...CSM. If a credit value CS1, CS2...CSM is below the respective first limit L1S1, L1S2...L1SM, no data traffic is allowed to pass the respective shaper. If the credit value CS1, CS2...CSM of any shaper S1, S2...SM is below the respective first limit L1S1, L1S2...L1SM, the respective credit value CS1, CS2...CSM is regularly (for example every clock cycle of the processor 1) increased by a respective fixed credit amount. Each fixed credit amount is based on a resource of the processing means 2, for example its packet rate capacity, on the frequency of the regular increase (for example every clock cycle), and on the number of input ports 31, 32...3M. The resource of the processing means 2 on which the fixed credit amounts of the shapers S1, S2...SM are based can instead be the bit rate capacity of the processing means 2, or any other performance parameter thereof. As a further alternative, the fixed credit amounts of different shapers S1, S2...SM can be based on different processing elements P1...PK, 8 to which the traffic from the respective shaper is addressed.
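The per-port arrangement just described can be sketched as follows in C; the sketch is illustrative only, all names and the per-packet credit charge of one unit are assumptions, and only the round-robin scheduling step is shown.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_PORTS 4   /* M, illustrative */

typedef struct {
    int64_t credit;
    int64_t first_limit;  /* admission threshold of this port's shaper        */
    int64_t refill;       /* e.g. packet-rate capacity divided by NUM_PORTS   */
    int     queue_len;    /* packets waiting in this port's input buffer      */
} port_t;

/* Round-robin scheduler: returns the index of the port granted access to the
 * pipeline in this slot, or -1 if no port is eligible. 'next' keeps the
 * polling position between calls. */
static int schedule(port_t *p, int *next) {
    for (int k = 0; k < NUM_PORTS; k++) {
        int i = (*next + k) % NUM_PORTS;
        if (p[i].queue_len > 0 && p[i].credit >= p[i].first_limit) {
            p[i].queue_len--;
            p[i].credit -= 1;          /* charge one packet's worth of credit */
            *next = (i + 1) % NUM_PORTS;
            return i;
        }
    }
    return -1;
}
```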
If the credit value of any shaper S1, S2...SM is below its respective first limit L1S1, L1S2...L1SM, the credit values of the other shapers S1, S2...SM cannot be increased beyond their respective second limits L2S1, L2S2...L2SM. The second limits L2S1, L2S2...L2SM can be larger than or equal to the respective first limits L1S1, L1S2...L1SM. This prevents a large credit from building up in the shaper of an interface during a period in which no traffic, or a relatively small traffic flow, is received, so that a data burst from that interface can be avoided when the period is over. (It should be noted that, in this description, a shaper provided at an interface or input port means a shaper that is physically provided at, or functionally connected to, that interface or input port.)
Still with reference to Fig. 4, it should be mentioned that a plurality of shapers can be provided at each interface or input port 31, 32...3M, the credits of the shapers at each interface being adjusted based on respective resources of the processing means as described above with reference to Figs. 1-3. Thus, if the credit value of any shaper is below its respective first limit, the credit values of the other shapers at the same interface cannot be increased beyond their respective second limits. Alternatively, if the credit value of any shaper is below its respective first limit, the credit values of all other shapers, including the shapers of the other interfaces, cannot be increased beyond their respective second limits.
Alternatively or in addition, any of the embodiments described above with reference to Figs. 1-4 can be adapted so that, if any input buffer 4, 41, 42...4M is empty, the credit value of the shaper (or shapers) adapted to receive traffic from that input buffer 4, 41, 42...4M cannot be increased beyond the second limit L2S1, L2S2...L2SM. This prevents a large credit from building up in the shaper of an interface during a period in which no traffic, or a relatively small traffic flow, is received, so that a data burst from that interface can be avoided when the period is over.

Claims (12)

1. A method for controlling the admission of data packets to processing means (2), the method comprising the steps of:
admitting a data packet (D1, D2, D3) to the processing means (2) based at least in part on the value (CS1) of a first credit parameter and a first limit (L1S1) of the first credit parameter, and
if the data packet (D1, D2, D3) is admitted to the processing means (2), reducing the value (CS1) of the first credit parameter, characterized by the steps of:
increasing the value (CS1) of the first credit parameter in dependence on the value (CS2) of a second credit parameter, data packets (D1, D2, D3) being admitted to the processing means (2) based on the value of the second credit parameter, and
comparing the value (CS1) of the first credit parameter with a second limit (L2S1) of the first credit parameter, wherein, if the value (CS2) of the second credit parameter is below a first limit (L1S2) of the second credit parameter, the value (CS1) of the first credit parameter is not increased so that it becomes larger than the second limit (L2S1) of the first credit parameter.
2. The method according to claim 1, wherein the second limit (L2S1) of the first credit parameter is larger than the first limit (L1S1) of the first credit parameter.
3. The method according to claim 1, wherein the step of increasing the value (CS1) of the first credit parameter is based at least in part on a first resource (R1) or a second resource (R2) of the processing means (2).
4. The method according to claim 3, wherein the first resource (R1) is the bit rate capacity of the processing means (2) and the second resource (R2) is the data packet rate capacity of the processing means (2).
5. The method according to any one of the preceding claims, wherein the value (CS2) of the second credit parameter is reduced if the data packet (D1, D2, D3) is admitted to the processing means (2).
6. A method for controlling the admission of data packets to processing means (2), the method comprising the steps of:
admitting a data packet (D1, D2, D3) to the processing means (2) based at least in part on the value (CS1) of a first credit parameter and a first limit (L1S1) of the first credit parameter, and
if the data packet (D1, D2, D3) is admitted to the processing means (2), reducing the value (CS1) of the first credit parameter, characterized by the steps of:
increasing the value (CS1) of the first credit parameter in dependence on the storage level in a buffer (4, 41, 42...4M) in which the data packet is stored before being admitted to the processing means (2), and
if the buffer (4, 41, 42...4M) is empty, not increasing the value (CS1) of the first credit parameter so that it becomes larger than a second limit (L2S1) of the first credit parameter.
7. A device for controlling the admission of data packets to processing means (2), the device comprising:
means for admitting a data packet (D1, D2, D3) to the processing means (2) based at least in part on the value (CS1) of a first credit parameter and a first limit (L1S1) of the first credit parameter,
means for reducing the value (CS1) of the first credit parameter if the data packet (D1, D2, D3) is admitted to the processing means (2),
means for increasing the value (CS1) of the first credit parameter in dependence on the value (CS2) of a second credit parameter,
means for admitting data packets (D1, D2, D3) to the processing means (2) based on the value of the second credit parameter,
means for comparing the value (CS1) of the first credit parameter with a second limit (L2S1) of the first credit parameter, and
means for, if the value (CS2) of the second credit parameter is below a first limit (L1S2) of the second credit parameter, not increasing the value (CS1) of the first credit parameter so that it becomes larger than the second limit (L2S1) of the first credit parameter.
8. The device according to claim 7, wherein the second limit (L2S1) of the first credit parameter is larger than the first limit (L1S1) of the first credit parameter.
9. The device according to claim 7, further comprising means for increasing the value (CS1) of the first credit parameter based at least in part on a first resource (R1) or a second resource (R2) of the processing means (2).
10. The device according to claim 9, wherein the first resource (R1) is the bit rate capacity of the processing means (2) and the second resource (R2) is the data packet rate capacity of the processing means (2).
11. The device according to any one of claims 7-10, further comprising means for reducing the value (CS2) of the second credit parameter if the data packet (D1, D2, D3) is admitted to the processing means (2).
12. A device for controlling the admission of data packets to processing means (2), the device comprising:
means for admitting a data packet (D1, D2, D3) to the processing means (2) based at least in part on the value (CS1) of a first credit parameter and a first limit (L1S1) of the first credit parameter,
means for reducing the value (CS1) of the first credit parameter if the data packet (D1, D2, D3) is admitted to the processing means (2),
means for increasing the value (CS1) of the first credit parameter in dependence on the storage level in a buffer (4, 41, 42...4M), the buffer being adapted to store the data packet before the data packet is admitted to the processing means (2), and
means for, if the buffer (4, 41, 42...4M) is empty, not increasing the value (CS1) of the first credit parameter so that it becomes larger than a second limit (L2S1) of the first credit parameter.
CN2007800232609A 2006-06-22 2007-06-12 Processor and method for processor Expired - Fee Related CN101473614B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
SE0601389-0 2006-06-22
SE06013890 2006-06-22
SE0601389 2006-06-22
US81709506P 2006-06-29 2006-06-29
US60/817,095 2006-06-29
PCT/EP2007/055777 WO2007147756A1 (en) 2006-06-22 2007-06-12 A processor and a method for a processor

Publications (2)

Publication Number Publication Date
CN101473614A CN101473614A (en) 2009-07-01
CN101473614B true CN101473614B (en) 2011-07-06

Family

ID=40829596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007800232609A Expired - Fee Related CN101473614B (en) 2006-06-22 2007-06-12 Processor and method for processor

Country Status (1)

Country Link
CN (1) CN101473614B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5492709B2 (en) * 2010-09-06 2014-05-14 株式会社日立製作所 Band control method and band control device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004024407A1 (en) * 2003-05-14 2004-12-09 Extreme Networks, Santa Clara Rate color marker

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004024407A1 (en) * 2003-05-14 2004-12-09 Extreme Networks, Santa Clara Rate color marker

Also Published As

Publication number Publication date
CN101473614A (en) 2009-07-01

Similar Documents

Publication Publication Date Title
CN104580396B (en) A kind of method for scheduling task, node and system
EP0617361B1 (en) Scheduling method and apparatus for a communication network
CN100485625C (en) Real-time system task scheduling method
US8997105B2 (en) Method for packet flow control using credit parameters with a plurality of limits
WO2006068595A1 (en) A method for reducing buffer capacity in a pipeline processor
CN102387076B (en) Shaping-combined hierarchical queue scheduling method
WO2007038431A2 (en) Scaleable channel scheduler system and method
JP4163044B2 (en) BAND CONTROL METHOD AND BAND CONTROL DEVICE THEREOF
CN105409170A (en) Packet output controller and method for dequeuing multiple packets from one scheduled output queue and/or using over- scheduling to schedule output queues
CN102158906A (en) Service quality sensory system and task scheduling method thereof
CN103428099B (en) A kind of method of universal multi-core network processor flow control
WO2003047179A1 (en) Hierarchical credit queuing for traffic shaping
CN109729013A (en) The method, apparatus and computer readable storage medium of token are added in a kind of traffic shaping
CN101473614B (en) Processor and method for processor
CN100456744C (en) Data dispatching method and system
CN102546423A (en) Method and device for queue scheduling and network device
CN103701721A (en) Message transmission method and device
CN115695330B (en) Scheduling system, method, terminal and storage medium for shreds in embedded system
Vanithamani et al. Performance analysis of queue based scheduling schemes in wireless sensor networks
Zhang et al. ATFQ: a fair and efficient packet scheduling method in multi-resource environments
CN109347764A (en) Scheduling method, system and medium for realizing bandwidth matching
CN101651614B (en) Method and device for scheduling multiport queues
CN100376099C (en) Method for realizing comprehensive queue managing method based network processor platform
EP3021540B1 (en) Scheduler and method for layer-based scheduling queues of data packets
CN100570551C (en) Method for reducing buffer capacity in pipeline processor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: XELERATED NETWORK AB

Free format text: FORMER OWNER: XELERATED AB

Effective date: 20120401

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20120401

Address after: Stockholm, Sweden

Patentee after: Xelerated Newco AB

Address before: Stockholm, Sweden

Patentee before: Xelerated AB

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110706

Termination date: 20200612