CN1402472A - Method for implementing dynamic partial buffer share based on network processor platform - Google Patents
- Publication number
- CN1402472A (application CN 02131335 / CN02131335A)
- Authority
- CN
- China
- Prior art keywords
- packet
- priority
- queue
- arriving
- drop threshold
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The method belongs to the field of queue buffer resource management. Based on the drop behavior of arriving packets, it adjusts the drop threshold dynamically in order to raise the utilization of system buffer resources. A drop threshold TH (0 ≤ TH ≤ N, where N is the queue capacity) is set to control packet dropping. In addition, a dropped-packet counter and a dropped-packet upper limit are set for each priority. When a counter increases to its upper limit, the drop threshold is moved by one step in order to reduce the drops of that priority, thereby realizing dynamic sharing of the buffer resources.
Description
Technical field
The method for dynamic partial buffer sharing realized on a network processor platform belongs to the field of Internet buffer queue resource management.
Background technology
Buffer management, i.e., the management of queue buffer resources, is a core technique of quality-of-service (QoS) control. Routers usually adopt queue buffering, improving the bandwidth utilization of the output link through store-and-forward service. When a packet arrives at a queue, the buffer management mechanism decides, according to a certain algorithm and state information, whether the packet is allowed to enter the buffer queue, that is, whether the packet should be dropped; this is also called drop control.

Traditional IP networks adopt a simple first-in-first-out queue with tail-drop control, which cannot provide quality-of-service guarantees. Existing mechanisms are based on static thresholds, guaranteeing performance by limiting the buffer resources that packets of different priorities are allowed to use. However, there is no systematic method for setting these thresholds, so the expected relative-fairness performance is hard to obtain; moreover, because the thresholds are set statically, they cannot adapt to changes in network traffic, and under bursty traffic they cause resource utilization to drop.
Summary of the invention
The object of the present invention is to provide a dynamic partial buffer sharing method that adjusts the drop threshold according to the drop behavior of arriving packets. It can guarantee the expected relative-fairness performance and can adapt, to a certain extent, to changes in network traffic, thereby improving the utilization of system buffer resources.

The present invention is realized on a network processor platform, because network processors have the following features: multiple internal microprocessing units, using dedicated optimized instruction sets and hardware multithreading to improve processing capability; high-speed interface buses and multiple internal buses to raise the internal data-transfer bandwidth; cooperation with high-speed switching fabrics to achieve line-rate switching; and a main control unit for control-plane processing and coordination. In other words, the network processor can remedy the deficiencies of conventional routers and provide the high-speed processing capability needed to realize quality-of-service control, and is very likely to become the standard platform for next-generation network services.

The invention is characterized in that it is a method for dynamic partial buffer sharing realized on a network processor: the drop threshold is adjusted dynamically according to the drop behavior of arriving packets, so as to guarantee the expected relative-fairness performance, to adapt to network traffic changes to a certain extent, and to improve system buffer utilization. Besides a drop threshold TH (0 ≤ TH ≤ N, where N is the queue capacity) that controls packet dropping, a dropped-packet counter and a dropped-packet upper limit are set for each priority; when a drop counter increases to its upper limit, the drop threshold is moved by a certain step to reduce the drops of that priority, thereby realizing dynamic sharing of the partial buffer resources.

Practice has proved that the invention achieves the intended purpose: it solves the problem of fair allocation of buffer resources among service priorities, reduces packet-loss jitter, adapts to changes in network traffic to a certain extent, and improves the utilization of system buffer resources.
Description of drawings
Fig. 1: Network processor platform model and queue organization
Fig. 2: Flow chart of the dynamic partial buffer sharing algorithm
Embodiment
The dynamic partial buffer sharing method proposed by the present invention is realized on the network processor platform shown in Fig. 1. An arriving packet enters the system through the high-speed input/output interface, is buffered in the memory system, and, after processing by the multiprocessor system, leaves the system through the high-speed input/output interface.

The memory system consists of low-access-latency static RAM (SRAM) and high-capacity synchronous dynamic RAM (SDRAM). As shown in Fig. 1, packet payloads, which occupy large amounts of memory space, are buffered in SDRAM, while the queue structures related to quality-of-service control, which need frequent access, are placed in SRAM to optimize system performance. The packet buffer space in SDRAM is paged by MTU length; each page buffers at most one packet and has a statically associated packet descriptor established in SRAM, which simplifies the allocation and reclamation of buffer resources. At system initialization, all packet descriptors are marked with the empty flag and chained in series into a free-buffer resource list. Packets in the same queue are also linked together (as shown in the upper-left of Fig. 1), and the queue keeps pointers to its head and tail packets to simplify enqueue and dequeue operations. When a packet leaves the system, the system reclaims the resource by chaining the corresponding descriptor back onto the free-buffer resource list.
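The descriptor and free-list organization described above can be sketched as follows. This is an illustrative model only, under our own naming (`PacketDescriptor`, `BufferPool` and their fields are not from the patent), with Python containers standing in for the SRAM-resident linked structures:

```python
from collections import deque

class PacketDescriptor:
    """SRAM-resident descriptor, statically paired with one MTU-sized SDRAM page."""
    def __init__(self, page_index):
        self.page_index = page_index  # which SDRAM page this descriptor covers
        self.empty = True             # empty flag, set at system initialization

class BufferPool:
    """Free-buffer chain built from all descriptors at system start."""
    def __init__(self, num_pages):
        self.free = deque(PacketDescriptor(i) for i in range(num_pages))

    def allocate(self):
        """Take a descriptor off the free chain for an arriving packet."""
        d = self.free.popleft()
        d.empty = False
        return d

    def reclaim(self, d):
        """Packet left the system: mark the descriptor empty and rechain it."""
        d.empty = True
        self.free.append(d)
```

In a real network processor the free chain would be a linked list threaded through SRAM words rather than a `deque`, but the allocate/reclaim discipline is the same.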
The buffer management method of the present invention (shown in Fig. 2) is invoked before each arriving packet enters a queue and comprises, in order, the following steps:
(1) Check whether the priority of the arriving packet belongs to one of the two classes, high or low priority; if not, drop the packet and reclaim the buffer resource it occupies for use by the next arriving packet; the algorithm flow ends;
(2) If so, determine the queue the arriving packet belongs to from the information supplied by the front-end routing and classification modules, and read the queue's basic parameters and the control parameters of the buffer management algorithm from the corresponding queue structure in the network processor's static RAM (SRAM);
(3) Decide, separately for each priority level, whether the arriving packet may enter the queue: for a high-priority packet, check whether the current queue length is less than the queue capacity; for a low-priority packet, check whether the current queue length is less than the drop threshold;
(4) If the arriving packet may enter the queue, increment the queue length by 1, insert the packet's descriptor into the queue linked list, and write the basic queue parameters back to the queue structure in SRAM; the algorithm flow ends;
(5) If the arriving packet may not enter the queue, increment the dropped-packet counter of the corresponding priority by 1, then check whether this drop counter is still below the corresponding drop upper limit; if so, write the updated algorithm control parameters back to the queue structure in SRAM;
(6) If the drop counter reaches or exceeds the corresponding drop upper limit, clear the counter to zero;
(7) Then proceed according to the priority level:
If the arriving packet is low priority, check whether the drop threshold can be increased, and if so increase it by the corresponding step;
If the arriving packet is high priority, check whether the drop threshold can be decreased, and if so decrease it by the corresponding step;
(8) Write the updated algorithm control parameters (drop counter, drop threshold) back to the queue structure in SRAM, and reclaim the buffer resource occupied by this packet for use by the next arriving packet; the algorithm flow ends.
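In outline, steps (1) through (8) amount to the following admission routine. This is a sketch under our own naming (`Queue`, `on_arrival` are not from the patent), with plain attributes standing in for the SRAM queue structure that the patent reads and writes back:

```python
HIGH, LOW = "high", "low"

class Queue:
    def __init__(self, capacity, th, step, low_limit, high_limit):
        self.N = capacity                     # queue capacity
        self.L = 0                            # current queue length
        self.TH = th                          # drop threshold, 0 <= TH <= N
        self.SP = step                        # threshold adjustment step
        self.limit = {LOW: low_limit, HIGH: high_limit}
        self.drops = {LOW: 0, HIGH: 0}        # per-priority drop counters

    def on_arrival(self, prio):
        """Return True if the packet is enqueued, False if dropped."""
        # Step (1): unknown priority is dropped outright.
        if prio not in (HIGH, LOW):
            return False
        # Step (3): high priority tests against capacity, low against TH.
        admit = self.L < self.N if prio == HIGH else self.L < self.TH
        if admit:
            self.L += 1                       # step (4): enqueue
            return True
        # Step (5): count the drop for this priority.
        self.drops[prio] += 1
        # Steps (6)-(7): on reaching the limit, clear the counter and move TH.
        if self.drops[prio] >= self.limit[prio]:
            self.drops[prio] = 0
            if prio == LOW and self.TH < self.N:
                self.TH += self.SP            # give low priority more room
            elif prio == HIGH and self.TH > 0:
                self.TH -= self.SP            # squeeze low priority's share
        return False                          # step (8): packet dropped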
A worked example illustrating the steps follows:

Assume a queue capacity of N = 100, a low-priority drop upper limit of LB = 5000, a high-priority drop upper limit of HB = 5, and an adjustment step of SP = 1. The current queue length is L = 99, the drop threshold is TH = 80, the low-priority drop counter is LC = 4999, and the high-priority drop counter is HC = 4. Three packets arrive in sequence: the first is low priority, the remaining two are high priority. Assume the packet inter-arrival time is small enough that no packet leaves the queue in the meantime, i.e., no queue buffer resource is reclaimed.

When the low-priority packet arrives at the queue, it satisfies the priority check, and the control parameters are read from the corresponding queue structure in SRAM. The current queue length (99) is not less than the drop threshold (80), so the packet must be dropped. The low-priority drop counter is incremented by 1 to LC = 5000, reaching the corresponding drop upper limit, so the counter is immediately cleared: LC = 0. At the same time, since the drop threshold TH = 80 is less than the queue capacity N, it can still increase, so it is incremented by the step SP: TH = 81. The updated control parameters LC and TH are written back to the queue structure in SRAM; the packet is dropped, its buffer resource is reclaimed, and the algorithm flow ends.

When the first high-priority packet arrives at the queue, it satisfies the priority check, and the control parameters are read from SRAM. The current queue length (99) is less than the queue capacity (100), so the packet is accepted: the queue length is incremented by 1 to L = 100, the queue linked list is updated, and the updated basic parameters (L, etc.) are written back to the queue structure in SRAM; the algorithm flow ends.

When the second high-priority packet arrives at the queue, it satisfies the priority check, and the control parameters are read from SRAM. The current queue length now equals the queue capacity, so the packet must be dropped. The high-priority drop counter is incremented by 1 to HC = 5, reaching the corresponding drop upper limit, so it is immediately cleared: HC = 0. Since the drop threshold TH = 81 can still decrease, it is decremented by the step SP: TH = 80, compressing the space low-priority packets are allowed to use and thereby reducing future drops of high-priority packets. The updated control parameters HC and TH are written back to the queue structure in SRAM; the packet is dropped, its buffer resource is reclaimed, and the algorithm flow ends.
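The three-packet trace above can be checked with a few lines of arithmetic. The variable names (N, SP, LB, HB, L, TH, LC, HC) follow the example; the inline update rules are a simplified restatement of steps (3) through (7):

```python
# Initial state from the example
N, SP = 100, 1          # queue capacity, adjustment step
LB, HB = 5000, 5        # low/high drop upper limits
L, TH = 99, 80          # queue length, drop threshold
LC, HC = 4999, 4        # low/high drop counters

# Packet 1: low priority. L (99) >= TH (80), so it is dropped.
LC += 1
if LC >= LB:            # counter reaches the low-priority limit...
    LC = 0
    if TH < N:
        TH += SP        # ...so TH moves up: 80 -> 81
assert (LC, TH) == (0, 81)

# Packet 2: high priority. L (99) < N (100), so it is accepted.
L += 1
assert L == 100

# Packet 3: high priority. L (100) == N, so it is dropped.
HC += 1
if HC >= HB:            # counter reaches the high-priority limit...
    HC = 0
    if TH > 0:
        TH -= SP        # ...so TH moves back down: 81 -> 80
assert (HC, TH) == (0, 80)
```

The final state matches the prose: the threshold rises one step after the burst of low-priority drops and falls one step after the burst of high-priority drops.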
The advantage is that, by setting and tuning control parameters such as the drop upper limits and the adjustment step, a highly stable relative loss-rate performance matching the desired value can be obtained, independent of the initially configured drop threshold. The algorithm mechanism is also simple and easy to implement.
Claims (2)
1. A method for dynamic partial buffer sharing realized on a network processor platform, comprising a step of setting a drop threshold to limit the buffer resources that packets of different priorities are allowed to use, thereby realizing drop control, characterized in that: the drop threshold is adjusted dynamically according to the drop behavior of arriving packets, so as to guarantee the expected relative-fairness performance, to adapt to network traffic changes to a certain extent, and to improve system buffer utilization; besides the drop threshold TH (0 ≤ TH ≤ N, where N is the queue capacity) that controls packet dropping, a dropped-packet counter and a dropped-packet upper limit are set for each priority; when a drop counter increases to its upper limit, the drop threshold is moved by a certain step to reduce the drops of that priority, thereby realizing dynamic sharing of the partial buffer resources.
2. The method for dynamic partial buffer sharing realized on a network processor platform according to claim 1, characterized in that it comprises, in order, the following steps:
(1) Check whether the priority of the arriving packet belongs to one of the two classes, high or low priority; if not, drop the packet and reclaim the buffer resource it occupies for use by the next arriving packet; the algorithm flow ends;
(2) If so, determine the queue the arriving packet belongs to from the information supplied by the front-end routing and classification modules, and read the queue's basic parameters and the control parameters of the buffer management algorithm from the corresponding queue structure in the network processor's static RAM (SRAM);
(3) Decide, separately for each priority level, whether the arriving packet may enter the queue: for a high-priority packet, check whether the current queue length is less than the queue capacity; for a low-priority packet, check whether the current queue length is less than the drop threshold;
(4) If the arriving packet may enter the queue, increment the queue length by 1, insert the packet's descriptor into the queue linked list, and write the basic queue parameters back to the queue structure in SRAM; the algorithm flow ends;
(5) If the arriving packet may not enter the queue, increment the dropped-packet counter of the corresponding priority by 1, then check whether this drop counter is still below the corresponding drop upper limit; if so, write the updated algorithm control parameters back to the queue structure in SRAM;
(6) If the drop counter reaches or exceeds the corresponding drop upper limit, clear the counter to zero;
(7) Then proceed according to the priority level:
If the arriving packet is low priority, check whether the drop threshold can be increased, and if so increase it by the corresponding step;
If the arriving packet is high priority, check whether the drop threshold can be decreased, and if so decrease it by the corresponding step;
(8) Write the updated algorithm control parameters (drop counter, drop threshold) back to the queue structure in SRAM, and reclaim the buffer resource occupied by this packet for use by the next arriving packet; the algorithm flow ends.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB021313350A CN1195361C (en) | 2002-09-29 | 2002-09-29 | Method for implementing dynamic partial buffer share based on network processor platform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1402472A true CN1402472A (en) | 2003-03-12 |
CN1195361C CN1195361C (en) | 2005-03-30 |
Family
ID=4746654
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB021313350A Expired - Fee Related CN1195361C (en) | 2002-09-29 | 2002-09-29 | Method for implementing dynamic partial buffer share based on network processor platform |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN1195361C (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1799228B (en) * | 2003-04-02 | 2012-02-01 | 思科技术公司 | Data networks |
CN100413285C (en) * | 2005-06-03 | 2008-08-20 | 清华大学 | High-speed multi-dimension message classifying algorithm design and realizing based on network processor |
CN100376099C (en) * | 2005-07-04 | 2008-03-19 | 清华大学 | Method for realizing comprehensive queue managing method based network processor platform |
CN102113277A (en) * | 2008-08-07 | 2011-06-29 | 高通股份有限公司 | Efficient packet handling for timer-based discard in wireless communication system |
CN102113277B (en) * | 2008-08-07 | 2014-03-12 | 高通股份有限公司 | Efficient packet handling for timer-based discard in wireless communication system |
CN103826260A (en) * | 2008-08-07 | 2014-05-28 | 高通股份有限公司 | Efficient packet handling for timer-based discard in wireless communication system |
CN103826260B (en) * | 2008-08-07 | 2017-09-05 | 高通股份有限公司 | Method and apparatus for being efficiently treated through in a wireless communication system to packet |
WO2011012023A1 (en) * | 2009-07-31 | 2011-02-03 | 中兴通讯股份有限公司 | Method and system for managing output port queue of network processor |
WO2023116611A1 (en) * | 2021-12-21 | 2023-06-29 | 华为技术有限公司 | Queue control method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN1195361C (en) | 2005-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1064500C (en) | Method and apparatus for temporarily storing data packets | |
US9160677B2 (en) | Segmentation of network packets for a switch fabric | |
US11637786B1 (en) | Multi-destination traffic handling optimizations in a network device | |
US20080063004A1 (en) | Buffer allocation method for multi-class traffic with dynamic spare buffering | |
CN100405786C (en) | Sharing cache dynamic threshold early drop device for supporting multi queue | |
US20030174708A1 (en) | High-speed memory having a modular structure | |
CN103139093A (en) | High speed network data flow load balancing scheduling method based on field programmable gate array (FPGA) | |
CN1666475A (en) | Buffer memory reservation | |
CN110266606B (en) | Active queue management optimization method and device in edge network | |
CN1165184C (en) | Dispatching method for comprehensive router service | |
CN1195361C (en) | Method for implementing dynamic partial buffer share based on network processor platform | |
US20030067931A1 (en) | Buffer management policy for shared memory switches | |
CN105245313B (en) | Unmanned plane multi-load data dynamic multiplexing method | |
CN1298149C (en) | Flow control device and method based on packet mode | |
Lin et al. | Two-stage fair queuing using budget round-robin | |
CN1669279A (en) | Increasing memory access efficiency for packet applications | |
CN1389799A (en) | Multiple-priority level and optimal dynamic threshold buffer storage managing algorithm | |
CN1471264A (en) | Dynamic RAM quene regulating method based on dynamic packet transmsision | |
CN113132258A (en) | Time-sensitive network data transmission system and time delay analysis method | |
CN112055382A (en) | Service access method based on refined differentiation | |
CN110891027B (en) | Named data network transmission control method, device and equipment based on queue scheduling | |
CN109450823B (en) | Network large-capacity switching device based on aggregation type cross node | |
CN1299477C (en) | Method for implementing multiplex line speed ATM interface in multi-layer network exchange | |
Chuang et al. | A dynamic partial buffer sharing scheme for packet loss control in congested networks | |
CN1306759C (en) | Method for exchange system for inputting end of two-stage queueing structure |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C19 | Lapse of patent right due to non-payment of the annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee |