CN110011924B - Method and device for clearing cache congestion - Google Patents

Method and device for clearing cache congestion

Info

Publication number
CN110011924B
CN110011924B (application CN201810008255.0A)
Authority
CN
China
Prior art keywords
cache
linked list
module
dequeue
partition
Prior art date
Legal status
Active
Application number
CN201810008255.0A
Other languages
Chinese (zh)
Other versions
CN110011924A (en)
Inventor
张颖颖
李浩
郑利
Current Assignee
Sanechips Technology Co Ltd
Original Assignee
Sanechips Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sanechips Technology Co Ltd
Priority to CN201810008255.0A
Publication of CN110011924A
Application granted
Publication of CN110011924B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L47/323 Discarding or blocking control packets, e.g. ACK packets
    • H04L47/50 Queue scheduling

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method and a device for clearing cache congestion are provided. A recovery threshold is set, a sorting linked list is generated in enqueue order as packets are enqueued, and a fast address-recovery mechanism is triggered once cache usage reaches the recovery threshold. By searching the sorting linked list, this mechanism starts from the oldest packets (those resident in the cache the longest) and quickly recovers the addresses that have occupied the cache the longest, thereby guaranteeing the QoS performance of non-congested queues. When addresses are recovered, a whole cache partition (several packets under the same queue number) is discarded as a unit, so useless packets occupying cache space are cleared quickly, which further safeguards the enqueue and dequeue order of normal packets.

Description

Method and device for clearing cache congestion
Technical Field
The present invention relates to the field of data communications, and in particular to a method and an apparatus for clearing congestion in a cached packet queue.
Background
Continuously growing internet traffic places ever higher QoS requirements on products in the data communications field. The cache capacity of current data products is limited, however, and a congested queue can easily fill the cache, so QoS performance falls short of the standard.
The cache management scheme commonly adopted today is as follows: when a packet enters the cache, it is enqueued according to its congestion level (high or low priority) and its queue number; after enqueuing, the packet waits for a token, and token allocation serves high-priority packets first. High-priority packets in the cache can therefore dequeue quickly once they obtain tokens. Under this scheme, however, if high-priority packets are always present, the tokens are always assigned to the high priority. The result is that once low-priority packets enter the cache, they never obtain a token, never dequeue, and occupy cache space indefinitely. Over time the cache consumed by the low priority grows larger and larger, until the cache overflows or high-priority packets can no longer be enqueued, which degrades high-priority traffic and leaves QoS performance inadequate.
The common solutions at present are to age out or to flush the packets that never dequeue from the cache, that is, to discard all packets in the affected queue. Both aging and flushing do release the packets stuck in the cache, but they share two drawbacks: each acts only after the queue's cache usage reaches a certain threshold, and each processes one packet at a time, so cache recovery is relatively slow.
To solve the above problems, a method is urgently needed that recovers the cache in time: one that quickly recovers the addresses occupying the cache for a long time, thereby guaranteeing the QoS performance of non-congested queues.
Disclosure of Invention
To remedy the above shortcomings of the prior art, the present invention aims to provide a method and an apparatus for clearing cache congestion.
First, to achieve the above object, a method for clearing cache congestion is provided, comprising the following steps. Step one: receive a packet and, according to the packet's queue number or priority, call the corresponding cache partition to store it; record the order in which the cache partitions are called in a sorting linked list.
Step two: when the cache depth exceeds a recovery threshold, search for a cache partition with a dequeue exception according to the calling order of the cache partitions stored in the sorting linked list, empty that cache partition, and delete the corresponding cache partition entry from the sorting linked list.
Step three: repeat steps one and two in turn until no new packets are enqueued.
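As a rough illustration, the three steps can be modeled in Python. This is a sketch under stated assumptions: `RECLAIM_THRESHOLD`, the per-queue partition model, and the `authorized` token set are simplifications invented for illustration, not the claimed hardware implementation (a Python `OrderedDict` stands in for the sorting linked list).

```python
from collections import OrderedDict

RECLAIM_THRESHOLD = 4  # illustrative recovery threshold, in packets

class CongestionClearingCache:
    """Toy model of steps one and two; an OrderedDict plays the sorting linked list."""

    def __init__(self):
        self.partitions = OrderedDict()  # queue number -> stored packets, oldest queue first
        self.authorized = set()          # queue numbers currently holding dequeue authorization

    def depth(self):
        return sum(len(p) for p in self.partitions.values())

    def enqueue(self, queue_id, packet):
        # Step one: call the partition for this queue number and store the packet;
        # the partition moves to the tail of the sorting order (newest last).
        self.partitions.setdefault(queue_id, []).append(packet)
        self.partitions.move_to_end(queue_id)
        # Step two: once cache depth exceeds the recovery threshold, walk the
        # sorting order head-to-tail and reclaim the oldest partition whose
        # queue has a dequeue exception (here simplified to: no authorization).
        if self.depth() > RECLAIM_THRESHOLD:
            for qid in list(self.partitions):
                if qid not in self.authorized:
                    del self.partitions[qid]  # empty the partition, drop its sorting node
                    break
```

Step three is then simply the caller invoking `enqueue` in a loop until no packets remain.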
Further, in step two of the above method, the criteria for judging that a cache partition has a dequeue exception include: the cache partition has no queue number or priority; the queue number or priority corresponding to the cache partition holds no authorization; or the packets corresponding to the cache partition have not entered the dequeue linked list.
Further, the dequeue step in step two of the above method specifically includes: when the cache depth does not exceed the recovery threshold, moving the cache partitions into a dequeue linked list in the order of the sorting linked list, and dequeuing the corresponding packets from the dequeue linked list after authorization is obtained.
Specifically, in step one of the above method, storing the packet includes: calling the corresponding cache partition according to the packet's queue number or priority;
if the cache partition under the packet's queue number or priority is not yet full, storing the packet into that unfilled cache partition in sequence;
and if the cache partitions under the packet's queue number or priority are full, establishing a new cache partition and storing the packet there in sequence.
Specifically, the sorting linked list in the above method adopts a doubly linked list structure.
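A minimal sketch of why a doubly linked structure suits the sorting linked list: a node anywhere in the middle can be unlinked in O(1) while the order of the remaining nodes is preserved. The `SortingList` class and its method names are illustrative assumptions, not part of the claimed design.

```python
class Node:
    """One sorting-linked-list node; it tracks a cache partition."""
    def __init__(self, partition_id):
        self.partition_id = partition_id
        self.prev = None
        self.next = None

class SortingList:
    """Doubly linked list: oldest partition at the head, newest at the tail."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, node):
        # Partitions of newly enqueued packets always join at the tail.
        node.prev, node.next = self.tail, None
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node

    def unlink(self, node):
        # O(1) removal of any node, including a middle one; the neighbours
        # relink directly, so the first-come ordering of the rest is untouched.
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
```

With a singly linked list the same deletion would need a traversal to find the predecessor; the `prev` pointer is what makes the middle deletion constant-time.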
Second, to achieve the above object, an apparatus for clearing cache congestion is also provided, comprising an enqueue module, a cache module, and a dequeue module connected in sequence, and further comprising a linked-list management module. The input of the linked-list management module is connected to both the enqueue module and the cache module, and its control port is connected to the dequeue module.
The linked-list management module is configured to: call a cache partition in the cache module to store each packet according to the queue number or priority of the packets in the enqueue module, and update the sorting linked list in enqueue order; and to search the state of the cache partitions in the dequeue module in the order of the sorting linked list, control the cache module to release any cache partition with a dequeue exception, and delete the corresponding node from the sorting linked list.
Further, in the above apparatus, the sorting linked list in the linked-list management module is a doubly linked list, and the cache partition corresponding to a newly enqueued packet is updated to the tail of the sorting linked list.
Specifically, in the above apparatus, the cache module is divided into multiple groups of fixed-size storage units;
each cache partition comprises N groups of storage units with contiguous addresses, where N ≥ 1, and each cache partition can store at least one packet;
each queue number or priority corresponds to at least one of the cache partitions.
Further, in the above apparatus, the dequeue module includes a discard linked list and a dequeue linked list;
the dequeue linked list stores the authorized cache partition closest to the head of the sorting linked list, and extracts and dequeues the corresponding packets from the cache module;
and the discard linked list, once the depth of the cache module reaches the recovery threshold, stores the dequeue-exception cache partition closest to the head of the sorting linked list, and extracts and discards the corresponding packets from the cache module.
Advantageous effects
In the present invention, a recovery threshold is set, and the fast address-recovery mechanism is triggered when cache usage reaches it. First, by searching the sorting linked list, the oldest group of packets (that is, several packets stored in the same cache partition by queue number and priority, resident in the cache the longest) is reclaimed, so the addresses occupying the cache the longest are recovered early and the QoS performance of non-congested queues is guaranteed.
Further, to keep cache utilization balanced once the recovery threshold is reached, the invention keeps the dequeue rate consistent with the enqueue rate by searching the dequeue linked list: for each enqueued packet, one authorized packet dequeues. When packets that have occupied cache resources for a long time must be discarded, however, the invention looks up the queue number or priority corresponding to the packet and directly discards all packets in the whole cache partition under that queue number or priority. Useless packets are thus discarded quickly, and cache utilization is effectively guaranteed.
Meanwhile, at the cache-partition allocation stage, the invention checks in advance whether the cache partition under the packet's queue number or priority is full; a new cache partition is allocated only when the current one is completely filled. This reduces how often a cache application must be made at enqueue time and further improves cache utilization.
Furthermore, the sorting linked list of the invention adopts a doubly linked list structure, so after a middle node is deleted directly, the order of the following nodes is determined immediately. Since the cache partition of each newly enqueued packet is updated to the tail of the sorting linked list, the packets that have occupied cache resources the longest can be obtained directly from the list head and conveniently discarded.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of the method for clearing cache congestion according to the present invention;
FIG. 2 is a block diagram of the apparatus for clearing cache congestion according to the present invention.
Detailed Description
The preferred embodiments of the present invention are described below in conjunction with the accompanying drawings; it should be understood that they serve to illustrate and explain the invention, not to limit it.
Fig. 1 shows the method for clearing cache congestion according to the present invention, which comprises the following steps.
Step one: receive a packet and, according to the packet's queue number or priority, call the corresponding cache partition to store it; at the same time, record the order in which each cache partition is called in a sorting linked list (this step is the basis for the fast address-recovery mechanism implemented later).
Step two: when the cache depth exceeds the recovery threshold, the discard linked list executes the fast address-recovery mechanism: following the order of the sorting linked list (that is, the stored order of the cache partitions), find a cache partition with a dequeue exception and move it into the discard linked list, empty that partition, and delete the corresponding entry from the sorting linked list. (After a group of packets that have long failed to dequeue is deleted, the cache depth is updated and subsequent packets can enter and use cache resources in time.) Meanwhile, when the cache depth does not exceed the recovery threshold, the dequeue step is executed: move cache partitions into the dequeue linked list in the order of the sorting linked list, dequeue the corresponding packets from the dequeue linked list according to the RR rule or the SP rule after authorization is obtained, and delete the corresponding cache partitions from the sorting linked list.
Step three: repeat steps one and two in turn until no new packets are enqueued.
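The branch in step two (discard path above the threshold, dequeue path below it) can be sketched as one scheduling decision. All names here (`schedule`, the tuple layout of `sorting_list`) are hypothetical simplifications of the mechanism described above.

```python
RECLAIM_THRESHOLD = 3  # illustrative

def schedule(sorting_list, depth, authorized, dequeue_list, discard_list):
    """One decision of step two.

    sorting_list: list of (partition_id, queue_id), oldest first;
    authorized:   queue numbers currently holding a token.
    """
    if depth > RECLAIM_THRESHOLD:
        # Fast address recovery: the oldest partition with a dequeue
        # exception moves into the discard linked list.
        for entry in sorting_list:
            if entry[1] not in authorized:
                sorting_list.remove(entry)
                discard_list.append(entry[0])
                return "discard", entry[0]
    # Normal path: the oldest authorized partition moves into the dequeue linked list.
    for entry in sorting_list:
        if entry[1] in authorized:
            sorting_list.remove(entry)
            dequeue_list.append(entry[0])
            return "dequeue", entry[0]
    return "idle", None
```

Note that when the threshold is exceeded but every old partition is authorized, the sketch falls through to the normal dequeue path, matching the idea that recovery never blocks normal traffic.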
Specifically, in step two of the above method, judging that a cache partition has a dequeue exception includes: the cache partition has no queue number or priority, the queue number or priority corresponding to the cache partition holds no authorization, or the packets corresponding to the cache partition have not entered the dequeue linked list. If a queue number holds no authorization, the dequeue condition is not met, so the partition cannot be in the dequeue linked list; it can be added back only after the condition is met. If the queue is empty, it has no packets, so it should not be in the dequeue linked list either. To keep cache utilization balanced after the recovery threshold is reached, the enqueue rate and the dequeue rate are kept consistent: packets enter one by one, and normal packets dequeue one by one, while packets to be discarded are dropped in whole chunks, several packets at a time, which preserves cache utilization.
Further, to reduce how often a cache application is made when a packet is enqueued, the step of storing the packet specifically includes:
associating at least one cache partition with each queue number or priority;
and, if the cache partition (chunk) under the packet's queue number or priority is not full, storing the packet into an idle storage unit (block) of that partition, immediately after the previous packet. Only when the cache partition is completely occupied is a new cache partition allocated for a new packet, with packets again stored in sequence. In this way each cache partition can hold one or more packets (the exact number depends on the actual packet lengths), and when a chunk is discarded, several packets are discarded at once, so cache addresses are recovered quickly.
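A sketch of this allocation rule: a queue keeps filling its current chunk and only opens a new one when the chunk cannot hold the next packet. The block and chunk sizes follow the embodiment's figures (256-bit blocks, N = 8), but the byte-level layout is an assumption, and packets longer than one chunk are ignored for simplicity.

```python
BLOCK_BYTES = 256 // 8       # one storage unit (block) of 256 bits
BLOCKS_PER_CHUNK = 8         # N = 8 contiguous blocks per cache partition (chunk)
CHUNK_BYTES = BLOCK_BYTES * BLOCKS_PER_CHUNK  # 256 bytes per chunk

class QueueStore:
    """Chunks owned by one queue number; packets are packed back-to-back."""

    def __init__(self):
        self.chunks = []  # each chunk: {"used": bytes occupied, "packets": lengths}

    def store(self, packet_len):
        # Open a new chunk only when none exists or the current one cannot
        # hold the packet; otherwise keep filling the current chunk.
        if not self.chunks or self.chunks[-1]["used"] + packet_len > CHUNK_BYTES:
            self.chunks.append({"used": 0, "packets": []})
        chunk = self.chunks[-1]
        chunk["packets"].append(packet_len)
        chunk["used"] += packet_len
        return len(self.chunks) - 1  # index of the chunk holding this packet
```

Because several packets share a chunk, discarding one chunk later releases all of them in a single operation.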
Specifically, when a packet obtains authorization, the method dequeues the corresponding packet from the dequeue linked list according to the RR rule or the SP rule. The RR rule dequeues queue numbers in cyclic order. The SP rule dequeues according to the priority of the queue number: if a high-priority queue has packets, it dequeues first; otherwise low-priority packets dequeue. Dequeuing is carried out through the dequeue linked list, which is divided by queue number; in the absence of back pressure, the per-queue dequeue linked lists dequeue according to the RR rule or the SP rule. When a queue number has no packets or no authorization, that queue's dequeue linked list entry is deleted.
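The two dequeue rules can be sketched as selection functions. These are textbook round-robin and strict-priority selectors, not the patent's exact implementation; the convention that a smaller key means a higher priority is an assumption.

```python
from collections import deque

def sp_select(queues):
    """SP rule: always serve the highest-priority non-empty queue
    (assumption: a smaller key means a higher priority)."""
    for prio in sorted(queues):
        if queues[prio]:
            return queues[prio].popleft()
    return None  # no queue has packets

def rr_select(queues, order):
    """RR rule: visit queue numbers in cyclic order, skipping empty queues."""
    for _ in range(len(order)):
        qid = order[0]
        order.rotate(-1)  # the next call starts from the following queue
        if queues[qid]:
            return queues[qid].popleft()
    return None
```

Under SP a persistent high-priority backlog starves the low priority, which is exactly the congestion scenario the recovery mechanism above is designed to clear.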
Specifically, the sorting linked list in the above method adopts a doubly linked list structure, which conveniently keeps the earliest arrivals at the head even when nodes are deleted from the middle.
Second, referring to fig. 2, to achieve the above object, an apparatus for clearing cache congestion is also provided, comprising an enqueue module, a cache module, and a dequeue module connected in sequence, and further comprising a linked-list management module.
The input of the linked-list management module is connected to both the enqueue module and the cache module, and its control port is connected to the dequeue module.
The linked-list management module is used to find the packets that have gone longest without dequeuing in the cache module: it calls a cache partition in the cache module to store each packet according to the queue number or priority of the packets in the enqueue module and updates the sorting linked list in enqueue order; and it searches the state of the cache partitions in the dequeue module in the order of the sorting linked list, controls the cache module to release any cache partition with a dequeue exception (the discard process), and deletes the corresponding node from the sorting linked list.
The enqueue module: extracts the queue number and the congestion level (i.e. high or low priority) from each enqueued packet.
The cache module: manages the cache depth according to the queue numbers and to linked-list applications and deletions. When the recovery threshold (which governs the cache depth) is set, high-priority traffic must be taken into account, to ensure that non-congested traffic is unaffected.
The dequeue module: dequeues both normal packets and discarded packets; a discarded packet is dequeued only when no normal packet is dequeuing.
Further, in the above apparatus, the sorting linked list in the linked-list management module is a doubly linked list, and the cache partition of each newly enqueued packet is updated to the tail of the sorting linked list.
Specifically, in the above apparatus, the cache module is divided into multiple groups of fixed-size storage units (e.g. 256 bits each);
each cache partition comprises N groups of storage units with contiguous addresses, where N ≥ 1 (N = 8 in this embodiment), and packets are stored block by block, so each cache partition can hold at least one packet;
each queue number or priority corresponds to at least one cache partition (chunk), and when a dequeue exception is discarded, the cache partition (chunk) is the unit of discard: several packets in the same cache partition are discarded at once, so recovery is more efficient.
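Why chunk-unit discard recovers addresses quickly, as a sketch: one operation returns every block of the chunk to the free pool and drops every packet in it. The `chunk_table` and `free_blocks` structures are hypothetical stand-ins for the cache module's bookkeeping.

```python
def discard_chunk(chunk_table, free_blocks, chunk_id):
    """Discard one cache partition (chunk) as a unit: all packets inside are
    dropped together and all of the chunk's block addresses are recovered
    in a single step, instead of one packet at a time."""
    chunk = chunk_table.pop(chunk_id)
    free_blocks.extend(chunk["blocks"])   # bulk address recovery
    return len(chunk["packets"])          # number of packets cleared at once
```

Compared with the per-packet aging and flushing of the background section, one call here frees N blocks and several packets in a single step.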
Further, in the above apparatus, the dequeue module includes a discard linked list and a dequeue linked list.
The dequeue linked list stores the cache partition, closest to the head of the sorting linked list, whose queue number or priority has obtained authorization, and extracts and dequeues the corresponding packets from the cache module. The discard linked list, once the depth of the cache module reaches the recovery threshold, stores the cache partition closest to the head of the sorting linked list that has a dequeue exception (i.e. is not in the dequeue linked list), and extracts and discards the corresponding packets from the cache module.
The advantage of the technical scheme of the invention: a recovery threshold is set, a sorting linked list is generated in enqueue order as packets are enqueued, and a fast address-recovery mechanism is triggered after cache usage reaches the recovery threshold. By searching the sorting linked list, this mechanism starts from the oldest packets (those resident in the cache the longest) and quickly recovers the addresses that have occupied the cache the longest, guaranteeing the QoS performance of non-congested queues. When addresses are recovered, a whole cache partition (several packets under the same queue number) is discarded as a unit, so useless packets occupying cache space are cleared quickly, further safeguarding the enqueue and dequeue order of normal packets.
Those of ordinary skill in the art will understand that although the present invention has been described in detail with reference to the foregoing embodiments, the embodiments described above may still be modified, or their elements replaced by equivalents. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (8)

1. A method for clearing cache congestion, the method comprising the following steps:
step one, receiving a packet, and calling a corresponding cache partition to store the packet according to the packet's queue number or priority; recording the order in which each cache partition is called in a sorting linked list;
step two, when the cache depth exceeds a recovery threshold, searching for a cache partition with a dequeue exception according to the calling order of the cache partitions stored in the sorting linked list, emptying that cache partition, and deleting the corresponding cache partition entry from the sorting linked list;
step three, repeating steps one and two in turn until no new packets are enqueued;
wherein, in step two, the criteria for judging a dequeue exception of a cache partition comprise: the cache partition has no queue number or priority, or the queue number or priority corresponding to the cache partition holds no authorization, or the packets corresponding to the cache partition have not entered the dequeue linked list.
2. The method for clearing cache congestion according to claim 1, wherein the dequeue step in step two comprises:
moving the cache partitions into a dequeue linked list in the order of the sorting linked list, and dequeuing the corresponding packets from the dequeue linked list after authorization is obtained.
3. The method for clearing cache congestion according to claim 1, wherein in step one the step of storing the packet comprises:
calling the corresponding cache partition according to the packet's queue number or priority;
if the cache partition under the packet's queue number or priority is not full, storing the packet into the unfilled cache partition in sequence;
and if the cache partitions under the packet's queue number or priority are full, establishing a new cache partition and storing the packet there in sequence.
4. The method for clearing cache congestion according to any one of claims 1 to 3, wherein the sorting linked list adopts a doubly linked list structure.
5. An apparatus for clearing cache congestion, comprising an enqueue module, a cache module, and a dequeue module connected in sequence, characterized by further comprising a linked-list management module;
the input of the linked-list management module being connected to both the enqueue module and the cache module, and its control port being connected to the dequeue module;
the linked-list management module being configured to: call a cache partition in the cache module to store each packet according to the queue number or priority of the packets in the enqueue module, and update a sorting linked list in enqueue order; search the state of each cache partition in the dequeue module in the order of the sorting linked list, control the cache module to release any cache partition with a dequeue exception, and delete the corresponding node from the sorting linked list; wherein the criteria for judging a dequeue exception of a cache partition comprise: the cache partition has no queue number or priority, or the queue number or priority corresponding to the cache partition holds no authorization, or the packets corresponding to the cache partition have not entered the dequeue linked list.
6. The apparatus according to claim 5, wherein the sorting linked list in the linked-list management module has a doubly linked list structure, and the cache partition corresponding to a newly enqueued packet is updated to the tail of the sorting linked list.
7. The apparatus for clearing cache congestion according to claim 6, wherein the cache module is divided into multiple groups of fixed-size storage units;
each cache partition comprises N groups of storage units with contiguous addresses, where N ≥ 1;
and each queue number or priority corresponds to at least one of the cache partitions.
8. The apparatus for clearing cache congestion according to claim 6, wherein the dequeue module comprises a discard linked list and a dequeue linked list;
the dequeue linked list is used to store the authorized cache partition closest to the head of the sorting linked list, and to extract and dequeue the corresponding packets from the cache module;
and the discard linked list is used, when the depth of the cache module reaches the recovery threshold, to store the dequeue-exception cache partition closest to the head of the sorting linked list, and to extract and discard the corresponding packets from the cache module.
CN201810008255.0A 2018-01-04 2018-01-04 Method and device for clearing cache congestion Active CN110011924B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810008255.0A CN110011924B (en) 2018-01-04 2018-01-04 Method and device for clearing cache congestion

Publications (2)

Publication Number Publication Date
CN110011924A CN110011924A (en) 2019-07-12
CN110011924B true CN110011924B (en) 2023-03-10

Family

ID=67164363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810008255.0A Active CN110011924B (en) 2018-01-04 2018-01-04 Method and device for clearing cache congestion

Country Status (1)

Country Link
CN (1) CN110011924B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111698160A (en) * 2019-12-27 2020-09-22 国网上海市电力公司 Ring network system, and data processing method and device of nodes in network system
CN113973085B (en) * 2020-07-22 2023-10-20 华为技术有限公司 Congestion control method and device
CN112835818A (en) * 2021-02-01 2021-05-25 芯河半导体科技(无锡)有限公司 Method for recovering address of buffer space of flow queue

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5043885A (en) * 1989-08-08 1991-08-27 International Business Machines Corporation Data cache using dynamic frequency based replacement and boundary criteria
CN1855881A (en) * 2005-04-28 2006-11-01 华为技术有限公司 Method for dynamically sharing space of memory
CN101551736A (en) * 2009-05-20 2009-10-07 杭州华三通信技术有限公司 Cache management device and method based on address pointer linked list
CN101834801A (en) * 2010-05-20 2010-09-15 哈尔滨工业大学 Data caching and sequencing on-line processing method based on cache pool
CN104516828A (en) * 2013-09-27 2015-04-15 伊姆西公司 Method and device for removing caching data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231747A (en) * 2011-07-18 2011-11-02 杭州华三通信技术有限公司 Method and equipment for obtaining attack message
CN105072048B (en) * 2015-09-24 2018-04-10 浪潮(北京)电子信息产业有限公司 A kind of packet storage dispatching method and device

Also Published As

Publication number Publication date
CN110011924A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
US10341260B2 (en) Early queueing network device
CN109479032B (en) Congestion avoidance in network devices
US7970888B2 (en) Allocating priority levels in a data flow
US7535835B2 (en) Prioritizing data with flow control
US6810426B2 (en) Methods and systems providing fair queuing and priority scheduling to enhance quality of service in a network
US11637786B1 (en) Multi-destination traffic handling optimizations in a network device
KR101421240B1 (en) A router and queue process method thereof
CN110011924B (en) Method and device for clearing cache congestion
WO2022016889A1 (en) Congestion control method and device
US7756977B2 (en) Random early detect and differential packet aging flow control in switch queues
US8174985B2 (en) Data flow control
EP2641362A1 (en) Dynamic flow redistribution for head line blocking avoidance
WO2006091175A1 (en) Method and apparatus for buffer management in shared memory packet processors
CN113064738B (en) Active queue management method based on summary data
EP1327336B1 (en) Packet sequence control
JP7241194B2 (en) MEMORY MANAGEMENT METHOD AND APPARATUS
US20080165689A9 (en) Information flow control in a packet network based on variable conceptual packet lengths
US7573817B2 (en) Policing data based on data load profile
CN112055382A (en) Service access method based on refined differentiation
WO2002030061A1 (en) Filtering data flows
JP3587080B2 (en) Packet buffer management device and packet buffer management method
CN112311678B (en) Method and device for realizing message distribution
AU7243400A (en) Method and apparatus for call buffer protection in the event of congestion
CN111277513B (en) PQ queue capacity expansion realization method, device, equipment and storage medium
CN116155812A (en) Message processing method and network equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant