CN112804156A - Congestion avoidance method and device and computer readable storage medium - Google Patents


Info

Publication number
CN112804156A
Authority
CN
China
Prior art keywords
chip cache
threshold
received message
port
message
Prior art date
Legal status
Withdrawn
Application number
CN201911105574.4A
Other languages
Chinese (zh)
Inventor
梁卫敬
Current Assignee
Sanechips Technology Co Ltd
Original Assignee
Sanechips Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Sanechips Technology Co Ltd filed Critical Sanechips Technology Co Ltd
Priority to CN201911105574.4A
Publication of CN112804156A
Legal status: Withdrawn

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2425: Traffic characterised by specific attributes for supporting services specification, e.g. SLA
    • H04L 47/2433: Allocation of priorities to traffic types
    • H04L 47/32: Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6275: Queue scheduling based on priority

Abstract

Embodiments of the invention disclose a congestion avoidance method and apparatus and a computer-readable storage medium. The method includes: when it is determined that a received message does not need to be discarded and that it needs to be moved to an off-chip cache, moving the received message to the off-chip cache. The embodiments use the off-chip cache to relieve pressure on the on-chip cache without enlarging the on-chip cache, so fewer messages are discarded and less message data is lost when congestion occurs, improving the QoS of the service while avoiding congestion.

Description

Congestion avoidance method and device and computer readable storage medium
Technical Field
Embodiments of the present invention relate to, but are not limited to, the field of communication network devices, and in particular to a congestion avoidance method and apparatus and a computer-readable storage medium.
Background
For network traffic, network resources are always limited. To balance traffic or guarantee its Quality of Service (QoS), the concept of traffic management is introduced, covering congestion avoidance, queue management, traffic shaping, congestion management, and the like.
Traffic management is commonly implemented in switching network chips; a typical processing flow comprises the following steps:
1. A congestion avoidance (CGAVD) module receives a message descriptor sent by a preceding-stage unit and extracts message information from the descriptor.
2. The CGAVD module decides whether to discard the message according to the message information and the internally maintained queue depth.
3. A Queue Management Unit (QMU) manages messages through a linked list: it enqueues received messages, applies to the congestion management module for queue authorization, dequeues messages from authorized queues, and reads the messages out of the cache.
4. The congestion management module distributes authorization according to a configured policy, such as strict priority, weighted round robin, or weighted fair queuing.
5. The congestion management module also performs congestion management: based on the bandwidth set by the user, it controls the authorization issue rate of each queue through a shaper, thereby controlling the queue traffic.
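As a purely illustrative sketch of step 4's authorization distribution, a strict-priority pick among non-empty queues might look like the following (names and structure are hypothetical, not the patent's implementation):

```python
def grant_authorization(queues):
    """Pick the next queue to authorize under strict priority.

    `queues` maps a priority value (lower value = higher priority) to a
    list of pending message descriptors; the highest-priority non-empty
    queue receives the grant. Illustrative only.
    """
    for priority in sorted(queues):
        if queues[priority]:
            return priority
    return None  # nothing pending: no grant this cycle

# Example: priority 0 outranks priority 2 even with fewer messages queued.
pending = {2: ["msgA", "msgB"], 0: ["msgC"]}
assert grant_authorization(pending) == 0
```

Weighted round robin or weighted fair queuing would replace this loop with a credit- or weight-based selection; the grant interface stays the same.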
The approaches above improve traffic-management performance by applying different algorithms to the queue management scheme, the authorization distribution policy, the shaping policy, and so on. For traffic management itself, however, internal memory has a considerable influence on QoS; yet for a single chip, the internal cache cannot be enlarged indefinitely, for reasons of power consumption and cost. At present, cache pressure is relieved, and QoS improved, mainly by discarding messages: randomly, by adaptively modifying the discard threshold, or according to how long a message has waited in its queue. In a large-bandwidth scenario with many services, however, relying on discarding alone can lose a large amount of message data when network congestion occurs, degrading the QoS of the services.
Disclosure of Invention
Embodiments of the present invention provide a congestion avoidance method and apparatus, and a computer-readable storage medium, which can improve QoS of a service while avoiding congestion.
An embodiment of the present invention provides a congestion avoidance method, including:
When it is determined that a received message does not need to be discarded and that it needs to be moved to an off-chip cache, moving the received message to the off-chip cache.
In this embodiment of the present invention, before moving the received message to the off-chip cache, the method further includes:
determining that the cached-message count in the off-chip cache is smaller than the move-discard threshold of the flow queue in which the received message resides.
In this embodiment of the present invention, when the cached-message count in the off-chip cache is greater than or equal to the move-discard threshold of the flow queue in which the received message resides, the method further includes: discarding the received message.
In the embodiment of the present invention, the move-discard threshold of a high-priority flow queue is greater than the move-discard threshold of a low-priority flow queue.
In this embodiment of the present invention, when it is determined that the received message does not need to be discarded and does not need to be moved to the off-chip cache, the method further includes:
putting the received message into an on-chip cache.
In the embodiment of the invention, whether the received message needs to be moved to an off-chip cache is determined according to whether a first condition is satisfied;
wherein the first condition comprises any one or more of:
the total on-chip cache count is greater than or equal to a system-level move threshold;
the port cache count of the port where the received message is located is greater than or equal to a port-level move threshold;
the queue depth of the flow queue where the received message is located is greater than or equal to a flow-level move threshold.
In this embodiment of the present invention, determining whether the received message needs to be moved to the off-chip cache according to whether the first condition is satisfied includes:
when one or more of the first conditions are satisfied, determining that the received message needs to be moved to the off-chip cache;
when none of the first conditions is satisfied, determining that the received message does not need to be moved to the off-chip cache.
In the embodiment of the present invention, the port-level move threshold of a high-priority port is greater than the port-level move threshold of a low-priority port.
In the embodiment of the present invention, the flow-level move threshold of a high-priority flow queue is greater than the flow-level move threshold of a low-priority flow queue.
An embodiment of the present invention provides a congestion avoidance apparatus, including a processor and a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed by the processor, any one of the congestion avoidance methods is implemented.
An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the above congestion avoidance methods are implemented.
One embodiment of the invention comprises: when it is determined that a received message does not need to be discarded and that it needs to be moved to an off-chip cache, moving the received message to the off-chip cache. The embodiment uses the off-chip cache to relieve pressure on the on-chip cache without enlarging the on-chip cache, so fewer messages are discarded and less message data is lost when congestion occurs, improving the QoS of the service while avoiding congestion.
In another embodiment of the present invention, before moving the received message to the off-chip cache, the method further includes: determining that the cached-message count in the off-chip cache is smaller than the move-discard threshold of the flow queue in which the received message resides. Because the off-chip cache is also limited, comparing this count against the threshold determines whether the received message is discarded instead, avoiding problems caused by that limit, such as back pressure from off-chip cache exhaustion making it difficult to enqueue messages.
In another embodiment of the present invention, the move-discard threshold of a high-priority flow queue is greater than that of a low-priority flow queue, so that low-priority messages are discarded first and the QoS of high-priority messages is guaranteed preferentially.
In another embodiment of the present invention, the port-level move threshold of a high-priority port is greater than that of a low-priority port, so that low-priority messages are moved to the off-chip cache first, again preferentially guaranteeing the QoS of high-priority messages.
In another embodiment of the present invention, the flow-level move threshold of a high-priority flow queue is greater than that of a low-priority flow queue, so that low-priority messages are moved to the off-chip cache first while high-priority messages stay in the on-chip cache as long as possible, preferentially guaranteeing their QoS.
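As a purely illustrative sketch of these priority-ordered thresholds (the numeric values, names, and structure below are hypothetical, not taken from the patent):

```python
# Hypothetical per-priority thresholds, in 384-byte blocks. Giving the
# high-priority class larger thresholds means low-priority traffic is
# moved off-chip, and ultimately discarded, first.
MOVE_DISCARD_THR = {"high": 80, "low": 40}   # off-chip move-discard threshold
PORT_MOVE_THR    = {"high": 60, "low": 30}   # port-level move threshold
FLOW_MOVE_THR    = {"high": 25, "low": 10}   # flow-level move threshold

def classes_discarding(offchip_count):
    """Priority classes whose messages are discarded at this off-chip fill level."""
    return [p for p, t in MOVE_DISCARD_THR.items() if offchip_count >= t]

# With 50 blocks cached off-chip, only low-priority messages are dropped.
assert classes_discarding(50) == ["low"]
```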
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of embodiments of the invention. The objectives and other advantages of the embodiments of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the examples of the invention serve to explain the principles of the embodiments of the invention and not to limit the embodiments of the invention.
Fig. 1 is a flowchart of a congestion avoidance method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a congestion avoidance method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a congestion avoidance apparatus according to another embodiment of the present invention;
fig. 4 is a schematic structural diagram of a congestion avoidance apparatus according to an example of the embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments of the present invention may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Referring to fig. 1, an embodiment of the present invention provides a congestion avoidance method, including:
Step 100: when it is determined that the received message does not need to be discarded and that it needs to be moved to an off-chip cache, move the received message to the off-chip cache.
In another embodiment of the present invention, when it is determined that the received message does not need to be discarded and does not need to be moved to the off-chip cache, the method further includes:
putting the received message into an on-chip cache.
In another embodiment of the present invention, the received message is discarded when it is determined that the received message needs to be discarded.
In the embodiment of the invention, after a message is received, the internally maintained queue depth of the on-chip cache is added to the length of the current message to obtain a new queue depth, and the new queue depth is compared with a discard threshold. When the new queue depth is greater than or equal to the discard threshold, it is determined that the received message needs to be discarded; when the new queue depth is smaller than the discard threshold, it is determined that the received message does not need to be discarded.
In an illustrative example, the queue depth is a statistic of the message sizes in a flow queue, measured in blocks, where 1 block = 384 bytes; when a message is enqueued or dequeued, the depth of the flow queue in which it resides is updated according to the message size.
In an illustrative example, when an off-chip cache is present, the discard threshold may be set higher than it would be without one, so that as few received messages as possible are discarded.
In an illustrative example, when there is no off-chip cache, the discard threshold may be set lower, so that messages that need discarding are identified effectively.
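The drop check described above can be sketched as follows; this is a minimal illustration with hypothetical names and threshold values, using the 384-byte block unit from the description:

```python
def needs_discard(queue_depth_blocks, msg_len_bytes, discard_threshold_blocks):
    """Return True if the received message should be discarded.

    Sketch of the check described above (names are illustrative): the
    current flow-queue depth plus the incoming message, both measured in
    384-byte blocks, is compared against the discard threshold.
    """
    BLOCK = 384  # 1 block = 384 bytes, per the description
    msg_blocks = -(-msg_len_bytes // BLOCK)  # ceiling division
    return queue_depth_blocks + msg_blocks >= discard_threshold_blocks

# With an off-chip cache the threshold can be set higher, so fewer
# messages are dropped; e.g. a 1500-byte message occupies 4 blocks.
assert needs_discard(96, 1500, 100) is True    # 96 + 4 reaches the threshold
assert needs_discard(95, 1500, 100) is False   # 95 + 4 stays below it
```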
In the embodiment of the invention, whether the received message needs to be moved to an off-chip cache is determined according to whether a first condition is satisfied;
wherein the first condition comprises any one or more of:
the total on-chip cache count is greater than or equal to a system-level move threshold;
the port cache count of the port where the received message is located is greater than or equal to a port-level move threshold;
the queue depth of the flow queue where the received message is located is greater than or equal to a flow-level move threshold.
Specifically, determining whether the received message needs to be moved to the off-chip cache according to whether the first condition is satisfied includes:
when one or more of the first conditions are satisfied, determining that the received message needs to be moved to the off-chip cache;
when none of the first conditions is satisfied, determining that the received message does not need to be moved to the off-chip cache.
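The any-one-condition check can be sketched as follows (a simplification with hypothetical names; all counts are in blocks):

```python
def needs_move_offchip(total_onchip, port_count, queue_depth,
                       sys_thr, port_thr, flow_thr):
    """Decide whether a received message is moved to the off-chip cache.

    Sketch of the "first condition" above: the message is moved if ANY
    of the system-level, port-level, or flow-level move thresholds is
    reached. Names are illustrative.
    """
    return (total_onchip >= sys_thr
            or port_count >= port_thr
            or queue_depth >= flow_thr)

# One threshold hit suffices: here only the flow-level check fires.
assert needs_move_offchip(500, 10, 5, 1000, 200, 5) is True
assert needs_move_offchip(500, 10, 4, 1000, 200, 5) is False
```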
In an illustrative example, the system-level move threshold is greater than the port-level move threshold, which in turn is greater than the flow-level move threshold; all three may be set according to service conditions. The flow queue in which a message resides is attached to a certain port, and multiple flow queues may be attached to the same port.
In the embodiment of the present invention, the total on-chip cache count is the sum of the sizes of all messages cached on-chip plus the size of the received message, in blocks (1 block = 384 bytes); when a message is enqueued, dequeued, discarded, or moved, the total on-chip cache count is updated according to the message size.
The port cache count of the port where the received message is located is the sum of the sizes of all messages in the flow queues corresponding to that port plus the size of the received message, in blocks (1 block = 384 bytes); when a message is enqueued, dequeued, discarded, or moved, the port cache count of its port is updated according to the message size.
The queue depth of the flow queue where the received message is located is the sum of the sizes of all messages in that flow queue plus the size of the received message, in blocks (1 block = 384 bytes); when a message enters or leaves the flow queue, the queue depth is updated according to the message size.
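The three counts and their updates might be kept, for illustration only, with bookkeeping like the following (class and field names are hypothetical):

```python
BLOCK = 384  # bytes per block, per the description

def blocks(nbytes):
    """Message size in 384-byte blocks (ceiling)."""
    return -(-nbytes // BLOCK)

class CacheCounters:
    """Illustrative bookkeeping for the three counts described above.

    Tracks the total on-chip count, per-port counts, and per-flow queue
    depths, all in blocks. Structure and names are hypothetical, not the
    patent's data layout.
    """
    def __init__(self):
        self.total = 0
        self.per_port = {}
        self.per_flow = {}

    def on_enqueue(self, port, flow, msg_bytes):
        b = blocks(msg_bytes)
        self.total += b
        self.per_port[port] = self.per_port.get(port, 0) + b
        self.per_flow[flow] = self.per_flow.get(flow, 0) + b

    def on_dequeue(self, port, flow, msg_bytes):
        b = blocks(msg_bytes)
        self.total -= b
        self.per_port[port] -= b
        self.per_flow[flow] -= b

c = CacheCounters()
c.on_enqueue(port=1, flow=7, msg_bytes=1500)   # 4 blocks
c.on_enqueue(port=1, flow=8, msg_bytes=64)     # 1 block
assert (c.total, c.per_port[1], c.per_flow[7]) == (5, 5, 4)
```

Discard and move events would decrement the on-chip counters the same way dequeue does.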
In the embodiment of the present invention, the port-level move thresholds of different ports may be the same or different. In an illustrative example, setting the port-level move threshold of a high-priority port greater than that of a low-priority port causes low-priority messages to be moved to the off-chip cache first, while high-priority messages stay on-chip as long as possible, where processing is fast, guaranteeing their QoS as far as possible.
In the embodiment of the present invention, the flow-level move thresholds of different flow queues may likewise be the same or different. In an illustrative example, setting the flow-level move threshold of a high-priority flow queue greater than that of a low-priority flow queue has the same effect at flow granularity: low-priority messages are moved to the off-chip cache first, and high-priority messages remain in the on-chip cache as long as possible, guaranteeing their QoS as far as possible.
The embodiment of the invention thus uses the off-chip cache to relieve pressure on the on-chip cache without enlarging the on-chip cache, so fewer messages are discarded, less message data is lost when congestion occurs, and the QoS of the service is improved while congestion is avoided.
In another embodiment of the present invention, before moving the received message to the off-chip cache, the method further includes:
determining that the cached-message count in the off-chip cache is smaller than the move-discard threshold of the flow queue in which the received message resides.
In another embodiment of the present invention, when the cached-message count in the off-chip cache is greater than or equal to the move-discard threshold of the flow queue in which the received message resides, the method further includes: discarding the received message.
In one illustrative example, the cached-message count is the size of the messages cached in the off-chip cache plus the size of the received message, in blocks, where 1 block = 384 bytes.
In the embodiment of the present invention, the move-discard thresholds of different flow queues may be the same or different; the embodiment does not limit this. In an illustrative example, the move-discard threshold of a high-priority flow queue may be set greater than that of a low-priority flow queue, so that low-priority messages are discarded first and the QoS of high-priority messages is guaranteed preferentially.
Because the off-chip cache is also limited, comparing the size of the messages cached off-chip with the move-discard threshold of the flow queue in which the received message resides determines whether the received message is discarded, avoiding problems caused by that limit, such as back pressure from off-chip cache exhaustion making it difficult to enqueue messages.
The implementation of the congestion avoidance method according to the embodiment of the present invention is described below through an example. The example is given only for convenience of description; it is not the only way to implement the method.
Example 1
Referring to fig. 2, the method includes:
Step 200: receive a message descriptor sent by the preceding-stage unit.
Step 201: determine whether the message needs to be discarded according to the message information in the received descriptor (i.e. the size or length of the message) and the queue depth. When the message needs to be discarded, execute step 202 and end the flow; when it does not, continue with step 203.
Step 202: mark a drop flag on the received message descriptor and return the marked descriptor to the preceding-stage unit, which discards the received message.
Step 203: compare the total on-chip cache count with the system-level move threshold. When the total on-chip cache count is greater than or equal to the system-level move threshold, execute step 207; when it is smaller, execute step 204.
Step 204: compare the port cache count of the port where the received message is located with the port-level move threshold. When the port cache count is greater than or equal to the port-level move threshold, execute step 207; when it is smaller, execute step 205.
Step 205: compare the queue depth of the flow queue where the received message is located with the flow-level move threshold. When the queue depth is greater than or equal to the flow-level move threshold, execute step 207; when it is smaller, execute step 206 and end the flow.
Step 206: put the received message into the on-chip cache and perform the enqueue operation.
Step 207: compare the cached-message count in the off-chip cache with the move-discard threshold of the flow queue where the received message is located. When the count is smaller than the threshold, execute step 208 and end the flow; when it is greater than or equal to the threshold, execute step 202 and end the flow.
Step 208: move the received message to the off-chip cache.
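Steps 200 to 208 can be sketched end to end as follows; this is an illustrative simplification, with hypothetical names, threshold values, and returned action strings:

```python
BLOCK = 384

def blocks(nbytes):
    """Message size in 384-byte blocks (ceiling)."""
    return -(-nbytes // BLOCK)

def handle_message(msg_blocks, state, thr):
    """Sketch of steps 200-208: returns where the message ends up.

    `state` holds the current counts (in blocks) and `thr` the
    thresholds; both are illustrative dictionaries, not the patent's
    data structures.
    """
    # Step 201: drop check against the flow-queue depth.
    if state["queue_depth"] + msg_blocks >= thr["discard"]:
        return "drop"                       # step 202
    # Steps 203-205: system-, port-, then flow-level move checks.
    move = (state["onchip_total"] >= thr["system"]
            or state["port_count"] >= thr["port"]
            or state["queue_depth"] >= thr["flow"])
    if not move:
        return "onchip"                     # step 206: enqueue on-chip
    # Step 207: off-chip cache has room only below the move-discard threshold.
    if state["offchip_count"] < thr["move_discard"]:
        return "offchip"                    # step 208: move off-chip
    return "drop"                           # step 202 again

thr = {"discard": 100, "system": 80, "port": 40, "flow": 20, "move_discard": 60}
calm = {"queue_depth": 5, "onchip_total": 30, "port_count": 10, "offchip_count": 0}
assert handle_message(blocks(64), calm, thr) == "onchip"
busy = dict(calm, queue_depth=25)           # flow-level threshold reached
assert handle_message(blocks(64), busy, thr) == "offchip"
```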
Another embodiment of the present invention provides a congestion avoidance apparatus, including a processor and a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are executed by the processor, the congestion avoidance apparatus implements any one of the congestion avoidance methods described above.
Another embodiment of the invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the above congestion avoidance methods are carried out.
Referring to fig. 3, another embodiment of the present invention provides a congestion avoidance apparatus, including:
a judging module 301, configured to determine whether a received message needs to be discarded and whether the received message needs to be moved to an off-chip cache, and to send the determination results to the processing module 302;
a processing module 302, configured to move the received message to the off-chip cache when it is determined that the received message does not need to be discarded and needs to be moved to the off-chip cache.
In another embodiment of the present invention, the processing module 302 is further configured to:
put the received message into the on-chip cache when it is determined that the received message does not need to be discarded and does not need to be moved to the off-chip cache.
In another embodiment of the present invention, the processing module 302 is further configured to:
discard the received message when it is determined that the received message needs to be discarded.
In the embodiment of the present invention, the judging module 301 may add the queue depth of the on-chip cache to the length of the current message to obtain a new queue depth and compare the new queue depth with a discard threshold. When the new queue depth is greater than or equal to the discard threshold, it determines that the received message needs to be discarded; when the new queue depth is smaller than the discard threshold, it determines that the received message does not need to be discarded.
In an illustrative example, the queue depth is a statistic of the message sizes in a flow queue, measured in blocks, where 1 block = 384 bytes; when a message is enqueued or dequeued, the depth of the flow queue in which it resides is updated according to the message size.
In an illustrative example, when an off-chip cache is present, the discard threshold may be set higher than it would be without one, so that as few received messages as possible are discarded.
In an illustrative example, when there is no off-chip cache, the discard threshold may be set lower, so that messages that need discarding are identified effectively.
In this embodiment of the present invention, the judging module 301 is further configured to:
determine whether the received message needs to be moved to an off-chip cache according to whether a first condition is satisfied;
wherein the first condition comprises any one or more of:
the total on-chip cache count is greater than or equal to a system-level move threshold;
the port cache count of the port where the received message is located is greater than or equal to a port-level move threshold;
the queue depth of the flow queue where the received message is located is greater than or equal to a flow-level move threshold.
Specifically, the judging module 301 is configured to determine whether the received message needs to be moved to the off-chip cache according to whether the first condition is satisfied, in the following manner:
when one or more of the first conditions are satisfied, determining that the received message needs to be moved to the off-chip cache;
when none of the first conditions is satisfied, determining that the received message does not need to be moved to the off-chip cache.
In an illustrative example, the system-level move threshold is greater than the port-level move threshold, which in turn is greater than the flow-level move threshold; all three may be set according to service conditions. The flow queue in which a message resides is attached to a certain port, and multiple flow queues may be attached to the same port.
In the embodiment of the present invention, the total count of the on-chip cache refers to the sum of the sizes of all messages cached in the on-chip cache plus the size of the received message, measured in blocks, where 1 block = 384 bytes. When a message is enqueued, dequeued, discarded, or moved, the total count of the on-chip cache is updated according to the size of the message.
The port cache count of the port where the received message is located refers to the sum of the sizes of all messages in the flow queues corresponding to that port plus the size of the received message, measured in blocks (1 block = 384 bytes). When a message is enqueued, dequeued, discarded, or moved, the port cache count of the port where the message is located is updated according to the size of the message.
The flow queue depth of the flow queue where the received message is located refers to the sum of the sizes of all messages in that flow queue plus the size of the received message, measured in blocks (1 block = 384 bytes). When a message enters or leaves the flow queue, the flow queue depth of the flow queue where the message is located is updated according to the size of the message.
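The block-based accounting can be sketched as follows. This is an illustrative model only, with assumed names (`OnChipCounters`, `size_in_blocks`); the patent specifies the counters' behavior, not this API:

```python
BLOCK_BYTES = 384  # the counts are kept in blocks of 384 bytes each

def size_in_blocks(packet_bytes):
    # Round the message size up to a whole number of 384-byte blocks.
    return -(-packet_bytes // BLOCK_BYTES)  # ceiling division

class OnChipCounters:
    """Sketch of the three counts updated on enqueue/dequeue/discard/move."""
    def __init__(self):
        self.total = 0        # total count of the on-chip cache
        self.per_port = {}    # port cache count, keyed by port
        self.per_flow = {}    # flow queue depth, keyed by flow queue

    def enqueue(self, port, flow, packet_bytes):
        blocks = size_in_blocks(packet_bytes)
        self.total += blocks
        self.per_port[port] = self.per_port.get(port, 0) + blocks
        self.per_flow[flow] = self.per_flow.get(flow, 0) + blocks

    def release(self, port, flow, packet_bytes):
        # Dequeue, discard, and move all decrement by the same block count.
        blocks = size_in_blocks(packet_bytes)
        self.total -= blocks
        self.per_port[port] -= blocks
        self.per_flow[flow] -= blocks
```

For example, a 500-byte message occupies two 384-byte blocks, so all three counts rise by 2 on enqueue and fall by 2 when it leaves.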
In the embodiment of the present invention, the port-level move thresholds of different ports may be the same or different. In an exemplary embodiment, the port-level move threshold of a high-priority port is set greater than that of a low-priority port, so that low-priority messages are preferentially moved to the off-chip cache while high-priority messages stay on chip as much as possible. Because on-chip processing is faster, the QoS of high-priority messages is guaranteed as far as possible.
In the embodiment of the present invention, the flow-level move thresholds of different flow queues may be the same or different. In an exemplary embodiment, the flow-level move threshold of a high-priority flow queue is set greater than that of a low-priority flow queue, so that low-priority messages are preferentially moved to the off-chip cache while high-priority messages are retained in the on-chip cache as much as possible, again guaranteeing the QoS of high-priority messages as far as possible.
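The priority-differentiated thresholds can be illustrated with a small sketch. The threshold values and names here are invented for the example and are not from the patent; the point is only that, at the same queue depth, the queue with the smaller threshold crosses it first:

```python
# Illustrative per-priority flow-level move thresholds, in blocks.
# The high-priority queue gets the larger threshold, so low-priority
# traffic is moved to the off-chip cache first.
flow_move_threshold = {"high": 800, "low": 200}  # example values

def crosses_move_threshold(priority, flow_queue_depth):
    return flow_queue_depth >= flow_move_threshold[priority]
```

At a depth of 300 blocks, a low-priority queue would already be moving messages off chip while a high-priority queue at the same depth would not.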
The embodiment of the invention uses an off-chip cache to relieve pressure on the on-chip cache without enlarging the on-chip cache. When congestion occurs, fewer messages are discarded and less data is lost, so congestion is avoided while the QoS of the service is improved.
In another embodiment of the present invention, the determining module 301 is further configured to:
determining whether the count of messages cached in the off-chip cache is smaller than the move-discard threshold of the flow queue where the received message is located;
the processing module 302 is further configured to:
when the count of messages cached in the off-chip cache is smaller than the move-discard threshold of the flow queue where the received message is located, moving the received message to the off-chip cache.
In another embodiment of the present invention, the processing module 302 is further configured to:
when the count of messages cached in the off-chip cache is greater than or equal to the move-discard threshold of the flow queue where the received message is located, discarding the received message.
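The off-chip admission check just described reduces to a single comparison. A minimal sketch, with assumed names:

```python
# A message already selected for moving is actually moved only while the
# off-chip cached-message count is below the move-discard threshold of its
# flow queue; otherwise it is discarded instead of being moved.
def move_or_drop(off_chip_count, move_discard_threshold):
    return "move" if off_chip_count < move_discard_threshold else "drop"
```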
In the embodiment of the present invention, the move-discard thresholds of different flow queues may be the same or different; this is not limited in the embodiment of the present invention. In an exemplary embodiment, the move-discard threshold of a high-priority flow queue may be set greater than that of a low-priority flow queue, so that low-priority messages are discarded preferentially and the QoS of high-priority messages is guaranteed preferentially.
Because the off-chip cache is limited, whether the received message is discarded is determined by comparing the count of messages cached off chip with the move-discard threshold of the flow queue where the received message is located. This avoids problems caused by the limited off-chip cache, such as back pressure from an overloaded off-chip cache making it difficult for messages to enqueue.
The implementation of the congestion avoidance apparatus according to the embodiment of the present invention is described below by way of an example. The example is given for convenience of description only and should not be taken as the sole implementation of the apparatus.
Example 2
In this example, referring to fig. 4, a message cache module is added in the memory management unit (TMMU) and is configured to count the messages moved to the off-chip cache and to feed this count back to the CGAVD module. The apparatus includes: the CGAVD module, the TMMU (including the message cache module arranged in it), and an off-chip cache (such as a High Bandwidth Memory (HBM)).
In another example, the apparatus further comprises: QMU, congestion management module, etc.
The CGAVD module is used for receiving the message descriptor sent by the preceding-stage unit;
when it is determined that the received message does not need to be discarded but needs to be moved to the off-chip cache, and the count of messages cached in the off-chip cache is smaller than the move-discard threshold of the flow queue where the received message is located, moving the received message to the off-chip cache;
when it is determined that the received message does not need to be discarded but needs to be moved to the off-chip cache, and the count of messages cached in the off-chip cache is greater than or equal to the move-discard threshold of the flow queue where the received message is located, marking the received message descriptor with a drop flag and returning it to the preceding-stage unit, which then discards the received message;
when it is determined that the received message neither needs to be discarded nor needs to be moved to the off-chip cache, putting the received message into the on-chip cache;
when it is determined that the received message needs to be discarded, marking the received message descriptor with a drop flag and returning it to the preceding-stage unit, which then discards the received message.
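The CGAVD decision flow above can be condensed into one function. This is a hedged end-to-end sketch of Example 2; all names are illustrative, and the patent defines the behavior, not this API:

```python
# Decision flow for a received message descriptor:
#   1. if the message must be discarded, mark it drop;
#   2. otherwise, if it does not need moving, keep it in the on-chip cache;
#   3. otherwise move it off chip only while the off-chip count is below the
#      move-discard threshold of its flow queue; else mark it drop.
def cgavd_decide(must_discard, needs_move, off_chip_count, move_discard_threshold):
    if must_discard:
        return "drop"            # descriptor returned to preceding-stage unit
    if not needs_move:
        return "on_chip"         # message placed in the on-chip cache
    if off_chip_count < move_discard_threshold:
        return "move_off_chip"   # message moved to the off-chip cache (e.g. HBM)
    return "drop"                # off-chip cache too full: discard instead
```

Note that a message is only ever dropped for one of two reasons: an explicit discard decision, or the off-chip cache being at its move-discard threshold.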
Determining whether the received message needs to be moved to the off-chip cache according to whether a first condition is met;
wherein the first condition comprises any one or more of:
the total count of the on-chip cache is greater than or equal to a system-level move threshold;
the port cache count of the port where the received message is located is greater than or equal to a port-level move threshold;
the flow queue depth of the flow queue where the received message is located is greater than or equal to a flow-level move threshold.
Specifically, determining whether the received message needs to be moved to the off-chip cache according to whether the first condition is met includes any one or more of the following:
when one or more of the first conditions are met, it is determined that the received message needs to be moved to the off-chip cache;
when none of the first conditions is met, it is determined that the received message does not need to be moved to the off-chip cache.
The functions of the QMU, the congestion management module, and so on are unchanged.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media, as known to those skilled in the art.
Although the embodiments of the present invention have been described above, the descriptions are only used for understanding the embodiments of the present invention, and are not intended to limit the embodiments of the present invention. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the embodiments of the invention as defined by the appended claims.

Claims (11)

1. A congestion avoidance method, comprising:
when it is determined that the received message does not need to be discarded and that the received message needs to be moved to an off-chip cache, moving the received message to the off-chip cache.
2. The method of claim 1, wherein before moving the received packet to the off-chip cache, the method further comprises:
and judging that the buffer message count in the off-chip buffer is smaller than the moving and discarding threshold value of the flow queue in which the received message is positioned.
3. The method according to claim 2, wherein when the count of messages buffered in the off-chip cache is greater than or equal to the move-discard threshold of the flow queue in which the received message is located, the method further comprises: discarding the received message.
4. A congestion avoidance method according to claim 2 or 3, wherein the move-discard threshold of a flow queue with a high priority is greater than the move-discard threshold of a flow queue with a low priority.
5. The method of claim 1, wherein when it is determined that the received packet does not need to be discarded and it is determined that the received packet does not need to be moved to the off-chip cache, the method further comprises:
and putting the received message into an on-chip cache.
6. The congestion avoidance method of claim 1 or 5, wherein determining whether the received packet needs to be moved to an off-chip cache is based on whether a first condition is satisfied;
wherein the first condition comprises any one or more of:
the total count of the on-chip cache is greater than or equal to a system-level move threshold;
the port cache count of the port where the received message is located is greater than or equal to a port-level move threshold;
the flow queue depth of the flow queue where the received message is located is greater than or equal to a flow-level move threshold.
7. The method of claim 6, wherein the determining whether the received packet needs to be moved to an off-chip cache according to whether the first condition is satisfied comprises any one or more of:
when one or more of the first conditions are met, determining that the received message needs to be moved to the off-chip cache;
when none of the first conditions is met, determining that the received message does not need to be moved to the off-chip cache.
8. The congestion avoidance method of claim 6, wherein the port-level move threshold of a port with a high priority is greater than the port-level move threshold of a port with a low priority.
9. The congestion avoidance method according to claim 6, wherein the flow-level move threshold of a flow queue with a high priority is greater than the flow-level move threshold of a flow queue with a low priority.
10. A congestion avoidance apparatus comprising a processor and a computer readable storage medium having instructions stored thereon, wherein the instructions, when executed by the processor, implement a congestion avoidance method according to any of claims 1 to 9.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the congestion avoidance method according to any one of claims 1 to 9.
CN201911105574.4A 2019-11-13 2019-11-13 Congestion avoidance method and device and computer readable storage medium Withdrawn CN112804156A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911105574.4A CN112804156A (en) 2019-11-13 2019-11-13 Congestion avoidance method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911105574.4A CN112804156A (en) 2019-11-13 2019-11-13 Congestion avoidance method and device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN112804156A true CN112804156A (en) 2021-05-14

Family

ID=75803291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911105574.4A Withdrawn CN112804156A (en) 2019-11-13 2019-11-13 Congestion avoidance method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112804156A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023130997A1 (en) * 2022-01-07 2023-07-13 华为技术有限公司 Method for managing traffic management (tm) control information, tm module, and network forwarding device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216911A (en) * 2011-05-31 2011-10-12 华为技术有限公司 Data managing method, apparatus, and data chip
CN103888377A (en) * 2014-03-28 2014-06-25 华为技术有限公司 Message cache method and device
WO2017107363A1 (en) * 2015-12-22 2017-06-29 深圳市中兴微电子技术有限公司 Cache management method and device, and computer storage medium
US20180083882A1 (en) * 2016-09-22 2018-03-22 Oracle International Corporation Methods, systems, and computer readable media for discarding messages during a congestion event
CN108234348A (en) * 2016-12-13 2018-06-29 深圳市中兴微电子技术有限公司 A kind of processing method and processing device in queue operation
CN109729014A (en) * 2017-10-31 2019-05-07 深圳市中兴微电子技术有限公司 A kind of message storage method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102216911A (en) * 2011-05-31 2011-10-12 华为技术有限公司 Data managing method, apparatus, and data chip
CN103888377A (en) * 2014-03-28 2014-06-25 华为技术有限公司 Message cache method and device
WO2017107363A1 (en) * 2015-12-22 2017-06-29 深圳市中兴微电子技术有限公司 Cache management method and device, and computer storage medium
US20180083882A1 (en) * 2016-09-22 2018-03-22 Oracle International Corporation Methods, systems, and computer readable media for discarding messages during a congestion event
CN108234348A (en) * 2016-12-13 2018-06-29 深圳市中兴微电子技术有限公司 A kind of processing method and processing device in queue operation
CN109729014A (en) * 2017-10-31 2019-05-07 深圳市中兴微电子技术有限公司 A kind of message storage method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiang Binbin; Wang: "A Method for Controlling the Buffer Queue Length of Congested Network Nodes", Computer Simulation, no. 08 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023130997A1 (en) * 2022-01-07 2023-07-13 华为技术有限公司 Method for managing traffic management (tm) control information, tm module, and network forwarding device

Similar Documents

Publication Publication Date Title
US8467295B2 (en) System and methods for distributed quality of service enforcement
CN113973085B (en) Congestion control method and device
US20210021545A1 (en) Congestion drop decisions in packet queues
JP5431467B2 (en) Providing back pressure flow control for specific traffic flows
CN107948103B (en) Switch PFC control method and control system based on prediction
CN113064738B (en) Active queue management method based on summary data
US9438523B2 (en) Method and apparatus for deriving a packet select probability value
CN110138678B (en) Data transmission control method and device, network transmission equipment and storage medium
CN113810309A (en) Congestion processing method, network device and storage medium
CN113315720B (en) Data flow control method, system and equipment
JP7211765B2 (en) PACKET TRANSFER DEVICE, METHOD AND PROGRAM
US7286552B1 (en) Method and apparatus for providing quality of service across a switched backplane for multicast packets
CN108173780B (en) Data processing method, data processing device, computer and storage medium
US7408876B1 (en) Method and apparatus for providing quality of service across a switched backplane between egress queue managers
WO2021143913A1 (en) Congestion control method, apparatus and system, and storage medium
CN112804156A (en) Congestion avoidance method and device and computer readable storage medium
CN113572655A (en) Congestion detection method and system for loss-free network
US9088507B1 (en) Dummy queues and virtual queues in a network device
EP4181479A1 (en) Method for identifying flow, and apparatus
CN113835611A (en) Storage scheduling method, device and storage medium
US20040042397A1 (en) Method for active queue management with asymmetric congestion control
CN109729014B (en) Message storage method and device
EP3996352A1 (en) System and method for congestion management in computer networks
CN116889024A (en) Data stream transmission method, device and network equipment
CN112055382A (en) Service access method based on refined differentiation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210514
