WO2020168563A1 - Memory management method and device - Google Patents
Memory management method and device
- Publication number
- WO2020168563A1 (PCT application PCT/CN2019/075935)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- message
- message queue
- queue
- memory
- threshold
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/0805—Monitoring or testing based on specific metrics by checking availability
- H04L43/16—Threshold monitoring
- H04L47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
- H04L47/56—Queue scheduling implementing delay-aware scheduling
- H04L47/562—Attaching a time tag to queues
- H04L49/109—Packet switching elements integrated on microchip, e.g. switch-on-chip
- H04L49/501—Overload detection
- H04L49/503—Policing
- H04L49/90—Buffering arrangements
- H04L49/901—Buffering arrangements using storage descriptor, e.g. read or write pointers
- H04L49/9021—Plurality of buffers per packet
- H04L49/9036—Common buffer combined with individual queues
- H04L49/9078—Intermediate storage in different physical parts of a node or terminal using an external memory or storage device
- H04L49/9084—Reactions to storage capacity overflow
- H04L49/9089—Reactions to storage capacity overflow replacing packets in a storage arrangement, e.g. pushout
Definitions
- This application relates to the field of communication technology, and in particular to a method and device for managing memory.
- Network equipment includes memory.
- the memory can also be referred to as a buffer.
- the memory can store message queues. When a network device receives a message, it can enqueue the message into a message queue.
- the message queue needs to be managed according to the length of time the last message in the message queue needs to stay in the message queue.
- the implementation of the above scheme is relatively complicated and the overhead is relatively large.
- the embodiments of the present application provide a memory management method and device, which are used to solve the problems of high implementation complexity and high cost in the prior art.
- an embodiment of the present application provides a memory management method, including: determining that the available storage space of a first memory in a network device is less than a first threshold, where the first threshold is greater than 0 and the first memory stores a first message queue; and, based on the available storage space of the first memory being less than the first threshold, deleting at least one message at the end of the first message queue from the first memory.
- the first memory or the second memory described in the embodiment of the present application may specifically be a cache.
- the first message queue is one of a plurality of message queues stored in the first memory, and the parameter value of the first message queue is greater than the parameter values of the other message queues among the plurality of message queues. The parameter value of a message queue is the delay of the message queue or the length of the message queue, where the delay of a message queue is the length of time the message at the end of the message queue is expected to stay in the message queue.
- the message queue with the largest parameter value is selected, and the messages at the end of that first message queue are deleted. This avoids, to a certain extent, high-delay messages occupying the resources of the first memory for a long time, which improves the utilization of the first memory, and it prevents subsequently arriving short-delay messages from being lost because the memory is full, which reduces the packet loss rate.
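The eviction rule described above can be illustrated with a minimal sketch (Python; the deque-based queue model, the function name, and the thresholds are illustrative assumptions, not the claimed implementation):

```python
from collections import deque

def evict_on_pressure(available, first_threshold, queues, param):
    """If the available space of the first memory is below the first
    threshold, delete one message from the end of the queue whose
    parameter value (delay or length) is largest."""
    if available >= first_threshold:
        return None  # enough space; no eviction needed
    victim = max(queues, key=param)   # queue with the largest parameter value
    dropped = queues[victim].pop()    # delete the message at the end
    return victim, dropped

# Here the parameter value is the queue length; the delay variant is analogous.
queues = {"q1": deque(["m1", "m2", "m3"]), "q2": deque(["m4"])}
result = evict_on_pressure(5, 10, queues, param=lambda q: len(queues[q]))
```

In this sketch `q1` has the largest parameter value, so its end message is removed first; with available space at or above the threshold nothing is deleted.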
- the network device stores a queue information table, and the queue information table records the parameter value of the first message queue and the identifier of the first message queue; deleting at least one message at the end of the first message queue from the first memory includes: determining the first message queue according to the identifier of the first message queue recorded in the queue information table, and deleting the at least one message at the end of the first message queue from the first memory.
- the parameter value of the message queue is recorded through the queue information table, which is simple and easy to implement and has low complexity.
- the network device includes a second memory, and the bandwidth of the second memory is smaller than the bandwidth of the first memory
- the method further includes: storing the at least one message at the end of the first message queue in the second memory, and updating the parameter value of the first message queue recorded in the queue information table;
- the updated parameter value of the first message queue is the length of time a first message is expected to stay in the first message queue after the at least one message is deleted from the first memory, where the first message is a message in the first message queue that is adjacent to the at least one message and located before it; or, when the parameter value of the first message queue is the length of the first message queue, the updated parameter value is the length of the first message queue stored in the first memory after the at least one message is deleted.
- the message at the end of the message queue stored in the first memory is used as the reference, which gives high accuracy.
- the bandwidth of the memory refers to the rate at which data is stored (written) to the memory or the rate at which data is read from the memory.
- that the bandwidth of the second memory is less than the bandwidth of the first memory means that the rate of storing data to the first memory is greater than the rate of storing data to the second memory, or that the rate of reading data from the first memory is greater than the rate of reading data from the second memory.
- the method further includes: storing the identifier of the first message in the queue information table.
- storing, in the queue information table, the identifier of the first message at the end of the message queue stored in the first memory makes it convenient to delete the first message from the first message queue the next time messages are deleted.
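The move-and-update step can be sketched as follows (Python; the per-message stay time, the table layout, and all names are illustrative assumptions):

```python
from collections import deque

def move_end_to_second_memory(fast_q, slow_q, queue_info, qid, stay_per_msg):
    """Move the message at the end of the queue from the first (fast) memory
    to the second (slower) memory, then update the queue information table:
    record the identifier of the new end message (the "first message") and
    the queue's updated parameter value."""
    slow_q.append(fast_q.pop())               # moved, not lost
    first_message = fast_q[-1]                # new end of the fast-memory part
    queue_info[qid] = {
        "end_msg_id": first_message,          # identifier of the first message
        "delay": len(fast_q) * stay_per_msg,  # updated estimated stay time
    }
    return queue_info[qid]

fast, slow, info = deque(["a", "b", "c"]), deque(), {}
entry = move_end_to_second_memory(fast, slow, info, "q1", stay_per_msg=2)
```

After the move, the table points at the message that is now at the end of the fast-memory portion, which is what makes the next deletion cheap.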
- the method further includes: receiving a second message; enqueuing the second message into a second message queue; when the second message is at the end of the second message queue, determining the parameter value of the second message queue; and, when the parameter value of the second message queue is greater than the parameter value of the first message queue recorded in the queue information table, replacing the identifier of the first message queue recorded in the queue information table with the identifier of the second message queue, and replacing the parameter value of the first message queue recorded in the queue information table with the parameter value of the second message queue.
- the queue information table is updated in real time, so that when the first message queue is selected based on the parameter value of the message queue, the complexity is low and the accuracy is high.
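The real-time update of the queue information table amounts to a running maximum; a minimal sketch (Python; the single-entry table shape is an illustrative assumption):

```python
def on_enqueue(queue_info, qid, param_value):
    """Keep the queue information table pointing at the message queue with
    the largest parameter value: after a message lands at the end of some
    queue, compare that queue's new parameter value with the recorded one."""
    if param_value > queue_info["param_value"]:
        queue_info["queue_id"] = qid           # replace the recorded identifier
        queue_info["param_value"] = param_value
    return queue_info

info = {"queue_id": "q1", "param_value": 5}
on_enqueue(info, "q2", 8)   # q2 now holds the largest parameter value
on_enqueue(info, "q3", 3)   # smaller value: table unchanged
```

Selecting the first message queue later is then a constant-time table read rather than a scan over all queues.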
- the network device includes a second memory, the bandwidth of the second memory is smaller than the bandwidth of the first memory, and the first memory further stores a third message queue;
- the method further includes:
- a message queue is first selected for the move operation (its end messages are deleted from the first memory but kept in the second memory), and then another message queue is selected for the delete operation. This frees the storage space of the first memory more quickly, so that newly arriving messages with smaller parameter values can be stored, which increases the utilization of the first memory and reduces the packet loss rate.
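The two-stage scheme (move below the first threshold, delete outright below the smaller second threshold) can be sketched as (Python; the callback shape and names are illustrative assumptions):

```python
def relieve_pressure(available, first_threshold, second_threshold,
                     move_queue_end, delete_queue_end):
    """Two-stage relief: below the first threshold, move the end of one
    queue to the second memory; if space is still below the smaller second
    threshold, delete the end of another queue without storing it anywhere."""
    actions = []
    if available < first_threshold:
        move_queue_end()           # removed from first memory, kept in second
        actions.append("move")
        if available < second_threshold:
            delete_queue_end()     # dropped outright
            actions.append("delete")
    return actions

log = []
relieve_pressure(3, 10, 5,
                 move_queue_end=lambda: log.append("moved q1 end"),
                 delete_queue_end=lambda: log.append("deleted q3 end"))
```

With available space between the two thresholds only the move fires; only under severe pressure is a second queue's end dropped without being preserved.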
- the method further includes:
- the first condition is: when determining whether to enqueue the third message into the first message queue, the length of the first message queue stored in the first memory and the second memory is less than a third threshold, and the third threshold is greater than 0;
- the second condition is: when determining whether to enqueue the third message into the first message queue, the length of time the message at the end of the first message queue stored in the first memory and the second memory is expected to stay in the first message queue is less than a fourth threshold, and the fourth threshold is greater than 0.
- the method further includes:
- the first condition is: when determining whether to enqueue the third message into the first message queue, the length of the first message queue stored in the first memory and the second memory is less than a third threshold, and the third threshold is greater than 0;
- the second condition is: when determining whether to enqueue the third message into the first message queue, the length of time the message at the end of the first message queue stored in the first memory and the second memory is expected to stay in the first message queue is less than a fourth threshold, and the fourth threshold is greater than 0.
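The two enqueue-time conditions can be evaluated as a simple predicate. The sketch below (Python; names are illustrative) implements the comparisons exactly as they appear in the translated text, without asserting which memory the message is then routed to, since the translation leaves that mapping to the surrounding designs:

```python
def meets_enqueue_conditions(queue_len, end_stay_time,
                             third_threshold, fourth_threshold):
    """Evaluate the two translated conditions literally; the surrounding
    designs then store the third message in the second memory (or avoid
    the first memory) when at least one condition is met."""
    condition1 = queue_len < third_threshold        # first condition
    condition2 = end_stay_time < fourth_threshold   # second condition
    return condition1 or condition2
```

For example, with a third threshold of 5 and a fourth threshold of 8, a queue of total length 4 meets the first condition regardless of the stay time of its end message.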
- the first message queue is one of a plurality of message queues included in a first message queue set, and the parameter value of each message queue in the first message queue set is greater than the parameter value of each message queue in the plurality of message queues included in a second message queue set; each message queue in the first message queue set is stored in the first memory, each message queue in the second message queue set is stored in the first memory, and the network device saves the identifier of the first message queue set and the identifier of the second message queue set;
- the parameter value of a message queue is the delay of the message queue or the length of the message queue, where the delay of a message queue is the length of time the message at the end of the message queue is expected to stay in the message queue;
- the deleting at least one message at the end of the first message queue from the first memory based on the available storage space of the first memory being less than the first threshold includes:
- when selecting the first message queue, a message queue in the set with the larger parameter values is selected, and the messages at the end of the selected first message queue are deleted. This avoids, to a certain extent, long-delay messages occupying the resources of the first memory for a long time, which improves the utilization of the first memory, and it prevents subsequently arriving short-delay messages from being lost because the memory is full, which reduces the packet loss rate.
- the network device saves a first linked list corresponding to the first message queue set, and the multiple nodes included in the first linked list correspond to the multiple message queues in the first message queue set.
- Determining the first message queue according to the stored identifier of the first message queue set includes:
- the message queues in a set are connected in series through a linked list, which is simple to implement and has low complexity.
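Stringing a set's queues together through a linked list can be sketched as (Python; the node layout and selection-by-head rule are illustrative assumptions):

```python
class Node:
    """One node of the first linked list; it records the identifier of one
    message queue in the first message queue set."""
    def __init__(self, queue_id, nxt=None):
        self.queue_id = queue_id
        self.next = nxt

def build_set_list(queue_ids):
    """Connect the queues of a set in series through a singly linked list."""
    head = None
    for qid in reversed(queue_ids):
        head = Node(qid, head)  # prepend, preserving the original order
    return head

def pick_first_queue(head):
    """Determine the first message queue from the linked list saved for the
    set with the larger parameter values (here simply the head node)."""
    return head.queue_id if head else None

head = build_set_list(["q7", "q3", "q9"])
```

Selecting a victim queue then only requires following the saved set identifier to the list head, rather than inspecting every queue in the memory.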
- an embodiment of the present application provides a memory management device, including:
- a determining unit configured to determine that the available storage space of the first memory in the network device is less than a first threshold, the first threshold is greater than 0, and the first message queue is stored in the first memory;
- the management unit is configured to delete at least one message at the end of the first message queue from the first memory based on the available storage space of the first memory being less than the first threshold.
- the first message queue is one of a plurality of message queues stored in the first memory, and the parameter value of the first message queue is greater than the parameter values of the other message queues among the plurality of message queues. The parameter value of a message queue is the delay of the message queue or the length of the message queue, where the delay of a message queue is the length of time the message at the end of the message queue is expected to stay in the message queue.
- the network device stores a queue information table, and the queue information table records the parameter values of the first message queue and the identifier of the first message queue;
- the management unit is specifically configured to determine the first message queue according to the identifier of the first message queue recorded in the queue information table, and to delete the at least one message at the end of the first message queue from the first memory.
- the network device includes a second memory, and the bandwidth of the second memory is smaller than the bandwidth of the first memory
- the management unit is further configured to store the at least one message at the end of the first message queue in the second memory after the determining unit determines that the available storage space of the first memory in the network device is less than the first threshold, and to update the parameter value of the first message queue recorded in the queue information table;
- the updated parameter value of the first message queue is the length of time a first message is expected to stay in the first message queue after the at least one message is deleted from the first memory, where the first message is a message in the first message queue that is adjacent to the at least one message and located before it; or, when the parameter value of the first message queue is the length of the first message queue, the updated parameter value is the length of the first message queue stored in the first memory after the at least one message is deleted.
- the management unit is further configured to store the identifier of the first message in the queue information table.
- the device further includes:
- the management unit is further configured to enqueue the second message into a second message queue; determine, when the second message is at the end of the second message queue, the parameter value of the second message queue; and, when the parameter value of the second message queue is greater than the parameter value of the first message queue recorded in the queue information table, replace the identifier of the first message queue recorded in the queue information table with the identifier of the second message queue, and replace the parameter value of the first message queue recorded in the queue information table with the parameter value of the second message queue.
- the network device includes a second memory, the bandwidth of the second memory is smaller than the bandwidth of the first memory, and the first memory further stores a third message queue;
- the management unit is further configured to store the at least one message at the end of the first message queue in the second memory after the determining unit determines that the available storage space of the first memory in the network device is less than the first threshold;
- the determining unit is further configured to determine, after the management unit deletes the at least one message at the end of the first message queue from the first memory based on the available storage space of the first memory being less than the first threshold, that the available storage space of the first memory is less than a second threshold, where the second threshold is less than the first threshold and greater than 0;
- the management unit is further configured to delete at least one message at the end of the third message queue from the first memory based on the available storage space of the first memory being less than the second threshold, and to avoid storing the at least one message at the end of the third message queue in the second memory.
- the device further includes:
- the receiving unit is configured to receive a third message to be enqueued into the first message queue after the management unit deletes the at least one message at the end of the first message queue from the first memory;
- the management unit is further configured to store the third message in the second memory when the first message queue meets at least one of the first condition and the second condition; or
- the first condition is: when determining whether to enqueue the third message into the first message queue, the length of the first message queue stored in the first memory and the second memory is less than a third threshold, and the third threshold is greater than 0;
- the second condition is: when determining whether to enqueue the third message into the first message queue, the length of time the message at the end of the first message queue stored in the first memory and the second memory is expected to stay in the first message queue is less than a fourth threshold, and the fourth threshold is greater than 0.
- the device further includes:
- the receiving unit is configured to receive a third message to be enqueued into the first message queue after the management unit deletes the at least one message at the end of the first message queue from the first memory;
- the management unit is further configured to avoid storing the third message in the first memory when the first message queue meets at least one of the first condition and the second condition;
- the first condition is: when determining whether to enqueue the third message into the first message queue, the length of the first message queue stored in the first memory and the second memory is less than a third threshold, and the third threshold is greater than 0;
- the second condition is: when determining whether to enqueue the third message into the first message queue, the length of time the message at the end of the first message queue stored in the first memory and the second memory is expected to stay in the first message queue is less than a fourth threshold, and the fourth threshold is greater than 0.
- the first message queue is one of a plurality of message queues included in a first message queue set, and the parameter value of each message queue in the first message queue set is greater than the parameter value of each message queue in the plurality of message queues included in a second message queue set; each message queue in the first message queue set is stored in the first memory, each message queue in the second message queue set is stored in the first memory, and the network device saves the identifier of the first message queue set and the identifier of the second message queue set;
- the parameter value of a message queue is the delay of the message queue or the length of the message queue, where the delay of a message queue is the length of time the message at the end of the message queue is expected to stay in the message queue;
- the management unit is specifically configured to determine the first message queue according to the saved identifier of the first message queue set, and to delete the at least one message at the end of the first message queue from the first memory.
- the network device saves a first linked list corresponding to the first message queue set, and the multiple nodes included in the first linked list correspond to the multiple message queues in the first message queue set.
- when determining the first message queue according to the saved identifier of the first message queue set, the management unit is specifically configured to: determine the first linked list according to the saved identifier of the first message queue set; and determine the first message queue according to the identifier of the first message queue in the first linked list.
- In a third aspect, an embodiment of the present application provides a device, which may be used to implement the method provided by the first aspect or any one of the possible designs of the first aspect.
- the device includes a processor and a memory coupled with the processor.
- the memory stores a computer program.
- When the processor executes the computer program, the apparatus is caused to execute the method provided by the first aspect or any one of the possible designs of the first aspect.
- a computer-readable storage medium is provided.
- the computer-readable storage medium is used to store computer programs.
- When the computer program is executed, the computer can be caused to execute the method provided by the first aspect or any one of the possible designs of the first aspect.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- the computer may be a network device.
- the network device may be a forwarding device.
- the forwarding device may be a router, a network switch, a firewall or a load balancer.
- a computer program product includes a computer program.
- When the computer program is executed, the computer can be caused to execute the method provided by the first aspect or any one of the possible designs of the first aspect.
- Figure 1 is a network structure diagram provided by an embodiment of the application.
- Figure 2 is a schematic structural diagram of a router provided by an embodiment of the application.
- FIG. 3 is a schematic structural diagram of an interface board provided by an embodiment of the application.
- FIG. 4 is a schematic structural diagram of an interface board provided by an embodiment of the application.
- FIG. 5A is a schematic flowchart of a method for managing a memory provided by an embodiment of this application.
- FIG. 5B is a schematic diagram of an alarm area and a prohibited area provided by an embodiment of the application.
- FIG. 6A is a schematic diagram of a relationship between a traffic manager and a first memory provided by an embodiment of this application;
- FIG. 6B is a schematic diagram of another relationship between the traffic manager and the first memory provided by an embodiment of the application.
- FIG. 7A is a schematic diagram of the unmoved message queue 1 provided by an embodiment of the application.
- FIG. 7B is a schematic diagram of the message queue 1 after being moved according to an embodiment of the application.
- FIG. 8 is a schematic diagram of a set of message queues provided by an embodiment of the application.
- FIG. 9 is a schematic diagram of a moving process of selecting a first message queue according to an embodiment of the application.
- FIG. 10 is a schematic diagram of moving through a hash algorithm according to an embodiment of the application.
- FIG. 11 is another schematic diagram of moving through a hash algorithm provided by an embodiment of the application.
- FIG. 12 is a schematic diagram of implementing the move operation for the entire memory through a hash algorithm according to an embodiment of the application.
- FIG. 13 is a schematic structural diagram of a memory management device provided by an embodiment of the application.
- FIG. 14 is a schematic structural diagram of a memory management device provided by an embodiment of the application.
- the forwarding device may be a router.
- the router can forward Internet Protocol (IP) packets.
- the forwarding device may be a network switch.
- the network switch can forward Ethernet frames.
- the forwarding device can also be a firewall or a load balancer.
- the network device mentioned in the embodiment of this application may be a forwarding device.
- Figure 1 is a network structure diagram provided by this application. Refer to Figure 1.
- the network structure diagram contains 7 routers, namely Router 1 to Router 7.
- Each router can contain multiple physical interface cards.
- Each physical interface card can contain multiple ports.
- Figure 1 shows two outgoing ports (first outgoing port and second outgoing port) in router 1 and two outgoing ports (third outgoing port and fourth outgoing port) in router 2.
- Router 1 is connected to router 2 through the first outgoing port.
- the router 1 is connected to the router 3 through the second outgoing port.
- Router 2 is connected to router 4 through the third out port.
- Router 2 is connected to router 5 through the fourth out port.
- After router 1 receives a message, router 1 determines the outgoing port for forwarding the message, such as the first outgoing port, and forwards the message from the first outgoing port. After router 2 receives the message forwarded by router 1, router 2 determines the outgoing port for forwarding the message, such as the third outgoing port, and forwards the message from the third outgoing port.
- Fig. 2 is a schematic diagram of a possible structure of the router 2 in Fig. 1.
- Other routers in FIG. 1 may also adopt the structural schematic diagram shown in FIG. 2.
- the router 2 includes a control board 1210, a switching network board 1220, an interface board 1230, and an interface board 1240.
- the control board 1210 includes a central processing unit 1211.
- the control board 1210 can be used to execute routing protocols.
- the routing protocol may be a border gateway protocol (border gateway protocol, BGP) or an interior gateway protocol (interior gateway protocol, IGP).
- the control board 1210 may generate a routing table by executing a routing protocol, and send the routing table to the interface boards 1230 and 1240.
- the router 2 in FIG. 1 may also adopt a structure different from the structure shown in FIG. 2.
- the router 2 in FIG. 1 may include only one control board and one interface board, and does not include the switching network board.
- the router 2 in FIG. 1 may include more than two interface boards.
- the IP packets received via the ingress port of the interface board can be processed by the interface board and then exit through the egress port of the interface board.
- the IP packets received through the ingress port of one interface board of router 2 can, after being processed by the switching network board, exit through the outgoing port of another interface board of router 2.
- This application does not limit the specific structure of Router 2 and other routers in Figure 1.
- the interface board 1230 can forward IP packets by looking up the routing table.
- the interface board 1230 includes a central processor 1231, a network processor 1232, a physical interface card 1233, and a memory 1234. It should be noted that FIG. 2 does not show all the components that the interface board 1230 can include.
- the interface board 1230 may also include other components. For example, in order to enable the interface board 1230 to have queue scheduling and management functions, the interface board 1230 may also include a traffic manager. In addition, in order to enable messages from the interface board 1230 to be switched to the interface board 1240 via the switching network board 1220, the interface board 1230 may also include an ingress switching network interface chip (ingress fabric interface chip, iFIC).
- the central processing unit 1231 can receive the routing table sent by the central processing unit 1211 and save the routing table in the memory 1234.
- the physical interface card 1233 can be used to receive IP packets sent by router 1.
- the network processor 1232 may search for a routing table entry matching the IP packet received by the physical interface card 1233 in the routing table of the memory 1234, and send the IP packet to the switching network board 1220 according to the matching routing table entry.
- the switching network board 1220 may be used to switch IP packets from one interface board to another interface board. For example, the switching network board 1220 can switch the IP packets from the interface board 1230 to the interface board 1240.
- the switching network board 1220 can switch the IP packet from the interface board 1230 to the interface board 1240 in the manner of cell switching.
- the network processor 1232 may obtain the destination IP address in the IP packet.
- the network processor 1232 may search for a routing table entry that matches the IP packet in the routing table according to the longest prefix matching algorithm, and determine the outgoing port according to the routing table entry that matches the IP packet.
- the routing table entry matching the IP packet contains the identification of the outgoing port.
- the interface board 1230 can perform queue scheduling and management on the IP packets.
- the interface board 1230 can use the traffic manager 301 in FIG. 3 to perform queue scheduling and management on IP packets.
- the interface board 1240 can forward IP packets by looking up the routing table.
- the interface board 1240 includes a central processor 1241, a network processor 1242, a physical interface card 1243, and a memory 1244.
- FIG. 2 does not show all the components that the interface board 1240 can include.
- the interface board 1240 may also include other components.
- the interface board 1240 may also include a traffic manager.
- the interface board 1240 may also include an egress fabric interface chip (eFIC).
- eFIC egress fabric interface chip
- the central processing unit 1241 may receive the routing table sent by the central processing unit 1211, and save the routing table in the memory 1244.
- the network processor 1242 may be used to receive IP packets from the switching network board 1220.
- the IP packet from the switching network board 1220 may be an IP packet sent by the router 1 received by the physical interface card 1233.
- the network processor 1242 may search for a routing table entry matching the IP packet from the switching network board 1220 in the routing table of the memory 1244, and send the IP packet to the physical interface card 1243 according to the matching routing table entry.
- the physical interface card 1243 can be used to send IP packets to the router 4.
- the interface board 1240 can perform queue scheduling and management on the IP packets. Specifically, the interface board 1240 can use the traffic manager 402 in FIG. 4 to perform queue scheduling and management on IP packets.
- the router contains memory.
- the memory may be a first-in-first-out memory (first in first out memory).
- the router can use the memory to perform queue scheduling and management of the message flow to be forwarded.
- the router may receive a large number of messages in a short period of time, and a large number of messages may cause a relatively high degree of congestion in the first-in first-out queue in the router's memory.
- the router may perform discard management on the packets that enter the first-in first-out queue.
- FIG. 3 is a schematic structural diagram of the interface board 1230 shown in FIG. 2 in a possible implementation manner.
- the interface board 1230 includes a network processor (NP) 1232, a traffic manager (TM) 301, a memory 302, and an iFIC 303.
- NP network processor
- TM traffic manager
- FIG. 3 only shows part of the components included in the interface board 1230.
- the interface board 1230 shown in FIG. 3 may also include components in the interface board 1230 shown in FIG. 2.
- the interface board shown in Figure 3 can perform queue scheduling and management on upstream traffic.
- the upstream traffic may refer to traffic received by the interface board 1230 via the physical interface card 1233 and to be sent to the switching network board 1220.
- the message received via the physical interface card 1233 is processed by the network processor 1232 and the traffic manager 301, and then sent to the ingress switching network interface chip 303.
- the ingress switching network interface chip 303 may generate multiple cells according to the message, and send the multiple cells to the switching network board 1220.
- the processing of the message by the traffic manager 301 may include enqueue processing and dequeue processing.
- the traffic manager 301 may enqueue the message by storing the message in the memory 302.
- the traffic manager 301 can dequeue the message by deleting the message stored in the memory 302.
- the memory 302 may be used to store and maintain the message queue.
- the message queue contains multiple messages.
- the message queue may be a first-in first-out queue.
- the memory 302 may be a first-in first-out memory.
- the traffic manager 301 can perform enqueue management for the messages that enter the message queue, and perform dequeue management for the messages that leave the message queue. Specifically, the traffic manager 301 can save and maintain the message descriptor queue.
- the message descriptor queue contains multiple message descriptors. The multiple packets contained in the packet queue correspond one-to-one with multiple packet descriptors contained in the packet descriptor queue. Each message descriptor is used to indicate the information of the corresponding message. For example, the message descriptor may include the storage location of the message corresponding to the message descriptor in the memory 302.
- the message descriptor may also include the time when the message corresponding to the message descriptor enters the router 2. Specifically, the time when the message corresponding to the message descriptor enters the router 2 may be the time when the message corresponding to the message descriptor is received by the physical interface card 1233.
- the message descriptor may also include the length of the message queue when the message corresponding to the message descriptor is queued to the message queue. For example, when message 1 enqueues to message queue 1, message queue 1 contains message 2, message 3, and message 4. Message 2 has 100 bits, message 3 has 200 bits, and message 4 has 300 bits. Therefore, when message 1 is queued to message queue 1, the length of message queue 1 is 600 bits.
- message queue 1 does not contain message 1.
- the message descriptor may also include the expected length of time that the message corresponding to the message descriptor will stay in the message queue after the message corresponding to the message descriptor is queued to the message queue. For the convenience of description, the duration is referred to as the delay of the message queue.
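The descriptor fields listed above can be pictured as one small record per message. The following is an illustrative sketch; the field names and units are assumptions, not taken from the application:

```python
from dataclasses import dataclass

# Illustrative record for the message descriptor fields described above;
# the field names and units are assumptions, not taken from the application.
@dataclass
class MessageDescriptor:
    storage_location: int               # where the message sits in memory 302
    enqueue_time_us: int                # when the message entered router 2
    queue_length_at_enqueue_bits: int   # queue length when this message enqueued
    expected_stay_us: int               # expected time in the queue (the delay)

# e.g. message 1 enqueued behind messages totalling 600 bits (as above)
desc = MessageDescriptor(storage_location=0x1000, enqueue_time_us=12_345,
                         queue_length_at_enqueue_bits=600, expected_stay_us=500)
print(desc.queue_length_at_enqueue_bits)  # -> 600
```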
- the traffic manager 301 can perform enqueue management of messages from the network processor 1232. For example, the traffic manager 301 may determine whether to discard the packet from the network processor 1232 according to the WRED algorithm. Of course, the traffic manager 301 may also determine whether to discard the packet from the network processor 1232 according to other algorithms. If the traffic manager 301 determines not to discard the message from the network processor 1232, the traffic manager 301 may save the message in the message queue of the memory 302. Specifically, the traffic manager 301 may store the message at the end of the message queue of the memory 302. In addition, the traffic manager 301 generates a message descriptor corresponding to the message according to the storage location of the message in the memory 302, and saves the message descriptor in the message descriptor queue.
- the traffic manager 301 may store the message descriptor at the end of the message descriptor queue.
- the message descriptor queue may be stored in the traffic manager 301.
- the message descriptor queue may be stored in the queue manager in the traffic manager.
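As an illustration of the WRED-style enqueue decision mentioned above, the following is a minimal sketch assuming the classic linear drop-probability ramp between a minimum and a maximum queue-size threshold; the thresholds and the maximum drop probability are illustrative:

```python
import random

# A minimal WRED-style drop decision, assuming the classic linear
# drop-probability ramp between a minimum and a maximum threshold on the
# (average) queue size; min_th, max_th and max_p are illustrative.
def wred_drop(avg_queue_len: float, min_th: float, max_th: float,
              max_p: float = 0.1) -> bool:
    """Return True if the incoming message should be discarded."""
    if avg_queue_len < min_th:
        return False              # below the minimum threshold: never drop
    if avg_queue_len >= max_th:
        return True               # at or above the maximum threshold: always drop
    # in between: drop probability rises linearly from 0 to max_p
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```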
- the traffic manager 301 can perform dequeue management on the message queue stored in the memory 302. For example, when the traffic manager 301 determines, according to weighted fair queueing (WFQ), that a message in the message queue stored in the memory 302 needs to be sent, the traffic manager 301 may send a scheduling signal to the memory 302 according to the message descriptor at the head of the message descriptor queue.
- WFQ weighted fair queueing
- the traffic manager 301 may also determine the packets that need to be sent in the packet queue stored in the memory 302 according to other queue scheduling algorithms.
- the scheduling signal includes the storage location of the message at the head of the message queue.
- the scheduling signal is used to instruct the memory 302 to provide the traffic manager 301 with the message at the head of the message queue.
- the memory 302 provides the message at the head of the message queue to the traffic manager 301 and deletes the sent message in the message queue.
- the traffic manager 301 obtains the message at the head of the message queue from the memory 302, and sends the message to the ingress switching network interface chip 303. After the traffic manager 301 sends a message to the ingress switching network interface chip 303, the traffic manager 301 deletes the message descriptor corresponding to the sent message in the message descriptor queue.
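The dequeue flow above (read the head descriptor, fetch the message at its storage location, delete both the message and its descriptor) can be sketched as follows; the data structures and names are illustrative stand-ins for the traffic manager 301 and the memory 302:

```python
from collections import deque

# Illustrative model of the dequeue flow: the descriptor queue (held by the
# traffic manager) stores storage locations; the memory maps locations to
# messages. Dequeuing reads the head descriptor, fetches the message, and
# deletes both the message and its descriptor.
memory_302 = {0: "msg-A", 1: "msg-B"}     # storage location -> stored message
descriptor_queue = deque([0, 1])          # head descriptor first

def dequeue_one() -> str:
    loc = descriptor_queue.popleft()      # descriptor at the head of the queue
    message = memory_302.pop(loc)         # memory provides the message and deletes it
    return message                        # then forwarded to the iFIC

print(dequeue_one())  # -> msg-A
```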
- the message queue is stored in the memory 302.
- the traffic manager 301 and the memory 302 may be packaged in the same chip.
- the memory 302 can be called an on-chip memory.
- the operating frequency of the traffic manager 301 may be equal to the operating frequency of the memory 302.
- On-chip memory can also be called high-speed memory.
- a memory can also be provided outside the chip.
- the memory provided outside the chip can be called an off-chip memory.
- the bandwidth of the on-chip memory is greater than that of the off-chip memory.
- the off-chip memory can also be called a low-speed memory.
- the traffic manager 301 and the memory 302 may be packaged in different chips.
- the memory 302 can be called an off-chip memory.
- the operating frequency of the traffic manager 301 may be greater than the operating frequency of the memory 302.
- a memory is provided inside the traffic manager 301, and the bandwidth of the memory inside the traffic manager 301 is greater than the bandwidth of the memory 302; in this case, the memory 302 may also be referred to as a low-speed memory.
- the bandwidth of the memory refers to the rate at which data is stored (written) to the memory or the rate at which data is read from the memory.
- the bandwidth of memory 1 is greater than that of memory 2 means that the rate of storing data to memory 1 is greater than the rate of storing data to memory 2, or the rate of reading data from memory 1 is greater than the rate of reading data from memory 2.
- the interface board 1230 may also include other circuits with storage functions.
- the interface board 1230 may also include a memory 1234.
- the functions of the memory 1234 and the memory 302 are different.
- the memory 1234 is used to store the routing table.
- the network processor accesses the memory 1234 to find the routing table.
- the memory 302 is used to store the first-in first-out queue.
- the traffic manager 301 realizes the management of the first-in first-out queue by accessing the memory 302.
- the memory 1234 and the memory 302 may be relatively independent memories. In a possible implementation, the memory 1234 and the memory 302 may be included in one chip.
- alternatively, only one of the memory 302 and the memory 1234 may be included.
- FIG. 4 is a schematic structural diagram of the interface board 1240 shown in FIG. 2 in a possible implementation manner.
- the interface board 1240 includes a network processor 1242, a traffic manager 402, a memory 403, a physical interface card 1243, and an eFIC 401.
- FIG. 4 only shows part of the components included in the interface board 1240.
- the interface board 1240 shown in FIG. 4 may also include components in the interface board 1240 shown in FIG. 2.
- the interface board shown in Figure 4 can perform queue scheduling and management on downstream traffic. Downstream traffic may refer to traffic received by the interface board 1240 via the switching network board 1220 and to be sent to the physical interface card 1243.
- the physical interface card 1243 can send the downstream traffic to the router 4 via the third out port.
- when the egress switching network interface chip 401 receives multiple cells from the switching network board 1220, the egress switching network interface chip 401 can generate a message according to the multiple cells and send the message to the network processor 1242.
- the traffic manager 402 may perform discard management on the packets received by the network processor 1242.
- the traffic manager 402 may perform enqueue management on the messages received by the network processor 1242. Specifically, the received message is placed in the message queue in the memory 403 according to the scheduling algorithm, such as at the end of the message queue.
- the traffic manager 402 can perform dequeue management on the message queue stored in the memory 403.
- the message queue may be a first-in first-out queue.
- the memory 403 may be a first-in first-out memory. After the traffic manager 402 obtains the messages in the message queue stored in the memory 403, the traffic manager 402 may send the obtained messages to the physical interface card 1243.
- the physical interface card 1243 can send a message to the router 4 via the third egress port.
- FIG. 5A is a schematic flowchart of a method for managing a memory according to an embodiment of the application.
- the method may include: S501 and S502.
- the method shown in FIG. 5A may be executed by a network device.
- the method shown in FIG. 5A may be executed by the interface board 1230 in the network device shown in FIG. 3.
- the method shown in FIG. 5A can be executed by the traffic manager 301.
- the method shown in FIG. 5A may also be executed by the interface board 1240 in the network device shown in FIG. 4.
- the method shown in FIG. 5A may be executed by the traffic manager 402.
- the method shown in FIG. 5A can also be executed by other software and hardware systems.
- S501 Determine that the available storage space of the first memory in the network device is less than a first threshold, the first threshold is greater than 0, and the first message queue is stored in the first memory.
- for example, the first threshold may be set to 1/3, 1/4, 1/5, or 1/6 of the total storage space of the first memory; other setting methods may also be adopted.
- the specific setting of the first threshold can be configured according to actual conditions.
- the network device involved in S501 may be the network device shown in FIG. 3.
- the traffic manager can detect the occupancy of the memory. For example, the traffic manager can save the total storage space of the memory in advance. When no data is stored in the memory, the traffic manager can determine that the available storage space of the memory is equal to the total storage space. When the traffic manager performs a write operation on the memory, the traffic manager can determine that the available storage space of the memory is equal to the total storage space minus the size of the data corresponding to the write operation. When the traffic manager performs a read operation on the memory, the traffic manager can determine that the available storage space of the memory is equal to the last determined available storage space plus the size of the data corresponding to the read operation.
- the traffic manager can determine that the available storage space of the memory is equal to the last determined available storage space plus the size of the data corresponding to the delete operation. It can be understood that the traffic manager can queue messages to the message queue through a write operation. The message can be dequeued from the message queue through a read operation. The message stored in the memory can be deleted through the delete operation.
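The bookkeeping described above can be sketched as follows; this is an illustrative model, assuming the data size is known for every write, read, and delete operation:

```python
# Illustrative bookkeeping for the available storage space: it starts at the
# total storage space, shrinks on write (enqueue) operations, and grows on
# read (dequeue) and delete operations of known size.
class MemoryAccountant:
    def __init__(self, total_bytes: int):
        self.available = total_bytes      # no data stored yet

    def on_write(self, size: int) -> None:
        self.available -= size            # enqueuing consumes space

    def on_read_or_delete(self, size: int) -> None:
        self.available += size            # dequeuing/deleting frees space

acct = MemoryAccountant(total_bytes=1024)
acct.on_write(300)                        # a 300-byte message is enqueued
acct.on_read_or_delete(100)               # a 100-byte message is dequeued
print(acct.available)  # -> 824
```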
- S502 Delete at least one message at the end of the first message queue from the first memory based on the available storage space of the first memory being less than the first threshold.
- the condition that the available storage space of the first memory is less than the first threshold is expressed from the perspective of available storage space; of course, it can also be expressed from the perspective of used storage space, for example, as the used storage space exceeding a threshold derived from the total storage space of the first memory.
- the traffic manager may save the first threshold in advance.
- the traffic manager can delete the messages stored in the memory through the delete operation.
- one or more memories may be used to maintain the message queue.
- when a single memory is used to maintain the message queue, deleting at least one message at the end of the first message queue from the first memory means that the at least one message at the end of the first message queue is directly deleted from the first memory without being stored in another location.
- the traffic manager 301 and the first memory may be packaged in the same chip, or may be packaged in different chips. Refer to the relationship between the traffic manager 301 and the first memory shown in FIG. 6A, that is, packaged in the same chip. It should be understood that when the first memory and the traffic manager 301 are packaged in the same chip, the first memory may be referred to as a high-speed memory. When the first memory and the traffic manager 301 are packaged in different chips, the first memory may be called a low-speed memory. Among them, the bandwidth of low-speed memory is smaller than that of high-speed memory.
- when the message queue is maintained through at least two memories, deleting at least one message at the end of the first message queue from the first memory means that the at least one message at the end of the first message queue is moved from the first memory to the second memory, that is, the at least one message at the end of the first message queue is first copied to the second memory and then deleted from the first memory.
- the traffic manager 301 and the first memory are packaged in the same chip, and the traffic manager 301 and the second memory are packaged in different chips, see FIG.
- the bandwidth of the first memory is greater than the bandwidth of the second memory.
- the first memory may be called a high-speed memory or an on-chip memory, and the second memory may be called a low-speed memory or an off-chip memory.
- the first memory may be static random-access memory (SRAM), and the second memory may be dynamic random-access memory (DRAM). It should be understood that both the first memory and the second memory may be packaged in different chips from the traffic manager 301, but the bandwidth of the first memory is greater than the bandwidth of the second memory.
- the first memory and the second memory described in the embodiments of the present application may specifically be caches.
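The copy-then-delete move from the first memory to the second memory can be sketched as follows; the queue contents and the dictionary-based model of the two memories are illustrative:

```python
# Illustrative copy-then-delete move: the tail message of queue 1 is first
# copied to the second (off-chip) memory and only then deleted from the
# first (on-chip) memory, so the message is never lost mid-move. The
# dictionary model of the two memories is an assumption for the sketch.
first_memory = {"q1": [10, 20, 30]}       # head ... tail, on-chip part
second_memory = {"q1": []}                # off-chip part of the same queue

def move_tail(qid: str) -> None:
    tail = first_memory[qid][-1]          # message at the end of the queue
    second_memory[qid].insert(0, tail)    # copy first (later moves sit closer to the head)
    first_memory[qid].pop()               # then delete it from the first memory

move_tail("q1")
move_tail("q1")
print(first_memory["q1"], second_memory["q1"])  # -> [10] [20, 30]
```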
- the first message queue can be selected from the multiple message queues maintained in various ways. The following two examples are described in the embodiment of the present application.
- the first possible implementation is to directly select a message queue from the multiple message queues stored in the first memory as the first message queue, for example, the message queue with the largest delay or the message queue with the largest length among the multiple message queues.
- therefore, the selected first message queue is a message queue among the plurality of message queues stored in the first memory, and the parameter value of the first message queue is larger than the parameter values of the other message queues among the plurality of message queues; the parameter value of a message queue is the delay of the message queue or the length of the message queue, and the delay of the message queue is the length of time that the message at the end of the message queue is expected to stay in the message queue.
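Combining S501/S502 with this first selection strategy, a minimal sketch might look as follows, modeling the parameter value as the queue length and assuming one unit of storage is freed per deleted message:

```python
# Sketch of S501/S502 combined with the first selection strategy: while the
# available space is below the first threshold, the queue with the largest
# parameter value (modeled here as the queue length) loses its tail message.
# Freeing exactly msg_size units per deletion is a simplifying assumption.
def reclaim_space(queues: dict, available: int, first_threshold: int,
                  msg_size: int = 1) -> int:
    while available < first_threshold:
        # first message queue = the queue with the largest parameter value
        qid = max(queues, key=lambda q: len(queues[q]))
        if not queues[qid]:
            break                         # nothing left to delete anywhere
        queues[qid].pop()                 # delete the message at the tail
        available += msg_size             # deletion frees storage space
    return available

queues = {"q1": [1, 2, 3, 4], "q2": [1, 2]}
print(reclaim_space(queues, available=0, first_threshold=3))  # -> 3
```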
- the second possible implementation manner is to divide the multiple message queues stored in the first memory into multiple message queue sets. Different message queue sets can correspond to different parameter value ranges, and there is no overlap between parameter value ranges corresponding to different message queue sets.
- the parameter value can be the delay of the message queue or the length of the message queue.
- the selected first message queue may be a message queue in the message queue set with the largest parameter value range. For example, the multiple message queues stored in the first memory are divided into two message queue sets, namely a first message queue set and a second message queue set, and the selected first message queue is a message queue in the first message queue set, the parameter value range corresponding to the first message queue set being larger than the parameter value range corresponding to the second message queue set.
- each message queue in the first message queue set is stored in the first memory, and each message queue in the second message queue set is also stored in the first memory;
- the parameter value of the message queue is the delay of the message queue or the length of the message queue, and the delay of the message queue is the message at the end of the message queue The length of time the message is expected to stay in the message queue.
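The set-based selection can be sketched as follows; the delay values and the boundary between the two parameter-value ranges are illustrative:

```python
# Sketch of the second strategy: queues are partitioned into sets by
# non-overlapping parameter-value ranges (delay here), and the first message
# queue is chosen from the set with the largest range. The 500us boundary
# and the delay values are illustrative.
delays_us = {"q1": 800, "q2": 750, "q3": 120, "q4": 40}

set1 = {q for q, d in delays_us.items() if d >= 500}   # range [500us, ...)
set2 = {q for q, d in delays_us.items() if d < 500}    # range [0us, 500us)

# pick a queue from the set with the largest parameter-value range
first_queue = max(set1, key=lambda q: delays_us[q])
print(first_queue)  # -> q1
```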
- the method shown in FIG. 5A is executed by the traffic manager 301 as an example.
- the traffic manager 301 selects the message queue with the largest parameter value from the plurality of message queues stored in the first memory, that is, selects from the plurality of message queues the message queue with the largest delay or the message queue with the largest length as the first message queue.
- in the following, the case in which the parameter value of the message queue is the delay of the message queue is taken as an example for description.
- the delay of the message queue described later refers to the length of time that the last message stored in the first memory in the message queue is expected to stay in the message queue.
- taking message queue 1 as an example, the length of time that message N is expected to stay in message queue 1 indicates the delay of the first message queue.
- that is, the delay of the message queue 1 is the length of time that message N is expected to stay in message queue 1.
- after the message N in message queue 1 is moved from the first memory to the second memory, the delay of message queue 1 becomes the length of time that message N-1 is expected to stay in the message queue, as shown in FIG. 7B.
- a message adjacent to the last message is also considered to be at the end of the queue in the embodiment of this application. For example, taking the deletion of more than one message as an example, in FIG. 7A, the messages located at the end of the message queue 1 in the first memory are message N and message N-1.
- the traffic manager 301 may maintain the delay (and/or length) of each message queue.
- the traffic manager 301 may only maintain the delay (and/or length) of a part of the message queue in the first memory.
- the part of the message queue is among multiple message queues stored in the first memory.
- m is an integer greater than 0.
- the traffic manager 301 may record the delay of m message queues with a larger delay through the message queue information table.
- the message queue information table includes the identifiers of m message queues, taking QID as an example, and the delays corresponding to the identifiers of the m message queues.
- Table 1 shows the message queue information table, and Table 1 takes the delay of recording two message queues as an example.
- when the traffic manager 301 deletes at least one message at the end of the message queue 1 with the largest delay from the first memory, it determines the message queue 1 according to the identifier of the message queue 1 recorded in the queue information table, and deletes at least one message at the end of the message queue 1 from the first memory. After deleting the at least one message at the end of the message queue 1 from the first memory, the traffic manager 301 updates the delay of the message queue 1 in the queue information table. Taking the deletion of one message from the message queue 1 as an example, the updated delay of the message queue 1 is the length of time that the message N-1 is expected to stay in the message queue 1.
- when the message at the head of the queue is dequeued from the message queue 1, the traffic manager 301 also modifies the delay of the message queue 1 in the queue information table accordingly. Of course, for the other message queues shown in Table 1, when a message at the head of the queue is dequeued from another message queue, the traffic manager 301 also updates the delay of that message queue in Table 1 accordingly. The traffic manager 301 maintains, in the queue information table, the m message queues with large delays. When the enqueuing of messages causes the delay of a message queue other than the m message queues to exceed the minimum delay in the queue information table, the minimum delay in the queue information table and the identifier of the message queue corresponding to the minimum delay are updated.
- for example, because message queue 3 has a message entering the queue, the delay of message queue 3 after the message is enqueued is 790us, that is, the delay of message queue 3 is greater than the delay of message queue 2; therefore, in Table 1, the identifier of the message queue 2 is replaced with the identifier of the message queue 3, and the delay of the message queue 2 is replaced with the delay of the message queue 3.
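Maintaining the table of the m message queues with the largest delays, including replacing the minimum entry when another queue's delay grows past it (as in the message queue 3 example above), might be sketched as follows; m and the delay values are illustrative:

```python
# Sketch of the queue information table: only the m queues with the largest
# delays are tracked, and when another queue's delay grows past the table
# minimum, the minimum entry is replaced (as in the message queue 3 example
# above). m = 2 and the delay values are illustrative.
def update_table(table: dict, qid: str, delay_us: int, m: int = 2) -> None:
    if qid in table or len(table) < m:
        table[qid] = delay_us             # tracked queue or free slot: just record
        return
    min_qid = min(table, key=table.get)   # entry with the minimum delay
    if delay_us > table[min_qid]:
        del table[min_qid]                # replace the minimum entry
        table[qid] = delay_us

table = {"q1": 800, "q2": 780}            # delays in microseconds
update_table(table, "q3", 790)            # q3's delay now exceeds q2's
print(table)  # -> {'q1': 800, 'q3': 790}
```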
- after the traffic manager 301 deletes at least one message at the end of the message queue 1 from the first memory, if the available storage space of the first memory is greater than or equal to the first threshold, the deletion operation is not continued. If the available storage space of the first memory is still less than the first threshold, the traffic manager 301 continues to select the message queue with the largest delay in the queue information table. For example, after message queue 1 deletes the message N at the end of the queue, if the delay of the message queue 1 is still greater than the delay of the message queue 2, the last message N-1 in the message queue 1 is deleted from the first memory; if instead the delay of message queue 2 is greater than the delay of message queue 1, the message at the end of the message queue 2 is deleted from the first memory.
- a message 1 to be queued to the message queue 1 is received;
- when the message queue 1 meets at least one of the following first condition and second condition, the message 1 is not stored in the first memory; when the message queue 1 meets neither the first condition nor the second condition, the message 1 is stored in the first memory.
- the first condition is: before the message 1 is enqueued into the message queue 1, the length of the message queue 1 stored in the first memory is not less than a third threshold, and the third threshold is greater than 0;
- the second condition is: before the message 1 is enqueued into the message queue 1, the length of time that the message at the end of the message queue 1 stored in the first memory is expected to stay in the message queue 1 is not less than a fourth threshold, and the fourth threshold is greater than 0.
- after deleting at least one message at the end of the message queue 1 from the first memory, if a message 1 belonging to the message queue 1 is received, the traffic manager 301 can directly discard the message 1 until the message queue 1 as a whole becomes shorter (the delay of the message queue 1 is less than the fourth threshold or the length of the message queue 1 is less than the third threshold), so that a certain amount of free space is available to store packets with lower latency that join the queue.
- the third threshold and the fourth threshold can be preset based on empirical values. For example, if congestion generally occurs and the space of the first memory becomes insufficient when the length of the message queue reaches K messages, the third threshold can be set to K/2, K/3, K/4, and so on.
- the third threshold and the fourth threshold may also be set dynamically. For example, when it is decided to delete end-of-queue messages from message queue 1, the delay of message queue 1 is X; as shown in FIG. 7A, X is equal to the length of time that message N is expected to stay in message queue 1, and the fourth threshold can then be set to X/2, X/3, X/4, and so on.
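Under the reading above (an arriving message for message queue 1 is discarded while the queue is still long or its tail delay is still high), the admission check might be sketched as follows; all threshold values are illustrative:

```python
# Sketch of the admission check, under the reading above that an arriving
# message for message queue 1 is discarded while the queue is still long or
# its tail delay is still high; all threshold values are illustrative.
def admit(queue_len: int, tail_delay_us: int,
          third_threshold: int, fourth_threshold: int) -> bool:
    """Return True if the arriving message may be stored in the first memory."""
    if queue_len >= third_threshold:      # first condition: queue still too long
        return False
    if tail_delay_us >= fourth_threshold: # second condition: tail delay still too high
        return False
    return True

print(admit(queue_len=4, tail_delay_us=100,
            third_threshold=10, fourth_threshold=300))  # -> True
```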
- the following describes in detail the length and/or delay of maintaining multiple message queues in the case of storing multiple message queues through the first memory and the second memory.
- the traffic manager 301 may maintain the delay (and/or length) of each message queue stored in the first memory.
- the traffic manager 301 may maintain the delay (and/or length) of a part of the message queue in the first memory.
- the part of the message queues may be the w message queues with larger delays among the multiple message queues stored in the first memory, where w is an integer greater than 0.
- the time delay of the message queue stored in the first memory refers to the length of time the last message stored in the first memory in the message queue stays in the message queue.
- the traffic manager 301 may record the delays of w message queues with larger delays through the message queue information table.
- Table 2 is the message queue information table.
- the message queue information table includes the identifiers of the w message queues (taking QID as an example) and the delays corresponding to the identifiers of the w message queues. Since the messages at the end of some message queues may have been moved to the second memory, in order to facilitate the next move operation, the identifier of the last message of the message queue that is still stored in the first memory can be recorded in the message queue information table.
- the identifier of the message may be the ID number of the message or the storage address of the message. Table 2 takes the recording of the delays of three message queues as an example.
- when the traffic manager 301 deletes at least one message at the end of the message queue 1 from the first memory, it determines the message queue 1 according to the identifier of the message queue 1 recorded in the queue information table, and moves at least one message at the end of the message queue 1 from the first memory to the second memory. After moving the at least one message at the end of the message queue 1 from the first memory to the second memory, the traffic manager 301 updates the delay of the message queue 1 in the queue information table. Taking the moving of one message from the message queue 1 as an example, the updated delay of the message queue 1 is the length of time that the message N-1 is expected to stay in the message queue 1.
- when a message at the head of the queue is dequeued from the message queue 1, the traffic manager 301 also updates the delay of the message queue 1 in the queue information table accordingly. Of course, for the other message queues shown in Table 2, when a message at the head of the queue is dequeued from another message queue, the traffic manager 301 also updates the delay of that message queue in the queue information table accordingly. The traffic manager 301 maintains, in the queue information table, the w message queues with large delays. When the enqueuing of messages causes the delay of another message queue to become greater than the minimum delay in the queue information table, the minimum delay in the queue information table and the identifier of the message queue corresponding to the minimum delay are updated.
- for example, assuming that the delay of the message queue 5 is 775us, that is, the delay of the message queue 5 is greater than the delay of the message queue 3, the identifier of the message queue 3 in Table 2 is replaced with the identifier of the message queue 5, and the delay of the message queue 3 is replaced with the delay of the message queue 5.
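- the replacement rule above can be sketched as follows; this is an illustrative model (the dict structure and function name are assumptions, not from the patent) of a queue information table that keeps the identifiers and delays of the w largest-delay queues:

```python
def update_queue_info_table(table, qid, delay, w):
    """table: dict mapping queue identifier (QID) -> delay in us.
    Holds at most w entries, the queues with the largest delays; when
    another queue's delay grows past the table minimum, the minimum
    entry (identifier and delay) is replaced, as described in the text."""
    if qid in table or len(table) < w:
        table[qid] = delay              # already tracked, or a free slot exists
        return
    min_qid = min(table, key=table.get)  # entry with the minimum delay
    if delay > table[min_qid]:           # displace it if the new delay is larger
        del table[min_qid]
        table[qid] = delay

# Example mirroring the text: queue 5 reaches 775us and displaces queue 3.
table = {1: 900, 2: 850, 3: 760}
update_queue_info_table(table, 5, 775, w=3)
```
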
- after the traffic manager 301 deletes at least one message at the end of the message queue 1 from the first memory, if the available storage space of the first memory is greater than or equal to the first threshold, the move operation is not continued. If the available storage space of the first memory is still less than the first threshold, the traffic manager 301 again selects the message queue with the largest delay in the queue information table. For example, after the message N at the end of the message queue 1 has been moved from the first memory to the second memory, if the delay of the message queue 1 is still greater than the delay of the message queue 2, the new tail message N-1 of the message queue 1 is moved from the first memory to the second memory.
- if, instead, the delay of the message queue 2 is greater than the delay of the message queue 1, the message at the end of the message queue 2 is moved from the first memory to the second memory.
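- this loop can be sketched roughly as follows (all names are illustrative assumptions, and each message is taken to occupy one unit of space): the traffic manager keeps moving the tail message of whichever queue currently has the largest delay until the available space reaches the first threshold:

```python
def relieve_on_chip(queues, table, available, first_threshold):
    """queues: qid -> list of per-message delays (head first) still on chip.
    table:  qid -> delay of that queue's on-chip tail message.
    Repeatedly picks the largest-delay queue and moves its tail message
    off chip (freeing one unit of space) until available >= threshold."""
    moved = []
    while available < first_threshold and table:
        qid = max(table, key=table.get)      # queue with the largest delay
        moved.append((qid, queues[qid].pop()))  # tail message leaves the chip
        available += 1
        if queues[qid]:
            table[qid] = queues[qid][-1]     # delay of the new tail (N-1)
        else:
            del table[qid]                   # nothing of this queue on chip
    return moved, available

# Queue 1's tail (delay 30) moves first; queue 2 (25) then has the maximum.
moved, avail = relieve_on_chip({1: [10, 20, 30], 2: [5, 15, 25]},
                               {1: 30, 2: 25}, available=0, first_threshold=2)
```
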
- the traffic manager 301 may also use two tables to record the delays of w message queues with larger delays.
- Table 3 is the message queue information table for queues that have not been moved.
- the message queue information table includes the identifiers (for example, QIDs) of w1 message queues and the delays corresponding to those identifiers.
- Table 4 is the message queue information table for queues that have been moved.
- the moved message queue information table records the identifiers of the w2 moved message queues with larger delays, the delays corresponding to the identifiers of the w2 message queues, and the identifier of the last message of each of those message queues that is stored in the first memory. The delay corresponding to the identifier of a message queue among the w2 identifiers is the length of time that the last message of that message queue stored in the first memory stays in the message queue.
- when the traffic manager 301 deletes at least one message at the end of the message queue 1 from the first memory, it determines the message queue 1 according to the identifier of the message queue 1 recorded in Table 3, and moves at least one message at the end of the message queue 1 from the first memory to the second memory. After the move, taking the deletion of one message (the message N) from the message queue 1 shown in FIG. 7A as an example, the traffic manager 301 re-determines the delay of the message queue 1 and compares the re-determined delay of the message queue 1 with the minimum delay in Table 4.
- when the re-determined delay of the message queue 1 is greater than the minimum delay in Table 4, the minimum delay in Table 4 is updated to the delay of the message queue 1, the identifier of the message queue corresponding to that minimum delay is updated to the identifier of the message queue 1, the Last On-Chip Packet corresponding to that minimum delay is updated to the identifier of the message N-1 in the message queue 1, and the identifier and the delay of the message queue 1 are deleted from Table 3.
- the updated time delay of the message queue 1 is the length of time that the message N-1 is expected to stay in the message queue 1.
- when a message at the head of the queue is dequeued from the message queue 1, the traffic manager 301 also modifies the delay of the message queue 1 in Table 3 or Table 4 accordingly. Of course, for the other message queues shown in Table 3, when a head-of-queue message is dequeued from another message queue, the traffic manager 301 likewise updates the delay of that message queue in the corresponding queue information table.
- the traffic manager 301 maintains the queue information tables so that they store the w message queues with the largest delays. When, because of message enqueuing, the delay of some other message queue becomes greater than the minimum delay in Table 3, the minimum delay in Table 3 and the identifier of the message queue corresponding to that minimum delay are updated.
- for example, assuming that the delay of the message queue 5 is 775us, that is, the delay of the message queue 5 is greater than the delay of the message queue 3, the identifier of the message queue 5 and the delay of the message queue 5 are stored in a free position in Table 3.
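- a hypothetical sketch of this two-table bookkeeping (Table 3 and Table 4 modeled as dicts; all names are assumptions, not from the patent): after a queue's tail messages are moved, its entry leaves the not-moved table and competes for a slot in the moved table, which also records the identifier of the last on-chip message:

```python
def record_move(t3, t4, qid, new_delay, last_on_chip, w2):
    """t3: qid -> delay (queues not yet moved, like Table 3).
    t4: qid -> (delay, last_on_chip_msg) (moved queues, like Table 4).
    Removes qid from t3; enters it into t4 when there is room, or when
    its re-determined delay exceeds t4's minimum delay, per the text."""
    t3.pop(qid, None)
    if len(t4) < w2:
        t4[qid] = (new_delay, last_on_chip)
        return
    min_qid = min(t4, key=lambda q: t4[q][0])
    if new_delay > t4[min_qid][0]:       # replace the minimum-delay entry
        del t4[min_qid]
        t4[qid] = (new_delay, last_on_chip)

# Queue 1 (re-determined delay 880us, last on-chip message "N-1") is moved;
# it displaces queue 8, the minimum-delay entry of the moved table.
t3 = {1: 900, 6: 600}
t4 = {7: (500, "m7"), 8: (450, "m8")}
record_move(t3, t4, 1, 880, "N-1", w2=2)
```
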
- after the traffic manager 301 moves at least one message at the end of the message queue 1 from the first memory to the second memory, it may receive a message 1 to be enqueued into the message queue 1. When the message queue 1 meets neither the following first condition nor the following second condition, the traffic manager avoids storing the message 1 in the first memory and stores the message 1 in the second memory; when the message queue 1 meets at least one of the first condition and the second condition, that is, the message queue 1 has become short again, the message 1 is stored in the first memory.
- the first condition is: before the message 1 is enqueued into the message queue 1, the length of the message queue 1 stored in the first memory is less than a third threshold, and the third threshold is greater than 0;
- the second condition is: before the message 1 is enqueued into the message queue 1, the length of time that the message at the end of the message queue 1 stored in the first memory is expected to stay in the message queue 1 is less than a fourth threshold, and the fourth threshold is greater than 0.
- in another embodiment, after deleting at least one message at the end of the message queue 1 from the first memory, if a message 1 belonging to the message queue 1 is received, the traffic manager 301 can directly discard the message 1 until the message queue 1 becomes shorter as a whole (the delay of the message queue 1 is less than the fourth threshold, or the length of the message queue 1 is less than the third threshold), so that a certain amount of space is kept free for storing low-latency messages being enqueued.
- the third threshold and the fourth threshold can be preset based on empirical values. For example, if congestion generally occurs and the space of the first memory becomes insufficient when the length of a message queue reaches K messages, the third threshold can be set to K/2, K/3, K/4, and so on.
- the third threshold and the fourth threshold may also be set dynamically. For example, suppose that when the operation of deleting tail messages is decided for the message queue 1, the delay of the message queue 1 is X; as shown in Figure 7A, X is equal to the length of time that the message N is expected to stay in the message queue 1. The fourth threshold can then be set to X/2, X/3, X/4, and so on.
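- these conditions amount to an admission check for new arrivals after a move; a minimal sketch (the function name, return labels, and threshold values are illustrative assumptions), in which a queue that satisfies either condition, meaning it has become short again, regains the first memory:

```python
def placement(on_chip_len, tail_delay_us, third_threshold, fourth_threshold):
    """Returns where a message newly enqueued to this queue is stored.
    First condition:  on-chip length < third threshold.
    Second condition: on-chip tail delay < fourth threshold.
    Either satisfied -> the queue is short again -> first (on-chip) memory;
    otherwise the message bypasses the first memory (or is discarded,
    in the variant described in the text)."""
    if on_chip_len < third_threshold or tail_delay_us < fourth_threshold:
        return "first"
    return "second"

# Dynamic fourth threshold: X is the queue delay when deletion was decided.
X = 800
fourth = X / 2          # e.g. X/2, X/3, X/4 ...
```
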
- for example, when the available storage space of the first memory is less than the fifth threshold, at least one message at the end of the message queue 2 is deleted from the first memory, and the traffic manager avoids storing the at least one message deleted from the message queue 2 in the second memory.
- the above considers the first memory from the perspective of available storage space; of course, it can also be considered from the perspective of used storage space, for example, by setting a forbidden-zone threshold based on the total storage space of the first memory. As shown in Figure 5B, HL represents the forbidden-zone threshold.
- when the rate of moving is lower than the rate at which new messages are enqueued, the resource occupation of the first memory (on-chip memory) continues to grow. Therefore, when the occupied space of the first memory reaches the forbidden-zone threshold, in addition to moving message queues, a deletion operation is performed on other message queues, so that the resources of the first memory can be released as soon as possible. Of course, when the occupied space of the first memory falls below the forbidden-zone threshold, the deletion operation on message queues is stopped.
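- the relationship between the first threshold (start moving) and the fifth, forbidden-zone threshold (also start deleting) can be sketched as a simple decision; the function name and the returned labels are illustrative, not from the patent:

```python
def pressure_action(available, first_threshold, fifth_threshold):
    """fifth_threshold < first_threshold. Below the first threshold, tail
    messages of the largest-delay queue are moved off chip; once available
    space also falls below the fifth (forbidden-zone) threshold, tail
    messages of a further queue are deleted outright, so that on-chip
    space is released faster than moving alone allows."""
    assert fifth_threshold < first_threshold
    if available >= first_threshold:
        return "none"
    if available >= fifth_threshold:
        return "move"
    return "move+delete"
```
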
- specifically, a message queue, such as the message queue 3, is selected from the total queue information table, and at least one message at the end of the message queue 3 is deleted (regardless of whether the at least one message is stored in the first memory or in the second memory).
- the total queue information table can be stored in the network device.
- the total queue information table includes the identifier of the message queue 3 and the total delay or total length of the message queue 3.
- the total delay of the message queue 3 refers to the length of time that the message at the end of the whole queue, across the first memory and the second memory, stays in the message queue 3.
- the message queue 3 includes 100 messages, which are message 1, message 2, message 3, ..., message 100.
- Message 1-message 98 are stored in the first memory, and message 99 and message 100 are stored in the second memory.
- the total delay of the message queue 3 is the length of time that the message 100 stays in the message queue 3, and the total length of the message queue 3 is equal to 100.
- during a move, the total delay or total length of the message queue 3 does not change; it changes only when a message is dequeued from or enqueued into the message queue 3, or when tail messages are deleted from the first memory and the second memory. Specifically, when a head-of-queue message is dequeued from the message queue 3, the total delay or total length of the message queue 3 is modified in Table 5 accordingly.
- for example, the total delay of the message queue 4 is less than the total delay of the message queue 3, but the total delay of the message queue 4 is greater than the total delays of the other message queues.
- the total queue information table maintained by the traffic manager 301 stores the d message queues with the largest total delays, where d is an integer greater than 0.
- when, because of message enqueuing, the total delay of some other message queue becomes greater than the minimum total delay in Table 5, the minimum total delay in Table 5 and the identifier of the message queue corresponding to that minimum total delay are updated.
- for example, assuming that the total delay of the message queue 5 is greater than the total delay of the message queue 4, the identifier of the message queue 4 in Table 5 is replaced with the identifier of the message queue 5, and the total delay of the message queue 4 is replaced with the total delay of the message queue 5.
- the multiple message queues stored in the first memory are divided into multiple message queue sets. Different message queue sets can correspond to different parameter value ranges, and there is no overlap between parameter value ranges corresponding to different message queue sets.
- the parameter value interval can be a delay interval or a length interval. In the following description, take the delay interval as an example.
- each packet queue set is stored in the network device.
- multiple message queues belonging to the same message queue set can be concatenated in the form of a linked list.
- the network device stores a first linked list corresponding to the first packet queue set, and the multiple nodes included in the first linked list respectively correspond to the multiple packet queues in the first packet queue set.
- the first linked list may be stored in the first memory of the network device.
- the network device further includes other memories, the first linked list may also be stored in other memories.
- as shown in FIG. 8, 16 delay intervals may be defined: <20us, <40us, <80us, ..., >1ms.
- the message queue set belonging to the <20us delay interval includes Q1; the message queue set belonging to the <40us delay interval includes Q2 and Q4, and Q2 and Q4 are connected through a linked list; the message queue set belonging to the <80us delay interval includes Q3 and Q5, and Q3 and Q5 are connected through a linked list.
- when the traffic manager 301 deletes at least one message at the end of the first message queue from the first memory based on the available storage space of the first memory being less than the first threshold, it determines the first linked list according to the identifier of the first packet queue set, and then determines the first packet queue according to the identifier of the first packet queue in the first linked list.
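- one plausible way to model the delay-interval sets and the selection of the first message queue (interval bounds follow the style of Figure 8; a plain list stands in for the linked list, and all names are assumptions):

```python
BOUNDS_US = [20, 40, 80]          # <20us, <40us, <80us, ... (Figure 8 style)

def interval_index(delay_us):
    """Index of the delay-interval set a queue with this delay belongs to."""
    for i, bound in enumerate(BOUNDS_US):
        if delay_us < bound:
            return i
    return len(BOUNDS_US)         # the "beyond the last bound" set

# Build the sets of Figure 8: Q1 in <20us; Q2, Q4 in <40us; Q3, Q5 in <80us.
sets = {i: [] for i in range(len(BOUNDS_US) + 1)}
for qid, delay in [(1, 12), (2, 31), (4, 38), (3, 61), (5, 70)]:
    sets[interval_index(delay)].append(qid)

def select_first_queue():
    """Pick a queue from the non-empty set with the largest delay interval,
    here simply the head of that set's list (linked-list head)."""
    for i in sorted(sets, reverse=True):
        if sets[i]:
            return sets[i][0]
```
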
- the following describes in detail the maintenance of the length and/or delay of the multiple packet queues in the case where the network device stores multiple packet queues through the first memory and the second memory.
- the network device records message queue information, where the message queue information includes the identifier of each message queue set, the delay interval corresponding to the identifier of each message queue set, the identifier of each message queue included in each message queue set, and the parameter value of each message queue. The message queue information may be stored in the first memory or the second memory; if the network device further includes other memories, the message queue information may also be stored in the other memories.
- a message queue is selected for moving from the message queue set corresponding to the largest recorded delay interval. For example, referring to the message queues corresponding to the different delay intervals shown in Figure 8, the largest delay interval is the <80us delay interval, so a message queue, such as the message queue Q3, is selected from the <80us delay interval. When moving, at least one message at the end of the message queue Q3 is moved from the first memory to the second memory; as shown in FIG. 9, the two messages at the end of the message queue Q3 are moved from the first memory to the second memory as an example, thereby releasing the resources of the first memory (on-chip memory) for use by low-latency messages and improving the use efficiency of the on-chip memory.
- during the moving process, if the length of time that the new tail message of the message queue Q3 remaining in the first memory stays in the message queue Q3 is less than the threshold of the delay interval corresponding to the message queue set where Q3 is currently located, the message queue Q3 is moved to the message queue set corresponding to the smaller delay interval.
- for example, after the move, if the new tail message of the message queue Q3 left in the first memory stays in the message queue Q3 for 38us, the message queue Q3 can be concatenated after Q4 through the linked list of the <40us delay interval.
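- this re-bucketing step can be sketched as follows (same illustrative interval bounds as in Figure 8): after the move, Q3's new on-chip tail delay of 38us places it in the <40us set, behind Q4:

```python
def interval_index(delay_us, bounds=(20, 40, 80)):
    """Index of the delay-interval set for a given on-chip tail delay."""
    for i, bound in enumerate(bounds):
        if delay_us < bound:
            return i
    return len(bounds)

sets = {0: [1], 1: [2, 4], 2: [3, 5], 3: []}   # interval index -> queue ids

def rebucket(qid, old_delay_us, new_delay_us):
    """Move qid to the set of its new on-chip tail delay; appending to the
    list models concatenating the queue at the tail of the linked list."""
    old_i, new_i = interval_index(old_delay_us), interval_index(new_delay_us)
    if new_i != old_i:
        sets[old_i].remove(qid)
        sets[new_i].append(qid)

rebucket(3, 61, 38)      # Q3: <80us set -> <40us set, concatenated after Q4
```
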
- in addition, if a new message is to be enqueued into the message queue Q3 after the move, the new message can be directly stored in the second memory, and subsequent messages enqueued into the message queue Q3 can also be stored in the second memory, until the message queue Q3 becomes short as a whole; after that, if a new message is enqueued into the message queue Q3, it is no longer stored in the second memory but in the first memory.
- it should be noted that some network devices, by design, need to pass through the first memory when storing messages into the second memory; that is, the messages need to be stored in the first memory first and then moved to the second memory.
- the overall shortening of the message queue Q3 means that at least one of the foregoing first condition and second condition is satisfied.
- first condition and the second condition please refer to the above description, which will not be repeated here.
- if the available storage space of the first memory is still less than the first threshold, another message queue can be selected from the message queue set with the largest delay interval for moving; for example, as shown in Figure 9, the message queue Q5 is selected for moving, and the two messages at the end of the message queue Q5 that are stored in the first memory are moved from the first memory to the second memory.
- in addition, when the available storage space of the first memory is less than a fifth threshold, where the fifth threshold is less than the first threshold, a message queue can be selected from the message queue set with the largest delay interval for deletion. For example, while the message queue Q3 is being moved, when it is determined that the available storage space of the first memory is less than the fifth threshold, the message queue Q5 is selected from the message queue set with the largest delay interval, the two messages at the end of the message queue Q5 are deleted from the first memory, and the traffic manager avoids storing the at least one message deleted from the message queue Q5 in the second memory.
- when the moving rate is lower than the rate at which new messages are enqueued, the resource occupation of the first memory (on-chip memory) continues to grow. Therefore, when the occupied space of the first memory reaches the forbidden-zone threshold, that is, when the available storage space of the first memory is less than the fifth threshold, the deletion operation is performed on other message queues in addition to moving, so that the resources of the first memory can be released as soon as possible. Of course, when the occupied space of the first memory falls below the forbidden-zone threshold, the deletion operation on message queues is stopped.
- the message queue information is recorded in the network device.
- when the delay or length of a message queue changes, the message queue information of the message queue needs to be updated; the message queue information needs to be traversed based on the identifier of the message queue to determine the storage location storing the message queue information, so as to modify the message queue information, for example, updating the parameter value of the message queue, updating the storage address of the message at the end of the message queue stored in the first memory, and so on.
- hash may be used to assist in recording message queue information.
- the message queue information of some message queues can be stored and indexed by hashing; for example, the message queue information of message queues with delays greater than T1, or of message queues with lengths greater than L1, is recorded by hashing.
- the network device stores a hash table (hash table), an on-chip queue information table (On-Chip Queue Info Table), an on-chip delay interval list, and the identifier (such as a Group ID) of the packet queue set corresponding to the largest delay interval in which a packet queue exists.
- the hash table is used to store the identifier (such as the QID) of each message queue with a delay greater than T1 and the index value (such as index) of the message queue; the index value is used to index the storage location that stores the message queue information of the message queue.
- the on-chip queue information table (On-Chip Queue Info Table) is used to store information such as the identifier of the message queue and the delay or length of the message queue.
- the on-chip queue information table is also used to store the storage address of the last message of the message queue stored in the first memory. If the message queue collection is concatenated in the form of a linked list, the on-chip queue information table can also store the previous node of the message queue and the next node of the message queue.
- the on-chip delay interval list includes the identifier of the message queue set, the delay interval corresponding to the message queue set, and the index value of the head end message queue of the linked list corresponding to the message queue set.
- when a message 2 is enqueued into the message queue 1 and the delay of the message queue 1 is greater than T1, the delay of the message queue 1 is used to determine the message queue set to which the message queue 1 belongs; based on a hash algorithm, it is queried whether the hash table includes the index of the message queue corresponding to the identifier (QID) of the message queue 1 to which the message 2 belongs; when the hash table does not include the index corresponding to the identifier of the message queue 1, the identifier of the message queue 1 and the corresponding index of the message queue 1 are added to the hash table; based on the added index of the message queue 1, the storage address of the message 2 in the first memory is added to the on-chip queue information table.
- for example, if the QID is 1512, the stored content at the hashed location includes [QID; index], where the QID field stores 1512 and the index field stores the row address allocated for the message queue in the On-Chip Queue Info Table (such as number 99).
- the message queue set corresponds to the previous node of the linked list.
- to accommodate hash conflicts, each row address can store multiple entries (for example, 4, that is, there are 4 columns); the relationship among these 4 entries is completely parallel.
- to further reduce hash conflicts, multiple hash tables, such as two hash tables, may be used in the embodiments of this application, each hash table using a different hash function: f1(x) and f2(x). Of course, more hash tables can be used, such as a 3-left or 4-left hash table, but the complexity of the solution increases accordingly.
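- a rough sketch of such a two-table ("2-left") hash arrangement; the row count, the concrete hash functions, and the four-way rows are illustrative assumptions, not values from the patent:

```python
ROWS, WAYS = 1024, 4                  # rows per table; 4 parallel entries/row

tables = [[[] for _ in range(ROWS)] for _ in range(2)]
hash_fns = [lambda qid: qid % ROWS,                    # stand-in for f1(x)
            lambda qid: (qid * 2654435761) % ROWS]     # stand-in for f2(x)

def insert(qid, index):
    """Store [QID; index] in the less loaded of the two candidate rows;
    fails only when both candidate rows already hold WAYS entries."""
    rows = [tables[t][hash_fns[t](qid)] for t in range(2)]
    row = min(rows, key=len)          # the "2-left" choice
    if len(row) >= WAYS:
        return False
    row.append((qid, index))
    return True

def lookup(qid):
    """Return the On-Chip Queue Info Table row address stored for qid."""
    for t in range(2):
        for q, idx in tables[t][hash_fns[t](qid)]:
            if q == qid:
                return idx
    return None

insert(1512, 99)     # QID 1512 mapped to row 99 of the queue info table
```
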
- when a move is to be performed, the identifier of the message queue set corresponding to the largest delay interval is obtained from the on-chip delay interval list; the storage address, in the first memory, of at least one message at the end of the selected message queue 1 is determined in the on-chip queue information table based on the index of the head message queue of the linked list; then, based on the storage address, the at least one message at the end of the message queue 1 is moved from the first memory to the second memory.
- the multiple message queues stored in the total buffer are divided into multiple total buffer message queue sets based on the total delay or total length of the message queue.
- the network device can also store a total cache hash table (hash table), a long queue information table (Long Queue Info Table), a list of total cache delay intervals, and the corresponding maximum delay interval of the message queue.
- the identifier of the total buffered message queue set such as Group ID.
- the total cache hash table is used to store the identifier (such as the QID) of each message queue whose total delay is greater than T2 and the index value (such as index) of the message queue; the index value is used to index the storage location that stores the message queue information of the message queue.
- T2 can be set relatively large, for example, a message queue is recorded only after its total delay exceeds 1ms, and the delay interval span corresponding to the total-buffer message queue sets can also be larger, thereby reducing the number of recorded message queues and total-buffer message queue sets.
- the long queue information table (Long Queue Info Table) is used to store information such as the identifier of the message queue and the total delay or total length of the message queue.
- the long queue information table can also be used to store the storage address of the last message stored in the total buffer in the message queue. If the total buffer message queue set is connected in series in the form of a linked list, the long queue information table can also store the previous node of the message queue and the next node of the message queue.
- the total buffer delay interval list includes the identifier of the total buffer message queue set, the total delay interval corresponding to the total buffer message queue set, and the index value of the head end message queue of the linked list corresponding to the total buffer message queue set.
- the update methods of the total cache hash table (hash table), the long queue information table (Long Queue Info Table), the total buffer delay interval list, and the identifier of the total-buffer message queue set corresponding to the largest delay interval in which a message queue exists are the same as the update methods of the hash table, the on-chip queue information table, the on-chip delay interval list, and the identifier of the message queue set corresponding to the largest delay interval in which a message queue exists, and are not repeated here.
- a message queue may also be selected, according to the total buffer delay interval list, from the total-buffer message queue set with the largest total delay interval for deletion. For example, when it is determined that the sum of the available storage space of the first memory and the second memory is less than a sixth threshold, the index of the head message queue of the linked list in the total-buffer message queue set corresponding to the identifier of the total-buffer message queue set with the largest total delay interval is obtained from the total buffer delay interval list; a message queue, such as the message queue Q3, is selected from the long queue information table based on that index; the storage address, in the total buffer, of at least one message at the end of the message queue Q3 is determined; and, based on that storage address, the at least one message at the end of the message queue Q3 is deleted from the total buffer.
- the maintenance of the length and/or delay of the multiple packet queues will be described in detail below in the case that the network device stores multiple packet queues through the first memory.
- the network device records message queue information, where the message queue information includes the identifier of each message queue set, the delay interval corresponding to the identifier of each message queue set, the identifier of each message queue included in each message queue set, and the parameter value of each message queue. The message queue information may be stored in the first memory or the second memory; if the network device further includes other memories, the message queue information may also be stored in the other memories.
- a message queue is selected for deletion from the message queue set corresponding to the largest recorded delay interval. For example, referring to the message queues corresponding to the different delay intervals shown in FIG. 8, a message queue such as the message queue Q3 is selected. When discarding, at least one message at the tail of the message queue Q3 is deleted from the first memory; taking the deletion of the two messages at the tail of the message queue Q3 from the first memory as an example, the resources of the first memory (on-chip buffer) are released for low-latency messages, improving the use efficiency of the on-chip buffer.
- during the deletion process, if the length of time that the new tail message of the message queue Q3 remaining in the first memory stays in the message queue Q3 is less than the threshold of the delay interval corresponding to the message queue set where Q3 is currently located, the message queue Q3 is moved to the message queue set corresponding to the smaller delay interval. For example, after at least one message at the end of the message queue Q3 has been deleted from the first memory, if the new tail message of the message queue Q3 left in the first memory stays in the message queue Q3 for 38us, the message queue Q3 can be concatenated after Q4 through the linked list.
- if new messages are subsequently enqueued into the message queue Q3, the new messages can be directly discarded, and subsequently received messages belonging to the message queue Q3 can be discarded until the message queue Q3 becomes shorter as a whole; after that, a new message enqueued into the message queue Q3 is stored in the first memory.
- the overall shortening of the message queue Q3 means that at least one of the foregoing first condition and second condition is satisfied.
- first condition and the second condition please refer to the above description, which will not be repeated here.
- if the available storage space of the first memory is still less than the first threshold, another message queue can be selected from the message queue set with the largest delay interval for deletion; for example, the message queue Q5 is selected for deletion, and the two messages at the end of the message queue Q5 are deleted from the first memory.
- likewise, the message queue information of some message queues can be stored and indexed by hashing; for example, the message queue information of message queues with delays greater than T1, or of message queues with lengths greater than L1, is recorded by hashing. For details, refer to the above.
- the manner of maintaining the length and/or delay of the multiple message queues is similar to that used when the network device stores multiple message queues through the first memory and the second memory, and is not repeated here.
- with the above solution, the use efficiency of the on-chip memory can be improved, and the off-chip memory is not used before the usage of the on-chip memory reaches the alarm region, thereby reducing the power consumption and delay of the device in use.
- FIG. 13 is a schematic structural diagram of a memory management device provided by an embodiment of the application.
- the memory management device can be used to execute the method shown in FIG. 5A. Specifically, it can be used to execute S501 and S502.
- the storage management apparatus 1300 may include a determining unit 1301 and a management unit 1302.
- the determining unit 1301 is configured to determine that the available storage space of the first memory in the network device is less than a first threshold, the first threshold is greater than 0, and the first message queue is stored in the first memory;
- the management unit 1302 is configured to delete at least one message at the end of the first message queue from the first memory based on the available storage space of the first memory being less than the first threshold.
- the determining unit 1301 may be used to perform S501.
- the management unit 1302 may be used to perform S502.
- the storage management apparatus 1300 may be a network device.
- the storage management apparatus 1300 may be an interface board 1230 in a network device.
- the storage management device 1300 may be the traffic manager 301.
- the memory management apparatus 1300 may be the interface board 1240 in the network device shown in FIG. 4.
- the storage management device 1300 may be the traffic manager 402.
- the traffic manager 301 may implement the function of the memory management apparatus 1300.
- the first message queue is a message queue among multiple message queues stored in the first memory, and the parameter value of the first message queue is greater than the parameter values of the other message queues among the multiple message queues. The parameter value of a message queue is the delay of the message queue or the length of the message queue, and the delay of a message queue is the length of time that a message at the end of the message queue is expected to stay in the message queue.
- the network device stores a queue information table, and the queue information table records the parameter values of the first message queue and the identifier of the first message queue;
- the management unit 1302 is specifically configured to: determine the first message queue according to the identifier of the first message queue recorded in the queue information table, and delete at least one message at the end of the first message queue from the first memory.
- the network device includes a second memory, and the bandwidth of the second memory is smaller than the bandwidth of the first memory
- the management unit 1302 is further configured to: after the determining unit 1301 determines that the available storage space of the first memory in the network device is less than the first threshold, store the at least one message at the end of the first message queue in the second memory, and update the parameter value of the first message queue recorded in the queue information table;
- the updated parameter value of the first message queue is from the at least one message from the
- the first memory is deleted, it is estimated that the first message will stay in the first message queue for the length of time, the first message is a message in the first message queue, and the first message The message is adjacent to the at least one message, and the first message is located before the at least one message; or, when the parameter value of the first message queue is the length of the first message queue, update The parameter value of the subsequent first message queue is the length of the first message queue stored in the first memory for deleting the at least one message.
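A minimal sketch of this demote-and-update step, under assumed names and a packet-count parameter value (the text itself allows either the delay or the length):

```python
# Sketch: moving tail messages of a queue from a fast memory to a slower
# one and updating the recorded parameter value (here: the count of
# messages still held in the fast memory). All names are illustrative.

fast_memory = {1: [100, 200, 300, 400]}   # qid -> message sizes (bytes)
slow_memory = {1: []}
queue_info = {"qid": 1, "param": 4}       # parameter value: message count

def demote_tail(qid, n):
    q = fast_memory[qid]
    tail = q[-n:]
    del q[-n:]                       # deleted from the first (fast) memory
    slow_memory[qid].extend(tail)    # ...but kept in the second memory
    if queue_info["qid"] == qid:
        queue_info["param"] = len(q)  # updated parameter value

demote_tail(1, 2)
```

After the call, the head of the queue remains in the fast memory while its tail survives in the slower memory, and the queue information table reflects only the fast-memory portion.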
- the management unit 1302 is further configured to store the identifier of the first message in the queue information table.
- the device may further include: a receiving unit 1303, configured to receive messages to be enqueued.
- the receiving unit 1303 is further configured to receive a second message; the management unit 1302 is further configured to: enqueue the second message into the second message queue; determine the parameter value of the second message queue when the second message is located at the tail of the second message queue; and, when the first message is located at the tail of the second message queue and the parameter value of the second message queue is greater than the parameter value of the first message queue recorded in the queue information table, replace the identifier of the first message queue recorded in the queue information table with the identifier of the second message queue, and replace the parameter value of the first message queue recorded in the queue information table with the parameter value of the second message queue.
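The enqueue-time bookkeeping described in this paragraph can be sketched as follows (names and sizes are illustrative assumptions): each enqueue recomputes the affected queue's parameter value and overwrites the single recorded entry whenever a new maximum appears.

```python
# Sketch: keeping the queue information table pointing at the queue with
# the largest parameter value as messages are enqueued. Illustrative names.

queues = {1: [1500, 1500], 2: [1500]}     # qid -> message sizes (bytes)
queue_info = {"qid": 1, "param": 3000}    # current maximum (queue length)

def enqueue(qid, size_bytes):
    queues[qid].append(size_bytes)
    param = sum(queues[qid])              # parameter value: queue length
    if param > queue_info["param"]:       # a new worst queue: replace both
        queue_info["qid"] = qid           # the identifier...
        queue_info["param"] = param       # ...and the parameter value

enqueue(2, 1500)   # queue 2 reaches 3000 bytes: not greater, no change
enqueue(2, 1500)   # queue 2 reaches 4500 bytes: becomes the recorded queue
```
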
- the receiving unit 1303 is configured to receive, after the management unit 1302 deletes the at least one message at the tail of the first message queue from the first memory, a third message to be enqueued into the first message queue;
- the management unit 1302 is further configured to store the third message in the second memory when the first message queue meets at least one of the first condition and the second condition; or, when the first message queue meets neither the first condition nor the second condition, store the third message in the first memory;
- the first condition is: when determining whether to enqueue the third message into the first message queue, the length of the first message queue stored in the first memory and the second memory is less than a third threshold, where the third threshold is greater than 0;
- the second condition is: when determining whether to enqueue the third message into the first message queue, the length of time that the message at the tail of the first message queue stored in the first memory and the second memory is expected to stay in the first message queue is less than a fourth threshold, where the fourth threshold is greater than 0.
- the receiving unit 1303 is configured to receive, after the management unit 1302 deletes the at least one message at the tail of the first message queue from the first memory, a third message to be enqueued into the first message queue; the management unit 1302 is further configured to: avoid storing the third message in the first memory when the first message queue meets at least one of the following first condition and second condition; or, when the first message queue meets neither the first condition nor the second condition, store the third message in the first memory;
- the first condition is: when determining whether to enqueue the third message into the first message queue, the length of the first message queue stored in the first memory and the second memory is less than a third threshold, where the third threshold is greater than 0;
- the second condition is: when determining whether to enqueue the third message into the first message queue, the length of time that the message at the tail of the first message queue stored in the first memory and the second memory is expected to stay in the first message queue is less than a fourth threshold, where the fourth threshold is greater than 0.
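The placement rule, taken exactly as the two passages above state it (thresholds and queue state are assumed values; note that meeting either condition routes the third message to the second memory, and meeting neither routes it to the first memory):

```python
# Sketch of the enqueue-placement rule exactly as stated above.
# Threshold values and queue state are illustrative assumptions.

THIRD_THRESHOLD_BYTES = 9000   # > 0, assumed value
FOURTH_THRESHOLD_US = 500      # > 0, assumed value

def placement(total_len_bytes, tail_delay_us):
    """Return 'second' when the queue meets at least one of the two
    conditions, else 'first'. The length and delay cover the parts of
    the queue held in both memories."""
    cond1 = total_len_bytes < THIRD_THRESHOLD_BYTES   # first condition
    cond2 = tail_delay_us < FOURTH_THRESHOLD_US       # second condition
    return "second" if (cond1 or cond2) else "first"

print(placement(12000, 800))  # meets neither condition -> first
print(placement(3000, 800))   # meets the length condition -> second
```
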
- the network device includes a second memory, the bandwidth of the second memory is smaller than the bandwidth of the first memory, and the first memory further stores a third message queue;
- the management unit 1302 is further configured to, after the determining unit 1301 determines that the available storage space of the first memory in the network device is less than the first threshold, store the at least one message at the tail of the first message queue in the second memory; the determining unit 1301 is further configured to, after the management unit 1302, based on the available storage space of the first memory being less than the first threshold, deletes the at least one message at the tail of the first message queue from the first memory, determine that the available storage space of the first memory is less than a second threshold, where the second threshold is less than the first threshold and greater than 0; the management unit 1302 is further configured to, based on the available storage space of the first memory being less than the second threshold, delete at least one message at the tail of the third message queue from the first memory, and avoid storing the at least one message at the tail of the third message queue in the second memory.
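The two-threshold escalation can be sketched as follows (thresholds and message contents are illustrative assumptions): below the first threshold, tail messages of the first queue are demoted into the slower second memory; if free space falls further, below the stricter second threshold, tail messages of the third queue are deleted outright and not kept in the second memory.

```python
# Sketch: first-threshold demotion versus second-threshold outright drop.
# All sizes, thresholds, and names are illustrative assumptions.

FIRST_THRESHOLD = 4000    # bytes of free fast-memory space
SECOND_THRESHOLD = 2000   # 0 < SECOND_THRESHOLD < FIRST_THRESHOLD

def relieve_pressure(free_space, first_q_tail, third_q_tail):
    """Return (kept_in_slow_memory, dropped) per the escalation above."""
    kept, dropped = [], []
    if free_space < FIRST_THRESHOLD:
        kept.extend(first_q_tail)      # demoted: still retrievable later
    if free_space < SECOND_THRESHOLD:
        dropped.extend(third_q_tail)   # deleted, NOT stored anywhere
    return kept, dropped

kept, dropped = relieve_pressure(1500, [b"a", b"b"], [b"c"])
```

With 1500 bytes free, both thresholds are crossed: the first queue's tail is kept in the slow memory while the third queue's tail is simply lost.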
- the first message queue is one of a plurality of message queues included in a first message queue set, the parameter value of each message queue in the first message queue set is greater than the parameter value of each message queue in a plurality of message queues included in a second message queue set, each message queue in the first message queue set is stored in the first memory, each message queue in the second message queue set is stored in the first memory, and the network device saves the identifier of the first message queue set and the identifier of the second message queue set;
- the parameter value of a message queue is the delay of the message queue or the length of the message queue, where the delay of the message queue is the length of time that the message at the tail of the message queue is expected to stay in the message queue;
- the management unit 1302 is specifically configured to: determine the first message queue according to the saved identifier of the first message queue set, and delete the at least one message at the tail of the first message queue from the first memory.
- the network device saves a first linked list corresponding to the first message queue set, the multiple nodes included in the first linked list correspond one-to-one to the multiple message queues in the first message queue set, and each node in the first linked list includes the identifier of the corresponding message queue.
- the management unit 1302, when determining the first message queue according to the saved identifier of the first message queue set, is specifically configured to: determine the first linked list according to the saved identifier of the first message queue set, and determine the first message queue according to the identifier of the first message queue in the first linked list.
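One possible shape for the per-set linked list (the node layout and set identifiers are assumptions): the device maps a set identifier to the head of its linked list, and each node carries one queue identifier, so the first message queue is reached by following the head pointer.

```python
# Sketch: locating the first message queue through a per-set linked list.
# Node layout and all identifiers are illustrative assumptions.

class Node:
    def __init__(self, qid, nxt=None):
        self.qid = qid    # identifier of the corresponding message queue
        self.next = nxt

# First message queue set holds queues 3 and 4; second set holds queue 7.
linked_lists = {
    "set1": Node(3, Node(4)),
    "set2": Node(7),
}

def first_queue_of_set(set_id):
    """Determine the linked list from the set identifier, then the
    queue from the first node's identifier."""
    head = linked_lists[set_id]
    return head.qid

print(first_queue_of_set("set1"))  # 3
```
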
- FIG. 14 is a schematic structural diagram of another memory management device provided by this application.
- the memory management device 1400 may be used to execute the method shown in FIG. 5A.
- the memory management device 1400 includes an input interface 1401, an output interface 1402, a processor 1403, a memory 1404 and a bus 1405.
- the input interface 1401, the output interface 1402, the processor 1403, and the memory 1404 can communicate through the bus 1405.
- the input interface 1401 is used to receive messages.
- the output interface 1402 is used to send messages.
- the memory 1404 is used to store computer programs.
- the processor 1403 can execute the method shown in FIG. 5A by accessing the computer program in the memory 1404.
- the storage management apparatus 1400 may be a network device.
- the memory management apparatus 1400 may be an interface board 1230 in a network device.
- the storage management device 1400 may be the traffic manager 301.
- the memory management apparatus 1400 may be the interface board 1240 in the network device shown in FIG. 4.
- the storage management device 1400 may be the traffic manager 402.
- the traffic manager 301 may implement the function of the storage management device 1400.
- the input interface 1401 may be implemented by a physical interface card 1233.
- the output interface 1402 can be implemented by iFIC303.
- the processor 1403 may be implemented by the WRED circuit 603 and the queue threshold determination circuit 604.
- for the input interface 1401, the output interface 1402, the processor 1403, and the memory 1404, reference may be made to the description of the embodiment shown in FIG. 6, which will not be repeated here.
- the application also provides a computer-readable storage medium.
- the computer-readable storage medium is used to store computer programs. When the computer program is executed, the computer can be caused to execute the method shown in FIG. 5A.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- this application also provides a computer program product, and the computer program product includes a computer program.
- when the computer program is executed, the computer can be caused to execute the method shown in FIG. 5A.
- for details, please refer to the description of the embodiment shown in FIG. 5A, which will not be repeated here.
- the size of the sequence number of each process does not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.
- the modules and method steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or by a combination of computer software and electronic hardware depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods for each specific application to implement the described functions.
- the software involved in the above embodiments can be implemented in the form of a computer program product.
- the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions described in the embodiments of the present application are generated in whole or in part.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium.
- the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
- the wired manner may be coaxial cable, optical fiber, or digital subscriber line (digital subscriber line, DSL).
- the wireless manner may be, for example, infrared, radio, or microwave.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
- the usable medium may be a magnetic medium, an optical medium, or a semiconductor medium.
- Magnetic media can be floppy disks, hard drives, or tapes.
- the optical medium may be a digital versatile disc (DVD).
- the semiconductor medium may be a solid state disk (solid state disk, SSD).
Description
QID (message queue identifier) | Queue latency (message queue delay) |
---|---|
Message queue 1 | 800 us |
Message queue 2 | 780 us |

QID (message queue identifier) | Queue latency (message queue delay) |
---|---|
Message queue 1 | 800 us |
Message queue 2 | 780 us |

QID (message queue identifier) | Queue latency (message queue delay) |
---|---|
Message queue 3 | 900 us |
Message queue 4 | 880 us |
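The queue information tables above can be represented as plain data to show how the worst queue is picked (illustrative only; the delays come from the first table):

```python
# The first queue information table above, as plain data (illustrative).
queue_latency_us = {
    "Message queue 1": 800,
    "Message queue 2": 780,
}
# The queue with the largest delay is the candidate for tail deletion.
worst = max(queue_latency_us, key=queue_latency_us.get)
print(worst)  # Message queue 1
```
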
Claims (20)
- A memory management method, comprising: determining that the available storage space of a first memory in a network device is less than a first threshold, wherein the first threshold is greater than 0 and the first memory stores a first message queue; and, based on the available storage space of the first memory being less than the first threshold, deleting at least one message at the tail of the first message queue from the first memory.
- The method according to claim 1, wherein the first message queue is one of a plurality of message queues stored in the first memory, the parameter value of the first message queue is greater than the parameter values of the other message queues in the plurality of message queues, the parameter value of a message queue is the delay of the message queue or the length of the message queue, and the delay of the message queue is the length of time that the message at the tail of the message queue is expected to stay in the message queue.
- The method according to claim 2, wherein the network device saves a queue information table that records the parameter value of the first message queue and the identifier of the first message queue; and deleting the at least one message at the tail of the first message queue from the first memory comprises: determining the first message queue according to the identifier of the first message queue recorded in the queue information table, and deleting the at least one message at the tail of the first message queue from the first memory.
- The method according to claim 3, wherein the network device comprises a second memory whose bandwidth is smaller than the bandwidth of the first memory; and after determining that the available storage space of the first memory in the network device is less than the first threshold, the method further comprises: storing the at least one message at the tail of the first message queue in the second memory, and updating the parameter value of the first message queue recorded in the queue information table; wherein, when the parameter value of the first message queue is the delay of the first message queue, the updated parameter value of the first message queue is the length of time, counted from the moment the at least one message is deleted from the first memory, that a first message is expected to stay in the first message queue, the first message being a message in the first message queue that is adjacent to the at least one message and located before the at least one message; or, when the parameter value of the first message queue is the length of the first message queue, the updated parameter value of the first message queue is the length of the first message queue stored in the first memory after the at least one message is deleted.
- The method according to claim 4, further comprising: saving the identifier of the first message in the queue information table.
- The method according to any one of claims 3 to 5, further comprising: receiving a second message; enqueuing the second message into a second message queue; determining the parameter value of the second message queue when the second message is located at the tail of the second message queue; and, when the first message is located at the tail of the second message queue and the parameter value of the second message queue is greater than the parameter value of the first message queue recorded in the queue information table, replacing the identifier of the first message queue recorded in the queue information table with the identifier of the second message queue, and replacing the parameter value of the first message queue recorded in the queue information table with the parameter value of the second message queue.
- The method according to any one of claims 1 to 3, wherein the network device comprises a second memory whose bandwidth is smaller than the bandwidth of the first memory, and the first memory further stores a third message queue; after determining that the available storage space of the first memory in the network device is less than the first threshold, the method further comprises: storing the at least one message at the tail of the first message queue in the second memory; and after deleting, based on the available storage space of the first memory being less than the first threshold, the at least one message at the tail of the first message queue from the first memory, the method further comprises: determining that the available storage space of the first memory is less than a second threshold, wherein the second threshold is less than the first threshold and greater than 0; and, based on the available storage space of the first memory being less than the second threshold, deleting at least one message at the tail of the third message queue from the first memory while avoiding storing the at least one message at the tail of the third message queue in the second memory.
- The method according to claim 1, wherein the first message queue is one of a plurality of message queues included in a first message queue set, the parameter value of each message queue in the first message queue set is greater than the parameter value of each message queue in a plurality of message queues included in a second message queue set, each message queue in the first message queue set is stored in the first memory, each message queue in the second message queue set is stored in the first memory, and the network device saves the identifier of the first message queue set and the identifier of the second message queue set; wherein the parameter value of a message queue is the delay of the message queue or the length of the message queue, and the delay of the message queue is the length of time that the message at the tail of the message queue is expected to stay in the message queue; and deleting, based on the available storage space of the first memory being less than the first threshold, the at least one message at the tail of the first message queue from the first memory comprises: determining the first message queue according to the saved identifier of the first message queue set, and deleting the at least one message at the tail of the first message queue from the first memory.
- The method according to claim 8, wherein the network device saves a first linked list corresponding to the first message queue set, the multiple nodes included in the first linked list correspond one-to-one to the multiple message queues in the first message queue set, and each node in the first linked list includes the identifier of the corresponding message queue; and determining the first message queue according to the saved identifier of the first message queue set comprises: determining the first linked list according to the saved identifier of the first message queue set, and determining the first message queue according to the identifier of the first message queue in the first linked list.
- A memory management apparatus, comprising: a determining unit, configured to determine that the available storage space of a first memory in a network device is less than a first threshold, wherein the first threshold is greater than 0 and the first memory stores a first message queue; and a management unit, configured to delete, based on the available storage space of the first memory being less than the first threshold, at least one message at the tail of the first message queue from the first memory.
- The apparatus according to claim 10, wherein the first message queue is one of a plurality of message queues stored in the first memory, the parameter value of the first message queue is greater than the parameter values of the other message queues in the plurality of message queues, the parameter value of a message queue is the delay of the message queue or the length of the message queue, and the delay of the message queue is the length of time that the message at the tail of the message queue is expected to stay in the message queue.
- The apparatus according to claim 11, wherein the network device saves a queue information table that records the parameter value of the first message queue and the identifier of the first message queue; and the management unit is specifically configured to: determine the first message queue according to the identifier of the first message queue recorded in the queue information table, and delete the at least one message at the tail of the first message queue from the first memory.
- The apparatus according to claim 12, wherein the network device comprises a second memory whose bandwidth is smaller than the bandwidth of the first memory; the management unit is further configured to: after the determining unit determines that the available storage space of the first memory in the network device is less than the first threshold, store the at least one message at the tail of the first message queue in the second memory, and update the parameter value of the first message queue recorded in the queue information table; wherein, when the parameter value of the first message queue is the delay of the first message queue, the updated parameter value of the first message queue is the length of time, counted from the moment the at least one message is deleted from the first memory, that a first message is expected to stay in the first message queue, the first message being a message in the first message queue that is adjacent to the at least one message and located before the at least one message; or, when the parameter value of the first message queue is the length of the first message queue, the updated parameter value of the first message queue is the length of the first message queue stored in the first memory after the at least one message is deleted.
- The apparatus according to claim 13, wherein the management unit is further configured to save the identifier of the first message in the queue information table.
- The apparatus according to any one of claims 12 to 14, further comprising: a receiving unit, configured to receive a second message; wherein the management unit is further configured to: enqueue the second message into a second message queue; determine the parameter value of the second message queue when the second message is located at the tail of the second message queue; and, when the first message is located at the tail of the second message queue and the parameter value of the second message queue is greater than the parameter value of the first message queue recorded in the queue information table, replace the identifier of the first message queue recorded in the queue information table with the identifier of the second message queue, and replace the parameter value of the first message queue recorded in the queue information table with the parameter value of the second message queue.
- The apparatus according to any one of claims 10 to 12, wherein the network device comprises a second memory whose bandwidth is smaller than the bandwidth of the first memory, and the first memory further stores a third message queue; the management unit is further configured to, after the determining unit determines that the available storage space of the first memory in the network device is less than the first threshold, store the at least one message at the tail of the first message queue in the second memory; the determining unit is further configured to, after the management unit, based on the available storage space of the first memory being less than the first threshold, deletes the at least one message at the tail of the first message queue from the first memory, determine that the available storage space of the first memory is less than a second threshold, wherein the second threshold is less than the first threshold and greater than 0; and the management unit is further configured to, based on the available storage space of the first memory being less than the second threshold, delete at least one message at the tail of the third message queue from the first memory while avoiding storing the at least one message at the tail of the third message queue in the second memory.
- The apparatus according to claim 10, wherein the first message queue is one of a plurality of message queues included in a first message queue set, the parameter value of each message queue in the first message queue set is greater than the parameter value of each message queue in a plurality of message queues included in a second message queue set, each message queue in the first message queue set is stored in the first memory, each message queue in the second message queue set is stored in the first memory, and the network device saves the identifier of the first message queue set and the identifier of the second message queue set; wherein the parameter value of a message queue is the delay of the message queue or the length of the message queue, and the delay of the message queue is the length of time that the message at the tail of the message queue is expected to stay in the message queue; and the management unit is specifically configured to: determine the first message queue according to the saved identifier of the first message queue set, and delete the at least one message at the tail of the first message queue from the first memory.
- The apparatus according to claim 17, wherein the network device saves a first linked list corresponding to the first message queue set, the multiple nodes included in the first linked list correspond one-to-one to the multiple message queues in the first message queue set, and each node in the first linked list includes the identifier of the corresponding message queue; and the management unit, when determining the first message queue according to the saved identifier of the first message queue set, is specifically configured to: determine the first linked list according to the saved identifier of the first message queue set, and determine the first message queue according to the identifier of the first message queue in the first linked list.
- A memory management apparatus, comprising: a processor and a memory coupled to the processor, wherein the memory stores a computer program, and when the processor executes the computer program, the apparatus is caused to perform the method according to any one of claims 1 to 9.
- A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed, a computer is caused to perform the method according to any one of claims 1 to 9.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19915801.5A EP3920475A4 (en) | 2019-02-22 | 2019-02-22 | MEMORY MANAGEMENT METHOD AND DEVICE |
PCT/CN2019/075935 WO2020168563A1 (zh) | 2019-02-22 | 2019-02-22 | Memory management method and apparatus |
JP2021549288A JP7241194B2 (ja) | 2019-02-22 | 2019-02-22 | Memory management method and apparatus |
KR1020217030182A KR20210130766A (ko) | 2019-02-22 | 2019-02-22 | Memory management method and apparatus |
CN201980092378.XA CN113454957B (zh) | 2019-02-22 | 2019-02-22 | Memory management method and apparatus |
US17/408,057 US11695710B2 (en) | 2019-02-22 | 2021-08-20 | Buffer management method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/075935 WO2020168563A1 (zh) | 2019-02-22 | 2019-02-22 | Memory management method and apparatus |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/408,057 Continuation US11695710B2 (en) | 2019-02-22 | 2021-08-20 | Buffer management method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020168563A1 (zh) | 2020-08-27 |
Family
ID=72144041
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/075935 WO2020168563A1 (zh) | 2019-02-22 | 2019-02-22 | 一种存储器的管理方法及装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11695710B2 (zh) |
EP (1) | EP3920475A4 (zh) |
JP (1) | JP7241194B2 (zh) |
KR (1) | KR20210130766A (zh) |
CN (1) | CN113454957B (zh) |
WO (1) | WO2020168563A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11381515B2 (en) * | 2019-06-28 | 2022-07-05 | Intel Corporation | On-demand packet queuing in a network device |
CN114257559B (zh) * | 2021-12-20 | 2023-08-18 | 锐捷网络股份有限公司 | 一种数据报文的转发方法及装置 |
WO2023130997A1 (zh) * | 2022-01-07 | 2023-07-13 | 华为技术有限公司 | 管理流量管理tm控制信息的方法、tm模块和网络转发设备 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080192765A1 (en) * | 2007-02-12 | 2008-08-14 | Jong-Sang Oh | Apparatus and method for packet buffer management in IP network system |
CN102789336A (zh) * | 2012-07-04 | 2012-11-21 | 广东威创视讯科技股份有限公司 | 多屏拼接触控方法和系统 |
CN106598495A (zh) * | 2016-12-07 | 2017-04-26 | 深圳市深信服电子科技有限公司 | 一种混合存储服务质量的控制方法及控制装置 |
CN109343790A (zh) * | 2018-08-06 | 2019-02-15 | 百富计算机技术(深圳)有限公司 | 一种基于nand flash的数据存储方法、终端设备及存储介质 |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3526269B2 (ja) | 2000-12-11 | 2004-05-10 | 株式会社東芝 | ネットワーク間中継装置及び該中継装置における転送スケジューリング方法 |
US7657706B2 (en) * | 2003-12-18 | 2010-02-02 | Cisco Technology, Inc. | High speed memory and input/output processor subsystem for efficiently allocating and using high-speed memory and slower-speed memory |
US20060031565A1 (en) * | 2004-07-16 | 2006-02-09 | Sundar Iyer | High speed packet-buffering system |
WO2010089886A1 (ja) | 2009-02-06 | 2010-08-12 | 富士通株式会社 | パケットバッファ装置及びパケット廃棄方法 |
US8441927B2 (en) | 2011-01-13 | 2013-05-14 | Alcatel Lucent | System and method for implementing periodic early discard in on-chip buffer memories of network elements |
CN102223675B (zh) * | 2011-06-08 | 2014-06-04 | 大唐移动通信设备有限公司 | 拥塞告警及处理方法、系统和设备 |
CN102957629B (zh) * | 2011-08-30 | 2015-07-08 | 华为技术有限公司 | 队列管理的方法和装置 |
CN102404206A (zh) * | 2011-11-04 | 2012-04-04 | 深圳市海思半导体有限公司 | 入队处理方法及设备 |
CN102595512B (zh) * | 2012-03-19 | 2014-08-27 | 福建星网锐捷网络有限公司 | 一种报文缓存方法及接入点 |
CN103647726B (zh) * | 2013-12-11 | 2017-01-11 | 华为技术有限公司 | 一种报文调度方法及装置 |
EP2884707B1 (en) * | 2013-12-16 | 2016-04-27 | Alcatel Lucent | Method for controlling buffering of packets in a communication network |
CN106325758B (zh) * | 2015-06-17 | 2019-10-22 | 深圳市中兴微电子技术有限公司 | 一种队列存储空间管理方法及装置 |
CN106330760A (zh) * | 2015-06-30 | 2017-01-11 | 深圳市中兴微电子技术有限公司 | 一种缓存管理的方法和装置 |
US20170134282A1 (en) | 2015-11-10 | 2017-05-11 | Ciena Corporation | Per queue per service differentiation for dropping packets in weighted random early detection |
CN105978821B (zh) * | 2016-07-21 | 2019-09-06 | 杭州迪普科技股份有限公司 | 网络拥塞避免的方法及装置 |
CN106789729B (zh) * | 2016-12-13 | 2020-01-21 | 华为技术有限公司 | 一种网络设备中的缓存管理方法及装置 |
CN108234348B (zh) * | 2016-12-13 | 2020-09-25 | 深圳市中兴微电子技术有限公司 | 一种队列操作中的处理方法及装置 |
2019
- 2019-02-22 WO PCT/CN2019/075935 patent/WO2020168563A1/zh unknown
- 2019-02-22 JP JP2021549288A patent/JP7241194B2/ja active Active
- 2019-02-22 EP EP19915801.5A patent/EP3920475A4/en active Pending
- 2019-02-22 CN CN201980092378.XA patent/CN113454957B/zh active Active
- 2019-02-22 KR KR1020217030182A patent/KR20210130766A/ko active IP Right Grant

2021
- 2021-08-20 US US17/408,057 patent/US11695710B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080192765A1 (en) * | 2007-02-12 | 2008-08-14 | Jong-Sang Oh | Apparatus and method for packet buffer management in IP network system |
CN102789336A (zh) * | 2012-07-04 | 2012-11-21 | 广东威创视讯科技股份有限公司 | 多屏拼接触控方法和系统 |
CN106598495A (zh) * | 2016-12-07 | 2017-04-26 | 深圳市深信服电子科技有限公司 | 一种混合存储服务质量的控制方法及控制装置 |
CN109343790A (zh) * | 2018-08-06 | 2019-02-15 | 百富计算机技术(深圳)有限公司 | 一种基于nand flash的数据存储方法、终端设备及存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3920475A4 * |
Also Published As
Publication number | Publication date |
---|---|
JP2022523195A (ja) | 2022-04-21 |
KR20210130766A (ko) | 2021-11-01 |
EP3920475A4 (en) | 2022-02-16 |
CN113454957A (zh) | 2021-09-28 |
US11695710B2 (en) | 2023-07-04 |
EP3920475A1 (en) | 2021-12-08 |
CN113454957B (zh) | 2023-04-25 |
US20210392092A1 (en) | 2021-12-16 |
JP7241194B2 (ja) | 2023-03-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11637786B1 (en) | Multi-destination traffic handling optimizations in a network device | |
US8799507B2 (en) | Longest prefix match searches with variable numbers of prefixes | |
US8184540B1 (en) | Packet lifetime-based memory allocation | |
US10193831B2 (en) | Device and method for packet processing with memories having different latencies | |
US11695710B2 (en) | Buffer management method and apparatus | |
US20220303217A1 (en) | Data Forwarding Method, Data Buffering Method, Apparatus, and Related Device | |
JP2013507022A (ja) | フローアウェアネットワークノード内でデータパケットを処理するための方法 | |
US11223568B2 (en) | Packet processing method and apparatus | |
US20170195227A1 (en) | Packet storing and forwarding method and circuit, and device | |
US11646970B2 (en) | Method and apparatus for determining packet dequeue rate | |
CN108989233B (zh) | 拥塞管理方法及装置 | |
Pan et al. | Nb-cache: Non-blocking in-network caching for high-performance content routers | |
US20060187963A1 (en) | Method for sharing single data buffer by several packets | |
US12101260B1 (en) | Multi-destination traffic handling optimizations in a network device | |
CN113347112B (zh) | 一种基于多级缓存的数据包转发方法及装置 | |
US9922000B2 (en) | Packet buffer with dynamic bypass | |
US12067397B2 (en) | NIC line-rate hardware packet processing | |
CN114785744B (zh) | 数据处理方法、装置、计算机设备和存储介质 | |
CN114449046B (zh) | 一种网络数据处理方法及系统 | |
US20240283724A1 (en) | Sampled packet information accumulation | |
CN117749726A (zh) | Tsn交换机输出端口优先级队列混合调度方法和装置 | |
Li et al. | Design and buffer sizing of TCAM-based pipelined forwarding engines |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19915801; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2021549288; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2019915801; Country of ref document: EP; Effective date: 20210831 |
| ENP | Entry into the national phase | Ref document number: 20217030182; Country of ref document: KR; Kind code of ref document: A |