WO2023125430A1 - Traffic management apparatus, packet caching method, chip and network device - Google Patents

Traffic management apparatus, packet caching method, chip and network device

Info

Publication number
WO2023125430A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
message
module
shared memory
storage
Prior art date
Application number
PCT/CN2022/141981
Other languages
English (en)
Chinese (zh)
Inventor
白宇
杨文斌
李广
王小忠
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023125430A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/544 Buffers; Shared memory; Pipes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163 Interprocessor communication
    • G06F15/167 Interprocessor communication using a common memory, e.g. mailbox
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling

Definitions

  • the present application relates to the field of network technologies, and in particular to a traffic management device, a message buffering method, a chip and a network device.
  • Forwarding chips in network equipment usually include traffic management memory and queue management memory.
  • the traffic management memory and the queue management memory are usually sized according to the most stringent network scenario (that is, both memories are made large enough to meet the needs of various network scenarios).
  • however, this may leave at least one of the traffic management memory and the queue management memory underutilized.
  • for example, in an edge network scenario the forwarding chip usually has a large demand for queue management memory and a small demand for traffic management memory, so the traffic management memory may not be fully utilized; in other scenarios the demand for traffic management memory is usually large and the demand for queue management memory is usually small, so the queue management memory may be underutilized. Therefore, a solution is needed to improve the utilization of the memory resources of the forwarding chip.
  • the present application provides a traffic management device, a message buffering method, a chip and a network device.
  • the technical solution is as follows:
  • a first aspect provides a traffic management device, which includes a shared memory.
  • the capacity of the shared memory is smaller than the preset capacity.
  • the shared memory is used for buffering messages input into the shared memory, and storing queue information of queues where the messages in the shared memory are located.
  • the preset capacity may be equal to the sum of the capacity of the internal flow management memory and the capacity of the queue management memory in the related art.
  • in the traffic management device provided in the first aspect, since the traffic management device includes a shared memory whose capacity is smaller than the preset capacity, the shared memory is small, which helps to reduce the area of the chip and reduce the cost and power consumption of the chip. Since the shared memory can cache messages and store the queue information of the queues where the messages in the shared memory are located, the utilization of the shared memory is high, which helps to avoid wasting the chip's memory resources.
  • the shared memory includes m storage modules, the m storage modules include n shared storage modules, m ≥ n, and both m and n are positive integers.
  • each shared storage module is used for buffering messages input to it, and/or storing queue information of the queue where at least one message in the shared memory is located.
  • when the m storage modules are all shared storage modules, the shared memory may be called a fully shared memory.
  • the n shared storage modules are isomorphic storage modules.
  • the storage bit widths of the n shared memory modules are equal.
  • the m storage modules also include at least one of a traffic exclusive storage module and a queue exclusive storage module.
  • the traffic exclusive storage module is used for caching the messages input to the traffic exclusive storage module.
  • the queue exclusive storage module is used for storing the queue information of the queue where at least one message in the shared memory is located.
  • the queue information stored in any one of the shared storage modules and the queue exclusive storage modules includes at least one of: the queue information of the queue where a message in a shared storage module is located, and the queue information of the queue where a message in a traffic exclusive storage module is located.
  • the m storage modules include p traffic exclusive storage modules and q queue exclusive storage modules, m ≥ n+p+q, p > 1, q > 1, n > 1, and both p and q are integers.
  • the n shared storage modules are isomorphic storage modules.
  • the p traffic exclusive storage modules are homogeneous storage modules.
  • the q queue exclusive storage modules are isomorphic storage modules.
  • the storage bit widths of the n shared memory modules are equal.
  • the storage bit widths of the p traffic exclusive storage modules are equal.
  • the storage bit widths of the q queue exclusive storage modules are equal.
  • the traffic exclusive storage module and the queue exclusive storage module are heterogeneous storage modules.
  • the shared storage module and the traffic exclusive storage module are isomorphic storage modules, or the shared storage module and the queue exclusive storage module are isomorphic storage modules.
  • the storage bit width of the traffic exclusive storage module is not equal to the storage bit width of the queue exclusive storage module.
  • the storage bit width of the shared storage module is equal to the storage bit width of the traffic exclusive storage module, or the storage bit width of the shared storage module is equal to the storage bit width of the queue exclusive storage module.
  • in some embodiments, the n shared storage modules are all used to cache messages.
  • in some embodiments, the n shared storage modules are all used to store the queue information of the queues where the messages in the shared memory are located; a sketch of these module roles is given below.
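  • as a hedged illustration (not the patent's implementation), the sketch below models the three kinds of storage modules described above and the writes each kind permits; all class and field names are assumptions:

```python
# Minimal sketch of the shared memory's storage modules: shared modules may
# hold cached messages and/or queue information; exclusive modules hold only
# one kind. Names and structures are illustrative assumptions.
from enum import Enum, auto

class Role(Enum):
    SHARED = auto()             # may cache messages and/or store queue info
    TRAFFIC_EXCLUSIVE = auto()  # may only cache messages
    QUEUE_EXCLUSIVE = auto()    # may only store queue info

class Kind(Enum):
    MESSAGE = auto()
    QUEUE_INFO = auto()

class StorageModule:
    def __init__(self, role: Role, depth: int, bit_width: int):
        self.role, self.depth, self.bit_width = role, depth, bit_width
        self.entries = []  # contents of this basic storage unit

    def can_store(self, kind: Kind) -> bool:
        if self.role is Role.SHARED:
            return True
        if self.role is Role.TRAFFIC_EXCLUSIVE:
            return kind is Kind.MESSAGE
        return kind is Kind.QUEUE_INFO  # queue exclusive

    def write(self, kind: Kind, data) -> None:
        if not self.can_store(kind):
            raise ValueError(f"{self.role.name} module cannot store {kind.name}")
        self.entries.append((kind, data))

# A fully shared memory (Fig. 4): every module is a shared module (m == n).
fully_shared = [StorageModule(Role.SHARED, depth=1024, bit_width=128)
                for _ in range(8)]
fully_shared[0].write(Kind.MESSAGE, b"...")
fully_shared[1].write(Kind.QUEUE_INFO, {"queue_len": 3, "head_ptr": 0x10})
```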
  • the traffic management device further includes a processing module.
  • the processing module is connected with the shared memory.
  • the processing module is configured to input at least one of a message and queue information of a queue where the message is located into the shared memory.
  • the traffic management device also includes a message writing module and a queue management module.
  • the message writing module is connected to the queue management module, and the message writing module and the queue management module are respectively connected to the processing module.
  • the message writing module is used to apply to the queue management module for a queue resource for any message to be cached, and to input the message to the processing module according to the queue resource applied for the message; the queue management module is configured to input the queue information of the queue where the message is located to the processing module according to the queue resource applied for by the message writing module for the message.
  • the message writing module is connected to the processing module through the traffic data line
  • the queue management module is connected to the processing module through the queue data line
  • the processing module is connected to the shared memory through the data bus.
  • the message writing module is used to input messages to the processing module through the traffic data line.
  • the queue management module is used to input the queue information of the queue where the message is located to the processing module through the queue data line.
  • the processing module is used to input the message and the queue information of the queue where the message is located to the shared memory through the data bus.
  • the traffic management device also includes a message reading module.
  • the message reading module is respectively connected with the shared memory and the queue management module.
  • the queue management module is also used to output the address of the message in the shared memory to the message reading module according to the queue information of the queue where the message in the shared memory is located.
  • the message reading module is used for reading the message from the shared memory according to the address of the message in the shared memory.
  • the processing module is further configured to configure the function of the shared memory according to the obtained configuration information, and the function of the shared memory includes buffering messages and storing queue information of the queue where the message is located.
  • the processing module configures the shared storage modules in the shared memory to cache messages according to the obtained configuration information; or, the processing module configures the shared storage modules in the shared memory to store queue information according to the obtained configuration information; or, the processing module configures a part of the shared storage modules in the shared memory to cache messages according to the obtained configuration information, and configures another part of the shared storage modules in the shared memory to store queue information.
  • the processing module can configure the function of the shared memory according to the obtained configuration information, the storage resources of the shared memory can be flexibly configured, and the use of the storage resources of the shared memory is more flexible.
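  • as a hedged illustration of this configuration step, the sketch below assigns each shared storage module a function according to configuration information; the config format and the default are assumptions, not taken from the patent:

```python
# Sketch: a processing module assigns each of n shared storage modules to
# message caching or queue-information storage according to configuration
# information.
from typing import Dict, List

def configure(n_shared: int, config: Dict[int, str]) -> List[str]:
    """Return the configured function of each shared storage module.
    config maps a module index to 'cache' or 'queue_info'; unlisted modules
    default to 'cache' (an illustrative choice, not the patent's)."""
    return [config.get(i, "cache") for i in range(n_shared)]

print(configure(4, {}))                                   # all cache messages
print(configure(4, {i: "queue_info" for i in range(4)}))  # all store queue info
print(configure(4, {0: "queue_info", 1: "queue_info"}))   # a mixed split
```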
  • a second aspect provides a chip, including the traffic management device provided in the first aspect or any optional implementation manner of the first aspect.
  • the chip can be an application-specific integrated circuit (ASIC) chip, a field-programmable gate array (FPGA) chip, a network processor (NP) chip, a generic array logic (GAL) chip, or the like.
  • a third aspect provides a network device, including the chip as provided in the second aspect.
  • a fourth aspect provides a packet buffering method, which is applied to a traffic management device, where the traffic management device includes a shared memory, and the capacity of the shared memory is smaller than a preset capacity.
  • the method includes: the traffic management device caches the message in the shared memory, and stores the queue information of the queue where the message in the shared memory is located in the shared memory.
  • the shared memory includes m storage modules, the m storage modules include n shared storage modules, m ≥ n, and both m and n are positive integers.
  • the traffic management device caching the message in the shared memory and storing, in the shared memory, the queue information of the queue where the message in the shared memory is located includes: the traffic management device caches the message in a shared storage module; and/or the traffic management device stores the queue information of at least one queue in the shared memory in a shared storage module.
  • the n shared storage modules are isomorphic storage modules.
  • the m storage modules also include at least one of a traffic exclusive storage module and a queue exclusive storage module.
  • the traffic management device caching the message in the shared memory and storing, in the shared memory, the queue information of the queue where the message in the shared memory is located also includes: the traffic management device caches the message in a traffic exclusive storage module; and/or the traffic management device stores, in a queue exclusive storage module, the queue information of the queue where at least one message in the shared memory is located.
  • the queue information stored in any one of the shared storage modules and the queue exclusive storage modules includes at least one of: the queue information of the queue where a message in a shared storage module is located, and the queue information of the queue where a message in a traffic exclusive storage module is located.
  • the m storage modules include p traffic exclusive storage modules and q queue exclusive storage modules, m ≥ n+p+q, p > 1, q > 1, n > 1, and both p and q are integers.
  • the n shared storage modules are isomorphic storage modules.
  • the p traffic exclusive storage modules are homogeneous storage modules.
  • the q queue exclusive storage modules are isomorphic storage modules.
  • the traffic exclusive storage module and the queue exclusive storage module are heterogeneous storage modules.
  • the shared storage module and the traffic exclusive storage module are isomorphic storage modules, or the shared storage module and the queue exclusive storage module are isomorphic storage modules.
  • the storage bit width of the traffic exclusive storage module is not equal to the storage bit width of the queue exclusive storage module.
  • the storage bit width of the shared storage module is equal to the storage bit width of the traffic exclusive storage module, or the storage bit width of the shared storage module is equal to the storage bit width of the queue exclusive storage module.
  • the traffic management device further includes a processing module connected to the shared memory.
  • the traffic management device caching the message in the shared memory and storing the queue information in the shared memory includes: the processing module inputs, to the shared memory, at least one of the message and the queue information of the queue where the message is located.
  • the traffic management device further includes a message writing module and a queue management module; the message writing module is connected to the queue management module, and the message writing module and the queue management module are respectively connected to the processing module.
  • the traffic management device caching the message in the shared memory and storing the queue information in the shared memory also includes: for any message to be cached, the message writing module applies to the queue management module for a queue resource for the message, and inputs the message to the processing module according to the queue resource applied for the message; the queue management module inputs, to the processing module, the queue information of the queue where the message is located according to the queue resource applied for by the message writing module.
  • the message writing module is connected to the processing module through the traffic data line
  • the queue management module is connected to the processing module through the queue data line
  • the processing module is connected to the shared memory through the data bus.
  • the message writing module inputting messages to the processing module includes: the message writing module inputs messages to the processing module through the traffic data line.
  • the queue management module inputs the queue information of the queue where the message is located to the processing module, including: the queue management module inputs the queue information of the queue where the message is located to the processing module through the queue data line.
  • the processing module inputs at least one of the message and the queue information of the queue where the message is located to the shared memory, including: the processing module inputs the message and the queue information of the queue where the message is located to the shared memory through the data bus.
  • the traffic management device further includes a message reading module, which is respectively connected to the shared memory and the queue management module.
  • the method also includes: the queue management module outputs the address of the message in the shared memory to the message reading module according to the queue information of the queue where the message in the shared memory is located; and the message reading module reads the message from the shared memory according to the address of the message in the shared memory.
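  • the following is a minimal sketch of this readout path under assumed data structures (an address-indexed shared memory and per-queue metadata); it is not the patent's implementation:

```python
# Readout path: the queue management module resolves a queue's head to an
# address in the shared memory; the message reading module fetches the
# message at that address. All names and structures are assumptions.
shared_memory = {0x10: b"message-A", 0x11: b"message-B"}  # addr -> message
queue_info = {"q7": {"head_ptr": 0x10, "queue_len": 2}}   # per-queue metadata

def queue_management_resolve(qid: str) -> int:
    """Queue management module: output the address of the head message."""
    return queue_info[qid]["head_ptr"]

def message_readout(addr: int) -> bytes:
    """Message reading module: read the message from the shared memory."""
    return shared_memory[addr]

addr = queue_management_resolve("q7")
print(message_readout(addr))  # b'message-A'
```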
  • the method further includes: the processing module configures the function of the shared memory according to the obtained configuration information, and the function of the shared memory includes caching the message and storing the queue information of the queue where the message is located.
  • the chip mentioned in the above embodiment may be a forwarding chip.
  • the "traffic management device" and/or the "message caching method" in the above embodiments can be used in chips; in some embodiments, they can be used in forwarding chips.
  • a fifth aspect provides a computer-readable storage medium including a computer program or instructions; when the computer program or instructions are executed by a computer, the computer is caused to perform the method of the above fourth aspect or any optional implementation of the fourth aspect.
  • a sixth aspect provides a computer program product including a computer program or instructions; when the computer program or instructions are executed by a computer, the computer is caused to perform the method of the above fourth aspect or any optional implementation of the fourth aspect.
  • in the traffic management device, message caching method, chip and network equipment provided by the present application, the traffic management device is applied to the chip; since the capacity of the shared memory in the traffic management device is smaller than the preset capacity, the shared memory is relatively small, which helps to reduce the area of the chip and reduce the cost and power consumption of the chip.
  • since the shared memory can cache messages and store the queue information of the queues where the messages are located, in various network scenarios such as edge network, backbone network, metropolitan area network, data center interconnection (DCI) network, and mobile bearer network scenarios the shared memory can be fully utilized; the utilization of the shared memory is high, which helps to avoid wasting the chip's memory resources.
  • FIG. 1 is a schematic diagram of a usage state of the management memory in a forwarding chip in the related art;
  • FIG. 2 is a schematic diagram of another usage state of the management memory in a forwarding chip in the related art;
  • FIG. 3 is a schematic structural diagram of a flow management device provided in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a shared memory provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of another shared memory provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of another shared memory provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of another shared memory provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another shared memory provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of another shared memory provided by an embodiment of the present application.
  • FIG. 10 is a comparison between a usage state of the management memory in the related art and a usage state of the shared memory provided by an embodiment of the present application;
  • FIG. 11 is another comparison between a usage state of the management memory in the related art and a usage state of the shared memory provided by an embodiment of the present application;
  • FIG. 12 is a flow chart of a message caching method provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • a forwarding chip in a network device generally includes a traffic management device, as well as a traffic management memory and a queue management memory configured for the traffic management device.
  • the traffic management device is responsible for buffering, scheduling or discarding packets entering the forwarding chip.
  • the traffic management device is responsible for buffering the packets entering the forwarding chip into the traffic management memory, and dispatching the packets from the traffic management memory for forwarding.
  • when the traffic management memory does not meet the caching conditions of a message (for example, the available storage space of the traffic management memory is too small, or the read/write bandwidth of the traffic management memory cannot support writing the message), the traffic management device can discard the message.
  • messages are managed in the form of queues in the traffic management memory (that is, the messages in the traffic management memory are cached in queues), and the queue management memory is used to store the queue information of the queues where the messages in the traffic management memory are located, for message scheduling.
  • the traffic management device may also be called a traffic manager (traffic manager, TM) or a traffic management module.
  • the traffic management memory includes an internal traffic management memory and an external traffic management memory. Both the internal traffic management memory and the queue management memory are located inside the traffic management device (that is, inside the forwarding chip), and they can be called the on-chip management memory of the forwarding chip. The external traffic management memory is located outside the traffic management device and is usually mounted on the forwarding chip; it can be called the off-chip management memory of the forwarding chip. The read/write bandwidth of the internal traffic management memory is usually greater than that of the external traffic management memory, and the capacity of the internal traffic management memory is usually smaller than that of the external traffic management memory.
  • the capacity of the external traffic management memory is generally on the order of gigabytes (GB); considering chip area and cost, the capacity of the internal traffic management memory is generally on the order of megabytes (MB).
  • the traffic management device caches the message in the internal traffic management memory or the external traffic management memory according to a caching policy, and stores the queue information of the queue where the message is located in the queue management memory.
  • forwarding chips may be applied to various network scenarios, such as edge network scenarios, backbone network scenarios, metropolitan area network scenarios, DCI network scenarios, and mobile bearer network scenarios.
  • forwarding chips usually have different requirements for internal traffic management memory and queue management memory.
  • the internal traffic management memory and queue management memory are usually set according to the most stringent network scenario. That is, both the internal traffic management memory and the queue management memory are set larger, so that the internal traffic management memory and the queue management memory can meet the requirements of various network scenarios. For example, assume that Pmem represents the capacity of the internal traffic management memory (or called the size of the internal traffic management memory), and Qmem represents the capacity of the queue management memory (or called the size of the queue management memory).
  • across these network scenarios, the required capacity of the internal traffic management memory ranges over [Pmem_Min, Pmem_Max], and the required capacity of the queue management memory ranges over [Qmem_Min, Qmem_Max]. In the related art, the internal traffic management memory is usually set according to Pmem_Max and the queue management memory according to Qmem_Max, that is, the capacity of the internal traffic management memory is set to Pmem_Max and the capacity of the queue management memory is set to Qmem_Max, so that the forwarding chip can meet the needs of various network scenarios.
  • for the forwarding chip in the edge network scenario, the demand for queue management memory is usually large and the demand for traffic management memory is usually small; setting the internal traffic management memory and the queue management memory according to the most stringent network scenario therefore easily leaves the internal traffic management memory underutilized.
  • for the forwarding chip in network scenarios such as backbone network, metropolitan area network, DCI network, and mobile bearer network scenarios, the demand for traffic management memory is usually large and the demand for queue management memory is usually small; setting the internal traffic management memory and the queue management memory according to the most stringent network scenario therefore easily leaves the queue management memory underutilized.
  • Figure 1 is a schematic diagram of the usage state of the internal traffic management memory and the queue management memory in the forwarding chip when the forwarding chip in the related art is applied to an edge network scenario.
  • Figure 2 is a schematic diagram of the usage state of the internal traffic management memory and the queue management memory in the forwarding chip when the forwarding chip in the related art is applied to network scenarios such as backbone network, metropolitan area network, DCI network, and mobile bearer network scenarios.
  • in Figures 1 and 2, the small square blocks (slash-filled, grid-filled, and unfilled) all represent storage modules (or basic storage units): the slash-filled blocks represent storage modules that cache messages, the grid-filled blocks represent storage modules that store queue information, and the unfilled blocks represent unoccupied storage modules (storage modules in an idle state).
  • all storage modules in the queue management memory are occupied, and many storage modules in the internal traffic management memory are idle, so the internal traffic management memory is not fully utilized in the edge network scenario.
  • as shown in Figure 2, all storage modules in the internal traffic management memory are occupied, and many storage modules in the queue management memory are idle, so the queue management memory is not fully utilized in these network scenarios.
  • in the edge network scenario, the number of users accessing the network device is large, and the traffic management device usually caches messages at user granularity. For example, the traffic management device caches messages corresponding to the same user in the same queue, caches messages corresponding to different users in different queues, and stores the queue information of these queues in the queue management memory. Therefore, the queue management memory needs to store a large amount of queue information, and most or even all of the storage modules in the queue management memory are occupied (Figure 1 shows the case where all the storage modules in the queue management memory are occupied).
  • in addition, in the edge network scenario the network traffic is usually small, and the actual bandwidth used by the forwarding chip is often smaller than the forwarding bandwidth of the forwarding chip (the forwarding bandwidth represents the forwarding capability of the forwarding chip); the forwarding chip therefore has a small requirement for the read/write bandwidth of the traffic management memory, and the read/write bandwidth of the external traffic management memory can basically meet this requirement.
  • for example, if the forwarding bandwidth of a forwarding chip is 3.2 Tbps (terabits per second) and the actual bandwidth used by the forwarding chip is 2 Tbps, the read/write bandwidth of the external traffic management memory can be greater than 4 Tbps. Therefore, the traffic management device can cache most of the messages in the external traffic management memory, and few messages are cached in the internal traffic management memory, so that most of the storage modules in the internal traffic management memory are in an idle state. As shown in Figure 1, many storage modules in the internal traffic management memory are idle, and the internal traffic management memory is not fully utilized.
  • in network scenarios such as backbone network, metropolitan area network, DCI network, and mobile bearer network scenarios, the traffic management device usually caches messages at a coarser granularity; for example, the traffic management device caches messages corresponding to multiple users in the same queue. Therefore, the queue management memory needs to store less queue information, most of the storage modules in the queue management memory are in an idle state, and the queue management memory is not fully utilized.
  • for example, if the capacity of the queue management memory is B and the queue management memory can store the queue information of M queues, the actual number of queues may be only 0.02M, so only 0.02B of the storage modules in the queue management memory may be occupied while 0.98B are idle (that is, only 2% of the storage modules in the queue management memory are used, and 98% are in an idle state). As shown in Figure 2, many storage modules in the queue management memory are in an idle state, and the queue management memory is not fully utilized.
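  • the percentages above can be reproduced with a short sketch (the concrete values of M and B are hypothetical):

```python
# Worked version of the utilization figures in the text.
M = 1_000_000   # queue count the queue management memory was sized for
B = 100         # capacity of the queue management memory, in storage units
queues_in_use = 0.02 * M   # coarse-grained scenario: only 2% of M queues
occupied = 0.02 * B        # so only 0.02*B storage units are occupied
print(f"{queues_in_use:.0f} queues -> {occupied:.0f}/{B} units used "
      f"({1 - occupied / B:.0%} idle)")  # 20000 queues -> 2/100 units (98% idle)
```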
  • the network traffic is usually large, and the actual bandwidth used by the forwarding chip is often close to or even equal to the forwarding bandwidth of the forwarding chip.
  • the forwarding chip usually has a large demand for the read/write bandwidth of the traffic management memory; the read/write bandwidth of the external traffic management memory usually cannot meet this demand, while the read/write bandwidth of the internal traffic management memory usually far exceeds it.
  • for example, if the forwarding bandwidth of a forwarding chip is 3.2 Tbps and the actual bandwidth used by the forwarding chip is also 3.2 Tbps, the read/write bandwidth of the external traffic management memory is usually less than 6.4 Tbps, while the read/write bandwidth of the internal traffic management memory is usually much greater than 6.4 Tbps. Therefore, in these network scenarios, the traffic management device can cache most of the messages in the internal traffic management memory, so that most or even all of the storage modules in the internal traffic management memory are occupied (Figure 2 shows the case where all the storage modules in the internal traffic management memory are occupied).
  • the traffic management device will cache most packets in the internal traffic management memory.
  • to reduce the packet loss rate, it is generally necessary to configure a relatively large internal traffic management memory for these network scenarios, but this easily increases the area, cost, and power consumption of the chip.
  • Embodiments of the present application provide a traffic management device, a message buffering method, a forwarding chip, and a network device.
  • the traffic management device is applied to a forwarding chip, and the traffic management device includes a shared memory whose capacity is smaller than a preset capacity.
  • the shared memory is used for buffering messages and storing queue information of the queues where the messages in the shared memory are located. Since the capacity of the shared memory is smaller than the preset capacity, the capacity of the shared memory is small, which helps to reduce the area of the forwarding chip and reduce the cost and power consumption of the forwarding chip.
  • the shared memory can cache packets and store the queue information of the queues where the packets in the shared memory are located, it can be used in various network scenarios such as edge network scenarios, backbone network scenarios, metropolitan area network scenarios, DCI network scenarios, and mobile bearer network scenarios.
  • the shared memory can be fully utilized, and the utilization rate of the shared memory is high, which helps to avoid waste of memory resources of the forwarding chip.
  • the preset capacity may be equal to the sum of the capacity of the internal flow management memory and the capacity of the queue management memory in the related art.
  • FIG. 3 shows a schematic structural diagram of a traffic management device provided by an embodiment of the present application.
  • the flow management device is applied to a forwarding chip.
  • the flow management device includes a shared memory 01 .
  • the capacity of shared memory 01 is smaller than the preset capacity.
  • the shared memory 01 is used to cache packets input to the shared memory 01, and to store queue information of the queue where the packets in the shared memory 01 are located.
  • messages are managed in the form of queues in the shared memory 01; the queues can be virtual queues, and the storage resources of a queue can be scattered across different storage spaces in the shared memory 01 or concentrated in the same storage space in the shared memory 01.
  • shared memory 01 includes multiple storage modules, and the storage resources of a queue can be distributed in different storage modules, or can be concentrated in the same storage module.
  • the queue information of the queue where each packet is located may include the queue length, the head pointer of the queue, and the like.
  • the queue information of the queue where each message is located may be referred to as the queue information corresponding to the message, and the queue information (including the queue length and the head pointer of the queue) corresponding to any two messages may be different.
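  • a minimal sketch of this queue information, and of a virtual queue whose storage cells may be scattered over different storage modules, could look as follows (the structures are assumptions for illustration):

```python
# Queue information as described above: at least a queue length and a head
# pointer. A virtual queue's storage resources may be scattered across
# different storage modules; the (module, offset) cell list is an assumption.
from dataclasses import dataclass, field

@dataclass
class QueueInfo:
    queue_len: int   # number of messages currently in the queue
    head_ptr: int    # address of the head message in the shared memory

@dataclass
class VirtualQueue:
    info: QueueInfo
    # (module_index, offset) cells: resources scattered over storage modules.
    cells: list = field(default_factory=list)

q = VirtualQueue(QueueInfo(queue_len=2, head_ptr=0x10),
                 cells=[(0, 0x10), (3, 0x2C)])  # two modules, two offsets
print(q.info.head_ptr, len(q.cells))
```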
  • the queues in the shared memory 01 may also be physical queues, and the storage resources of the physical queues may be concentrated in the same storage space in the shared memory 01, which is not limited in this embodiment of the present application.
  • the storage module may be a basic storage unit in the shared memory 01.
  • the capacity of the shared memory 01 is the size of the shared memory 01, and both the capacity of the shared memory 01 and the preset capacity can be determined according to various network scenarios to be applied by the forwarding chip.
  • the preset capacity is determined according to the capacity of the internal traffic management memory (for example, Pmem_Max) and the capacity of the queue management memory (for example, Qmem_Max) in the forwarding chip in the related art.
  • the preset capacity is equal to the sum of the capacity of the internal flow management memory and the capacity of the queue management memory in the related art (ie, Pmem_Max+Qmem_Max).
  • in some embodiments, the capacity of the shared memory 01 is determined according to the usage of the internal traffic management memory and the queue management memory in the forwarding chip when the forwarding chip in the related art is applied to various network scenarios. For example, when the forwarding chip in the related art is applied to network scenario i, the usage of the internal traffic management memory in the forwarding chip is Pmem_i and the usage of the queue management memory is Qmem_i; the capacity of the shared memory 01 can then be set according to the internal traffic management memory usage and queue management memory usage across the various network scenarios (for example, to the maximum of Pmem_i + Qmem_i over those scenarios).
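  • the sketch below illustrates this dimensioning rule with hypothetical per-scenario usage figures: the preset capacity adds the two per-memory worst cases, while a shared memory only needs the worst combined usage:

```python
# Hedged dimensioning sketch; the per-scenario numbers are hypothetical.
usage = {  # scenario -> (Pmem_i, Qmem_i) in MB
    "edge":     (10, 40),
    "backbone": (45,  5),
    "metro":    (40,  8),
}
pmem_max = max(p for p, _ in usage.values())             # 45 (Pmem_Max)
qmem_max = max(q for _, q in usage.values())             # 40 (Qmem_Max)
preset_capacity = pmem_max + qmem_max                    # 85 MB
shared_capacity = max(p + q for p, q in usage.values())  # 50 MB
print(preset_capacity, shared_capacity)  # shared memory can be smaller
```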
  • in the traffic management device provided by this embodiment of the present application, since the traffic management device includes a shared memory whose capacity is smaller than the preset capacity, the shared memory is small, which helps to reduce the area of the forwarding chip and reduce the cost and power consumption of the forwarding chip. Since the shared memory can cache messages and store the queue information of the queues where the messages in the shared memory are located, the utilization of the shared memory is high, which helps to avoid wasting the memory resources of the forwarding chip.
  • the shared memory 01 includes m storage modules, where m is a positive integer.
  • Each of the m storage modules may be a basic storage unit in the shared memory 01 .
  • the m storage modules may be homogeneous storage modules, or at least two of the m storage modules are heterogeneous storage modules.
  • each storage module has a storage depth and a storage bit width, the storage depth refers to a physical depth, and the storage bit width refers to a physical bit width.
  • two storage modules being homogeneous (isomorphic) storage modules means that the storage bit widths of the two storage modules are equal, and two storage modules being heterogeneous storage modules means that the storage bit widths of the two storage modules are not equal. That is, storage modules with equal storage bit widths are homogeneous storage modules, and storage modules with unequal storage bit widths are heterogeneous storage modules.
  • the m storage modules include n shared storage modules, m ≥ n, and n is a positive integer.
  • the n shared storage modules may be homogeneous storage modules, or at least two of the n shared storage modules may be heterogeneous storage modules; for example, the n shared storage modules are heterogeneous storage modules.
  • Each of the n shared memory modules may be used to cache messages input to the shared memory module, and/or store queue information of the queue where at least one message in the shared memory 01 is located.
  • the n shared storage modules are all used to cache messages, or the n shared storage modules are all used to store queue information, or some of the n shared storage modules are used to cache messages and the other shared storage modules are used to store queue information.
  • the queue information stored in each shared storage module may include the queue information of the queue in which the message is located in the shared storage module, and may also include the queue information of the queue in which the message in a storage module other than the shared storage module is located.
  • the queue information stored in each shared storage module only includes the queue information of the queue where the message in the shared storage module is located.
  • the queue information stored in each shared storage module does not include the queue information of the queue where the message in the shared storage module is located.
  • the description of the messages and queue information stored in the shared storage modules in this application is only exemplary; whether a message and the queue information of the queue where the message is located are stored in the same shared storage module is set according to actual storage requirements, and this embodiment of the present application does not limit whether they are stored in the same shared storage module.
  • the shared memory 01 includes n shared memory modules, the storage bit widths of the n shared memory modules are all equal, and the n shared memory modules are isomorphic memory modules.
  • the storage depths of the n shared storage modules may be equal or unequal.
  • FIG. 4 shows the case that the storage depths of the n shared storage modules are equal.
  • each of the n shared storage modules can cache the messages input to it, and can also store the queue information of the queue where at least one message in the shared memory 01 is located. Alternatively, some of the n shared storage modules are used to cache messages, and the others are used to store the queue information of the queues where the messages in the shared memory 01 are located, which is not limited in this embodiment of the present application. Since all the storage modules in the shared memory 01 shown in Figure 4 are shared storage modules, the shared memory 01 can be called a fully shared memory.
  • the occupancy ratio of the shared memory 01 (for example, which of the n shared storage modules are allocated to cache messages, and which shared storage modules are used to store the queue information of the queues where the messages in the shared memory 01 are located) can be allocated flexibly, so the sharing manner of the shared memory 01 shown in FIG. 4 is relatively flexible.
  • in addition, the messages and queue information can share the address bus and the data bus (that is, messages and queue information can be written into the shared memory 01 through the same data bus, and the same address bus is used to transmit the address of the message in the shared memory 01 and the address of the queue information in the shared memory 01), so there is no need to add an additional address bus and data bus, which helps to save wiring.
  • the m storage modules further include at least one of a flow exclusive storage module and a queue exclusive storage module.
  • the traffic exclusive storage module is used for caching the messages input to the traffic exclusive storage module.
  • the queue exclusive storage module is used for storing the queue information of the queue where at least one message in the shared memory 01 is located.
  • the queue information stored in any one of the queue exclusive storage modules and the shared storage modules includes at least one of: the queue information of the queue where a message in a traffic exclusive storage module is located, and the queue information of the queue where a message in a shared storage module is located.
  • the queue information stored in the exclusive queue storage module may include the queue information of the queue in which the message in the shared storage module is located, and may also include the queue information of the queue in which the message in the traffic exclusive storage module is located.
  • the queue information stored in the exclusive queue storage module only includes the queue information of the queue where the message in the exclusive flow storage module is located.
  • the queue information stored in the queue exclusive storage module only includes the queue information of the queue where the message in the shared storage module is located.
  • the queue information stored in the shared storage module may also include the queue information of the queue where the packets in the traffic exclusive storage module are located, which is not limited in this embodiment of the present application.
  • the m storage modules include p traffic exclusive storage modules and q queue exclusive storage modules (that is, the m storage modules include n shared storage modules, p traffic exclusive storage modules, and q queue exclusive storage modules), m ≥ n+p+q, p > 1, q > 1, n > 1, and both p and q are integers.
  • the n shared storage modules may be homogeneous storage modules, or at least two of the n shared storage modules are heterogeneous storage modules, for example, the n shared storage modules are heterogeneous storage modules.
  • the p traffic exclusive storage modules may be homogeneous storage modules, or at least two of the p traffic exclusive storage modules may be heterogeneous storage modules; for example, the p traffic exclusive storage modules are heterogeneous storage modules.
  • the q queue exclusive storage modules may be homogeneous storage modules, or at least two of the q queue exclusive storage modules may be heterogeneous storage modules; for example, the q queue exclusive storage modules are heterogeneous storage modules.
  • in some embodiments, the n shared storage modules are isomorphic storage modules, the p traffic exclusive storage modules are isomorphic storage modules, and the q queue exclusive storage modules are isomorphic storage modules; the traffic exclusive storage modules and the queue exclusive storage modules are heterogeneous storage modules; and the shared storage modules and the traffic exclusive storage modules are isomorphic storage modules, or the shared storage modules and the queue exclusive storage modules are isomorphic storage modules.
  • the n shared storage modules, the p traffic exclusive storage modules and the q queue exclusive storage modules are isomorphic storage modules, which is not limited in this embodiment of the present application.
  • the following description first takes the case where the n shared storage modules, the p traffic exclusive storage modules, and the q queue exclusive storage modules are all isomorphic storage modules as an example.
  • FIG. 5 shows a schematic structural diagram of another shared memory 01 provided by the embodiment of the present application.
  • the shared memory 01 includes n shared storage modules, p traffic exclusive storage modules, and q queue exclusive storage modules.
  • the storage bit widths of the n shared storage modules, the p traffic exclusive storage modules, and the q queue exclusive storage modules are all equal, so the n shared storage modules, the p traffic exclusive storage modules, and the q queue exclusive storage modules are isomorphic storage modules.
  • the storage depths of the n shared storage modules, the p traffic exclusive storage modules, and the q queue exclusive storage modules may be equal or unequal.
  • FIG. 5 shows the case where the storage depths of the n shared storage modules and the p traffic exclusive storage modules are all equal, the storage depths of the q queue exclusive storage modules are all equal, and the storage depth of a queue exclusive storage module is smaller than that of a shared storage module.
  • the following description takes as an example the case where the n shared storage modules are isomorphic storage modules, the p traffic exclusive storage modules are isomorphic storage modules, the q queue exclusive storage modules are isomorphic storage modules, the traffic exclusive storage modules and the queue exclusive storage modules are heterogeneous storage modules, and the shared storage modules and the traffic exclusive storage modules are isomorphic storage modules.
  • FIG. 6 and FIG. 7 show schematic structural diagrams of two types of shared memory 01 provided by the embodiment of the present application.
  • the shared memory 01 includes n shared storage modules, p traffic exclusive storage modules, and q queue exclusive storage modules.
  • the storage bit widths of the p traffic exclusive storage modules and the storage bit widths of the n shared storage modules are equal, so the p traffic exclusive storage modules and the n shared storage modules are isomorphic storage modules.
  • the storage bit widths of the q queue exclusive storage modules are all equal, so the q queue exclusive storage modules are isomorphic storage modules.
  • the storage bit width of the queue exclusive storage module is smaller than that of the traffic exclusive storage module, so the queue exclusive storage module and the traffic exclusive storage module are heterogeneous storage modules.
  • the storage depths of the n shared storage modules, the p traffic exclusive storage modules, and the q queue exclusive storage modules may or may not be equal. For example, FIG. 6 shows the case where the storage depths of the n shared storage modules, the p traffic exclusive storage modules, and the q queue exclusive storage modules are all equal, while FIG. 7 shows the case where the storage depths of the n shared storage modules and the p traffic exclusive storage modules are all equal, the storage depths of the q queue exclusive storage modules are all equal, and the storage depth of a queue exclusive storage module is smaller than that of a shared storage module.
  • the following description takes as an example the case where the n shared storage modules are isomorphic storage modules, the p traffic exclusive storage modules are isomorphic storage modules, the q queue exclusive storage modules are isomorphic storage modules, the traffic exclusive storage modules and the queue exclusive storage modules are heterogeneous storage modules, and the shared storage modules and the queue exclusive storage modules are isomorphic storage modules.
  • FIG. 8 and FIG. 9 show schematic structural diagrams of two types of shared memory 01 provided by the embodiment of the present application.
  • the shared memory 01 includes n shared storage modules, p traffic exclusive storage modules, and q queue exclusive storage modules.
  • the storage bit widths of the n shared storage modules and the storage bit widths of the q queue exclusive storage modules are equal, so the n shared storage modules and the q queue exclusive storage modules are isomorphic storage modules.
  • the storage bit widths of the p traffic exclusive storage modules are all equal, so the p traffic exclusive storage modules are isomorphic storage modules.
  • the storage bit width of the traffic exclusive storage module is smaller than that of the queue exclusive storage module, so the traffic exclusive storage module and the queue exclusive storage module are heterogeneous storage modules.
  • the storage depths of the n shared storage modules, the p traffic exclusive storage modules, and the q queue exclusive storage modules may or may not be equal. For example, FIG. 8 shows the case where the storage depths of the n shared storage modules, the p traffic exclusive storage modules, and the q queue exclusive storage modules are all equal, while FIG. 9 shows the case where the storage depths of the n shared storage modules and the q queue exclusive storage modules are all equal, the storage depths of the p traffic exclusive storage modules are all equal, and the storage depth of a traffic exclusive storage module is smaller than that of a shared storage module.
  • since the shared memory 01 shown in FIGS. 5 to 9 includes shared storage modules, traffic exclusive storage modules, and queue exclusive storage modules, the shared memory 01 shown in FIGS. 5 to 9 can be called a partially shared memory.
  • the traffic exclusive storage module is only used to cache messages and cannot be used to store queue information, and the queue exclusive storage module is only used to store queue information and cannot be used to cache messages.
  • the shared storage modules can be used to cache messages, and/or to store queue information. For example, all n shared storage modules are used to cache messages, or all n shared storage modules are used to store queue information, or some of the n shared storage modules are used to cache messages and the others are used to store queue information; which shared storage modules are allocated to cache messages and which are used to store queue information can be decided according to the needs of the network scenario.
  • the shared memory 01 can allow a message and queue information to be written into it at the same time (for example, the message is written into a traffic exclusive storage module while the queue information is written into a shared storage module, and the two processes can proceed simultaneously) without a write conflict, and can likewise allow a message and queue information to be read from it at the same time (for example, the message is read from a traffic exclusive storage module while the queue information is read from a shared storage module) without a read conflict.
  • compared with the shared memory 01 shown in FIG. 5, the storage bit width of the queue exclusive storage modules in the shared memory 01 shown in FIGS. 6 and 7 is smaller, so the shared memory 01 shown in FIGS. 6 and 7 can save a certain amount of wiring; similarly, the storage bit width of the traffic exclusive storage modules in the shared memory 01 shown in FIGS. 8 and 9 is smaller, so the shared memory 01 shown in FIGS. 8 and 9 can also save a certain amount of wiring.
  • for example, in FIG. 5 the storage bit width of the shared storage modules, the traffic exclusive storage modules, and the queue exclusive storage modules is W1, so the number of wires that the shared memory 01 shown in FIG. 5 needs to consume is at least W1 × (n+p+q).
  • in FIGS. 6 and 7, the storage bit width of the shared storage modules and the traffic exclusive storage modules is W1, the storage bit width of the queue exclusive storage modules is W2, and W2 < W1; therefore, the number of wires that the shared memory 01 shown in FIG. 6 or FIG. 7 needs to consume is at least W1 × (n+p) + W2 × q. Since W2 < W1, W1 × (n+p) + W2 × q < W1 × (n+p+q), that is, the shared memory 01 shown in FIGS. 6 and 7 can save a certain number of wires.
  • in FIG. 8 and FIG. 9, the storage bit width of the shared storage modules and the queue-exclusive storage modules is W1, while the storage bit width of the flow-exclusive storage modules is W3, where W3 < W1. The number of wires the shared memory 01 shown in FIG. 8 or FIG. 9 needs to consume is therefore at least W1 × (n + q) + W3 × p. Since W3 < W1, W1 × (n + q) + W3 × p < W1 × (n + p + q); that is, the shared memory 01 shown in FIG. 8 and FIG. 9 can save a certain amount of wires.
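Since a module of storage bit width W needs at least W data wires (one per 1-bit storage unit, as described for the data buses below), the three lower bounds are simple arithmetic. The sketch below just evaluates them with illustrative values for W1, W2, W3, n, p and q that are assumed, not taken from the application:

```python
def wires_full(w1, n, p, q):
    # FIG. 5 layout: every module has bit width W1.
    return w1 * (n + p + q)

def wires_narrow_queue(w1, w2, n, p, q):
    # FIG. 6/7 layout: queue-exclusive modules use a narrower width W2 < W1.
    return w1 * (n + p) + w2 * q

def wires_narrow_flow(w1, w3, n, p, q):
    # FIG. 8/9 layout: flow-exclusive modules use a narrower width W3 < W1.
    return w1 * (n + q) + w3 * p

n, p, q = 4, 2, 2
print(wires_full(512, n, p, q))               # 512 * 8 = 4096 wires
print(wires_narrow_queue(512, 128, n, p, q))  # 512 * 6 + 128 * 2 = 3328 wires
print(wires_narrow_flow(512, 256, n, p, q))   # 512 * 6 + 256 * 2 = 3584 wires
```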
  • in some embodiments, the shared memory 01 may include only shared storage modules and flow-exclusive storage modules, without queue-exclusive storage modules. In some other embodiments, the shared memory 01 may include only shared storage modules and queue-exclusive storage modules, without flow-exclusive storage modules. In still other embodiments, the shared memory 01 may include storage modules with other functions. Whether the shared memory 01 needs flow-exclusive storage modules and queue-exclusive storage modules may be set according to actual needs, which is not limited in this embodiment of the present application.
  • the traffic management device further includes a processing module 02 .
  • the processing module 02 is connected to the shared memory 01.
  • the processing module 02 is configured to input at least one of a message and queue information of the queue where the message is located to the shared memory 01 .
  • the processing module 02 is connected to the shared memory 01 through a data bus 08 .
  • the processing module 02 is configured to input the message and the queue information of the queue where the message is located to the shared memory 01 through the data bus 08 .
  • the data bus 08 in FIG. 3 is only exemplary; the processing module 02 can be connected to each storage module in the shared memory 01 through data buses, with multiple data buses between the processing module 02 and each storage module.
  • the number of data buses between the processing module 02 and each storage module is related to the storage bit width of that storage module. For example, if the storage bit width of a storage module is W, the storage module includes W 1-bit storage units arranged along its width direction, and the number of data buses between the processing module 02 and the storage module is then W, with the W data buses connected to the W 1-bit storage units in one-to-one correspondence. This is not limited in this embodiment of the present application.
  • the traffic management device further includes a packet writing module 03 and a queue management module 04 .
  • the message writing module 03 is connected with the queue management module 04.
  • the message writing module 03 and the queue management module 04 are connected to the processing module 02 respectively.
  • the message writing module 03 is used to apply to the queue management module 04 for a queue resource for any message to be cached, and to input the message to the processing module 02 according to the queue resource applied for the message.
  • the queue management module 04 is configured to input the queue information of the queue where the message is located to the processing module 02 according to the queue resource applied for by the message writing module 03 for the message.
  • the message writing module 03 is connected to the processing module 02 through the flow data line 06
  • the queue management module 04 is connected to the processing module 02 through the queue data line 07
  • the message writing module 03 is used to input the message to the processing module 02 through the flow data line 06
  • the queue management module 04 is used to input the queue information of the queue where the message is located to the processing module 02 through the queue data line 07.
  • the message writing module 03 sends an enqueue request carrying the message information of message 1 to the queue management module 04 .
  • the queue management module 04 determines the queue corresponding to message 1 according to the message information of message 1 carried in the enqueue request (that is, the queue in which message 1 should be cached, such as queue 1), and judges whether the queue length of queue 1 is less than a preset length. If the queue length of queue 1 is less than the preset length, the queue management module 04 determines that queue 1 can accommodate message 1, allocates storage resources for message 1 in the shared memory 01 (that is, allocates queue resources for message 1), and sends an enqueue response to the message writing module 03 carrying the resource information of the queue resources allocated for message 1 (such as the address of the queue resource allocated for message 1).
  • the message writing module 03 inputs message 1 to the processing module 02 through the flow data line 06 according to the resource information, carried in the enqueue response, of the queue resource allocated for message 1 by the queue management module 04; the message writing module 03 may also input that resource information to the processing module 02.
  • the processing module 02 may cache the message 1 to the shared memory 01 through the data bus 08 according to the resource information of the queue resource allocated for the message 1 by the queue management module 04 .
  • after the queue management module 04 allocates storage resources for message 1 in the shared memory 01, it can also update the queue information of queue 1 according to the storage resources allocated for message 1 (for example, update the head pointer of queue 1, update the queue length, and the like), and use the updated queue information of queue 1 as the queue information of the queue where message 1 is located.
  • the queue management module 04 may input the queue information of the queue where the message 1 is located to the processing module 02 through the queue data line 07 .
  • the processing module 02 can write the queue information of the queue where message 1 is located into the shared memory 01 through the data bus 08 (before this write, the shared memory 01 may already store queue information of queue 1; the processing module 02 can overwrite the queue information of queue 1 currently stored in the shared memory 01 with the queue information of the queue where message 1 is located).
  • if the queue management module 04 determines that queue 1 cannot accommodate message 1, the queue management module 04 can send a notification message to the message writing module 03, and the message writing module 03 may discard message 1 according to the notification message, which is not limited in this embodiment of the present application.
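Put together, the enqueue handshake can be sketched as a toy model. The data structures (a dict standing in for the shared memory, a bump allocator for queue resources, a fixed preset length) are assumptions for illustration; none of the names come from the application:

```python
PRESET_LEN = 8   # maximum queue length before enqueue is refused (assumed value)

class QueueMgr:
    """Minimal stand-in for the queue management module 04."""
    def __init__(self):
        self.queues = {}     # qid -> {"length": ..., "tail": ...}
        self.next_addr = 0   # naive bump allocator over the shared memory

    def enqueue_request(self, qid, size):
        q = self.queues.setdefault(qid, {"length": 0, "tail": 0})
        if q["length"] >= PRESET_LEN:
            return None               # queue cannot accommodate the message
        addr = self.next_addr         # allocate a queue resource (an address)
        self.next_addr += size
        q["length"] += 1              # update queue information
        q["tail"] = addr
        return addr                   # the enqueue response carries this address

shared_mem = {}   # address -> payload (stands in for shared memory 01)
queue_info = {}   # qid -> queue info  (the copy held in shared memory 01)

def write_message(qid, payload, mgr):
    addr = mgr.enqueue_request(qid, len(payload))
    if addr is None:
        return None                          # writer drops the message when notified
    shared_mem[addr] = payload               # message path: flow data line, data bus
    queue_info[qid] = dict(mgr.queues[qid])  # queue path: overwrite the old info
    return addr
```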
  • the queue resources of a certain queue refer to the storage resources (or storage space) belonging to the queue in the shared memory 01.
  • the queue resources of each queue are allocated in the shared memory 01 in real time by the queue management module 04 according to the enqueue requests sent by the message writing module 03.
  • the queue management module 04 can release the storage resources occupied by a message in the queue (that is, release queue resources), and this part of the storage resources may later be allocated to other queues, which is not limited in this embodiment of the present application.
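Release and reuse of queue resources can be illustrated with a simple free list over fixed-size cells; this is a hypothetical sketch, not the allocator used by the queue management module:

```python
class ResourcePool:
    """Hypothetical free list over fixed-size cells of the shared memory."""
    def __init__(self, num_cells):
        self.free = list(range(num_cells))   # every cell starts unallocated
        self.owner = {}                      # cell -> qid

    def allocate(self, qid):
        if not self.free:
            return None                      # no storage left in the shared memory
        cell = self.free.pop()
        self.owner[cell] = qid               # the cell now belongs to this queue
        return cell

    def release(self, cell):
        # Once the message leaves the queue, the cell may be reused by any queue.
        self.owner.pop(cell, None)
        self.free.append(cell)

pool = ResourcePool(16)
c = pool.allocate(qid=1)   # enqueue: allocated in real time
pool.release(c)            # dequeue: released for later reuse
```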
  • the traffic management device also includes a message reading module 05 .
  • the message reading module 05 is connected with the shared memory 01 and the queue management module 04 respectively.
  • the queue management module 04 is further configured to output the address of the message in the shared memory 01 to the message reading module 05 according to the queue information of the queue in which the message in the shared memory 01 is located.
  • the message reading module 05 is used to read the message from the shared memory 01 according to the address of the message in the shared memory 01 .
  • the message reading module 05 is connected to the shared memory 01 through the data bus 09 and is used to read a message from the shared memory 01 through the data bus 09 according to the address of the message in the shared memory 01.
  • the queue management module 04 schedules the messages in the queue according to the order in which the messages are in the queue.
  • the queue management module 04 determines the address of message 2 in the shared memory 01 and outputs that address to the message reading module 05, and the message reading module 05 reads message 2 from the shared memory 01 through the data bus 09 according to the address of message 2 in the shared memory 01.
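A minimal sketch of this readout path, assuming each queue is tracked as a FIFO of the shared-memory addresses of its cached messages (an assumed representation):

```python
from collections import deque

class QueueState:
    """FIFO of shared-memory addresses for one queue (assumed representation)."""
    def __init__(self):
        self.addrs = deque()

def schedule_and_read(qstate, shared_mem):
    # The queue management module picks the head-of-queue address in FIFO order;
    # the message reading module then fetches the message over the data bus.
    if not qstate.addrs:
        return None
    addr = qstate.addrs.popleft()
    return shared_mem.get(addr)
```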
  • the data bus 09 in FIG. 3 is only exemplary; the message reading module 05 can be connected to each storage module in the shared memory 01 through data buses, with multiple data buses between the message reading module 05 and each storage module.
  • the number of data buses between the message reading module 05 and each storage module is related to the storage bit width of the storage module.
  • if the storage bit width of a storage module is W, the storage module includes W 1-bit storage units arranged along its width direction, and the number of data buses between the message reading module 05 and the storage module is then W, with the W data buses connected to the W 1-bit storage units in one-to-one correspondence. This is not limited in this embodiment of the present application.
  • the processing module 02 is further configured to configure functions of the shared memory 01 according to the obtained configuration information, and the functions of the shared memory 01 include caching messages and storing queue information of the queues where the messages are located. For example, the processing module 02 configures, according to the obtained configuration information, the ratio of the storage resources used for buffering messages in the shared memory 01 to the storage resources used for storing queue information.
  • the processing module 02 configures the shared storage module in the shared memory 01 to cache messages according to the obtained configuration information; or, the processing module 02 configures the shared storage module in the shared memory 01 according to the obtained configuration information for Storing the queue information; or, the processing module 02 configures a part of the shared storage modules in the shared memory 01 to cache messages and another part of the shared storage modules to store the queue information according to the acquired configuration information.
  • the configuration information may be configuration information input by the user, for example, input by the user to the processing module 02 through a human-computer interaction interface, or configuration information delivered by the controller.
  • the embodiment of the present application takes a configuration function integrated in the processing module 02 as an example for illustration; a configuration module may also be deployed separately to configure the function of the shared memory 01, which is not limited in this embodiment of the present application.
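Reusing the StorageModule/Role sketch from earlier, a configuration step driven by a packet/queue-info split might look as follows; the function name and the `packet_fraction` parameter are assumptions for illustration:

```python
def configure_shared_modules(shared_modules, packet_fraction):
    """Split the shared modules between the two functions according to the
    obtained configuration info (user-entered or controller-delivered)."""
    k = round(len(shared_modules) * packet_fraction)
    for m in shared_modules[:k]:
        m.assign(Role.PACKET)        # these shared modules will cache messages
    for m in shared_modules[k:]:
        m.assign(Role.QUEUE_INFO)    # the remaining ones store queue information

# An edge-network-like profile: most shared capacity goes to queue information.
configure_shared_modules(shared, packet_fraction=0.25)
```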
  • the shared memory 01 can cache packets and store the queue information of the queue where the packets are located. Therefore, the shared memory 01 can be fully utilized in various network scenarios.
  • FIG. 10 compares, in the edge network scenario, the usage status of the management memory (including the internal traffic management memory and the queue management memory) in the forwarding chip provided by the related art with the usage status of the shared memory in the forwarding chip provided by the embodiment of the present application.
  • FIG. 11 makes the same comparison in network scenarios such as the backbone network scenario, the metropolitan area network scenario, the DCI network scenario, and the mobile bearer network scenario.
  • the small square blocks (including slash-filled blocks, grid-filled blocks and unfilled blocks) all represent storage modules (or basic storage units): slash-filled blocks represent storage modules that cache messages, grid-filled blocks represent storage modules that store queue information, and unfilled blocks represent unoccupied storage modules.
  • the capacity of the shared memory provided by the embodiment of the present application is less than the sum of the capacity of the internal traffic management memory and the capacity of the queue management memory in the forwarding chip provided by the related art (for example, in FIG. 10 and FIG. 11 the capacities of all storage modules are equal, and the number of storage modules in the shared memory is less than the sum of the number of storage modules in the internal traffic management memory and the number of storage modules in the queue management memory).
  • as shown in FIG. 10, in the edge network scenario all storage modules in the queue management memory are occupied while many storage modules in the internal traffic management memory are idle (that is, the internal traffic management memory is not fully utilized); in contrast, all storage modules in the shared memory are occupied, so the shared memory can be fully utilized in the edge network scenario.
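A worked numeric illustration of this utilization argument; the module counts below are assumed purely for the example and are not taken from FIG. 10:

```python
# Related-art split memories vs. a smaller shared memory (assumed counts).
tm_modules, qm_modules = 12, 8      # internal TM memory, queue management memory
shared_modules_total = 16           # shared memory, smaller than 12 + 8 = 20

# Edge-like demand: heavy queue-information use, light packet buffering.
need_packets, need_queue_info = 4, 12

split_used = min(need_packets, tm_modules) + min(need_queue_info, qm_modules)
shared_used = min(need_packets + need_queue_info, shared_modules_total)
print(split_used / (tm_modules + qm_modules))   # 0.6 -> idle TM modules
print(shared_used / shared_modules_total)       # 1.0 -> fully occupied
```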
  • since the traffic management device includes a shared memory whose capacity is smaller than the preset capacity, the capacity of the shared memory is small, which helps to reduce the area of the forwarding chip and to reduce the cost and power consumption of the forwarding chip. Since the shared memory can cache messages and store the queue information of the queues where the messages in the shared memory are located, the utilization rate of the shared memory is high, which helps to avoid wasting the memory resources of the forwarding chip.
  • FIG. 12 shows a flow chart of a packet buffering method provided by an embodiment of the present application.
  • the message caching method can be applied to the above traffic management device, and the traffic management device includes a shared memory, and the capacity of the shared memory is smaller than a preset capacity.
  • the packet buffering method may include the following S101 to S102.
  • S101: the traffic management device caches packets in the shared memory.
  • S102: the traffic management device stores, in the shared memory, the queue information of the queues where the packets in the shared memory are located.
  • the shared memory includes m storage modules, the m storage modules include n shared storage modules, m ⁇ n, and both m and n are positive integers.
  • in S101, the traffic management device may cache packets in any shared storage module among the n shared storage modules; and/or, in S102, the traffic management device may store the queue information of the queue where at least one packet in the shared memory is located in any shared storage module among the n shared storage modules.
  • the m storage modules may also include at least one of a traffic exclusive storage module and a queue exclusive storage module.
  • the traffic management device may also cache packets in the flow-exclusive storage module.
  • the traffic management device may also store the queue information of the queue where at least one message in the shared memory is located in the queue exclusive storage module.
  • the m storage modules include p flow exclusive storage modules and q queue exclusive storage modules.
  • the traffic management device can cache packets in any flow-exclusive storage module among the p flow-exclusive storage modules.
  • the traffic management device may store the queue information of the queue where at least one message in the shared memory is located in any queue-exclusive storage module among the q queue-exclusive storage modules.
  • the queue information stored in any one of the shared storage modules and the queue-exclusive storage modules includes at least one of: the queue information of the queue where a message in a shared storage module is located, and the queue information of the queue where a message in a flow-exclusive storage module is located.
  • the traffic management device further includes a processing module, and the processing module is connected to the shared memory.
  • the processing module is connected to the shared memory through a data bus.
  • the processing module may input at least one of a message and queue information of a queue where the message is located into the shared memory.
  • in S101, the processing module inputs a message to the shared memory to cache the message in the shared memory; in S102, the processing module inputs the queue information of the queue where the message is located to the shared memory, so as to store that queue information in the shared memory.
  • the traffic management device further includes a message writing module and a queue management module; the message writing module is connected to the queue management module, and the message writing module and the queue management module are respectively connected to the processing module.
  • the message writing module is connected to the processing module through a traffic data line
  • the queue management module is connected to the processing module through a queue data line.
  • S101 may include: for any message to be cached, the message writing module applies to the queue management module for a queue resource for the message, and inputs the message to the processing module through the flow data line according to the queue resource applied for the message.
  • the processing module inputs the message to the shared memory through the data bus, so as to cache the message in the shared memory.
  • S102 may include: for any message, the queue management module inputs the queue information of the queue where the message is located to the processing module through the queue data line according to the queue resource applied for by the message writing module, and the processing module inputs that queue information to the shared memory through the data bus, so as to store the queue information of the queue where the message is located in the shared memory.
  • the traffic management device further includes a message reading module, which is respectively connected to the shared memory and the queue management module; for example, the message reading module is connected to the shared memory through a data bus.
  • the message caching method may also include the following steps S103 to S104.
  • S103: the queue management module outputs the address of a message in the shared memory to the message reading module according to the queue information of the queue where the message in the shared memory is located.
  • S104: the message reading module reads the message from the shared memory according to the address of the message in the shared memory.
  • the queue management module schedules the messages in the queue according to the order in which the messages are in the queue.
  • the queue management module determines the address of the message in the shared memory, and outputs the address of the message in the shared memory to the message readout module.
  • the message reading module reads the message from the shared memory through the data bus according to the address of the message in the shared memory.
  • the packet caching method may also include S105.
  • S105: the processing module configures the function of the shared memory according to the acquired configuration information, where the function of the shared memory includes caching messages and storing the queue information of the queues where the messages are located.
  • the processing module configures a ratio of storage resources used for buffering messages in the shared memory to storage resources used for storing queue information.
  • the processing module configures the shared memory module in the shared memory to cache messages according to the obtained configuration information; or, the processing module configures the shared memory module in the shared memory to store queue information according to the obtained configuration information; Alternatively, the processing module configures a part of the shared storage module in the shared memory for buffering messages and another part of the shared storage module for storing queue information according to the obtained configuration information.
  • the configuration information may be the configuration information input by the user, or the configuration information issued by the controller, which is not limited in this embodiment of the present application.
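Chaining the earlier sketches gives a minimal end-to-end pass over S101 to S105; all names remain illustrative, and the steps are only a toy model of the method:

```python
mgr = QueueMgr()
qs = QueueState()

# S101/S102: cache the message and store its queue information.
addr = write_message(qid=1, payload=b"pkt-1", mgr=mgr)
qs.addrs.append(addr)

# S103/S104: the queue manager hands the address to the reading module,
# which fetches the message from the shared memory.
pkt = schedule_and_read(qs, shared_mem)
assert pkt == b"pkt-1"

# S105 (configuration) would run up front, e.g.:
# configure_shared_modules(shared, packet_fraction=0.5)
```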
  • the message caching method provided by the embodiment of the present application is applied to the traffic management device in the forwarding chip. Since the traffic management device includes a shared memory, the capacity of the shared memory is smaller than the preset capacity. Therefore, the capacity of the shared memory is small, which helps to reduce the area of the forwarding chip and reduce the cost and power consumption of the forwarding chip. Since the shared memory can cache messages and store the queue information of the queues where the messages in the shared memory are located, the utilization rate of the shared memory is high, helping to avoid waste of memory resources of the forwarding chip.
  • the embodiment of the present application further provides a forwarding chip, the forwarding chip includes the traffic management device as shown in FIG. 3 , and the traffic management device includes the shared memory 01 as shown in any one of FIGS. 4 to 9 . Since the capacity of the shared memory 01 is small, the area, cost and power consumption of the forwarding chip are small. Since the utilization rate of the shared memory is high, the utilization rate of the memory resource of the forwarding chip is relatively high, which can avoid the waste of the memory resource of the forwarding chip.
  • the forwarding chip may be any of various possible forwarding chips, such as an ASIC chip, an FPGA chip, an NP chip, or a GAL chip.
  • the embodiment of the present application also provides a network device, the network device includes the foregoing forwarding chip.
  • the network device may be any network device used for service forwarding in the communication network.
  • the network device may be a switch, a router, and the like.
  • the network device can be an edge network device, a core network device, or a network device in a data center.
  • an edge network device can be a provider edge (provider edge, PE) device, and a core network device can be a provider (provider, P) device.
  • FIG. 13 shows a schematic structural diagram of a network device 1100 provided by an embodiment of the present application.
  • the network device 1100 includes a processor 1102 , a communication interface 1106 , a forwarding chip 1108 and a bus 1110 .
  • the processor 1102 , the communication interface 1106 and the forwarding chip 1108 are communicatively connected to each other through the bus 1110 .
  • alternatively, the processor 1102, the communication interface 1106 and the forwarding chip 1108 may be communicatively connected to each other through connection modes other than the bus 1110.
  • the processor 1102 may be a general-purpose processor, and the general-purpose processor may be a processor that performs specific steps and/or operations by reading and executing a computer program (such as the computer program 11042) stored in the memory (such as the memory 1104).
  • the processor may use data stored in memory (eg, memory 1104 ) in performing the steps and/or operations.
  • a general processor may be, for example but not limited to, a central processing unit (CPU).
  • the processor 1102 may also be a special-purpose processor, which may be a processor specially designed to perform specific steps and/or operations, and the special-purpose processor may be, for example but not limited to, ASIC and FPGA.
  • the processor 1102 may also be a combination of multiple processors, such as a multi-core processor.
  • the communication interface 1106 may include an input/output (input/output, I/O) interface, a physical interface, a logical interface and the like, used to interconnect devices inside the network device 1100 and to interconnect the network device 1100 with other devices (for example, other network devices).
  • the physical interface can be a gigabit Ethernet (gigabit Ethernet, GE) interface, which can be used to interconnect the network device 1100 with other devices; the logical interface is an internal interface of the network device 1100, which can be used to interconnect devices inside the network device 1100.
  • the communication interface 1106 may be used for the network device 1100 to communicate with other devices, for example, the communication interface 1106 is used for sending and receiving packets between the network device 1100 and other devices.
  • the bus 1110 may be any type of communication bus for interconnecting the processor 1102, the communication interface 1106 and the forwarding chip 1108, such as a system bus.
  • the forwarding chip 1108 may be any of various possible forwarding chips, such as an ASIC chip, an FPGA chip, an NP chip, or a GAL chip.
  • the interconnection of any device among the processor 1102 , the memory 1104 , and the communication interface 1106 with the forwarding chip 1108 may specifically mean that any device is interconnected with a device in the forwarding chip 1108 .
  • the above-mentioned processor 1102, memory 1104 and communication interface 1106 may all be integrated on the forwarding chip 1108, or the processor 1102 and the communication interface 1106 may be set on independent chips, or some or all of them may be set on the same chip. Whether each device is arranged independently on different chips or integrated on one or more chips often depends on the needs of product design; the present application does not limit the specific implementation forms of the above devices.
  • the network device 1100 may also include a memory 1104 .
  • the memory 1104 can be used to store a computer program 11042, which can include instructions and data.
  • the memory 1104 can be various types of storage media, such as random access memory (random access memory, RAM), read-only memory (read-only memory, ROM), non-volatile RAM (non-volatile RAM, NVRAM), programmable ROM (programmable ROM, PROM), erasable PROM (erasable PROM, EPROM), electrically erasable PROM (electrically erasable PROM, EEPROM), flash memory, optical memory, registers, and the like.
  • the memory 1104 may include a hard disk and/or memory.
  • the network device 1100 shown in FIG. 13 is only exemplary. During implementation, the network device 1100 may also include other components, which will not be listed here.
  • the "forwarding chip” can also be replaced with other chips, such as some general-purpose processors including TM functions.
  • the above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • when implemented by software, the embodiments may be implemented in whole or in part in the form of a computer program product comprising one or more computer instructions.
  • when the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
  • the computer may be a general purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center in a wired manner (such as coaxial cable, optical fiber or digital subscriber line) or a wireless manner (such as infrared, radio or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium, or a semiconductor medium (for example, a solid-state hard disk).
  • the disclosed devices and the like can be implemented in other configurations.
  • the device embodiments described above are only illustrative.
  • the division of units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components can be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical or other forms.
  • a unit described as a separate component may or may not be physically separated, and a component described as a unit may or may not be a physical unit; it may be located in one place or distributed over multiple network devices (such as terminal devices). Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Small-Scale Networks (AREA)

Abstract

The invention relates to a traffic management apparatus, a packet caching method, a forwarding chip and a network device, relating to the field of network technologies. The traffic management apparatus is applied to the forwarding chip. The traffic management apparatus comprises a shared memory. The capacity of the shared memory is less than a preset capacity. The shared memory is used to cache a packet input into the shared memory, and to store queue information of a queue in which the packet in the shared memory is located. The present invention helps to avoid wasting memory resources of the forwarding chip, and to reduce the cost and power consumption of the forwarding chip.
PCT/CN2022/141981 2021-12-29 2022-12-26 Appareil de gestion de trafic, procédé de mise en cache de paquets, puce et dispositif de réseau WO2023125430A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111640406.2A CN116414572A (zh) 2021-12-29 2021-12-29 流量管理装置、报文缓存方法、芯片及网络设备
CN202111640406.2 2021-12-29

Publications (1)

Publication Number Publication Date
WO2023125430A1 true WO2023125430A1 (fr) 2023-07-06

Family

ID=86997828

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/141981 WO2023125430A1 (fr) 2021-12-29 2022-12-26 Appareil de gestion de trafic, procédé de mise en cache de paquets, puce et dispositif de réseau

Country Status (2)

Country Link
CN (1) CN116414572A (fr)
WO (1) WO2023125430A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150487A (zh) * 2007-11-15 2008-03-26 曙光信息产业(北京)有限公司 一种零拷贝网络报文发送方法
US20130246556A1 (en) * 2012-03-16 2013-09-19 Oracle International Corporation System and method for supporting intra-node communication based on a shared memory queue
CN107193673A (zh) * 2017-06-28 2017-09-22 锐捷网络股份有限公司 一种报文处理方法及设备

Also Published As

Publication number Publication date
CN116414572A (zh) 2023-07-11

Similar Documents

Publication Publication Date Title
US9225668B2 (en) Priority driven channel allocation for packet transferring
CN111104775B (zh) 一种片上网络拓扑结构及其实现方法
US7307998B1 (en) Computer system and network interface supporting dynamically optimized receive buffer queues
US11381515B2 (en) On-demand packet queuing in a network device
US10193831B2 (en) Device and method for packet processing with memories having different latencies
US8248930B2 (en) Method and apparatus for a network queuing engine and congestion management gateway
CN111651377B (zh) 一种用于片内报文处理的弹性共享缓存器
CN106648896B (zh) 一种Zynq芯片在异构称多处理模式下双核共享输出外设的方法
JP6392745B2 (ja) サーバノード相互接続デバイス及びサーバノード相互接続方法
US20140036680A1 (en) Method to Allocate Packet Buffers in a Packet Transferring System
US20080028090A1 (en) System for managing messages transmitted in an on-chip interconnect network
US20200076742A1 (en) Sending data using a plurality of credit pools at the receivers
EP4028859A1 (fr) Procédés et appareil pour efficacité d'interrogation améliorée dans des tissus d'interface de réseaux
US7974190B2 (en) Dynamic queue memory allocation with flow control
WO2012116540A1 (fr) Procédé de gestion de trafic et dispositif de gestion
WO2022151475A1 (fr) Procédé de mise en mémoire tampon de messages, dispositif d'attribution de mémoire et système de transfert de message
CN117749726A (zh) Tsn交换机输出端口优先级队列混合调度方法和装置
WO2023125430A1 (fr) Appareil de gestion de trafic, procédé de mise en cache de paquets, puce et dispositif de réseau
WO2022170769A1 (fr) Procédé de communication, appareil et système
CN110708255B (zh) 一种报文控制方法及节点设备
CN118368293B (zh) 一种数据传输方法、计算机设备及介质
CN110661724B (zh) 一种分配缓存的方法和设备
WO2023231792A1 (fr) Procédé d'attribution de créneau temporel et appareil de communication
WO2022246710A1 (fr) Procédé de commande de transmission de flux de données et dispositif de communication
KR102719059B1 (ko) 다중 스트림 솔리드 스테이트 드라이브 서비스 품질 관리

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22914684

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE