WO2018024173A1 - Packet processing method and router - Google Patents

Packet processing method and router

Info

Publication number
WO2018024173A1
WO2018024173A1 (PCT/CN2017/095165)
Authority
WO
WIPO (PCT)
Prior art keywords
cache
block
line card
input line
information
Prior art date
Application number
PCT/CN2017/095165
Other languages
English (en)
French (fr)
Inventor
夏洪淼
孙团会
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP17836355.2A (granted as EP3487132B1)
Publication of WO2018024173A1
Priority to US16/264,309 (granted as US10911364B2)

Classifications

    • H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/9005 Buffering arrangements using dynamic buffer space allocation
    • H04L47/30 Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04L47/10 Flow control; Congestion control
    • H04L45/74 Address processing for routing
    • H04L47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H04L47/34 Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
    • H04L49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H04L49/9015 Buffering arrangements for supporting a linked list
    • H04L49/9047 Buffering arrangements including multiple buffers, e.g. buffer pools
    • H04L49/9068 Intermediate storage in different physical parts of a node or terminal, in the network interface card
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L69/22 Parsing or analysis of headers

Definitions

  • The present application relates to packet processing technology, and in particular, to a packet processing method and a router.
  • Internet applications are permeating all aspects of society and have a huge impact. With the emergence of new applications such as 4K/8K video, Virtual Reality (VR)/Augmented Reality (AR), and telemedicine, Internet traffic will continue to grow. On the Internet, the router is the core device for packet forwarding.
  • The line cards of traditional routers carry a certain amount of packet buffer. Most routers still follow the well-known rule of thumb when sizing this buffer: assuming the bandwidth of the router line card is B and the round-trip time (RTT) of a data flow typically reaches 200 milliseconds, the buffer required by the line card is B*RTT. As traffic increases, router capacity and bandwidth requirements grow, so the buffer demand of the line card also grows and becomes one of the router's design bottlenecks.
  • For example, a line card with a processing bandwidth of 100 Gbps needs a 2.5 GB buffer, which current memory technology easily satisfies; at 1 Tbps it needs a 25 GB buffer, which may require current memory-stacking technology; and at a future 10 Tbps it needs a 250 GB buffer, which is difficult to achieve with near-term memory processes. There is therefore an urgent need to solve the problem of insufficient router line-card buffering.
  • FIG. 1 is a schematic diagram of a router provided by the prior art.
  • A typical router includes an input line card 11, an output line card 12, a switching module 13, and a cache module 14 connected to the switching module 13.
  • The input line card also includes a cache module.
  • The present application provides a packet processing method and a router, so that distributed buffering of packets can be implemented effectively, improving the flexibility of packet processing.
  • The present application provides a packet processing method applied to a router, where the router includes: an input line card, an output line card, at least one switching module connecting the input line card and the output line card, and a first cache module connected to the switching module.
  • The first cache module includes: at least one first cache block.
  • The input line card includes: at least one second cache module.
  • Each second cache module includes: at least one second cache block.
  • The input line card receives at least one packet.
  • The input line card acquires information about the first cache block available in a third cache module, where the third cache module is a first cache module that includes an available first cache block.
  • The input line card allocates a third cache block to each of the at least one packet according to at least one cache information block stored in the input line card and the information about the available first cache block, where the third cache block is a first cache block or a second cache block, each cache information block corresponds to at least one fourth cache block, each of the at least one fourth cache block is a first cache block or a second cache block, and each cache information block is used to indicate the occupancy of each fourth cache block.
  • The input line card caches each packet into the third cache block.
  • Through this method the router can implement distributed buffering of packets: a packet can be cached into a first cache block or into a second cache block, which expands the router's cache while improving the flexibility of caching.
  • The input line card allocating a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card includes:
  • the input line card determines, according to the at least one cache information block, whether each packet can be stored in a fourth cache block.
  • Each cache information block includes: the occupied size of each fourth cache block and indication information indicating whether each fourth cache block is a first cache block; when the indication information indicates that a fourth cache block is a first cache block, the cache information block further includes: the identifier of the first cache module where that fourth cache block is located.
  • The at least one cache information block is stored in the input line card in the form of a linked list.
  • That the input line card determines, according to the at least one cache information block, whether each packet can be stored in a fourth cache block includes:
  • the input line card determines whether the sum of the occupied size of a fifth cache block and the size of each packet is smaller than the size of the last cache block, where the fifth cache block is the last fourth cache block corresponding to the last cache information block in the at least one cache information block.
  • When the input line card determines that the third cache block is a second cache block, the input line card directly allocates the third cache block to each packet.
  • When the input line card determines that the third cache block is a first cache block, the input line card sends an allocation request message to the first switching module corresponding to the third cache block, where the allocation request message is used to request the first switching module to allocate the third cache block to each packet.
  • The first switching module allocates the third cache block to each packet according to the allocation request message.
  • The first switching module sends an allocation response message to the input line card, where the allocation response message includes: the identifier of the third cache block.
  • The input line card assigns each packet to the third cache block according to the allocation response message.
  • The input line card establishing a first cache information block includes the following.
  • The information about the available first cache block includes: the identifier of the third cache module and the number of available first cache blocks included in the third cache module.
  • The input line card sends a cache information block setup request message to the second switching module, where the message is used to request the available first cache block.
  • The second switching module allocates the available first cache block to the input line card according to the cache information block setup request message.
  • The second switching module sends a cache information block setup response message to the input line card, where the response includes: the identifier of the third cache module.
  • The input line card establishes the first cache information block according to the identifier of the third cache module.
  • The method further includes:
  • the input line card schedules packets according to the schedulable packet size, in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
  • Since packets are stored in the corresponding fourth cache blocks in the order of the cache information blocks, the input line card likewise performs packet scheduling in the order of the cache information blocks, to ensure the reliability of packet scheduling.
  • The method further includes:
  • when the packets in the at least one fourth cache block corresponding to a cache information block have all been scheduled, the input line card releases that cache information block, and when the at least one fourth cache block includes a first cache block, the input line card sends a release request message to the fourth cache module where the included first cache block is located.
  • The application provides a router, including: an input line card, an output line card, at least one switching module connecting the input line card and the output line card, and a first cache module connected to the switching module.
  • The first cache module includes: at least one first cache block.
  • The input line card includes: at least one second cache module.
  • Each second cache module includes: at least one second cache block.
  • The input line card is configured to:
  • obtain information about the first cache block available in a third cache module, where the third cache module is a first cache module that includes an available first cache block;
  • cache each packet into the third cache block.
  • The input line card is specifically configured to:
  • select a fourth cache block as the third cache block and assign each packet to the third cache block; or
  • allocate the third cache block to each packet according to the information about the available first cache block.
  • Each cache information block includes: the occupied size of each fourth cache block and indication information indicating whether each fourth cache block is a first cache block.
  • When a fourth cache block is a first cache block, the cache information block further includes: the identifier of the first cache module where that fourth cache block is located.
  • The at least one cache information block is stored in the input line card in the form of a linked list.
  • When the third cache block is a second cache block, the third cache block is directly allocated to each packet.
  • The first switching module is configured to allocate the third cache block to each packet according to the allocation request message and send an allocation response message to the input line card, where the allocation response message includes: the identifier of the third cache block.
  • The input line card is further configured to assign each packet to the third cache block according to the allocation response message.
  • The input line card is specifically configured to:
  • establish the first cache information block according to an unoccupied second cache block; or
  • establish the first cache information block according to the identifier of the third cache module.
  • The output line card is configured to determine a schedulable packet size according to the packet queue status and send the schedulable packet size to the input line card.
  • The input line card is further configured to schedule packets according to the schedulable packet size, in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
  • When all packets in the corresponding fourth cache blocks have been scheduled, the input line card is further configured to release the cache information block.
  • When the at least one fourth cache block includes a first cache block, the input line card is further configured to send a release request message to the fourth cache module where the included first cache block is located.
  • The fourth cache module is configured to release the included first cache block and publish information about the first cache blocks available in the fourth cache module.
  • FIG. 1 is a schematic diagram of a router provided by the prior art.
  • FIG. 2 is a schematic diagram of a router according to an embodiment of the present application.
  • FIG. 4B is a schematic diagram of a physical cache information block according to an embodiment of the present application.
  • FIG. 5 is a flowchart of a packet processing method according to another embodiment of the present application.
  • FIG. 6 is a schematic diagram of at least one cache information block according to an embodiment of the present application.
  • The present application is intended to address the issue of how to perform distributed caching of packets.
  • The present application provides a packet processing method and a router.
  • The packet processing method provided by the present application mainly covers three aspects: first, publication of cache-block information; second, distributed packet caching; and third, packet scheduling.
  • FIG. 2 is a schematic diagram of a router according to an embodiment of the present application.
  • The router includes: an input line card 21, an output line card 22, at least one switching module 23 connecting the input line card 21 and the output line card 22, and a first cache module 24 connected to the switching module 23; the first cache module 24 includes at least one first cache block, the input line card 21 includes at least one second cache module, and each second cache module includes at least one second cache block.
  • The router includes at least one input line card 21 and at least one output line card 22, and each input line card 21 and each output line card 22 can be connected through at least one switching module 23. As shown in FIG. 2, the switching modules 23 correspond one-to-one with the first cache modules 24; in fact, the correspondence is not limited thereto. For example, each switching module 23 may also correspond to multiple first cache modules 24.
  • Step S301: the input line card receives at least one packet.
  • Step S302: the input line card acquires information about the first cache block available in a third cache module.
  • The third cache module is a first cache module that includes an available first cache block; that is, any first cache module including an available first cache block is referred to as a third cache module. The input line card can acquire this information in two specific ways: in one, the third cache module broadcasts the information about its available first cache block, so the input line card acquires it passively; in the other, the input line card requests the information from all first cache modules and then receives it from the third cache modules, so the input line card acquires it actively.
  • In either case, the third cache module sends the information about the available first cache block according to the architecture of the router. For example, when there is only one switching module between the third cache module and the input line card, the third cache module sends the information to the input line card through that switching module; when there are multiple switching modules between the third cache module and the input line card, the third cache module first sends the information to the switching module connected to it, and that switching module then forwards it to the input line card through the other switching modules.
  • Step S303: the input line card allocates a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card.
  • The third cache block is a first cache block or a second cache block; that is, whichever first or second cache block is finally allocated to a packet is referred to as the third cache block.
  • Each cache information block corresponds to at least one fourth cache block; that is, any first or second cache block corresponding to a cache information block is referred to as a fourth cache block. Each of the at least one fourth cache block is a first cache block or a second cache block, and each cache information block indicates the occupancy of each fourth cache block.
  • Cache information blocks may be divided into logical cache information blocks and physical cache information blocks.
  • A logical cache information block is a cache information block that includes at least one physical cache information block.
  • FIG. 4A is a schematic diagram of a logical cache information block provided by an embodiment of the present application.
  • The logical cache information block includes N physical cache information blocks, where N is a positive integer greater than or equal to 1, and each physical cache information block records the sequence number of the next physical cache information block.
  • FIG. 4B is a schematic diagram of a physical cache information block according to an embodiment of the present application.
  • The physical cache information block includes: the occupied size of the fourth cache block corresponding to the physical cache information block, and indication information indicating whether that fourth cache block is a first cache block.
  • When the indication information indicates that the fourth cache block is a first cache block, the physical cache information block further includes: the identifier of the first cache module where the fourth cache block is located. It is worth mentioning that when a cache information block is a logical cache information block, it may correspond to at least one fourth cache block.
  • When a cache information block is a physical cache information block, it corresponds to one fourth cache block.
  • The input line card allocates a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card; that is, the input line card allocates a third cache block to each packet according to the occupancy of the cache blocks already in use and the information about the available first cache block.
  • After each packet has been allocated its third cache block, the input line card allocates a cache block to the next packet according to the updated at least one cache information block stored in the input line card and the updated information about the available first cache block.
  • Steps S303 and S304 constitute the second aspect above: distributed packet caching.
  • The input line card in the router allocates a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card; that is, it determines a third cache block for each packet and caches the packet into the corresponding third cache block.
  • Through this packet processing method, the router can implement distributed caching of packets: the router can cache a packet into a first cache block or into a second cache block, which increases the flexibility of the router cache while expanding it.
  • Step S502: the input line card acquires information about the first cache block available in the third cache module.
  • Step S501 is the same as step S301, and step S502 is the same as step S302; details are not described here again.
  • Step S503: the input line card determines, according to the at least one cache information block, whether each packet can be stored in a fourth cache block; if yes, step S504 is performed; otherwise, step S505 is performed.
  • The at least one cache information block is stored in the input line card in the form of a linked list, and packets are stored in the fourth cache blocks corresponding to the cache information blocks in the order of the cache information blocks: only when the fourth cache blocks corresponding to the first cache information block are full are packets stored under the second cache information block. Based on this, the input line card compares the occupied size of the fifth cache block plus the size of each packet against the size of that block.
  • The fifth cache block is the last fourth cache block corresponding to the last cache information block in the at least one cache information block. When the occupied size of the fifth cache block plus the size of the packet is smaller than the size of the last cache block,
  • the fifth cache block can cache the packet; otherwise, the space of the fifth cache block is insufficient.
  • Step S504: the input line card selects a fourth cache block as the third cache block and allocates each packet to the third cache block.
  • The selected fourth cache block is the fifth cache block, and the fifth cache block is allocated, as the third cache block, to the corresponding packet.
  • The input line card allocating each packet to the third cache block includes the following.
  • When the input line card determines that the third cache block is a second cache block,
  • the input line card directly allocates the third cache block to the packet; that is, when the input line card determines that the third cache block is a local cache block in the input line card,
  • the third cache block is directly allocated to the packet.
  • When the input line card determines that the third cache block is a first cache block,
  • the input line card sends an allocation request message to the first switching module corresponding to the third cache block, where the allocation request message is used to request the first switching module to allocate the third cache block to each packet; the first switching module allocates the third cache block to each packet according to the allocation request message; and the first switching module sends an allocation response
  • message to the input line card, which includes: the identifier of the third cache block; the input line card then allocates each packet to the third cache block according to the allocation response message.
  • When the input line card determines that the third cache block is a non-local cache block, it needs to interact with the first switching module corresponding to the third cache block to determine the identifier of the third cache block and assign the packet to the third cache block.
  • Step S505: the input line card establishes a first cache information block and treats the at least one cache information block plus the first cache information block as the new at least one cache information block; the input line card then allocates a third cache block to each packet according to the new at least one cache information block and the information about the available first cache block.
  • The first cache information block established from an unoccupied second cache block includes: the occupied size of the second cache block and indication information indicating that the second cache block is not a first cache block.
  • The input line card establishing the first cache information block according to the information about the available first cache block, that is, the negotiation process, includes: the input line card determines a second switching module according to the size of each packet, the identifier of the third cache module, and the number of available first cache blocks included in the third cache module; the input line card sends a cache information block setup request message to the second switching module, where the message is used to request the available first cache block; and the second switching module allocates the available first cache block to the input line card according to the setup request message.
  • The second switching module sends a cache information block setup response message to the input line card, where the response includes: the identifier of the third cache module; and the input line card establishes the first cache information block according to the identifier of the third cache module.
  • In the present application, the size of each cache block is fixed.
  • Based on this, the input line card calculates the available space of each third cache module from the number of its available first cache blocks and the size of a cache block.
  • When the size of the packet is smaller than the available space of the third cache module, the cache information block setup request message may be sent to the second switching module corresponding to the third cache module, and the second switching module allocates
  • the available first cache block. The first cache information block established by the input line card according to the identifier of the third cache module includes: the occupied size of the available first cache block, indication information indicating that the available first cache block is a first cache block, and the identifier of the third cache module.
  • The input line card allocates a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card; the foregoing describes how distributed caching of packets is implemented.
  • FIG. 6 is a flowchart of a packet processing method according to another embodiment of the present application. As shown in FIG. 6, the method includes the following procedure.
  • The packet queue status includes: the size of the packets currently included in each packet queue and, optionally, the priority of each packet queue.
  • Step S602: the output line card determines the schedulable packet size according to the packet queue status and sends the schedulable packet size to the input line card.
  • The output line card determines, according to the configuration of the router, the packet volume that can be transmitted at a time, determines the schedulable packet size according to the size of the packets currently included in the packet queues and the priority of each packet queue, and sends the schedulable packet size to the input line card.
  • Step S603: the input line card schedules packets according to the schedulable packet size, in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
  • For example, if the input line card determines that the schedulable packet size is 500 bytes,
  • the input line card schedules the corresponding packets in the order of the at least one cache information block, where the sum of the scheduled packet sizes is less than or equal to 500 bytes.
  • When all packets in the corresponding fourth cache blocks have been scheduled, the input line card releases the cache information block, and,
  • when a first cache block is included, the input line card sends a release request message to the fourth cache module where the included first cache block is located; the fourth cache module releases the included first cache block and publishes information about the first cache blocks available in the fourth cache module.
  • This mainly provides a packet scheduling method: since packets are stored in the corresponding fourth cache blocks in the order of the cache information blocks, the input line card also performs packet scheduling in the order of the cache information blocks, to ensure the reliability of packet scheduling.
  • The present application also provides a router. As shown in FIG. 2, the router includes: an input line card 21, an output line card 22, at least one switching module 23 connecting the input line card 21 and the output line card 22, and a first cache module 24 connected to the switching module 23; the first cache module 24 includes at least one first cache block, and the input line card 21 includes at least one second cache module, each second cache module including at least one second cache block.
  • The third cache module is a first cache module 24 that includes an available first cache block.
  • Each cache information block corresponds to at least one fourth cache block.
  • Each of the at least one fourth cache block is a first cache block or a second cache block,
  • and each cache information block is used to indicate the occupancy of each fourth cache block.
  • Each packet is cached into the third cache block.
  • The router provided in this embodiment can be used to implement the technical solution of the packet processing method corresponding to FIG. 3; the implementation principle and technical effects are similar and are not described here again.
  • A fourth cache block is selected as the third cache block, and each packet is allocated to the third cache block; or
  • the third cache block is allocated to each packet according to the information about the available first cache block.
  • Each cache information block includes: the occupied size of each fourth cache block and indication information indicating whether each fourth cache block is a first cache block.
  • When a fourth cache block is a first cache block, the cache information block further includes: the identifier of the first cache module 24 where that fourth cache block is located.
  • The at least one cache information block is stored in the input line card 21 in the form of a linked list.
  • When the third cache block is a second cache block, the third cache block is directly allocated to each packet.
  • The first switching module is configured to allocate the third cache block to each packet according to the allocation request message and send an allocation response message to the input line card 21, where the allocation response message includes: the identifier of the third cache block.
  • The input line card 21 is further configured to allocate each packet to the third cache block according to the allocation response message.
  • The input line card 21 is specifically configured to:
  • establish the first cache information block according to an unoccupied second cache block; or
  • establish the first cache information block according to the identifier of the third cache module.
  • The input line card is configured to allocate a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card; the foregoing describes how distributed caching of packets is implemented.
  • The input line card 21 is further configured to send a packet queue status to the output line card 22.
  • The output line card 22 is configured to determine a schedulable packet size according to the packet queue status and send the schedulable packet size to the input line card 21.
  • The input line card 21 is further configured to schedule, according to the schedulable packet size, the packets of the at least one fourth cache block corresponding to each cache information block, in the order of the at least one cache information block.
  • When all packets in the corresponding fourth cache blocks have been scheduled, the input line card 21 is further configured to release each cache information block, and when the at least one fourth cache block includes a first cache block, the input line card 21 is further configured to send a release request message to the fourth cache module where the included first cache block is located.
  • The fourth cache module is configured to release the included first cache block and publish information about the first cache blocks available in the fourth cache module.
  • Since packets are stored in the corresponding fourth cache blocks in the order of the cache information blocks, the input line card of the router also performs packet scheduling in the order of the cache information blocks, to ensure the reliability of packet scheduling.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application provides a packet processing method and a router. The method is applied to a router that includes: an input line card, an output line card, at least one switching module connecting the input line card and the output line card, and a first cache module connected to the switching module; the first cache module includes at least one first cache block, the input line card includes at least one second cache module, and each second cache module includes at least one second cache block. The method includes: the input line card receives at least one packet; acquires information about the first cache block available in a third cache module, where the third cache module is a first cache module that includes an available first cache block; allocates a third cache block to each of the at least one packet according to at least one cache information block stored in the input line card and the information about the available first cache block; and caches each packet into the third cache block. This method enables distributed caching of packets.

Description

Packet processing method and router
This application claims priority to Chinese Patent Application No. 201610633110.0, filed with the Chinese Patent Office on August 4, 2016 and entitled "Packet processing method and router", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This application relates to packet processing technologies, and in particular, to a packet processing method and a router.
BACKGROUND
Internet applications are permeating every aspect of society and having an enormous impact. With the emergence of new applications such as 4K/8K video, Virtual Reality (VR)/Augmented Reality (AR), and telemedicine, Internet traffic will keep growing. On the Internet, the router is the core device for packet forwarding.
To tolerate bursts of data and avoid packet loss during congestion, the line cards of traditional routers carry a certain amount of packet buffer. Most routers still size this buffer according to the well-known rule of thumb: assuming the router line-card bandwidth is B and the end-to-end round-trip time (RTT) of a data flow typically reaches 200 milliseconds, the buffer required by the line card is B*RTT. As traffic grows, router capacity and bandwidth requirements keep increasing, so the buffer requirement of a line card grows accordingly and becomes one of the router's design bottlenecks. For example, if a line card's processing bandwidth is 100 Gbps, it needs a 2.5 GB buffer, which current memory technology easily satisfies; at 1 Tbps it needs a 25 GB buffer, which may require current memory-stacking technology; and when future line cards process 10 Tbps, they will need a 250 GB buffer, which memory processes can hardly reach in the short term. The shortage of router line-card buffer therefore urgently needs to be solved.
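As a quick sanity check of the B*RTT rule of thumb, here is a minimal sketch in Python; the figures are exactly the ones cited in the paragraph above, and the function name is illustrative:

    # B*RTT buffer sizing from the paragraph above: buffer = bandwidth * RTT.
    RTT_SECONDS = 0.2  # the typical 200 ms round-trip time cited in the text

    def required_buffer_gb(bandwidth_gbps, rtt_s=RTT_SECONDS):
        """Required line-card buffer in gigabytes for a bandwidth in Gbps."""
        bits = bandwidth_gbps * 1e9 * rtt_s  # B * RTT, in bits
        return bits / 8 / 1e9                # bits -> bytes -> gigabytes

    for bw_gbps in (100, 1_000, 10_000):     # 100 Gbps, 1 Tbps, 10 Tbps
        print(f"{bw_gbps:>6} Gbps -> {required_buffer_gb(bw_gbps):6.1f} GB")
    # Prints 2.5 GB, 25.0 GB and 250.0 GB, matching the examples above.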
FIG. 1 is a schematic diagram of a router provided in the prior art. As shown in FIG. 1, a typical router today includes: an input line card 11, an output line card 12, one switching module 13, and a cache module 14 connected to the switching module 13, where the input line card also includes a cache module. Based on this, the prior-art packet processing procedure is: when the input line card 11 detects that the depth of a queue of received packets exceeds a preset first watermark, it rewrites the destination address of the packets in the queue, and of subsequent packets of that queue, to the address of the cache module 14 corresponding to the switching module 13; when those packets are later sent to the cache module 14, their destination address is rewritten to the address of the output line card 12; and when the input line card 11 detects that the queue depth is below a third watermark and the depth of the corresponding queue in the cache module 14 of the switching module 13 is below a second watermark, the destination address of the queued packets in the input line card 11, and of subsequent packets of the queue, is rewritten to the address of the output line card 12, so that packets from the input line card 11 are sent through the switching module 13 directly to the output line card 12.
The prior-art router includes one switching module and one cache module corresponding to it, but a single cache module may fall far short of the buffering demand, so expanding the router's cache modules is an inevitable trend. However, when a router includes multiple switching modules, each corresponding to one cache module, the prior art provides no scheme for determining which cache module a packet should be buffered in. Based on the above router structure, how to perform distributed caching of packets is therefore the technical problem this application urgently needs to solve.
SUMMARY
This application provides a packet processing method and a router, so that distributed buffering of packets can be implemented effectively, improving the flexibility of packet processing.
According to a first aspect, this application provides a packet processing method. The method is applied to a router, where the router includes: an input line card, an output line card, at least one switching module connecting the input line card and the output line card, and a first cache module connected to the switching module; the first cache module includes: at least one first cache block, the input line card includes: at least one second cache module, and each second cache module includes: at least one second cache block. The method includes:
the input line card receives at least one packet;
the input line card acquires information about the first cache block available in a third cache module, where the third cache module is a first cache module that includes an available first cache block;
the input line card allocates a third cache block to each of the at least one packet according to at least one cache information block stored in the input line card and the information about the available first cache block, where the third cache block is a first cache block or a second cache block, each cache information block corresponds to at least one fourth cache block, each of the at least one fourth cache block is a first cache block or a second cache block, and each cache information block is used to indicate the occupancy of each fourth cache block;
the input line card caches each packet into the third cache block.
With this packet processing method, the router can implement distributed buffering of packets: the router can cache a packet into a first cache block or into a second cache block, which expands the router's cache while improving the flexibility of the router's caching.
Optionally, that the input line card allocates a third cache block to each of the at least one packet according to the at least one cache information block stored in the input line card and the information about the available first cache block includes:
the input line card determines, according to the at least one cache information block, whether each packet can be stored in a fourth cache block;
if yes, the input line card selects a fourth cache block as the third cache block and assigns each packet to the third cache block;
otherwise, the input line card establishes a first cache information block, treats the at least one cache information block plus the first cache information block as the new at least one cache information block, and allocates a third cache block to each packet according to the new at least one cache information block and the information about the available first cache block.
This method effectively determines the third cache block allocated to each packet.
Optionally, each cache information block includes: the occupied size of each fourth cache block and indication information indicating whether each fourth cache block is a first cache block; when the indication information indicates that a fourth cache block is a first cache block, the cache information block further includes: the identifier of the first cache module where that fourth cache block is located;
the at least one cache information block is stored in the input line card in the form of a linked list;
correspondingly, that the input line card determines, according to the at least one cache information block, whether each packet can be stored in a fourth cache block includes:
the input line card determines whether the sum of the occupied size of a fifth cache block and the size of each packet is smaller than the size of the last cache block, where the fifth cache block is the last fourth cache block corresponding to the last cache information block in the at least one cache information block.
With this method, the input line card can determine whether each packet can be stored in a fourth cache block.
Optionally, that the input line card assigns each packet to the third cache block includes:
when the input line card determines that the third cache block is a second cache block, the input line card directly allocates the third cache block to the packet;
when the input line card determines that the third cache block is a first cache block, the input line card sends an allocation request message to the first switching module corresponding to the third cache block, where the allocation request message is used to request the first switching module to allocate the third cache block to each packet;
the first switching module allocates the third cache block to each packet according to the allocation request message;
the first switching module sends an allocation response message to the input line card, where the allocation response message includes: the identifier of the third cache block;
the input line card assigns each packet to the third cache block according to the allocation response message.
Optionally, that the input line card establishes a first cache information block includes:
the input line card determines the number of second cache blocks among all fourth cache blocks corresponding to the at least one cache information block;
if the number is smaller than or equal to a first preset value, the input line card establishes the first cache information block according to an unoccupied second cache block;
otherwise, the input line card establishes the first cache information block according to the information about the available first cache block.
Optionally, the information about the available first cache block includes: the identifier of the third cache module and the number of available first cache blocks included in the third cache module;
correspondingly, that the input line card establishes the first cache information block according to the information about the available first cache block includes:
the input line card determines a second switching module according to the size of each packet, the identifier of the third cache module, and the number of available first cache blocks included in the third cache module;
the input line card sends a cache information block setup request message to the second switching module, where the setup request message is used to request the available first cache block;
the second switching module allocates the available first cache block to the input line card according to the cache information block setup request message;
the second switching module sends a cache information block setup response message to the input line card, where the setup response message includes: the identifier of the third cache module;
the input line card establishes the first cache information block according to the identifier of the third cache module.
Optionally, after the input line card caches each packet into the third cache block, the method further includes:
the input line card sends a packet queue status to the output line card;
the output line card determines a schedulable packet size according to the packet queue status and sends the schedulable packet size to the input line card;
the input line card schedules packets according to the schedulable packet size, in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
Because packets are stored in the corresponding fourth cache blocks in the order of the cache information blocks, the input line card likewise performs packet scheduling in the order of the cache information blocks, to guarantee the reliability of packet scheduling.
Optionally, the method further includes:
when all packets in the at least one fourth cache block corresponding to a cache information block have been scheduled, the input line card releases that cache information block, and when the at least one fourth cache block includes a first cache block, the input line card sends a release request message to the fourth cache module where the included first cache block is located;
the fourth cache module releases the included first cache block and publishes information about the first cache blocks available in the fourth cache module.
The following describes a router provided by the embodiments of the invention. The router part corresponds to the method above, with the same content and technical effects, and details are not repeated here.
According to a second aspect, this application provides a router, including: an input line card, an output line card, at least one switching module connecting the input line card and the output line card, and a first cache module connected to the switching module, where the first cache module includes: at least one first cache block, the input line card includes: at least one second cache module, and each second cache module includes: at least one second cache block;
the input line card is configured to:
receive at least one packet;
acquire information about the first cache block available in a third cache module, where the third cache module is a first cache module that includes an available first cache block;
allocate a third cache block to each of the at least one packet according to at least one cache information block stored in the input line card and the information about the available first cache block, where the third cache block is a first cache block or a second cache block, each cache information block corresponds to at least one fourth cache block, each of the at least one fourth cache block is a first cache block or a second cache block, and each cache information block is used to indicate the occupancy of each fourth cache block;
cache each packet into the third cache block.
Optionally, the input line card is specifically configured to:
determine, according to the at least one cache information block, whether each packet can be stored in a fourth cache block;
if yes, select a fourth cache block as the third cache block and assign each packet to the third cache block;
otherwise, establish a first cache information block, treat the at least one cache information block plus the first cache information block as the new at least one cache information block, and allocate the third cache block to each packet according to the new at least one cache information block and the information about the available first cache block.
Optionally, each cache information block includes: the occupied size of each fourth cache block and indication information indicating whether each fourth cache block is a first cache block; when the indication information indicates that a fourth cache block is a first cache block, the cache information block further includes: the identifier of the first cache module where that fourth cache block is located;
the at least one cache information block is stored in the input line card in the form of a linked list;
correspondingly, the input line card is specifically configured to:
determine whether the sum of the occupied size of a fifth cache block and the size of each packet is smaller than the size of the last cache block, where the fifth cache block is the last fourth cache block corresponding to the last cache information block in the at least one cache information block.
Optionally, the input line card is specifically configured to:
when determining that the third cache block is a second cache block, directly allocate the third cache block to each packet;
when determining that the third cache block is a first cache block, send an allocation request message to the first switching module corresponding to the third cache block, where the allocation request message is used to request the first switching module to allocate the third cache block to each packet;
the first switching module is configured to allocate the third cache block to each packet according to the allocation request message and send an allocation response message to the input line card, where the allocation response message includes: the identifier of the third cache block;
the input line card is further configured to assign each packet to the third cache block according to the allocation response message.
Optionally, the input line card is specifically configured to:
determine the number of second cache blocks among all fourth cache blocks corresponding to the at least one cache information block;
if the number is smaller than or equal to a first preset value, establish the first cache information block according to an unoccupied second cache block;
otherwise, establish the first cache information block according to the information about the available first cache block.
Optionally, the information about the available first cache block includes: the identifier of the third cache module and the number of available first cache blocks included in the third cache module;
correspondingly, the input line card is specifically configured to:
determine a second switching module according to the size of each packet, the identifier of the third cache module, and the number of available first cache blocks included in the third cache module;
send a cache information block setup request message to the second switching module, where the setup request message is used to request the available first cache block;
the second switching module is configured to allocate the available first cache block to the input line card according to the cache information block setup request message and send a cache information block setup response message to the input line card, where the setup response message includes: the identifier of the third cache module;
the input line card is further configured to establish the first cache information block according to the identifier of the third cache module.
Optionally, the input line card is further configured to send a packet queue status to the output line card;
the output line card is configured to determine a schedulable packet size according to the packet queue status and send the schedulable packet size to the input line card;
the input line card is further configured to schedule packets according to the schedulable packet size, in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
Optionally, when all packets in the at least one fourth cache block corresponding to a cache information block have been scheduled, the input line card is further configured to release that cache information block, and when the at least one fourth cache block includes a first cache block, the input line card is further configured to send a release request message to the fourth cache module where the included first cache block is located;
the fourth cache module is configured to release the included first cache block and publish information about the first cache blocks available in the fourth cache module.
This application provides a packet processing method and a router, where the input line card in the router allocates a third cache block to each of the at least one packet according to the at least one cache information block stored in the input line card and the information about the available first cache block; that is, it determines a third cache block for each packet and caches the packet into the corresponding third cache block. With this packet processing method, the router can implement distributed caching of packets: the router can cache a packet into a first cache block or into a second cache block, which expands the router's cache while improving the flexibility of the router's caching.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a schematic diagram of a router provided in the prior art;
FIG. 2 is a schematic diagram of a router according to an embodiment of this application;
FIG. 3 is a flowchart of a packet processing method according to an embodiment of this application;
FIG. 4A is a schematic diagram of a logical cache information block according to an embodiment of this application;
FIG. 4B is a schematic diagram of a physical cache information block according to an embodiment of this application;
FIG. 5 is a flowchart of a packet processing method according to another embodiment of this application;
FIG. 6 is a schematic diagram of at least one cache information block according to an embodiment of this application.
DETAILED DESCRIPTION
This application aims to solve the problem of how to perform distributed caching of packets. To solve this problem, this application provides a packet processing method and a router. The packet processing method provided by this application mainly covers three aspects: first, publication of cache-block information; second, distributed packet caching; and third, packet scheduling.
The packet processing method is applied to a router. Specifically, FIG. 2 is a schematic diagram of a router according to an embodiment of this application. As shown in FIG. 2, the router includes: an input line card 21, an output line card 22, at least one switching module 23 connecting the input line card 21 and the output line card 22, and a first cache module 24 connected to the switching module 23; the first cache module 24 includes: at least one first cache block, the input line card 21 includes: at least one second cache module, and each second cache module includes: at least one second cache block.
It should be noted that the router includes at least one input line card 21 and at least one output line card 22, and each input line card 21 and each output line card 22 may be connected through at least one switching module 23. As shown in FIG. 2, the switching modules 23 correspond one-to-one with the first cache modules 24; in fact, the correspondence is not limited thereto. For example, each switching module 23 may also correspond to multiple first cache modules 24.
Based on this router architecture, this application provides a packet processing method whose application scenario is that the router receives packets transmitted by another device and intends to forward them to a further device. FIG. 3 is a flowchart of a packet processing method according to an embodiment of this application. As shown in FIG. 3, the method includes the following steps.
Step S301: the input line card receives at least one packet.
Step S302: the input line card acquires information about the first cache block available in a third cache module.
Here, a third cache module is a first cache module that includes an available first cache block; that is, every first cache module that includes an available first cache block is referred to as a third cache module. Further, the input line card can acquire the information about the first cache block available in a third cache module in two specific ways. In one, the third cache module broadcasts the information about its own available first cache block, that is, the input line card acquires the information passively. In the other, the input line card sends a request message to all first cache modules, requesting the information about the third cache modules' available first cache blocks, and then receives that information from the third cache modules, that is, the input line card acquires the information actively. In either acquisition mode, the third cache module sends the information about the available first cache block according to the router's architecture. For example, when only one switching module exists between the third cache module and the input line card, the third cache module sends the information to the input line card through that switching module; when multiple switching modules exist between the third cache module and the input line card, the third cache module first sends the information to the switching module connected to it, and that switching module then sends the information to the input line card through the other switching modules.
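The two acquisition modes can be sketched as follows; this is a hedged illustration only, not the patent's wire format, and module_id, free_blocks, and query_free_blocks are invented names:

    # Passive mode: third cache modules broadcast their free-block info and
    # the input line card simply collects it. Active mode: the line card
    # polls every first cache module. Both produce a map of
    # {cache module id: number of available first cache blocks}.

    def collect_broadcasts(broadcasts):
        return {b["module_id"]: b["free_blocks"] for b in broadcasts}

    def poll_modules(modules):
        # `modules` is any iterable of objects exposing the assumed attributes.
        return {m.module_id: m.query_free_blocks() for m in modules}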
Step S303: the input line card allocates a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card.
Here, the third cache block is a first cache block or a second cache block; that is, whichever first or second cache block is finally allocated to a packet is referred to as the third cache block.
Each of the above cache information blocks corresponds to at least one fourth cache block; that is, every first or second cache block that a cache information block corresponds to is referred to as a fourth cache block. Each of the at least one fourth cache block is a first cache block or a second cache block, and each cache information block is used to indicate the occupancy of each fourth cache block. Specifically, cache information blocks may be divided into logical cache information blocks and physical cache information blocks; a logical cache information block is a cache information block that includes at least one physical cache information block. FIG. 4A is a schematic diagram of a logical cache information block according to an embodiment of this application. As shown in FIG. 4A, the logical cache information block includes N physical cache information blocks, where N is a positive integer greater than or equal to 1, and each physical cache information block records the sequence number of the next physical cache information block. FIG. 4B is a schematic diagram of a physical cache information block according to an embodiment of this application. As shown in FIG. 4B, a physical cache information block includes: the occupied size of the fourth cache block corresponding to the physical cache information block, and indication information indicating whether that fourth cache block is a first cache block; when the indication information indicates that the fourth cache block is a first cache block, the physical cache information block further includes: the identifier of the first cache module where the fourth cache block is located. It is worth mentioning that when a cache information block is a logical cache information block, it may correspond to at least one fourth cache block, and when a cache information block is a physical cache information block, it corresponds to one fourth cache block.
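The layout just described can be sketched as plain data structures; this is a minimal sketch, and the field names are illustrative rather than taken from the patent:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PhysicalCIB:
        """One physical cache information block: tracks one fourth cache block."""
        occupied_bytes: int                    # occupied size of the fourth cache block
        is_first_cache_block: bool             # True if the block is in a first cache module
        cache_module_id: Optional[int] = None  # set only when is_first_cache_block is True
        next_seq: Optional[int] = None         # sequence number of the next physical CIB

    @dataclass
    class LogicalCIB:
        """A logical cache information block: N >= 1 physical CIBs."""
        physical: List[PhysicalCIB] = field(default_factory=list)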
From the definition of the cache information block above, it follows that the input line card allocating a third cache block to each of the at least one packet according to the at least one cache information block and the available-first-cache-block information stored in the input line card means: the input line card allocates a third cache block to each packet according to the occupancy of the cache blocks already in use and the information about the available first cache block.
Each time a packet has been allocated its third cache block, the information in the at least one cache information block is updated, and the information about the available first cache block is updated as well. Therefore, when the next packet is to be allocated its third cache block, the input line card allocates it according to the updated at least one cache information block and the updated available-first-cache-block information stored in the input line card.
Step S304: the input line card caches each packet into the third cache block.
Steps S303 and S304 constitute the second aspect above: distributed packet caching.
In this application, the input line card in the router allocates a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card; that is, it determines a third cache block for each packet and caches the packet into the corresponding third cache block. With this packet processing method, the router can implement distributed caching of packets: the router can cache a packet into a first cache block or into a second cache block, which expands the router's cache while improving the flexibility of the router's caching.
On the basis of the foregoing embodiment, the steps above are further refined below. Specifically, FIG. 5 is a flowchart of a packet processing method according to another embodiment of this application. As shown in FIG. 5, the method includes the following steps.
Step S501: the input line card receives at least one packet.
Step S502: the input line card acquires information about the first cache block available in the third cache module.
Step S501 is the same as step S301, and step S502 is the same as step S302; details are not repeated here.
Step S503: the input line card determines, according to the at least one cache information block, whether each packet can be stored in a fourth cache block; if yes, step S504 is performed; otherwise, step S505 is performed.
Each cache information block includes: the occupied size of each fourth cache block and indication information indicating whether each fourth cache block is a first cache block; when the indication information indicates that a fourth cache block is a first cache block, the cache information block further includes: the identifier of the first cache module where that fourth cache block is located. The at least one cache information block is stored in the input line card in the form of a linked list. FIG. 6 is a schematic diagram of at least one cache information block according to an embodiment of this application; as shown in FIG. 6, only three cache information blocks are drawn, and each may be a logical cache information block or a physical cache information block.
Because the at least one cache information block is stored in the input line card as a linked list, packets are stored in the fourth cache blocks corresponding to the cache information blocks in the order of the cache information blocks: only when the fourth cache blocks corresponding to the first cache information block are full are packets stored under the second cache information block. On this basis, the input line card determines whether the sum of the occupied size of the fifth cache block and the size of each packet is smaller than the size of the last cache block, where the fifth cache block is the last fourth cache block corresponding to the last cache information block in the at least one cache information block. When the sum of the occupied size of the fifth cache block and the size of the packet is smaller than the size of the last cache block, the fifth cache block can still cache the packet; otherwise, the space of the fifth cache block is insufficient.
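Reusing the data-structure sketch above, the check in step S503 reduces to a single comparison; the fixed block size is an assumed constant, not a value from the patent:

    BLOCK_SIZE = 4096  # every cache block has a fixed size; 4 KB is assumed here

    def fits_in_fifth_block(cibs, packet_len):
        """Can the packet still go into the last fourth cache block (the fifth block)?"""
        if not cibs or not cibs[-1].physical:
            return False
        fifth = cibs[-1].physical[-1]  # last fourth block of the last cache information block
        return fifth.occupied_bytes + packet_len < BLOCK_SIZE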
Step S504: the input line card selects a fourth cache block as the third cache block and allocates each packet to the third cache block.
The selected fourth cache block is the fifth cache block described above; the fifth cache block is allocated, as the third cache block, to the corresponding packet.
Further, that the input line card allocates each packet to the third cache block includes the following.
When the input line card determines that the third cache block is a second cache block, the input line card directly allocates the third cache block to the packet; that is, when the input line card determines that the third cache block is a local cache block in the input line card, it allocates the third cache block to the packet directly.
When the input line card determines that the third cache block is a first cache block, the input line card sends an allocation request message to the first switching module corresponding to the third cache block, where the allocation request message is used to request the first switching module to allocate the third cache block to each packet; the first switching module allocates the third cache block to each packet according to the allocation request message; the first switching module sends an allocation response message to the input line card, where the allocation response message includes: the identifier of the third cache block; and the input line card allocates each packet to the third cache block according to the allocation response message. In other words, when the input line card determines that the third cache block is a non-local cache block, it needs to interact with the first switching module corresponding to the third cache block to determine the identifier of the third cache block and assign the packet to the third cache block.
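Continuing the sketch, step S504's local-versus-remote split might look like the following; the switching-module handshake is reduced to a stub, and nothing here is the patent's actual message format:

    class SwitchModuleStub:
        """Stand-in for the first switching module's allocation handshake."""
        def __init__(self):
            self._next_block_id = 0

        def allocation_request(self, cache_module_id, packet_len):
            # Allocation request in, allocation response (block identifier) out.
            self._next_block_id += 1
            return self._next_block_id

    def assign_to_third_block(fifth, packet_len, switch):
        if not fifth.is_first_cache_block:
            block_id = None  # local second cache block: assigned directly, no messaging
        else:
            # remote first cache block: the owning switch module returns the identifier
            block_id = switch.allocation_request(fifth.cache_module_id, packet_len)
        fifth.occupied_bytes += packet_len  # the CIB now reflects the cached packet
        return block_id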
Step S505: the input line card establishes a first cache information block and treats the at least one cache information block plus the first cache information block as the new at least one cache information block; the input line card then allocates a third cache block to each packet according to the new at least one cache information block and the information about the available first cache block.
When the input line card determines, from the at least one cache information block, that a packet cannot be stored in a fourth cache block, the input line card needs to establish a first cache information block. The establishment procedure is: the input line card determines the number of second cache blocks among all fourth cache blocks corresponding to the at least one cache information block; if that number is smaller than or equal to a first preset value, the input line card establishes the first cache information block according to an unoccupied second cache block; otherwise, the input line card establishes the first cache information block according to the information about the available first cache block, that is, the input line card negotiates with a switching module to establish it. Because the input line cards are independent of one another, multiple input line cards may negotiate with the same switching module at the same time; resources may run short and negotiation may fail. After a failure, the switching module sends a negotiation-failure message to the failed input line card so that it reselects a switching module for negotiation, and when the number of failures for the same input line card exceeds a preset value, that input line card drops the packet.
The first cache information block established according to an unoccupied second cache block includes: the occupied size of the second cache block and indication information indicating that the second cache block is not a first cache block.
The information about the available first cache block includes: the identifier of the third cache module and the number of available first cache blocks included in the third cache module.
Correspondingly, that the input line card establishes the first cache information block according to the information about the available first cache block, that is, the negotiation process, includes: the input line card determines a second switching module according to the size of each packet, the identifier of the third cache module, and the number of available first cache blocks included in the third cache module; the input line card sends a cache information block setup request message to the second switching module, where the setup request message is used to request the available first cache block; the second switching module allocates the available first cache block to the input line card according to the setup request message; the second switching module sends a cache information block setup response message to the input line card, where the setup response message includes: the identifier of the third cache module; and the input line card establishes the first cache information block according to the identifier of the third cache module.
In this application, the size of every cache block is fixed. On that basis, the input line card calculates the available space of each third cache module from the number of its available first cache blocks and the size of a cache block. When the size of the packet is smaller than the available space of a third cache module, the cache information block setup request message may be sent to the second switching module corresponding to that third cache module, and the second switching module allocates the available first cache block. The first cache information block established by the input line card according to the identifier of the third cache module includes: the occupied size of the available first cache block, indication information indicating that the available first cache block is a first cache block, and the identifier of the third cache module.
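Step S505 can then be sketched as a preference for local blocks up to a threshold, with a fallback to negotiation; the threshold value and the negotiate callable are assumptions made for illustration:

    FIRST_PRESET_VALUE = 8  # the "first preset value" from the text; the number is assumed

    def establish_cib(cibs, negotiate):
        """Build a new first cache information block for the linked list."""
        local_blocks = sum(1 for cib in cibs
                           for p in cib.physical if not p.is_first_cache_block)
        if local_blocks <= FIRST_PRESET_VALUE:
            # few local (second) blocks in use: take an unoccupied local block
            return PhysicalCIB(occupied_bytes=0, is_first_cache_block=False)
        # otherwise negotiate with a switching module; `negotiate` returns the
        # identifier of the third cache module granted in the setup response
        module_id = negotiate()
        return PhysicalCIB(occupied_bytes=0, is_first_cache_block=True,
                           cache_module_id=module_id)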
This application has described in detail how the input line card allocates a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card, and how the input line card allocates a third cache block to each packet; the foregoing implements distributed caching of packets.
On the basis of the foregoing embodiments, the third aspect, packet scheduling, is described in detail below. The packet scheduling method is performed after step S304, S504, or S505. Specifically, FIG. 6 is a flowchart of a packet processing method according to yet another embodiment of this application. As shown in FIG. 6, the method includes the following steps.
Step S601: the input line card sends a packet queue status to the output line card.
The packet queue status includes: the size of the packets currently held in each packet queue and, optionally, the priority of each packet queue, and so on.
Step S602: the output line card determines the schedulable packet size according to the packet queue status and sends the schedulable packet size to the input line card.
The output line card determines, according to the configuration of the router, the packet volume that can be transmitted at a time, determines the schedulable packet size according to the size of the packets currently held in the packet queues and the priority of each packet queue, and sends the schedulable packet size to the input line card.
Step S603: the input line card schedules packets according to the schedulable packet size, in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
For example, if the input line card determines that the schedulable packet size is 500 bytes, the input line card schedules the corresponding packets in the order of the at least one cache information block such that the total size of the scheduled packets is smaller than or equal to 500 bytes.
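Step S603 then amounts to an in-order walk with a byte budget. A sketch follows; the mapping from blocks to packet sizes is an invented bookkeeping structure, not part of the patent:

    def schedule(cibs, packet_sizes, budget=500):
        """Emit packets in CIB order until the schedulable size would be exceeded.
        `packet_sizes` maps id(PhysicalCIB) -> list of packet sizes held there."""
        scheduled, total = [], 0
        for cib in cibs:                 # cache information blocks in linked-list order
            for block in cib.physical:   # fourth cache blocks in order
                for size in packet_sizes.get(id(block), []):
                    if total + size > budget:
                        return scheduled  # the 500-byte budget would be exceeded
                    scheduled.append(size)
                    total += size
        return scheduled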
Optionally, when all packets in the at least one fourth cache block corresponding to a cache information block have been scheduled, the input line card releases that cache information block, and when the at least one fourth cache block includes a first cache block, the input line card sends a release request message to the fourth cache module where the included first cache block is located; the fourth cache module releases the included first cache block and publishes information about the first cache blocks available in the fourth cache module.
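Finally, the release step can be sketched as follows; the send_release hook stands in for the release request message and is an assumption:

    def release_cib(cib, send_release):
        """After every packet under this CIB is scheduled, free it; remote first
        cache blocks are released via a message to their fourth cache module,
        which then re-publishes its available-block information."""
        for p in cib.physical:
            if p.is_first_cache_block:
                send_release(p.cache_module_id)
        cib.physical.clear()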
This application mainly provides a packet scheduling method: because packets are stored in the corresponding fourth cache blocks in the order of the cache information blocks, the input line card likewise performs packet scheduling in the order of the cache information blocks, to guarantee the reliability of packet scheduling.
This application further provides a router. As shown in FIG. 2, the router includes: an input line card 21, an output line card 22, at least one switching module 23 connecting the input line card 21 and the output line card 22, and a first cache module 24 connected to the switching module 23; the first cache module 24 includes: at least one first cache block, the input line card 21 includes: at least one second cache module, and each second cache module includes: at least one second cache block.
The input line card 21 is configured to:
receive at least one packet;
acquire information about the first cache block available in a third cache module, where the third cache module is a first cache module 24 that includes an available first cache block;
allocate a third cache block to each of the at least one packet according to at least one cache information block stored in the input line card 21 and the information about the available first cache block, where the third cache block is a first cache block or a second cache block, each cache information block corresponds to at least one fourth cache block, each of the at least one fourth cache block is a first cache block or a second cache block, and each cache information block is used to indicate the occupancy of each fourth cache block;
cache each packet into the third cache block.
The router provided in this embodiment can be used to execute the technical solution of the packet processing method corresponding to FIG. 3; its implementation principles and technical effects are similar and are not repeated here.
Optionally, the input line card 21 is specifically configured to:
determine, according to the at least one cache information block, whether each packet can be stored in a fourth cache block;
if yes, select a fourth cache block as the third cache block and assign each packet to the third cache block;
otherwise, establish a first cache information block, treat the at least one cache information block plus the first cache information block as the new at least one cache information block, and allocate the third cache block to each packet according to the new at least one cache information block and the information about the available first cache block.
Optionally, each cache information block includes: the occupied size of each fourth cache block and indication information indicating whether each fourth cache block is a first cache block; when the indication information indicates that a fourth cache block is a first cache block, the cache information block further includes: the identifier of the first cache module 24 where that fourth cache block is located;
the at least one cache information block is stored in the input line card 21 in the form of a linked list;
correspondingly, the input line card 21 is specifically configured to:
determine whether the sum of the occupied size of a fifth cache block and the size of each packet is smaller than the size of the last cache block, where the fifth cache block is the last fourth cache block corresponding to the last cache information block in the at least one cache information block.
Optionally, the input line card 21 is specifically configured to:
when determining that the third cache block is a second cache block, directly allocate the third cache block to each packet;
when determining that the third cache block is a first cache block, send an allocation request message to the first switching module corresponding to the third cache block, where the allocation request message is used to request the first switching module to allocate the third cache block to each packet;
the first switching module is configured to allocate the third cache block to each packet according to the allocation request message and send an allocation response message to the input line card 21, where the allocation response message includes: the identifier of the third cache block;
the input line card 21 is further configured to assign each packet to the third cache block according to the allocation response message.
Optionally, the input line card 21 is specifically configured to:
determine the number of second cache blocks among all fourth cache blocks corresponding to the at least one cache information block;
if the number is smaller than or equal to a first preset value, establish the first cache information block according to an unoccupied second cache block;
otherwise, establish the first cache information block according to the information about the available first cache block.
Optionally, the information about the available first cache block includes: the identifier of the third cache module and the number of available first cache blocks included in the third cache module;
correspondingly, the input line card 21 is specifically configured to:
determine a second switching module according to the size of each packet, the identifier of the third cache module, and the number of available first cache blocks included in the third cache module;
send a cache information block setup request message to the second switching module, where the setup request message is used to request the available first cache block;
the second switching module is configured to allocate the available first cache block to the input line card 21 according to the cache information block setup request message and send a cache information block setup response message to the input line card 21, where the setup response message includes: the identifier of the third cache module;
the input line card 21 is further configured to establish the first cache information block according to the identifier of the third cache module.
This application has described in detail how the input line card allocates a third cache block to each of the at least one packet according to the at least one cache information block and the information about the available first cache block stored in the input line card, and how the input line card allocates a third cache block to each packet; the foregoing implements distributed caching of packets.
Optionally, the input line card 21 is further configured to send a packet queue status to the output line card 22;
the output line card 22 is configured to determine a schedulable packet size according to the packet queue status and send the schedulable packet size to the input line card 21;
the input line card 21 is further configured to schedule packets according to the schedulable packet size, in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
Optionally, when all packets in the at least one fourth cache block corresponding to a cache information block have been scheduled, the input line card 21 is further configured to release that cache information block, and when the at least one fourth cache block includes a first cache block, the input line card 21 is further configured to send a release request message to the fourth cache module where the included first cache block is located;
the fourth cache module is configured to release the included first cache block and publish information about the first cache blocks available in the fourth cache module.
In this application, because packets are stored in the corresponding fourth cache blocks in the order of the cache information blocks, the router's input line card likewise performs packet scheduling in the order of the cache information blocks, to guarantee the reliability of packet scheduling.
The foregoing describes merely specific implementations of the present invention, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (16)

  1. A packet processing method, wherein the method is applied to a router, the router comprising: an input line card, an output line card, at least one switching module connecting the input line card and the output line card, and a first cache module connected to the switching module, wherein the first cache module comprises: at least one first cache block, the input line card comprises: at least one second cache module, and each second cache module comprises: at least one second cache block; the method comprising:
    receiving, by the input line card, at least one packet;
    acquiring, by the input line card, information about a first cache block available in a third cache module, wherein the third cache module is a first cache module comprising an available first cache block;
    allocating, by the input line card, a third cache block to each packet of the at least one packet according to at least one cache information block stored in the input line card and the information about the available first cache block, wherein the third cache block is a first cache block or a second cache block, each cache information block corresponds to at least one fourth cache block, each fourth cache block of the at least one fourth cache block is a first cache block or a second cache block, and each cache information block is used to indicate occupancy of each fourth cache block; and
    caching, by the input line card, each packet into the third cache block.
  2. The method according to claim 1, wherein the allocating, by the input line card, a third cache block to each packet of the at least one packet according to the at least one cache information block stored in the input line card and the information about the available first cache block comprises:
    determining, by the input line card according to the at least one cache information block, whether each packet can be stored in a fourth cache block;
    if yes, selecting, by the input line card, a fourth cache block as the third cache block, and assigning each packet to the third cache block;
    otherwise, establishing, by the input line card, a first cache information block, treating the at least one cache information block and the first cache information block as a new at least one cache information block, and allocating, by the input line card, the third cache block to each packet according to the new at least one cache information block and the information about the available first cache block.
  3. The method according to claim 2, wherein each cache information block comprises: an occupied size of each fourth cache block and indication information used to indicate whether each fourth cache block is a first cache block; and when the indication information indicates that each fourth cache block is the first cache block, each cache information block further comprises: an identifier of the first cache module where each fourth cache block is located;
    wherein the at least one cache information block is stored in the input line card in the form of a linked list;
    correspondingly, the determining, by the input line card according to the at least one cache information block, whether each packet can be stored in a fourth cache block comprises:
    determining, by the input line card, whether a sum of an occupied size of a fifth cache block and a size of each packet is smaller than a size of the last cache block, wherein the fifth cache block is the last fourth cache block corresponding to the last cache information block in the at least one cache information block.
  4. The method according to claim 2 or 3, wherein the assigning, by the input line card, each packet to the third cache block comprises:
    when the input line card determines that the third cache block is a second cache block, directly allocating, by the input line card, the third cache block to each packet;
    when the input line card determines that the third cache block is a first cache block, sending, by the input line card, an allocation request message to a first switching module corresponding to the third cache block, wherein the allocation request message is used to request the first switching module to allocate the third cache block to each packet;
    allocating, by the first switching module, the third cache block to each packet according to the allocation request message;
    sending, by the first switching module, an allocation response message to the input line card, wherein the allocation response message comprises: the identifier of the third cache block; and
    assigning, by the input line card, each packet to the third cache block according to the allocation response message.
  5. The method according to any one of claims 2 to 4, wherein the establishing, by the input line card, a first cache information block comprises:
    determining, by the input line card, the number of second cache blocks among all fourth cache blocks corresponding to the at least one cache information block;
    if the number is smaller than or equal to a first preset value, establishing, by the input line card, the first cache information block according to an unoccupied second cache block;
    otherwise, establishing, by the input line card, the first cache information block according to the information about the available first cache block.
  6. The method according to claim 5, wherein the information about the available first cache block comprises: an identifier of the third cache module and the number of available first cache blocks comprised in the third cache module;
    correspondingly, the establishing, by the input line card, the first cache information block according to the information about the available first cache block comprises:
    determining, by the input line card, a second switching module according to the size of each packet, the identifier of the third cache module, and the number of available first cache blocks comprised in the third cache module;
    sending, by the input line card, a cache information block setup request message to the second switching module, wherein the cache information block setup request message is used to request the available first cache block;
    allocating, by the second switching module, the available first cache block to the input line card according to the cache information block setup request message;
    sending, by the second switching module, a cache information block setup response message to the input line card, wherein the cache information block setup response message comprises: the identifier of the third cache module; and
    establishing, by the input line card, the first cache information block according to the identifier of the third cache module.
  7. The method according to any one of claims 1 to 6, wherein after the caching, by the input line card, each packet into the third cache block, the method further comprises:
    sending, by the input line card, a packet queue status to the output line card;
    determining, by the output line card, a schedulable packet size according to the packet queue status, and sending the schedulable packet size to the input line card; and
    scheduling, by the input line card according to the schedulable packet size, packets in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
  8. The method according to any one of claims 1 to 7, further comprising:
    when all packets in the at least one fourth cache block corresponding to each cache information block have been scheduled, releasing, by the input line card, each cache information block, and when the at least one fourth cache block comprises a first cache block, sending, by the input line card, a release request message to a fourth cache module where the comprised first cache block is located; and
    releasing, by the fourth cache module, the comprised first cache block, and publishing information about the first cache block available in the fourth cache module.
  9. A router, comprising: an input line card, an output line card, at least one switching module connecting the input line card and the output line card, and a first cache module connected to the switching module, wherein the first cache module comprises: at least one first cache block, the input line card comprises: at least one second cache module, and each second cache module comprises: at least one second cache block;
    the input line card is configured to:
    receive at least one packet;
    acquire information about a first cache block available in a third cache module, wherein the third cache module is a first cache module comprising an available first cache block;
    allocate a third cache block to each packet of the at least one packet according to at least one cache information block stored in the input line card and the information about the available first cache block, wherein the third cache block is a first cache block or a second cache block, each cache information block corresponds to at least one fourth cache block, each fourth cache block of the at least one fourth cache block is a first cache block or a second cache block, and each cache information block is used to indicate occupancy of each fourth cache block; and
    cache each packet into the third cache block.
  10. The router according to claim 9, wherein the input line card is specifically configured to:
    determine, according to the at least one cache information block, whether each packet can be stored in a fourth cache block;
    if yes, select a fourth cache block as the third cache block, and assign each packet to the third cache block;
    otherwise, establish a first cache information block, treat the at least one cache information block and the first cache information block as a new at least one cache information block, and allocate the third cache block to each packet according to the new at least one cache information block and the information about the available first cache block.
  11. The router according to claim 10, wherein each cache information block comprises: an occupied size of each fourth cache block and indication information used to indicate whether each fourth cache block is a first cache block; and when the indication information indicates that each fourth cache block is the first cache block, each cache information block further comprises: an identifier of the first cache module where each fourth cache block is located;
    wherein the at least one cache information block is stored in the input line card in the form of a linked list;
    correspondingly, the input line card is specifically configured to:
    determine whether a sum of an occupied size of a fifth cache block and a size of each packet is smaller than a size of the last cache block, wherein the fifth cache block is the last fourth cache block corresponding to the last cache information block in the at least one cache information block.
  12. The router according to claim 10 or 11, wherein the input line card is specifically configured to:
    when determining that the third cache block is a second cache block, directly allocate the third cache block to each packet;
    when determining that the third cache block is a first cache block, send an allocation request message to a first switching module corresponding to the third cache block, wherein the allocation request message is used to request the first switching module to allocate the third cache block to each packet;
    the first switching module is configured to allocate the third cache block to each packet according to the allocation request message and send an allocation response message to the input line card, wherein the allocation response message comprises: the identifier of the third cache block; and the input line card is further configured to assign each packet to the third cache block according to the allocation response message.
  13. The router according to any one of claims 10 to 12, wherein the input line card is specifically configured to:
    determine the number of second cache blocks among all fourth cache blocks corresponding to the at least one cache information block;
    if the number is smaller than or equal to a first preset value, establish the first cache information block according to an unoccupied second cache block;
    otherwise, establish the first cache information block according to the information about the available first cache block.
  14. The router according to claim 13, wherein the information about the available first cache block comprises: an identifier of the third cache module and the number of available first cache blocks comprised in the third cache module;
    correspondingly, the input line card is specifically configured to:
    determine a second switching module according to the size of each packet, the identifier of the third cache module, and the number of available first cache blocks comprised in the third cache module;
    send a cache information block setup request message to the second switching module, wherein the cache information block setup request message is used to request the available first cache block;
    the second switching module is configured to allocate the available first cache block to the input line card according to the cache information block setup request message and send a cache information block setup response message to the input line card, wherein the cache information block setup response message comprises: the identifier of the third cache module;
    the input line card is further configured to establish the first cache information block according to the identifier of the third cache module.
  15. The router according to any one of claims 9 to 14, wherein the input line card is further configured to send a packet queue status to the output line card;
    the output line card is configured to determine a schedulable packet size according to the packet queue status and send the schedulable packet size to the input line card; and
    the input line card is further configured to schedule packets according to the schedulable packet size, in the order of the at least one cache information block and of the at least one fourth cache block corresponding to each cache information block.
  16. The router according to any one of claims 9 to 15, wherein,
    when all packets in the at least one fourth cache block corresponding to each cache information block have been scheduled, the input line card is further configured to release each cache information block, and when the at least one fourth cache block comprises a first cache block, the input line card is further configured to send a release request message to a fourth cache module where the comprised first cache block is located; and
    the fourth cache module is configured to release the comprised first cache block and publish information about the first cache block available in the fourth cache module.
PCT/CN2017/095165 2016-08-04 2017-07-31 Packet processing method and router WO2018024173A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP17836355.2A EP3487132B1 (en) 2016-08-04 2017-07-31 Packet processing method and router
US16/264,309 US10911364B2 (en) 2016-08-04 2019-01-31 Packet processing method and router

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610633110.0A 2016-08-04 2016-08-04 Packet processing method and router
CN201610633110.0 2016-08-04

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/264,309 Continuation US10911364B2 (en) 2016-08-04 2019-01-31 Packet processing method and router

Publications (1)

Publication Number Publication Date
WO2018024173A1 (zh)

Family ID: 61072703

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/095165 WO2018024173A1 (zh) Packet processing method and router

Country Status (4)

Country Link
US (1) US10911364B2 (zh)
EP (1) EP3487132B1 (zh)
CN (1) CN107689923B (zh)
WO (1) WO2018024173A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108833301A (zh) * 2018-05-30 2018-11-16 杭州迪普科技股份有限公司 Packet processing method and apparatus
US11018973B2 (en) * 2019-05-31 2021-05-25 Microsoft Technology Licensing, Llc Distributed sonic fabric chassis
CN110927349B (zh) * 2019-12-27 2022-04-01 中央储备粮三明直属库有限公司 Lora-based granary gas monitoring method
CN111371704B (zh) * 2020-02-06 2024-03-15 视联动力信息技术股份有限公司 Data caching method and apparatus, terminal device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299721A (zh) * 2008-06-19 2008-11-05 杭州华三通信技术有限公司 Packet switching method for a switching network, switching apparatus, routing line card, and Ethernet line card
CN101594299A (zh) * 2009-05-20 2009-12-02 清华大学 Linked-list-based queue buffer management method in a switching network
US20100238941A1 (en) * 2009-03-19 2010-09-23 Fujitsu Limited Packet transmission apparatus, line interface unit, and control method for packet transmission apparatus
CN102006226A (zh) * 2010-11-19 2011-04-06 福建星网锐捷网络有限公司 Packet buffer management method and apparatus, and network device
CN102739536A (zh) * 2012-06-26 2012-10-17 华为技术有限公司 Packet buffering method and router
US8804751B1 (en) * 2005-10-04 2014-08-12 Force10 Networks, Inc. FIFO buffer with multiple stream packet segmentation

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000072530A2 (en) * 1999-05-21 2000-11-30 Avici Systems Fabric router with flit caching
US7102999B1 (en) * 1999-11-24 2006-09-05 Juniper Networks, Inc. Switching device
US7565496B2 (en) * 2005-01-22 2009-07-21 Cisco Technology, Inc. Sharing memory among multiple information channels
US7839779B2 (en) * 2005-05-16 2010-11-23 Cisco Technology, Inc. Queue aware flow control
US8135024B2 (en) * 2005-11-14 2012-03-13 Corning Incorporated Method and system to reduce interconnect latency
US8149710B2 (en) * 2007-07-05 2012-04-03 Cisco Technology, Inc. Flexible and hierarchical dynamic buffer allocation
CN101252536B (zh) * 2008-03-31 2010-06-02 清华大学 Router multi-queue packet buffer management and output queue scheduling system
CN101272345B (zh) * 2008-04-29 2010-08-25 杭州华三通信技术有限公司 Flow control method, system, and apparatus
CN101304383B (zh) * 2008-07-07 2010-10-27 杭州华三通信技术有限公司 Packet switching method and switching system for a switching network
US8670454B2 (en) * 2009-03-26 2014-03-11 Oracle America, Inc. Dynamic assignment of data to switch-ingress buffers
US9363173B2 (en) * 2010-10-28 2016-06-07 Compass Electro Optical Systems Ltd. Router and switch architecture
US9008113B2 (en) * 2010-12-20 2015-04-14 Solarflare Communications, Inc. Mapped FIFO buffering


Also Published As

Publication number Publication date
US20190166058A1 (en) 2019-05-30
CN107689923B (zh) 2021-02-12
CN107689923A (zh) 2018-02-13
EP3487132A4 (en) 2019-07-03
EP3487132B1 (en) 2020-11-04
EP3487132A1 (en) 2019-05-22
US10911364B2 (en) 2021-02-02

Similar Documents

Publication Publication Date Title
KR102239717B1 (ko) Packet processing method and apparatus
US9479384B2 (en) Data stream scheduling method, device, and system
WO2018024173A1 (zh) Packet processing method and router
US11785113B2 (en) Client service transmission method and apparatus
US8553708B2 (en) Bandwith allocation method and routing device
US9122439B2 (en) System and method for efficient buffer management for banked shared memory designs
TWI550411B (zh) Dynamic queue threshold limiting method, and switch and system implementing dynamic queue threshold limiting
KR102410422B1 (ko) Distributed processing in a network
CN101299721B (zh) Packet switching method and switching apparatus for a switching network
TW200926860A (en) Method for providing a buffer status report in a mobile communication network
WO2016078341A1 (zh) Cache allocation method and apparatus, and network processor
CN105391567A (zh) Traffic management implementation method and apparatus, and network device
KR20080075308A (ko) Packet buffer management apparatus and method in an IP network system
US20150058485A1 (en) Flow scheduling device and method
JP2014241493A (ja) Transmission apparatus, transmission method, and program
US8838782B2 (en) Network protocol processing system and network protocol processing method
KR20140125274A (ко) Dynamic queue management method and apparatus in a broadcast system
JP2017526244A (ja) Method and apparatus for transmitting and receiving information in a multimedia system
US10078607B2 (en) Buffer management method and apparatus for universal serial bus communication in wireless environment
JP4957660B2 (ja) Communication apparatus in a label switching network
WO2014075525A1 (zh) Packet forwarding method and apparatus
WO2015180426A1 (zh) Data transmission method, apparatus, and system
WO2019165855A1 (zh) Packet transmission method and apparatus
JP2008060700A (ja) Buffer control apparatus and buffer control method
JP5621588B2 (ja) Communication apparatus, relay apparatus, and network system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17836355

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017836355

Country of ref document: EP

Effective date: 20190213