WO2023116340A1 - Method and apparatus for forwarding data packets - Google Patents

Method and apparatus for forwarding data packets (一种数据报文的转发方法及装置)

Info

Publication number
WO2023116340A1
WO2023116340A1 (PCT/CN2022/134231; CN2022134231W)
Authority
WO
WIPO (PCT)
Prior art keywords
message
queue
storage space
data message
type
Prior art date
Application number
PCT/CN2022/134231
Other languages
English (en)
French (fr)
Inventor
林俊杰 (Lin Junjie)
Original Assignee
锐捷网络股份有限公司 (Ruijie Networks Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 锐捷网络股份有限公司 (Ruijie Networks Co., Ltd.)
Priority to US 18/317,969 (published as US20230283578A1)
Publication of WO2023116340A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/39: Credit based
    • H04L 47/50: Queue scheduling
    • H04L 47/56: Queue scheduling implementing delay-aware scheduling
    • H04L 49/00: Packet switching elements
    • H04L 49/90: Buffering arrangements
    • H04L 49/9084: Reactions to storage capacity overflow

Definitions

  • The embodiments of the present invention relate to the technical field of communications, and in particular to a data packet forwarding method and device.
  • Data packets are forwarded through a switch or router system: received data packets are cached by the cache management mechanism in the switch or router system, and the cached data packets are forwarded once certain conditions are met.
  • Each message queue corresponds to a block storage space, and a data packet stored in a message queue is stored in the corresponding block storage space. When the block storage space is full, or no further complete data packet can fit, the data packets stored in that block storage space are forwarded. Since each data packet belongs to a corresponding flow, when the flow is small it takes longer to accumulate enough data to fill the block storage space, so the data packets already stored in the block wait longer to be sent. In summary, current switch or router systems introduce a relatively large transmission delay when forwarding data packets.
  • Embodiments of the present application provide a method and device for forwarding data packets, so as to reduce transmission delay.
  • the embodiment of the present application provides a method for forwarding a data message, the method including:
  • Monitor the storage state of the message queue corresponding to the currently received target data packet; the message queue is used to store data packets consistent with the address information accessed by the target data packet;
  • when it is determined that the storage state is the first state, the target data packet is stored in the first type of block storage space corresponding to the low-delay queue, where the first state is used to indicate that the total flow of data packets stored in the message queue is less than the flow threshold;
  • the data packets stored in the first type of block storage space are forwarded.
  • The message queue corresponding to the target data packet is determined, and the storage state corresponding to the message queue is monitored.
  • When the storage state is determined to be the first state, it indicates that the total flow of data packets currently stored in the message queue corresponding to the target data packet is relatively small. If the target data packet were stored in that message queue, then, since the message queue only stores data packets consistent with the address information accessed by the target data packet, a certain waiting time would be required before the total flow of data packets meets the forwarding condition, delaying the transmission of the data packets.
  • Therefore, the target data packet is stored in the first type of block storage space corresponding to the low-delay queue. Because the low-delay queue stores data packets whose accessed address information is consistent and/or inconsistent with that of the target data packet, the total flow of stored data packets can quickly meet the forwarding condition; when the condition is met, the data packets stored in the first type of block storage space are forwarded, reducing the transmission delay of data packets.
  • monitoring the storage status of the message queue corresponding to the currently received target data message includes:
  • the remaining queue length of the message queue is monitored, and based on the remaining queue length, the storage state of the message queue corresponding to the target data message is determined.
  • Two implementations of monitoring the storage state of the message queue are provided, so that the storage state can be determined accurately and, based on it, the block storage space in which the target data packet will be stored can be selected, reducing the transmission delay of data packets.
  • monitoring the target credit point of the message queue includes:
  • For the message queue, determine the first credit point assigned to the message queue, and, when forwarding a data packet corresponding to the message queue, determine the second credit point to deduct based on the total flow of the forwarded data packets;
  • based on the first credit point and the second credit point, the target credit point is determined.
  • the embodiment of the present application provides an implementation method of monitoring the target credit point of the message queue, so as to determine the target credit point of the message queue, and further determine the storage status of the message queue based on the target credit point.
  • When the first condition is met, forwarding the data packets stored in the first type of block storage space includes: sending the first BD (Buffer Description) information corresponding to the first type of block storage space.
  • This provides the specific situation of forwarding the data packets stored in the first type of block storage space; the data packets are forwarded promptly, reducing their transmission delay.
  • When it is determined that the storage state is the second state, the target data packet is stored in the second type of block storage space corresponding to its message queue, where the second state is used to indicate that the total flow of data packets stored in the message queue is greater than or equal to the flow threshold.
  • When the second condition is met, the second BD information corresponding to the second type of block storage space is sent, so that the data packets stored in the second type of block storage space are forwarded.
  • When the storage state of the message queue is determined to be the second state, it means that the total flow of data packets currently stored in the message queue corresponding to the target data packet is larger than the expected scheduling flow. The target data packet is therefore stored and cached in the second type of block storage space corresponding to its message queue; when the second condition is met, the second BD information corresponding to the second type of block storage space is sent, and the scheduling management module then decides whether to forward the data packets stored in the second type of block storage space.
  • the second BD information corresponding to the second type of block storage space is sent to forward the data message stored in the second type of block storage space, including:
  • the second BD information corresponding to the second type of block storage space is sent to forward the data packets stored in the second type of block storage space ;
  • This provides the specific situation of forwarding the data packets stored in the second type of block storage space; the data packets are forwarded promptly, reducing their transmission delay.
  • The first type of block storage space is not larger than the second type of block storage space.
  • Because the size of the first-type block storage space corresponding to the low-delay queue is not larger than the size of the second-type block storage space corresponding to the message queue, the first-type block storage space can be filled quickly, reducing the transmission delay of data packets.
  • the embodiment of the present application provides a device for forwarding a data message, the device comprising:
  • the monitoring unit is used to monitor the storage state of the message queue corresponding to the currently received target data message; the message queue is used to store the data message consistent with the address information accessed by the target data message;
  • the storage unit is configured to store the target data packet in the first type of block storage space corresponding to the low-delay queue when the storage state is determined to be the first state, where the first state is used to indicate that the total flow of data packets stored in the message queue is less than the flow threshold;
  • the forwarding unit is configured to forward the data packets stored in the first type of block storage space when the first condition is met.
  • The embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, performs the steps of any of the above data packet forwarding methods.
  • The embodiment of the present application provides a computer-readable storage medium, which stores a computer program executable by an electronic device; when the program runs on the electronic device, the electronic device performs the steps of any of the above data packet forwarding methods.
  • The embodiment of the present application provides a computer storage medium storing computer instructions; when the computer instructions run on a computer, the computer performs the steps of any of the data packet forwarding methods described above.
  • FIG. 1 is a schematic diagram of a data packet forwarding system framework in the related art.
  • FIG. 2 is a schematic diagram of cache initialization in the related art.
  • FIG. 3 is a schematic diagram of enqueuing BD information of a message queue in the related art.
  • FIG. 4 is a schematic diagram of an application scenario provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a switch or router system provided by an embodiment of the present application.
  • FIG. 6 is a flow chart of a method for forwarding a data packet provided in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of monitoring storage status based on target credit points provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a message queue bundled storage provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of off-chip access to a message queue bundled storage provided by an embodiment of the present application.
  • FIG. 10 is a flow chart of another data message forwarding method provided by the embodiment of the present application.
  • FIG. 11 is a schematic diagram of a distributed storage of message queues provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of off-chip access to distributed storage of message queues provided by an embodiment of the present application.
  • FIG. 13 is a flow chart of a specific implementation of a data packet forwarding method provided by the embodiment of the present application.
  • FIG. 14 is a structural diagram of a data packet forwarding device provided by an embodiment of the present application.
  • FIG. 15 is a structural diagram of an electronic device provided by an embodiment of the present application.
  • Queue storage generally refers to storing data packets that carry specific address information in the same message queue: for example, data packets with the same Media Access Control (MAC) address are stored in the same message queue, data packets with the same Internet Protocol (IP) address are stored in the same message queue, or data packets with the same destination port are stored in the same message queue.
  • MAC: Media Access Control
  • IP: Internet Protocol
  • When a normal data packet enters the switch or router system, it is classified based on the specific address information it carries; after classification, the data packet is stored in the message queue corresponding to its category, which realizes the mapping relationship between data packets and message queues.
  • A low-end switch or router system supports the fewest message queues, within the K (thousand) level; a high-end switch or router system supports more message queues than a low-end one, above the K level; in particular, the switch or router systems in operator core equipment support even more message queues than high-end switches or routers, up to the M (mega) level.
  • Data packet: the data unit exchanged and transmitted in the network, containing the complete data information to be sent; its length varies and is not fixed.
  • Buffer management: a storage unit is set in the switch or router system for message queue storage. When storing a data packet, a storage pointer must be requested, and the data packet is stored in the block storage space indicated by that pointer. When forwarding the data packet, the storage space is addressed according to the storage pointer information corresponding to the message queue, the stored data packet is read out, and the pointer is recycled for subsequent use.
  • Cache management mainly includes storage space division, pointer allocation, pointer recycling, etc.
  • Scheduling management: data packets enter the switch or router system and are stored in message queues. Data packets in the same message queue must be forwarded on a first-in-first-out basis; otherwise forwarding would be out of order.
  • the pointers of the block storage space corresponding to the same message queue are formed into a linked list, and when the data message corresponding to the message queue is read, it is read according to the linked list.
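The per-queue pointer chain described above can be sketched in Python (an illustrative sketch; the class and method names are assumptions, not taken from the patent):

```python
from collections import deque

class MessageQueue:
    """Chain of block-storage pointers for one queue (a FIFO linked list)."""
    def __init__(self):
        self.bd_chain = deque()      # pointers kept in arrival order

    def link_block(self, bd_pointer):
        self.bd_chain.append(bd_pointer)   # link a new block at the tail

    def read_blocks(self):
        # Read in first-in-first-out order, as required for in-order forwarding.
        while self.bd_chain:
            yield self.bd_chain.popleft()

q = MessageQueue()
for ptr in (0x10, 0x20, 0x30):
    q.link_block(ptr)
order = list(q.read_blocks())        # blocks come back in storage order
```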
  • the dispatch management module determines the BD information that needs to be dispatched out of the queue at present, and then the buffer management module reads the data message according to the BD information dispatched out of the queue.
  • Buffer Description (BD) information: used to store information such as the packet storage pointer, packet count, and total flow of the current message queue. BD information is divided into first BD information and second BD information, where the first BD information corresponds to the low-delay queue when data packets are stored in a bundled manner, and the second BD information corresponds to a message queue when data packets are stored distributively.
  • the embodiment of the present application relates to the technical field of communication, and in particular to a method for forwarding data packets in a switch or router system.
  • When the switch or router system forwards a data packet, the data packet is received through the port module and sent to the cache management module, where it is cached and stored by queue: data packets with the same MAC address are stored in the block storage space corresponding to the same message queue, and data packets with the same IP address are stored in the block storage space corresponding to the same message queue.
  • The cache management module generally divides the system memory into blocks; each block storage space has a fixed size (for example, 16 KB) and corresponds to a BD pointer, which is used to record the data packets in the block storage space and to address store and read operations.
  • FIG. 1 exemplarily provides a schematic diagram of a framework of a data packet forwarding system in the related art.
  • When the system is initialized, the cache management module initializes all block storage spaces and stores the corresponding pointers in the pointer pool.
  • The pointers in the pointer pool are the BD pointer information available for queue storage; please refer to FIG. 2.
  • FIG. 2 exemplarily provides a schematic diagram of cache initialization.
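The cache initialization shown in FIG. 2 amounts to carving memory into fixed-size blocks and seeding a free-pointer pool. A minimal sketch, assuming illustrative sizes (256 KB of memory, 16 KB blocks; neither number is taken from the patent):

```python
MEMORY_SIZE = 256 * 1024   # illustrative total memory
BLOCK_SIZE = 16 * 1024     # fixed block size from the example above

def init_pointer_pool(memory_size=MEMORY_SIZE, block_size=BLOCK_SIZE):
    # One pointer per fixed-size block; all start in the free pool.
    return [addr for addr in range(0, memory_size, block_size)]

pool = init_pointer_pool()
ptr = pool.pop(0)          # allocation: take a pointer for a queue to use
pool.append(ptr)           # recycling: return the pointer after forwarding
```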
  • FIG. 3 provides a schematic diagram of enqueuing BD information of a message queue in the related art.
  • The data packet forwarding method described above has the following problem: the block storage space allocated to each message queue must be filled to a fixed size, or be unable to store a further complete packet, before the corresponding BD information is sent to the scheduling management module, so the forwarding of data packets incurs a certain delay. Moreover, when the flow of data packets is small, filling a block storage space takes longer; that is, the longer data packets wait to be forwarded, the greater the forwarding delay.
  • In a related approach, an on-chip cache is set up inside the cache management module, with blocks smaller than the off-chip ones, for example 16 KB off-chip and 512 B on-chip, so that data packet forwarding delay can be reduced through the on-chip cache.
  • However, this method consumes additional on-chip cache space and increases the resource requirements of the chip or system.
  • the embodiments of the present application provide a data message forwarding method and device, so as to reduce the transmission delay of the data message.
  • A multi-queue low-delay cache management mechanism is set up: packet data corresponding to multiple message queues is bundled and stored in a low-delay queue. Using this mechanism in the cache management module of a switch or router system can effectively improve the system's data packet forwarding performance and low-latency characteristics, meet the storage and forwarding requirements of high-performance data packets, effectively improve the bandwidth utilization of the off-chip cache, and reduce resource consumption.
  • FIG. 4 exemplarily provides an application scenario in the embodiment of the present application, which includes a first terminal device 40, a switch 41, and a second terminal device 42; the first terminal device 40 and the second terminal device 42 can exchange information by transmitting data packets through the switch 41.
  • In the embodiment of the present application, after a data packet is received, the storage state of the message queue corresponding to the data packet is determined first; then, based on the storage state, it is decided whether to store the data packet in the low-delay queue or in its corresponding message queue.
  • the buffer management module in the switch or router system provided by the embodiment of the present application is provided with a low-delay queue and a message queue.
  • FIG. 5 exemplarily provides a schematic diagram of the architecture of a switch or router system 500 in the embodiment of the present application, including: a port module 501, a cache management module 502, and a scheduling management module 503; wherein:
  • a port module 501 configured to receive and send data packets
  • The cache management module 502 is used to manage the storage memory of the switch or router system, store data packets, and form the BD information of the storage queues. Two kinds of storage queue are set inside the cache management module 502: the low-delay queue and the message queue. The BD information of the low-delay queue is called the first BD information, and the BD information of a message queue is called the second BD information. The low-delay queue is used to store the data packets received while their message queues are in the first state, including data packets with consistent access address information and/or data packets with inconsistent access address information; a message queue is used to store data packets consistent with the address information accessed by the received target data packet.
  • The scheduling management module 503 is configured to enqueue the second BD information currently corresponding to a message queue and, according to the first-in-first-out principle, select the earliest-enqueued second BD information in the second BD information linked list corresponding to the message queue and dispatch it out of the queue. Credit monitoring and state monitoring are set in the scheduling management module 503 to monitor the target credit point of each message queue in real time and, based on the target credit point, switch the storage state of the message queue.
  • The embodiment of the present application provides a method for forwarding data packets; please refer to FIG. 6, which exemplarily provides a flow chart of the method in the embodiment of the present application, including the following steps:
  • Step S600: monitor the storage state of the message queue corresponding to the currently received target data packet.
  • the message queue is used for storing the data message consistent with the address information accessed by the target data message.
  • At least one of the target credit point and the remaining queue length corresponding to the message queue is used to monitor the storage status of the message queue corresponding to the currently received target data message.
  • Method 1 Based on the target credit point, monitor the storage status of the message queue.
  • the dispatch management module in the switch or routing management system monitors the target credit point of the message queue, and based on the target credit point, determines the storage status of the message queue corresponding to the target data message.
  • The target credit point is determined based on the first credit point assigned to the message queue and the second credit point deducted, based on the total flow of the forwarded data packets, when forwarding data packets corresponding to the message queue. The target credit point is used to represent the forwarding rate of the forwarded data packets.
  • FIG. 7 exemplarily provides a schematic diagram of monitoring storage status based on target credit points in an embodiment of the present application; wherein credit monitoring and storage status monitoring functions are set to determine the storage status of the message queue. Since the method of monitoring the storage state for each message queue is the same, only one message queue is used as an example for illustration in this embodiment of the present application.
  • Specifically, the first credit point is periodically assigned to each message queue; when data packets corresponding to the message queue are forwarded, credit is deducted based on the total flow of the forwarded data packets, giving the deducted second credit point. The target credit point is determined from the allocated first credit point and the deducted second credit point, and the storage state of the message queue is then determined based on the target credit point.
  • a credit point threshold is set, and the target credit point is compared with the credit point threshold.
  • When the target credit point is greater than the credit point threshold, it indicates that the message queue forwards data packets relatively slowly; that is, the total flow of currently stored data packets is small, so the storage state is the first state.
  • the target credit point is less than the credit point threshold, it indicates that the packet queue forwards data packets faster, that is to say, the total flow of currently stored data packets is larger, so the storage state is the second state.
  • For example, the credit point threshold is set to 0, and the first credit point is allocated every 1 millisecond at a rate of 100 Mbps, so the first credit allocated per millisecond is 0.1 Mb. If the forwarding rate of data packets reaches 1000 Mbps, about 1 Mb of second credit is deducted every millisecond; the deducted second credit exceeds the allocated first credit, so the target credit point is quickly driven negative, and at that point the storage state of the message queue is determined to be the second state.
  • If the forwarding rate of data packets is less than 100 Mbps, the second credit deducted every millisecond is less than 0.1 Mb, and the target credit point of the message queue never becomes negative, so the message queue is determined to be in the first state.
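The credit mechanism of Method 1, including the 100 Mbps / 1 ms figures from the example above, can be sketched as follows (an illustrative sketch; class and variable names are assumptions, not the patent's implementation):

```python
FIRST_STATE = "first"    # low traffic: store into the low-delay queue
SECOND_STATE = "second"  # high traffic: store into the queue's own blocks

class CreditMonitor:
    def __init__(self, alloc_rate_mbps=100.0, period_ms=1.0, threshold=0.0):
        # First credit granted per period, in Mb (100 Mbps over 1 ms = 0.1 Mb).
        self.alloc_per_period = alloc_rate_mbps * period_ms / 1000.0
        self.threshold = threshold
        self.credit = 0.0            # running target credit point

    def tick(self, forwarded_mb):
        """One period: grant the first credit, deduct the second credit."""
        self.credit += self.alloc_per_period - forwarded_mb
        return FIRST_STATE if self.credit > self.threshold else SECOND_STATE

m = CreditMonitor()                  # 100 Mbps allocation, 1 ms period
state_fast = None
for _ in range(3):                   # forwarding at 1000 Mbps = 1 Mb per ms
    state_fast = m.tick(forwarded_mb=1.0)

m2 = CreditMonitor()
state_slow = m2.tick(forwarded_mb=0.05)  # below 100 Mbps keeps credit positive
```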
  • Method 2 Based on the remaining queue length, monitor the storage status of the message queue.
  • the remaining queue length of the message queue is monitored by the scheduling management module in the switch or the routing management system, and based on the remaining queue length, the storage status of the message queue corresponding to the target data message is determined.
  • Specifically, a queue length threshold is set, and the remaining queue length is compared with it. When the remaining queue length is greater than the queue length threshold, the total flow of currently stored data packets is small, so the storage state is the first state; when the remaining queue length is less than or equal to the queue length threshold, the storage state is the second state.
  • Step S601: when the storage state is determined to be the first state, store the target data packet in the first type of block storage space corresponding to the low-delay queue; the first state is used to indicate that the total flow of data packets stored in the message queue is less than the flow threshold.
  • Since the switch or router system continuously receives data packets, each time a data packet is received it monitors the storage state of the message queue corresponding to the currently received target data packet before storing it.
  • Whenever a message queue is monitored to be in the first state, the corresponding data packet is stored in the low-delay queue, so the low-delay queue continually receives the data packets of queues found to be in the first state.
  • The received data packets may access the same address information or different address information, but in either case they are stored in the low-delay queue whenever their message queue is detected to be in the first state. Therefore, the low-delay queue in the embodiment of the present application is used to store data packets with consistent access address information and/or data packets with inconsistent access address information.
  • That is, target data packets are stored in the first type of block storage space corresponding to the low-delay queue.
  • Since the switch or router system continuously receives target data packets, after each target data packet is received the storage state of its message queue is monitored, and once the storage state is determined to be the first state, the target data packet is stored in the first type of block storage space corresponding to the low-delay queue.
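The enqueue decision described above (bundle into the low-delay queue in the first state, otherwise store per queue, and forward a first-type block once it is full or cannot fit a complete packet) can be sketched as follows; all names and the 16 KB size are illustrative assumptions:

```python
BLOCK_SIZE = 16 * 1024
FIRST_STATE, SECOND_STATE = "first", "second"

class LowDelayQueue:
    """Shared low-delay queue bundling packets from many message queues."""
    def __init__(self):
        self.fill = 0              # bytes in the current first-type block
        self.forwarded_blocks = 0  # blocks whose first BD info was sent

    def store(self, pkt_len):
        # Forward the block when it is full or cannot fit a complete packet.
        if self.fill + pkt_len > BLOCK_SIZE:
            self.forwarded_blocks += 1
            self.fill = 0
        self.fill += pkt_len

def enqueue(pkt_len, queue_state, low_delay, per_queue_fill):
    if queue_state == FIRST_STATE:
        low_delay.store(pkt_len)       # bundled storage (first-type block)
    else:
        per_queue_fill[0] += pkt_len   # normal per-queue storage (second-type)

ldq = LowDelayQueue()
per_q = [0]
for _ in range(20):                    # 20 packets of 1 KB in the first state
    enqueue(1024, FIRST_STATE, ldq, per_q)
```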
  • FIG. 8 exemplarily provides a schematic diagram of message queue bundled storage in the embodiment of the present application.
  • Specifically, the second BD information of the corresponding message queue is obtained by addressing with the queue number as an index; the second BD information stores information such as the packet storage pointer, packet count, and total flow of the current message queue. The traffic of the message queues corresponding to the received target data packets is then bundled into the low-delay queue; that is, the data packets of multiple message queues are bundled and stored in the first type of block storage space corresponding to the low-delay queue.
  • FIG. 9 exemplarily provides a schematic diagram of off-chip access to a message queue bundled storage in an embodiment of the present application.
  • all message queues correspond to the same storage block.
  • When a message queue stores data, it initiates a write operation to the off-chip memory; since the storage block is the same for all queues, the addresses of the write operations initiated by the message queues are contiguous.
  • For example, the storage start addresses of queues 0-3 are all the block 0 address.
  • The access addresses can therefore be merged contiguously, which helps improve the storage performance of the off-chip memory, improves the store-and-forward performance of the switch or router system, and reduces on-chip resource consumption.
  • Step S602: when the first condition is satisfied, forward the data packets stored in the first type of block storage space.
  • When the storage state is the first state, the port uses the queue number as an index to obtain the traffic of the corresponding queue that is bundled into the low-delay queue and performs an accumulation operation to determine the total traffic stored in the first type of block storage space. After entering the low-delay queue, a packet is no longer stored per message queue by queue number but is stored directly in the low-delay queue.
  • when the total traffic exceeds a fixed threshold, such as 16KB, or the remaining space cannot store a complete data packet, the first BD information corresponding to the low-delay queue is sent to the scheduling management module and enqueued; the dequeue operation is performed immediately, without going through the scheduling module's policy, so the enqueued first BD information is identical to the dequeued first BD information. After the dequeued first BD information is determined, memory is addressed based on it, and the data packets stored in the first-type block storage space are read and forwarded.
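The bypass behavior described above can be sketched as follows (a toy model of the scheduling management module; the class and method names are illustrative assumptions, not from the patent):

```python
from collections import deque

class Scheduler:
    """Toy scheduling-management module: normal second BD information goes
    through a FIFO policy, while first BD information of the low-delay
    queue bypasses the policy and is dequeued immediately."""
    def __init__(self):
        self.fifo = deque()

    def enqueue_normal(self, bd):
        self.fifo.append(bd)        # participates in FIFO scheduling

    def dequeue_normal(self):
        return self.fifo.popleft()  # oldest BD wins; may differ from last enqueued

    def enqueue_low_delay(self, bd):
        # No policy: the enqueued first BD IS the dequeued first BD.
        return bd

s = Scheduler()
s.enqueue_normal("BD_q0")
s.enqueue_normal("BD_q1")
assert s.dequeue_normal() == "BD_q0"              # FIFO: first in, first out
assert s.enqueue_low_delay("BD_low") == "BD_low"  # immediate pass-through
```

The contrast is the point: for second BD information the dequeued entry depends on the FIFO policy, while for the low-delay queue's first BD information enqueue and dequeue are the same event.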
  • the information on each target data packet and on the message queue corresponding to its queue number is also sent to the scheduling management module, so that the scheduling management module deducts the credit points of the message queue based on this information and further determines the storage state of the message queue.
  • the first BD information corresponding to the low-delay queue is regularly detected, and after determining that the first BD information is valid, the data packets stored in the first-type block storage space are forwarded.
  • the bundling operation helps improve packet forwarding latency, especially when the data packet forwarding rate is low.
  • at low packet rates, each queue would otherwise have to fill a 16KB buffer independently before its first BD information could be enqueued and forwarding could begin.
  • the bundling operation aggregates the small flows of the individual queues into one large flow, increasing the rate at which the 16KB cache block is filled. For example: with 10 message queues, each filling 1KB of storage per millisecond, it takes 16 milliseconds to fill the block before the first BD information is sent and forwarding begins; in this case the packet delay reaches 16 milliseconds. If the queues are bundled, the 10 message queues are bundled into the same low-delay queue, and the delay drops from 16 milliseconds to 1.6 milliseconds.
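The arithmetic of this example can be checked in a few lines (the constants are the ones from the text):

```python
# Worked version of the 10-queue example: each queue fills 1KB/ms and the
# block threshold is 16KB. Unbundled, each queue fills its own block;
# bundled, all ten queues fill one shared block ten times faster.
BLOCK_KB = 16
QUEUES = 10
RATE_KB_PER_MS = 1  # per-queue fill rate

delay_unbundled_ms = BLOCK_KB / RATE_KB_PER_MS            # 16.0 ms
delay_bundled_ms = BLOCK_KB / (QUEUES * RATE_KB_PER_MS)   # 1.6 ms

print(delay_unbundled_ms, delay_bundled_ms)  # 16.0 1.6
```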
  • on this basis, a timer is set for the low-delay queue (please refer to FIG. 8) to further improve the low-latency characteristic and to ensure that data packets bundled into the low-delay queue do not linger.
  • at a fixed period, the timer forcibly sends the first BD information of the low-delay queue to the scheduling management module. For example, if the timer period is set to 10 µs, then after each 10 µs period, if the first BD information of the current low-delay queue is valid, it is sent to the scheduling management module. Taking the 10 queues mentioned above as an example, the delay improves from 1.6 ms to 10 µs.
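The timer behavior can be sketched as a toy model (only the 10 µs period comes from the text; the class and method names are illustrative assumptions):

```python
# Sketch of the low-delay-queue timer: every period (e.g. 10 us) the
# first BD information is forcibly sent to scheduling management if it
# is valid, capping the worst-case wait even for tiny traffic.
TIMER_PERIOD_US = 10

class LowDelayQueue:
    def __init__(self, threshold=16 * 1024):
        self.threshold = threshold
        self.stored = 0          # bytes accumulated in the first-type block

    def bd_valid(self):
        return self.stored > 0   # something is waiting to be forwarded

    def on_timer_tick(self, scheduler_out):
        """Called every TIMER_PERIOD_US; flush a valid first BD."""
        if self.bd_valid():
            scheduler_out.append(("first_BD", self.stored))
            self.stored = 0

sent = []
q = LowDelayQueue()
q.stored = 512          # far below the 16KB threshold
q.on_timer_tick(sent)   # timer fires -> BD sent anyway
print(sent)             # [('first_BD', 512)]
```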
  • the storage state of the message queue corresponding to the target data message may also be the second state.
  • the storage state is the second state
  • the target data message is stored in the corresponding data message queue.
  • FIG. 10 exemplarily provides a flowchart of another data message forwarding method in the embodiment of the present application, including the following steps:
  • Step S1000: monitor the storage state of the message queue corresponding to the currently received target data packet.
  • for the implementation of step S1000, reference may be made to step S600, which is not repeated here.
  • Step S1001: when the storage state is determined to be the second state, store the target data packet in the second-type block storage space corresponding to the respective message queue, where the second state indicates that the total traffic of the data packets stored in the message queue is greater than or equal to the traffic threshold.
  • when the storage state of the message queue is determined to be the second state, the current data packet traffic of the message queue is relatively large, and the target data packet is therefore stored in the corresponding message queue.
  • since the switch or router system continuously receives target data packets, after each target data packet is received it monitors the storage state of the corresponding message queue, and after determining that the storage state is the second state, stores the target data packet in the corresponding message queue.
  • the data messages of multiple message queues whose storage status is the second state are stored in a distributed manner.
  • please refer to FIG. 11, which exemplarily provides a schematic diagram of decentralized storage of message queues in an embodiment of the present application.
  • the second BD information of the corresponding queue is obtained by addressing with the queue number as an index.
  • the second BD information stores information such as the message storage pointer of the current message queue, the number of messages, and the total traffic.
  • Each BD only stores data packets of one packet queue.
  • Fig. 12 exemplarily provides a schematic diagram of off-chip access of message queue decentralized storage in the embodiment of the present application.
  • each message queue storage corresponds to its own storage block.
  • when a message queue stores data, it initiates a write operation to the off-chip memory; since the storage blocks differ, the addresses of the write operations initiated by the queues cannot be contiguous. As shown in FIG. 12, the storage initiation addresses of queues 0-3 are: block 0 address, block 1 address, block 2 address, and block 3 address.
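For contrast with the bundled case, the per-queue addressing can be sketched as follows (the block size and linear address layout are illustrative assumptions):

```python
# With decentralized (per-queue) storage each queue owns its own block,
# so consecutive writes from different queues hit different blocks and
# the addresses cannot be merged into one burst.
BLOCK_SIZE = 16 * 1024

def scattered_write_address(queue_no, offset_in_block=0):
    """Assume queue N's block starts at N * BLOCK_SIZE."""
    return queue_no * BLOCK_SIZE + offset_in_block

addrs = [scattered_write_address(q) for q in range(4)]
print(addrs)  # [0, 16384, 32768, 49152] -> blocks 0,1,2,3: non-contiguous
```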
  • Step S1002: when the second condition is met, send out the second BD information corresponding to the second-type block storage space, so as to forward the data packets stored in the second-type block storage space.
  • the second BD information corresponding to the second-type block storage space is sent out to forward the data packets stored in the second-type block storage space;
  • the port uses the queue number as an index to address to obtain the second BD information of the corresponding queue.
  • the second BD information stores information such as the message storage pointer of the current message queue, the number of messages, and the total traffic.
  • each second BD stores the data packets of only one message queue; that is, the data packets of the same message queue are stored in the second-type block storage space corresponding to that second BD information. When the total traffic of the data packets stored in the second-type block storage space corresponding to a second BD exceeds a fixed threshold, for example 16KB, or when the remaining space cannot store a complete data packet, the second BD information is sent to the scheduling management module.
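The enqueue condition for a second BD can be written as a small predicate (the 16KB threshold is from the text; the function name is an illustrative assumption):

```python
# Sketch of the per-queue enqueue condition for second BD information:
# the BD is sent to scheduling management once the block crosses the
# fixed threshold or the remainder cannot hold a complete packet.
BLOCK_SIZE = 16 * 1024

def should_send_second_bd(stored_bytes, next_packet_bytes):
    block_full = stored_bytes >= BLOCK_SIZE
    packet_does_not_fit = next_packet_bytes > BLOCK_SIZE - stored_bytes
    return block_full or packet_does_not_fit

assert should_send_second_bd(16 * 1024, 64)       # threshold reached
assert should_send_second_bd(15 * 1024, 2048)     # 2KB won't fit in 1KB left
assert not should_send_second_bd(8 * 1024, 1024)  # keep accumulating
```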
  • after being enqueued, the second BD information participates in scheduling management, where policy scheduling and dequeuing are performed to determine which second BD information is scheduled out of the queue; the enqueued and dequeued second BD information may differ. Memory is addressed based on the dequeued second BD information, and the data packets stored in the second-type block storage space corresponding to the dequeued second BD information are read and forwarded.
  • when data packets arrive normally without interruption, each second-type block storage space is filled completely, and the corresponding second BD information is sent to the scheduling management module to complete the subsequent process.
  • a switch of the storage state is also used as a trigger point: when the storage state of the message queue switches from the second state to the first state, the second BD information is forcibly sent to the scheduling management module.
  • the data packets stored in the second-type block storage space are thereby forwarded,
  • so that no data packets are left behind.
  • after the switch, a bundled storage mechanism is adopted, and that mechanism in turn ensures that no bundled-storage packets remain.
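The state-switch trigger can be sketched as a simplified model (the function name and string-valued states are illustrative assumptions):

```python
# Sketch of the state-switch trigger: when a queue's storage state flips
# from the second state (heavy traffic) to the first state (light
# traffic), its pending second BD is flushed so no packets are stranded
# in a half-filled second-type block.
def on_state_change(old_state, new_state, pending_bd, scheduler_out):
    if old_state == "second" and new_state == "first" and pending_bd:
        scheduler_out.append(pending_bd)  # forced send of the second BD
        return None                       # queue now uses bundled storage
    return pending_bd

sent = []
remaining = on_state_change("second", "first", "BD_q7", sent)
assert sent == ["BD_q7"] and remaining is None
```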
  • the first-type block storage space can be set to be no larger than the second-type block storage space.
  • FIG. 13 exemplarily provides a flow chart of a data message forwarding method implemented in the embodiment of the present application, including the following steps:
  • Step S1300: receive the target data packet through the port module of the switch or router system.
  • Step S1301: identify the address information accessed by the target data packet, and determine the message queue corresponding to the target data packet based on the identified address information;
  • data packets are stored by message queue, and data packets accessing the same address information are stored in the same message queue; in the embodiment of this application, however, it is necessary to first determine whether the data packet should be stored in the corresponding message queue.
  • therefore, the message queue corresponding to the target data packet is identified first, and then the storage state of that message queue is determined.
  • Step S1302: monitor the storage state of the message queue corresponding to the currently received target data packet.
  • Step S1303: judge whether the storage state is the first state, which indicates that the total traffic of data packets is less than the traffic threshold; if so, go to step S1304, otherwise go to step S1306.
  • Step S1304: when the storage state is determined to be the first state, store the target data packet in the first-type block storage space corresponding to the low-delay queue;
  • the target data packet is stored in the first-type block storage space corresponding to the low-delay queue, that is, the target data packet is stored in the low-delay queue.
  • Step S1305: when the first condition is met, enqueue the first BD information corresponding to the first-type block storage space, and determine the first BD information to be dequeued based on scheduling management;
  • when the first BD information corresponding to the first-type block storage space is enqueued, the scheduling management module does not apply its scheduling policy but directly performs the dequeue operation on the enqueued information; that is, the enqueued first BD information is identical to the dequeued first BD information.
  • Step S1306: when the storage state is determined to be the second state, which indicates that the total traffic of data packets is greater than or equal to the traffic threshold, store the target data packet in the second-type block storage space corresponding to the respective message queue;
  • the target data message is stored in the second-type block storage space corresponding to the corresponding message queue, that is, the target data message is stored in the corresponding message queue.
  • Step S1307: when the second condition is satisfied, enqueue the second BD information corresponding to the second-type block storage space, and determine the second BD information to be dequeued based on scheduling management;
  • the scheduling management module applies a first-in-first-out scheduling policy to determine the second BD information to be dequeued;
  • therefore, the enqueued second BD information may differ from the dequeued second BD information.
  • Step S1308: address memory based on the first BD information or second BD information dequeued by scheduling management, read the corresponding data packets, and send the read data packets through the port module.
  • the first type of block storage space corresponding to the low-latency queue is divided in the off-chip cache to improve the bandwidth utilization rate of the off-chip storage and reduce the consumption of on-chip resources.
  • the message queue corresponding to the target data message is determined, and the storage state corresponding to the message queue is monitored.
  • when the storage state is determined to be the first state, the total traffic of the data packets currently stored in the message queue corresponding to the target data packet is small. Since the message queue stores only data packets whose accessed address information matches that of the target data packet, storing the target data packet in that message queue would mean the packets are not forwarded until their total traffic meets the forwarding condition, causing a delay in data packet transmission.
  • the target data message is stored in the first type of block storage space corresponding to the low-delay queue.
  • the low-delay queue stores data packets whose accessed address information may match and/or differ, so the total traffic quickly meets the forwarding condition; when the condition is met, the data packets stored in the first-type block storage space are forwarded, which effectively reduces the store-and-forward delay of data packets across multiple message queues and improves the data packet forwarding performance of multiple message queues.
  • based on the same inventive concept, a data packet forwarding apparatus is also provided in an embodiment of the present application; since it implements the above method, its implementation is not described repeatedly.
  • FIG. 14 exemplarily provides a structural diagram of a data message forwarding device in the embodiment of the present application.
  • the data packet forwarding apparatus 1400 includes a monitoring unit 1401, a storage unit 1402, and a forwarding unit 1403, wherein:
  • the monitoring unit 1401 is configured to monitor the storage state of the message queue corresponding to the currently received target data message; the message queue is used to store the data message consistent with the address information accessed by the target data message;
  • the storage unit 1402 is configured to store the target data packet in the first-type block storage space corresponding to the low-delay queue when the storage state is determined to be the first state, where the first state indicates that the total traffic of the data packets stored in the message queue is less than the traffic threshold;
  • the forwarding unit 1403 is configured to forward the data packets stored in the first type of block storage space when the first condition is met.
  • the monitoring unit 1401 is specifically configured to:
  • the remaining queue length of the message queue is monitored, and based on the remaining queue length, the storage state of the message queue corresponding to the target data message is determined.
  • the monitoring unit 1401 is specifically configured to:
  • for the message queue, determine the first credit points allocated to the message queue and, when forwarding the data packets corresponding to the message queue, determine the second credit points to deduct based on the total traffic of the forwarded data packets;
  • based on the first credit points and the second credit points, the target credit points are determined.
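The credit accounting described above can be sketched as a minimal model (the class name, threshold, and numbers are illustrative assumptions):

```python
# Credits are allocated periodically (first credit points) and deducted
# by forwarded traffic (second credit points); the running balance is
# the target credit value used to decide the storage state.
class CreditMonitor:
    def __init__(self, allocation_per_period):
        self.allocation = allocation_per_period
        self.target = 0

    def on_period(self):
        self.target += self.allocation  # first credit points

    def on_forward(self, traffic):
        self.target -= traffic          # second credit points

    def state(self, threshold=0):
        # Above threshold: slow forwarding, little stored traffic -> first state.
        return "first" if self.target > threshold else "second"

m = CreditMonitor(allocation_per_period=100)
m.on_period()
m.on_forward(40)
assert m.target == 60 and m.state() == "first"
m.on_forward(100)
assert m.target == -40 and m.state() == "second"
```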
  • the forwarding unit 1403 is specifically configured to:
  • the storage unit 1402 is also used for:
  • when the storage state is determined to be the second state, the target data packet is stored in the second-type block storage space corresponding to the respective message queue, where the second state indicates that the total traffic of the data packets stored in the message queue is greater than or equal to the traffic threshold;
  • the second BD information corresponding to the second type of block storage space is sent to forward the data packets stored in the second type of block storage space.
  • the forwarding unit 1403 is further configured to:
  • when the total amount of data packets stored in the second-type block storage space reaches a second threshold, the second BD information corresponding to the second-type block storage space is sent out to forward the data packets stored there; or
  • when the storage state of the message queue is determined to have switched from the second state to the first state, the second BD information corresponding to the second-type block storage space is sent out to forward the data packets stored there.
  • the block storage space of the first type is not larger than the block storage space of the second type.
  • for convenience of description, the above apparatus is divided by function into modules or units that are described separately; of course, the functions of the modules may be implemented in one or more pieces of software or hardware when implementing the present application.
  • an electronic device 150 is also provided in the embodiment of the present application, as shown in FIG. 15 , including at least one processor 1501 and a memory 1502 connected to the at least one processor.
  • the specific connection medium between the processor 1501 and the memory 1502 is not limited in this embodiment of the present application.
  • the processor 1501 and the memory 1502 are connected through a bus as an example.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on.
  • the memory 1502 stores instructions executable by the at least one processor 1501, and the at least one processor 1501 executes the instructions stored in the memory 1502 to perform the steps included in the aforementioned data packet forwarding method.
  • the processor 1501 is the control center of the electronic device; it can connect the various parts of the terminal device through various interfaces and lines, and performs its functions by running or executing the instructions stored in the memory 1502 and invoking the data stored in the memory 1502.
  • the processor 1501 may include one or more processing units, and the processor 1501 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like,
  • and the modem processor mainly handles wireless communication. It can be understood that the modem processor may not be integrated into the processor 1501.
  • the processor 1501 and the memory 1502 can be implemented on the same chip, and in some embodiments, they can also be implemented on independent chips.
  • the processor 1501 may be a general-purpose processor, such as a central processing unit (CPU), a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • a general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or implemented by a combination of hardware and software modules in the processor.
  • the memory 1502 as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs and modules.
  • the memory 1502 may include at least one type of storage medium, such as flash memory, hard disk, multimedia card, card-type memory, random access memory (Random Access Memory, RAM), static random access memory (Static Random Access Memory, SRAM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Magnetic Memory, Magnetic Disk, Optical Disk etc.
  • the memory 1502 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 1502 in this embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, used for storing program instructions and/or data.
  • various aspects of the data packet forwarding method provided in this application may also be implemented in the form of a program product, which includes a computer program. When the program product runs on an electronic device, the computer program causes the electronic device to execute the steps of the data packet forwarding method according to the various exemplary embodiments of the present application described above in this specification; for example, the electronic device may execute the steps shown in FIG. 5.
  • a program product may take the form of any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or any combination thereof. More specific examples (non-exhaustive list) of readable storage media include: electrical connection with one or more conductors, portable disk, hard disk, random access memory (RAM), read only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • the program product of the embodiments of the present application may take the form of a portable compact disk read only memory (CD-ROM) and include a computer program, and may run on a computing device.
  • the program product of the present application is not limited thereto.
  • a readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or in combination with a command execution system, device or device.
  • a readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, the data signal carrying a readable computer program. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • a readable signal medium may also be any readable medium, other than a readable storage medium, that can transmit, propagate, or transport a program for use by or in conjunction with a command execution system, apparatus, or device.
  • a computer program embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having a computer-usable computer program embodied therein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present application provide a data packet forwarding method and apparatus, where the method includes: monitoring the storage state of the message queue corresponding to a currently received target data packet, the message queue being used to store data packets whose accessed address information matches that of the target data packet; when the storage state is the first state, storing the target data packet in the first-type block storage space corresponding to the low-delay queue; and, when a first condition is met, forwarding the data packets stored in the first-type block storage space. When the storage state is the first state, the total traffic of the stored data packets is small; the received data packets are stored in the first-type block storage space corresponding to the low-delay queue, which stores all data packets deposited while their message queues are in the first state, so the forwarding condition is quickly satisfied and transmission delay is reduced.

Description

Data packet forwarding method and apparatus
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202111558739.0, entitled "Data packet forwarding method and apparatus" and filed with the China National Intellectual Property Administration on December 20, 2021, the entire contents of which are incorporated herein by reference.
Technical field
Embodiments of the present invention relate to the field of communication technologies, and in particular to a data packet forwarding method and apparatus.
Background
During communication, data packets are forwarded through a switch or router system: the cache management mechanism in the switch or router system buffers received data packets and forwards the buffered packets once certain conditions are met.
After receiving data packets, the switch or router system buffers them by queue. Each message queue corresponds to one block storage space; storing a data packet in a message queue means storing the packet data in the corresponding block storage space, and when the block storage space is full or can no longer hold a complete data packet, the data packets stored in that block storage space are forwarded. Since every data packet carries a certain amount of traffic, the smaller the packet traffic, the longer it takes to accumulate the amount of data the block storage space requires, and the longer the data packets already stored there wait to be sent. In summary, current switch or router systems exhibit considerable transmission delay when forwarding data packets.
Summary
Embodiments of the present application provide a data packet forwarding method and apparatus to reduce transmission delay.
In a first aspect, an embodiment of the present application provides a data packet forwarding method, including:
monitoring the storage state of the message queue corresponding to a currently received target data packet, where the message queue is used to store data packets whose accessed address information matches that of the target data packet;
when the storage state is determined to be a first state, storing the target data packet in a first-type block storage space corresponding to a low-delay queue, where the first state indicates that the total traffic of the data packets stored in the message queue is less than a traffic threshold; and
when a first condition is met, forwarding the data packets stored in the first-type block storage space.
In the embodiment of the present application, after the target data packet is received, the message queue corresponding to it is determined and the storage state of that queue is monitored. When the storage state is determined to be the first state, the total traffic of the data packets currently stored in the corresponding message queue is small; if the target data packet were stored in the message queue, then, because the message queue stores only data packets whose accessed address information matches that of the target data packet, a certain waiting time would be needed before the total traffic satisfied the forwarding condition, delaying transmission. Therefore, when the storage state of the corresponding message queue is determined to be the first state, the target data packet is stored in the first-type block storage space of the low-delay queue; since the low-delay queue stores data packets whose accessed address information may match and/or differ, the total traffic quickly satisfies the forwarding condition, and once it does, the data packets stored in the first-type block storage space are forwarded, reducing the transmission delay of data packets.
In a possible implementation, monitoring the storage state of the message queue corresponding to the currently received target data packet includes:
monitoring the target credit points of the message queue and determining, based on the target credit points, the storage state of the message queue corresponding to the target data packet, where the target credit points characterize the rate at which data packets are forwarded; or
monitoring the remaining queue length of the message queue and determining, based on the remaining queue length, the storage state of the message queue corresponding to the target data packet.
The embodiment of the present application provides two implementations for monitoring the storage state of a message queue, so that the storage state can be determined accurately and the block storage space into which the target data packet is to be stored can be chosen based on that state, reducing the transmission delay of data packets.
In a possible implementation, monitoring the target credit points of the message queue includes:
for the message queue, determining the first credit points allocated to it and, when forwarding the data packets of the message queue, determining the second credit points to deduct based on the total traffic of the forwarded data packets; and
determining the target credit points based on the first credit points and the second credit points.
The embodiment of the present application provides an implementation for monitoring the target credit points of a message queue, so that the target credit points can be determined and the storage state of the message queue further determined from them.
In a possible implementation, forwarding the data packets stored in the first-type block storage space when the first condition is met includes:
forwarding the data packets stored in the first-type block storage space when the total amount of data packets stored there reaches a first threshold; or
periodically checking the first buffer description (BD) information corresponding to the low-delay queue and, after determining that the first BD information is valid, forwarding the data packets stored in the first-type block storage space.
The embodiment of the present application thus specifies the conditions under which the data packets stored in the first-type block storage space are forwarded, so that they are forwarded in time and the transmission delay of data packets is reduced.
In a possible implementation, when the storage state is determined to be a second state, the target data packet is stored in the second-type block storage space corresponding to the respective message queue, where the second state indicates that the total traffic of the data packets stored in the message queue is greater than or equal to the traffic threshold; and
when a second condition is met, the second BD information corresponding to the second-type block storage space is sent out so that the data packets stored there are forwarded.
In the embodiment of the present application, when the storage state of the message queue is determined to be the second state, the total traffic of the data packets currently stored in the message queue corresponding to the target data packet is larger than the expected scheduling traffic; the target data packet is therefore stored in the second-type block storage space of the respective message queue and buffered, and when the second condition is met, the second BD information corresponding to the second-type block storage space is sent out, and the scheduling management module decides whether to forward the data packets stored there.
In a possible implementation, sending out the second BD information corresponding to the second-type block storage space when the second condition is met, so as to forward the data packets stored there, includes:
sending out the second BD information corresponding to the second-type block storage space when the total amount of data packets stored there reaches a second threshold; or
sending out the second BD information corresponding to the second-type block storage space when the storage state of the message queue is determined to have switched from the second state to the first state.
The embodiment of the present application thus specifies the conditions under which the data packets stored in the second-type block storage space are forwarded, so that they are forwarded in time and the transmission delay of data packets is reduced.
In a possible implementation, the first-type block storage space is no larger than the second-type block storage space.
In the embodiment of the present application, making the first-type block storage space of the low-delay queue no larger than the second-type block storage space of the message queues allows the first-type block storage space to fill quickly, reducing the transmission delay of data packets.
In a second aspect, an embodiment of the present application provides a data packet forwarding apparatus, including:
a monitoring unit configured to monitor the storage state of the message queue corresponding to a currently received target data packet, where the message queue is used to store data packets whose accessed address information matches that of the target data packet;
a storage unit configured to store the target data packet in the first-type block storage space corresponding to the low-delay queue when the storage state is determined to be the first state, where the first state indicates that the total traffic of the data packets stored in the message queue is less than the traffic threshold; and
a forwarding unit configured to forward the data packets stored in the first-type block storage space when the first condition is met.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor, when executing the computer program, implements the steps of any of the above data packet forwarding methods.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program executable by an electronic device; when the program runs on the electronic device, it causes the electronic device to execute the steps of any of the above data packet forwarding methods.
In a fifth aspect, an embodiment of the present application provides a computer storage medium storing computer instructions; when the computer instructions run on a computer, they cause the computer to execute the steps of any of the above data packet forwarding methods.
Other features and advantages of the present application will be set forth in the following description, and will in part become apparent from the description or be understood by practicing the present application. The objectives and other advantages of the present application may be realized and obtained by the structures particularly pointed out in the written description, the claims, and the accompanying drawings.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings described below are merely some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic framework diagram of a data packet forwarding system in the related art;
FIG. 2 is a schematic diagram of cache initialization in the related art;
FIG. 3 is a schematic diagram of BD information of a message queue being enqueued in the related art;
FIG. 4 is a schematic diagram of an application scenario provided by an embodiment of the present application;
FIG. 5 is a schematic architecture diagram of a switch or router system provided by an embodiment of the present application;
FIG. 6 is a flowchart of a data packet forwarding method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of monitoring the storage state based on target credit points, provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of bundled storage of message queues provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of off-chip access with bundled storage of message queues, provided by an embodiment of the present application;
FIG. 10 is a flowchart of another data packet forwarding method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of decentralized storage of message queues provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of off-chip access with decentralized storage of message queues, provided by an embodiment of the present application;
FIG. 13 is a flowchart of a specific implementation of the data packet forwarding method provided by an embodiment of the present application;
FIG. 14 is a structural diagram of a data packet forwarding apparatus provided by an embodiment of the present application;
FIG. 15 is a structural diagram of an electronic device provided by an embodiment of the present application.
Detailed description
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", and the like in the specification, the claims, and the above drawings of the present invention are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product, or device.
For ease of understanding, the terms used in the embodiments of the present application are explained below:
Queue storage: generally, data packets carrying a certain kind of specific address information are stored in the same message queue; for example, data packets with the same Media Access Control (MAC) address are stored in the same message queue, data packets with the same Internet Protocol (IP) address are stored in the same message queue, or data packets with the same destination port are stored in the same message queue.
When normal data packets enter a switch or router system, they are classified based on the specific address information they carry; after classification, each data packet is stored in the message queue corresponding to its class, establishing a mapping from data packets to message queues.
After data packets enter the switch or router system, they must be stored before being forwarded; this storage process is called queue storage. Generally, low-end switch or router systems support the smallest number of message queues, up to the K (thousand) level; high-end switch or router systems support more message queues than low-end systems, above the K level; in particular, switch or router systems in carrier core equipment support more message queues than high-end switches or routers, up to the M (million) level.
Data packet: the unit of data exchanged and transmitted in a network, containing the complete data information to be sent; its length varies widely, is unrestricted, and is variable.
Cache management: a switch or router system contains a storage unit for message queue storage. When a data packet is stored, a storage pointer is requested and the packet is placed in the block storage space indicated by the pointer; when the data packet is forwarded, the storage pointer information of the message queue is used for addressing, the data packets in the block storage space are read, and the pointer is reclaimed for subsequent reuse. Cache management mainly includes storage space partitioning, pointer allocation, and pointer reclamation.
Scheduling management: data packets entering the switch or router system are stored by message queue, and data packets in the same message queue must be forwarded on a first-in-first-out basis, otherwise forwarding becomes out of order. The pointers of the block storage spaces of the same message queue form a linked list, which is followed when reading that queue's data packets. The scheduling management module decides which BD information currently needs to be scheduled out of the queue, and the cache management module then reads the data packets according to the dequeued BD information.
Buffer description (BD) information: holds the packet storage pointer, packet count, total traffic, and other information of the current message queue. In the embodiments of the present application, a distinction is made by queue between first BD information and second BD information, where the first BD information corresponds to the low-delay queue used when data packets are stored bundled, and the second BD information corresponds to a message queue used when data packets are stored decentralized.
The design concept of the embodiments of the present application is briefly introduced below:
The embodiments of the present application relate to the field of communication technologies, and in particular to methods for forwarding data packets in a switch or router system.
In the related art, when a switch or router system forwards data packets, it receives them through the port module and passes them to the cache management module, which buffers them by queue; that is, data packets with the same MAC address are stored in the block storage space of the same message queue, and data packets with the same IP address are stored in the block storage space of the same message queue.
The cache management module generally partitions the system memory into blocks; each block storage space has a fixed size (for example, 16KB) and a corresponding BD pointer used to record the data packets in the block storage space and for store and read addressing operations. When the block storage space of a message queue is full or can no longer hold a complete data packet, the BD information of the block storage space is sent to the scheduling management module for a BD enqueue operation. After receiving the BD information, the scheduling management module performs the corresponding BD dequeue operation according to a first-in-first-out policy, reads the dequeued BD information, and passes it to the cache management module, which addresses memory according to the BD information, reads the data packets, and sends them to the port module for transmission. Please refer to FIG. 1, which exemplarily provides a schematic framework diagram of a data packet forwarding system in the related art.
At system initialization, the cache management module initializes all block storage spaces and deposits the corresponding pointers into a pointer pool; the pointers in the pool constitute the pointer BD information available for queue storage. Please refer to FIG. 2, which exemplarily provides a schematic diagram of cache initialization.
Also in the related art, after data packets enter the switch or router system, queue storage is required. When the data packets of a new message queue need to be stored, a pointer is requested from and taken out of the pointer pool. Each pointer points to a block storage space of fixed size, for example 16KB. As data packets are stored into the corresponding message queue, they are continuously written into that queue's block storage space until no more data packets can be stored, at which point the cache management module sends the BD information of the block storage space to the scheduling management module. Please refer to FIG. 3, which provides a schematic diagram of BD information of a message queue being enqueued in the related art.
The data packet forwarding method described above has the following problem: the block storage space allocated to each message queue must be filled to its fixed size, or be unable to hold another complete packet, before its BD information is sent to the scheduling management module, which causes a certain degree of delay in forwarding data packets. Moreover, the smaller the data packet traffic, the longer it takes to fill the block storage space; that is, the longer data packets wait to be forwarded and the larger the forwarding delay.
The related art offers one technical solution to this problem: an on-chip cache is set up inside the cache management module, and when data packet traffic is small, packets take the on-chip cache path; the block size in the on-chip cache is set smaller than that of the off-chip cache, for example 16KB off-chip and 512B on-chip, so the on-chip cache reduces the forwarding delay of data packets. However, this approach consumes additional on-chip cache space and increases the resource demands on the chip or system.
In view of this, embodiments of the present application provide a data packet forwarding method and apparatus to reduce the transmission delay of data packets.
In the embodiments of the present application, a multi-queue low-delay cache management mechanism is set up: the packet data of multiple message queues are bundled and stored into a low-delay queue, and the mechanism is applied to the cache management module of the switch or router system. This effectively improves the data packet forwarding performance and low-latency characteristics of the switch or router system, meets the requirements of high-performance store-and-forward of data packets, effectively increases the bandwidth utilization of the off-chip cache, and reduces resource consumption.
Having introduced the design concept of the embodiments of the present application, some application scenarios to which the technical solutions of the embodiments are applicable are briefly introduced below. It should be noted that the scenarios described below are merely illustrative and non-limiting; in specific implementations, the technical solutions provided by the embodiments of the present application may be applied flexibly according to actual needs.
Please refer to FIG. 4, which exemplarily provides an application scenario of an embodiment of the present application. The scenario includes a first terminal device 40, a switch 41, and a second terminal device 42, where the first terminal device 40 and the second terminal device 42 can transmit data packets through the switch 41 to exchange information.
With reference to the application scenario described above, the data packet forwarding method provided by the exemplary embodiments of the present application is described below with reference to the accompanying drawings. It should be noted that the above application scenario is shown only to facilitate understanding of the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect.
To reduce the transmission delay of data packets, in the embodiments of the present application: after a data packet is received, the storage state of the message queue corresponding to the data packet is determined first; then, based on the storage state, it is decided whether to store the data packet in the low-delay queue or in the corresponding message queue.
Accordingly, the cache management module in the switch or router system provided by the embodiments of the present application is provided with both a low-delay queue and message queues.
Please refer to FIG. 5, which exemplarily provides a schematic architecture diagram of a switch or router system 500 in an embodiment of the present application, including a port module 501, a cache management module 502, and a scheduling management module 503, where:
the port module 501 is used to receive and send data packets;
the cache management module 502 is used to manage the storage memory of the switch or router system, store data packets, and form the BD information of the storage queues. Two kinds of storage queues are set up inside the cache management module 502: a low-delay queue and message queues; the BD information of the low-delay queue is called first BD information, and the BD information of a message queue is called second BD information. The low-delay queue stores all data packets deposited while their message queues are in the first state, including data packets whose accessed address information matches and/or data packets whose accessed address information differs; a message queue stores data packets whose accessed address information matches that of the received target data packet.
The scheduling management module 503 is used to enqueue the second BD information currently corresponding to a message queue and, following the first-in-first-out principle, select from the second BD information linked list of the message queue the earliest enqueued second BD information and schedule it out of the queue. Credit monitoring and state monitoring are set up in the scheduling management module 503 to monitor the target credit points of each message queue in real time and to switch the storage state of the message queue based on its target credit points.
Based on the architecture of the switch or router system 500 described above, an embodiment of the present application provides a data packet forwarding method. Please refer to FIG. 6, which exemplarily provides a flowchart of a data packet forwarding method in an embodiment of the present application, including the following steps:
Step S600: monitor the storage state of the message queue corresponding to the currently received target data packet.
The message queue is used to store data packets whose accessed address information matches that of the target data packet.
In the embodiment of the present application, the storage state of the message queue corresponding to the currently received target data packet is monitored through at least one of the target credit points and the remaining queue length of the message queue.
Monitoring the storage state based on target credit points and based on remaining queue length are described in detail below.
Mode 1: monitor the storage state of the message queue based on target credit points.
In a possible implementation, the scheduling management module in the switch or router system monitors the target credit points of the message queue and determines, based on them, the storage state of the message queue corresponding to the target data packet. The target credit points are determined from the first credit points allocated to the message queue and the second credit points deducted, based on the total traffic of forwarded data packets, when the queue's data packets are forwarded; the target credit points characterize the forwarding rate of data packets.
Please refer to FIG. 7, which exemplarily provides a schematic diagram of monitoring the storage state based on target credit points in an embodiment of the present application; credit monitoring and storage state monitoring functions are set up to determine the storage state of the message queue. Since the storage state of every message queue is monitored in the same way, only one message queue is used as an example in this embodiment.
In the embodiment of the present application, first credit points are periodically allocated to each message queue; when the data packets of the message queue are forwarded, credit points are deducted based on the total traffic of the forwarded packets, yielding the deducted second credit points; the target credit points are determined from the allocated first credit points and the deducted second credit points; and the storage state of the message queue is then determined from the target credit points.
Generally, a credit threshold is set and the target credit points are compared with it. When the target credit points are greater than the credit threshold, the message queue forwards data packets relatively slowly, that is, the total traffic of the currently stored data packets is small, so the storage state is the first state.
Conversely, when the target credit points are less than the credit threshold, the message queue forwards data packets relatively quickly, that is, the total traffic of the currently stored data packets is large, so the storage state is the second state.
For example, set the credit threshold to 0 and allocate first credit points at a rate of 100 Mbps on a 1 millisecond period, so that 0.1 Mb of first credit points is allocated per millisecond. If the forwarding rate of data packets then reaches 1000 Mbps, about 1 Mb of second credit points is deducted per millisecond; since the deducted amount exceeds the allocated amount, the target credit points are quickly driven negative, and the storage state of the message queue is determined to be the second state. When the forwarding rate of data packets is below 100 Mbps, less than 0.1 Mb of second credit points is deducted per millisecond, the target credit points do not go negative, and the message queue is determined to be in the first state.
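A minimal numeric check of the credit example above (the 100 Mbps allocation rate and 1 ms period are the figures from the text; the function name is an illustrative assumption):

```python
# Credits are granted at 100 Mbps (0.1 Mb per 1 ms period) and deducted
# at the actual forwarding rate; the sign of the balance decides the state.
ALLOC_MBPS = 100
PERIOD_MS = 1
alloc_per_period_mb = ALLOC_MBPS / 1000 * PERIOD_MS  # 0.1 Mb per ms

def target_after(forward_rate_mbps, periods):
    deduct_per_period_mb = forward_rate_mbps / 1000 * PERIOD_MS
    return (alloc_per_period_mb - deduct_per_period_mb) * periods

print(target_after(1000, 1))  # -0.9 -> goes negative fast: second state
print(target_after(50, 10))   # 0.5 -> stays positive: first state
```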
方式二:基于剩余队列长度,监测报文队列的存储状态。
在一种可能的实现方式中,通过交换机或路由管理系统中的调度管理模块监测报文队列的剩余队列长度,并基于剩余队列长度,确定目标数据报文对应的报文队列的存储状态。
一般地,设置一个队列长度阈值,将剩余队列长度与队列长度阈值进行比较,当剩余队列长度大于队列长度阈值时,说明当前存储的数据报文的总流量较小,因此存储状态为第一状态。
相反的，当剩余队列长度小于队列长度阈值时，说明当前存储的数据报文的总流量较大，因此存储状态为第二状态。
步骤S601,确定存储状态为第一状态时,将目标数据报文存储至低延迟队列对应的第一类块存储空间,第一状态用于表征报文队列存储的数据报文的总流量小于流量阈值。
由于交换机或路由管理系统不断接收数据报文,每接收一个数据报文,就会监测当前接收到的目标数据报文对应的报文队列的存储状态,并基于存储状态对当前接收到的目标数据报文进行存储;
一般的,确定当前接收到的目标数据报文对应的报文队列为第一状态时,就会将该数据报文存储至低延迟队列中,因此低延迟队列中不断有在监测到报文队列为第一状态情况下的数据报文存入;
又因为接收到的数据报文可能是访问同一地址信息的数据报文,也可能是访问不同地址信息的数据报文,但不管是访问同一地址信息的数据报文,还是访问不同地址信息的数据报文,在监测到相应的报文队列为第一状态时,均存入低延迟队列,因此,本申请实施例中的低延迟队列用于存储访问地址信息一致的数据报文和/或访问地址信息不一致的数据报文。
在本申请实施例中,在确定报文队列的存储状态为第一状态时,说明报文队列当前的数据报文流量较小,不能及时发送,为了保证数据报文传输的效率,将目标数据报文存入低延迟队列对应的第一类块存储空间。
由于交换机或路由器系统不断的接收目标数据报文,每接收到一个目标数据报文后,就监测该目标数据报文对应的报文队列的存储状态,并在确定存储状态为第一状态后,将目标数据报文存储至低延迟队列对应的第一类块存储空间中。
在本申请实施例中,将多个存储状态为第一状态的报文队列的数据报文进行捆绑式存储,请参考图8,图8示例性提供本申请实施例中一种报文队列捆绑存储的示意图。
具体的，通过端口模块接收到目标数据报文后，以队列号为索引寻址得到对应的报文队列的第二BD信息，其中，第二BD信息内存放当前报文队列的报文存储指针、报文数、总流量等信息。然后，将目标数据报文对应的报文队列捆绑进入低延迟队列，即将多个报文队列的数据报文捆绑存储至低延迟队列对应的第一类块存储空间。
请参考图9,图9示例性提供本申请实施例中一种报文队列捆绑存储片外访问的示意图。
在报文队列的数据报文捆绑存储的情况下，所有报文队列都共同对应同一个存储块。报文队列在进行存储时，向片外存储器发起写操作，由于存储块相同，所以报文队列发起的写操作地址连续。如图9所示，队列0-3存储发起地址为：块0地址、块0地址、块0地址、块0地址。该情况下访问地址连续、可被合并，利于片外存储器提升存储性能，提升了交换机或路由器系统的存储转发性能，且减少片上资源的消耗。
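捆绑写入同一存储块时地址连续这一点，可用如下示意代码说明（示意实现，块容量16KB为文中示例值，类名与字段均为本文假设）：

```python
class LowLatencyBlock:
    """低延迟队列对应的第一类块存储空间的示意：多个报文队列捆绑写入同一存储块。"""

    def __init__(self, capacity=16 * 1024):
        self.capacity = capacity
        self.offset = 0    # 下一次写入的块内偏移
        self.writes = []   # 记录 (队列号, 起始地址, 长度)

    def append(self, queue_id, length):
        """任意队列的数据报文都追加写入同一块，写地址自然连续。"""
        if self.offset + length > self.capacity:
            return None    # 剩余空间无法再存入一个完整的数据报文
        addr = self.offset
        self.offset += length
        self.writes.append((queue_id, addr, length))
        return addr
```

例如队列0-3各写入一个256字节的报文，得到的写地址为0、256、512、768，连续且可被片外存储器合并访问。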
步骤S602,当满足第一条件时,将第一类块存储空间中存储的数据报文转发。
在一种可能的实现方式中,当第一类块存储空间存储的数据报文的总量达到第一阈值时,将第一类块存储空间中存储的数据报文转发;或
当第一类块存储空间的剩余空间无法再存入一个完整数据报文时，将第一类块存储空间中存储的数据报文转发；
具体的，在存储状态为第一状态的情况下，端口接收到数据报文后，以队列号为索引寻址得到对应队列捆绑进入低延迟队列的流量，进行累加操作，即确定第一类块存储空间中存储的总流量，且在进入低延迟队列后，不再依据队列号进行相应的报文队列存储操作，而直接进入低延迟队列存储。当总流量超过固定阈值，例如16KB后，或剩余空间无法再存入一个完整的数据报文后，将低延迟队列对应的第一BD信息送入调度管理模块，进行第一BD信息入队，同时立刻进行出队操作，不经过调度模块的策略出队，即入队的第一BD信息和出队的第一BD信息一致，在确定出队的第一BD信息后，基于出队的第一BD信息寻址内存，读取第一类块存储空间中存储的数据报文，并转发。
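上述第一条件的判断可以归纳为如下示意函数（仅为示意，函数名与默认阈值16KB均为依据文中示例的假设）：

```python
def should_flush(total_bytes, next_packet_len, capacity=16 * 1024, threshold=16 * 1024):
    """判断第一类块存储空间是否满足第一条件，应送出第一BD信息并转发。

    满足任一条件即触发：
    1. 已存储的数据报文总量达到阈值（文中示例为16KB）；
    2. 剩余空间无法再存入下一个完整的数据报文。
    """
    if total_bytes >= threshold:
        return True
    if total_bytes + next_packet_len > capacity:
        return True
    return False
```

例如块中已有15KB数据而下一报文为2KB时，虽未达阈值，但剩余空间放不下完整报文，同样触发转发。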
需要说明的是,当总量超过固定阈值,或剩余空间无法存入一个完整的数据报文后,将各个目标数据报文以及相应的队列号对应的报文队列等信息送入调度管理模块,以使调度管理模块基于该信息对报文队列的信用点进行扣减,并进一步确定报文队列的存储状态。
在另一种可能的实现方式中,定时检测低延迟队列对应的第一BD信息,确定第一BD信息有效后,将第一类块存储空间中存储的数据报文转发。
捆绑操作有利于提升报文转发延迟特性，特别是在数据报文转发速率较低的情况下。数据报文速率较低时，每个队列都需独立地填充16KB大小的缓存空间后，才能将第一BD信息进行入队操作，进入转发流程。而捆绑操作将各队列的小流量汇聚成大流量，提升填充16KB缓存块的速率。例如：10个报文队列，每个报文队列每毫秒填充1KB的存储空间，则需要16毫秒才能填充完成后送出第一BD信息，进入转发流程。该情况下，报文的延迟达到16毫秒。如果对队列进行捆绑操作，则10个报文队列捆绑入同一个低延迟队列，延迟时间从16毫秒降低为1.6毫秒。
在此基础上，为低延迟队列设置定时器，请参考图8，用于再次提升低延迟特性，并且保证捆绑入低延迟队列的数据报文不发生残留现象。定时器按照固定周期，强制将低延迟队列的第一BD信息送入调度管理模块。例如设置定时器周期为10us，则在10us周期后，如果当前低延迟队列的第一BD信息是有效的，则将第一BD信息送入调度管理模块。以上文提到的10个队列为例，延迟将从1.6毫秒进一步降低至10us。
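文中16毫秒与1.6毫秒的对比可以用一个简单的填充时间计算来复核（示意函数，参数取值均来自文中示例）：

```python
def fill_delay_ms(num_queues, per_queue_kb_per_ms, block_kb=16, bundled=True):
    """计算填满一个16KB存储块所需时间（毫秒）。

    独立存储时，每个队列只能以自身速率填充自己的块；
    捆绑存储时，各队列的小流量汇聚，共同填充同一个低延迟块。
    """
    rate_kb_per_ms = per_queue_kb_per_ms * (num_queues if bundled else 1)
    return block_kb / rate_kb_per_ms
```

按文中示例：10个队列、每队列每毫秒1KB，独立填充需16毫秒，捆绑后降至1.6毫秒；若再叠加10us的定时器，报文等待延迟的上界进一步被限制在10us。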
在本申请实施例中,目标数据报文对应的报文队列的存储状态还可能为第二状态,当存储状态为第二状态时,将目标数据报文存储至相应的数据报文队列对应的第二类块存储空间中。请参考图10,图10示例性提供本申请实施例中另一种数据报文的转发方法流程图,包括如下步骤:
步骤S1000,监测当前接收到的目标数据报文对应的报文队列的存储状态。
需要说明的是,步骤S1000的实施方式,可参见步骤S600,在此不再重复赘述。
步骤S1001,确定存储状态为第二状态时,将目标数据报文存储至相应的报文队列对应的第二类块存储空间,其中,第二状态用于表征报文队列存储的数据报文的总流量大于等于流量阈值。
在本申请实施例中,在确定报文队列的存储状态为第二状态时,说明报文队列当前的数据报文流量较大,则将目标数据报文存储至相应的报文队列。
由于交换机或路由器系统不断的接收目标数据报文,每接收到一个目标数据报文后,就监测该目标数据报文对应的报文队列的存储状态,并在确定存储状态为第二状态后,将目标数据报文存入相应的报文队列中。
因此,在本申请实施例中,将多个存储状态为第二状态的报文队列的数据报文进行分散式存储,请参考图11,图11示例性提供本申请实施例一种报文队列分散存储的示意图。
具体的,通过端口模块接收到目标数据报文后,以队列号为索引寻址得到对应队列的第二BD信息。第二BD信息内存放当前报文队列的报文存储指针、报文数、总流量等信息。其中每个BD内仅存放一个报文队列的数据报文。
请参考图12，图12示例性提供本申请实施例中一种报文队列分散存储片外访问的示意图。
在报文队列的数据报文分散存储的情况下,每个报文队列存储都对应各自的存储块。报文队列在进行存储时,向片外存储器发起写操作,由于存储块不同,所以队列发起的写操作地址无法连续。如图12所示,队列0-3存储发起地址为:块0地址、块1地址、块2地址、块3地址。
步骤S1002,当满足第二条件时,将第二类块存储空间对应的第二BD信息送出,以转发第二类块存储空间中存储的数据报文。
在一种可能的实现方式中,当第二类块存储空间存储的数据报文的总量达到第二阈值时,将第二类块存储空间对应的第二BD信息送出,以转发第二类块存储空间中存储的数据报文;或
当第二类块存储空间的剩余空间无法再存入一个完整数据报文时，将第二类块存储空间对应的第二BD信息送出，以转发第二类块存储空间中存储的数据报文；
具体的,在存储状态为第二状态的情况下,端口接收到数据报文后,以队列号为索引寻址得到对应队列的第二BD信息。第二BD信息内存放当前报文队列的报文存储指针、报文数、总流量等信息。其中每个第二BD内仅存放一个报文队列的数据报文,即一个第二BD信息对应的第二类块存储空间中存储同一个报文队列的数据报文,当每个第二BD信息对应的第二类块存储空间存储的数据报文的总流量超过固定阈值,例如16KB后,或剩余空间无法存入一个完整的数据报文后,将第二BD信息送入调度管理模块,进行第二BD信息入队,入队后的第二BD信息参与调度管理进行策略调度出列,确定调度出队的第二BD信息,此时入队的第二BD信息和出队的第二BD信息不一致,基于出队的第二BD信息寻址内存,读取出队的第二BD信息对应的第二类块存储空间中存储的数据报文,并转发。
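与低延迟队列"入队即出队"不同，第二BD信息入队后按先进先出策略参与调度，入队与出队的BD可能不一致。这一点可用如下示意代码说明（类名与接口为本文假设）：

```python
from collections import deque


class Scheduler:
    """调度管理模块的示意：第二BD信息按先进先出的原则调度出队。"""

    def __init__(self):
        self.bd_fifo = deque()

    def enqueue(self, bd):
        # 第二BD信息入队，参与策略调度
        self.bd_fifo.append(bd)

    def dequeue(self):
        # 选取最先入队的第二BD信息调度出队；队列为空则无BD可出
        return self.bd_fifo.popleft() if self.bd_fifo else None
```

例如先后入队队列0、队列1的第二BD信息，再入队队列1的BD后发起出队，出队的是最先入队的队列0的BD，即入队的BD与出队的BD不一致。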
在数据报文正常输入不中断的情况下,每个第二类块存储空间都会填充完整,且将相应的第二BD信息送入到调度管理模块,完成后续流程。
当数据报文中断，且第二类块存储空间未填满情况下，第二BD信息无法送出，导致部分数据报文残留在第二类块存储空间内。为了减少残留，快速将数据报文转发，相关技术给出了一种解决的技术方案，即引入定时查看机制，内部需要设置定时器，不断地查看块存储空间中是否残留数据报文，如果数据报文残留超过一定时间，则将相应的BD信息无条件送入到调度管理模块，完成后续流程。在报文队列数量较少情况下，轮询完所有报文队列所需的时间较短；但报文队列数量较多情况下，轮询需要的时间较长。因此数据报文传输的时延与报文队列数量相关，且当系统报文队列数量达到K级别以上时，延迟时间拉长，可能会达到毫秒级。
针对上述问题，本申请实施例中以存储状态的切换为触发点，即在报文队列的存储状态由第二状态切换至第一状态时，强制将第二BD信息送入到调度管理模块。
因此,在一种可能的实现方式中,确定报文队列的存储状态发生变化时,将第二类块存储空间中存储的数据报文转发。此时分散存储机制下,数据报文实现了不残留。同时,在报文队列切换至第一状态后,采用捆绑存储机制,再由该机制保障捆绑存储报文不残留。
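以状态切换为触发点的强制送出逻辑可以归纳为如下示意函数（常量与回调接口均为本文假设）：

```python
FIRST_STATE = 1   # 总流量小于流量阈值
SECOND_STATE = 2  # 总流量大于等于流量阈值


def on_state_switch(old_state, new_state, pending_bd, send_bd):
    """报文队列由第二状态切换至第一状态时，强制送出未填满块的第二BD信息。

    send_bd 代表"将BD信息送入调度管理模块"的操作；
    返回是否触发了强制送出。触发后分散存储的数据报文不再残留。
    """
    if old_state == SECOND_STATE and new_state == FIRST_STATE and pending_bd is not None:
        send_bd(pending_bd)
        return True
    return False
```

切换之后队列改用捆绑存储机制，由低延迟队列自身的定时器保障捆绑存储的报文不残留。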
需要说明的是,为了减少数据报文的等待时延,提升数据报文的转发效率,一般地,可以设置第一类块存储空间不大于第二类块存储空间。
请参考图13,图13示例性提供本申请实施例中一种具体实施的数据报文的转发方法流程图,包括如下步骤:
步骤S1300,通过交换机或路由器系统的端口模块接收目标数据报文。
步骤S1301,识别目标数据报文访问的地址信息,并基于识别出的地址信息,确定目标数据报文对应的报文队列;
一般地,采用报文队列的方式存储数据报文,并将访问同一地址信息的数据报文存储至同一报文队列,但是在本申请实施例中,需要根据该数据报文相应的报文队列的存储状态,确定是否将该数据报文存入相应的报文队列中。
因此，本申请实施例中，应先识别目标数据报文对应的报文队列，然后，再确定该报文队列的存储状态。
步骤S1302,监测当前接收到的目标数据报文对应的报文队列的存储状态。
步骤S1303,判断存储状态是否为:用于表征数据报文的总流量小于流量阈值的第一状态,若是则执行步骤S1304,否则执行步骤S1306。
步骤S1304,确定存储状态为第一状态时,将目标数据报文存储至低延迟队列对应的第一类块存储空间;
在一种可能的实现方式中,将目标数据报文存入低延迟队列对应的第一类块存储空间,即将目标数据报文存入低延迟队列。
步骤S1305,当满足第一条件时,将第一类块存储空间对应的第一BD信息入队,并基于调度管理确定出队的第一BD信息;
在一种可能的实现方式中,当第一类块存储空间对应的第一BD信息入队后,在调度管理模块中并不会参与调度管理策略,而是将入队的调度信息直接进行出队操作,即入队的第一BD信息和出队的第一BD信息一致。
步骤S1306,确定存储状态为用于表征数据报文的总流量大于等于流量阈值的第二状态时,将目标数据报文存储至相应的报文队列对应的第二类块存储空间;
在一种可能的实现方式中,将目标数据报文存入相应的报文队列对应的第二类块存储空间,即将目标数据报文存入相应的报文队列。
步骤S1307,当满足第二条件时,将第二类块存储空间对应的第二BD信息入队,并基于调度管理确定出队的第二BD信息;
在一种可能的实现方式中,当第二类块存储空间对应的第二BD信息入队后,在调度管理模块中参与先进先出的调度管理策略,确定出队的第二BD信息,此时入队的第二BD信息和出队的第二BD信息可能不一致。
步骤S1308,基于调度管理出队的第一BD信息或第二BD信息寻址内存,读取相应的数据报文,并将读取的数据报文通过端口模块发出。
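图13中按存储状态分流存储的主干流程（步骤S1301至S1306）可以概括为如下示意函数（接口与回调均为本文为便于理解引入的假设，并非对实施方式的限定）：

```python
FIRST_STATE = 1   # 总流量小于流量阈值
SECOND_STATE = 2  # 总流量大于等于流量阈值


def handle_packet(pkt, queue_of, state_of, low_latency_store, queue_store):
    """按报文队列的存储状态，选择捆绑存储或分散存储。

    queue_of:  基于报文访问的地址信息确定对应报文队列（S1301）
    state_of:  监测该报文队列的存储状态（S1302/S1303）
    low_latency_store: 存入低延迟队列的第一类块存储空间（S1304）
    queue_store:       存入相应报文队列的第二类块存储空间（S1306）
    """
    q = queue_of(pkt)
    if state_of(q) == FIRST_STATE:
        low_latency_store(pkt)
        return "low_latency"
    queue_store(q, pkt)
    return "per_queue"
```

后续步骤（S1305/S1307/S1308）再分别对第一BD信息直通出队、对第二BD信息按先进先出策略调度出队，并基于出队的BD信息寻址内存完成转发。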
需要说明的是,上述实施例的实现方式可扩展到其他需要按照队列、类别进行信息数据存储转发的设备或系统中。
本申请实施例中,在片外缓存中划分低延迟队列对应的第一类块存储空间,提升片外存储的带宽利用率,且减少片上资源的消耗。
同时,本申请实施例中,当接收到目标数据报文后,确定目标数据报文对应的报文队列,并监测报文队列对应的存储状态,在确定存储状态为第一状态时,说明目标数据报文对应的报文队列当前存储的数据报文的总流量较小,由于报文队列用于存储与目标数据报文访问的地址信息一致的数据报文,因此将目标数据报文存储至报文队列时,等到数据报文的总流量满足数据报文的转发条件时,才会被转发,导致数据报文传输延时。因此,在确定目标数据报文对应的报文队列的存储状态为第一状态时,将目标数据报文存储至低延迟队列对应的第一类块存储空间,由于低延迟队列用于存储与目标数据报文访问的地址信息一致和/或不一致的数据报文,因此很快就可以使数据报文的总流量满足数据报文的转发条件,并在满足转发条件时,将第一类块存储空间中存储的数据报文转发,有效降低多报文队列的数据报文存储转发时延,提升多报文队列的数据报文转发性能。
与本申请上述方法实施例基于同一发明构思,本申请实施例中还提供了一种数据报文的转发装置,该装置解决问题的原理与上述实施例的方法相似,因此该装置的实施可以参见上述方法的实施,重复之处不再赘述。
请参考图14,图14示例性提供本申请实施例中一种数据报文的转发装置结构图,该数据报文的转发装置1400中,包括监测单元1401、存储单元1402以及转发单元1403,其中:
监测单元1401,用于监测当前接收到的目标数据报文对应的报文队列的存储状态;报文队列用于存储与目标数据报文访问的地址信息一致的数据报文;
存储单元1402,用于确定存储状态为第一状态时,将目标数据报文存储至低延迟队列对应的第一类块存储空间;其中,第一状态用于表征报文队列存储的数据报文的总流量小于流量阈值;
转发单元1403，用于当满足第一条件时，将第一类块存储空间中存储的数据报文转发。
在一种可能的实现方式中,监测单元1401具体用于:
监测报文队列的目标信用点,并基于目标信用点,确定目标数据报文对应的报文队列的存储状态,目标信用点用于表征转发数据报文的转发速率;或
监测报文队列的剩余队列长度,并基于剩余队列长度,确定目标数据报文对应的报文队列的存储状态。
在一种可能的实现方式中,监测单元1401具体用于:
针对报文队列,确定分配给报文队列的第一信用点,以及转发报文队列对应的数据报文时,基于转发的数据报文的总流量确定扣减的第二信用点;
基于第一信用点,以及第二信用点,确定目标信用点。
在一种可能的实现方式中,转发单元1403具体用于:
当第一类块存储空间存储的数据报文的总量达到第一阈值时,将第一类块存储空间中存储的数据报文转发;或
定时检测低延迟队列对应的第一BD信息,确定第一BD信息有效后,将第一类块存储空间中存储的数据报文转发。
在一种可能的实现方式中,存储单元1402还用于:
确定存储状态为第二状态时,将目标数据报文存储至相应的报文队列对应的第二类块存储空间,第二状态用于表征报文队列存储的数据报文的总流量大于等于流量阈值;
当满足第二条件时,将第二类块存储空间对应的第二BD信息送出,以转发第二类块存储空间中存储的数据报文。
在一种可能的实现方式中,转发单元1403还用于:
当第二类块存储空间存储的数据报文的总量达到第二阈值时,将第二类块存储空间对应的第二BD信息送出,以转发第二类块存储空间中存储的数据报文;或
确定报文队列的存储状态由第二状态切换至第一状态时，将第二类块存储空间对应的第二BD信息送出，以转发第二类块存储空间中存储的数据报文。
在一种可能的实现方式中,第一类块存储空间不大于第二类块存储空间。
为了描述的方便,以上各部分按照功能划分为各模块(或单元)分别描述。当然,在实施本申请时可以把各模块(或单元)的功能在同一个或多个软件或硬件中实现。
在介绍了本申请示例性实施方式的数据报文的转发方法和装置之后，接下来，介绍根据本申请的另一示例性实施方式的电子设备。
所属技术领域的技术人员能够理解,本申请的各个方面可以实现为系统、方法或程序产品。因此,本申请的各个方面可以具体实现为以下形式,即:完全的硬件实施方式、完全的软件实施方式(包括固件、微代码等),或硬件和软件方面结合的实施方式,这里可以统称为“电路”、“模块”或“系统”。
与本申请上述方法实施例基于同一发明构思，本申请实施例中还提供了一种电子设备150，如图15所示，包括至少一个处理器1501，以及与至少一个处理器连接的存储器1502，本申请实施例中不限定处理器1501与存储器1502之间的具体连接介质。图15中，以处理器1501和存储器1502之间通过总线连接为例。总线可以分为地址总线、数据总线、控制总线等。
在本申请实施例中,存储器1502存储有可被至少一个处理器1501执行的指令,至少一个处理器1501通过执行存储器1502存储的指令,可以执行前述的数据报文的转发方法中所包括的步骤。
其中，处理器1501是电子设备的控制中心，可以利用各种接口和线路连接终端设备的各个部分，通过运行或执行存储在存储器1502内的指令以及调用存储在存储器1502内的数据，实现数据报文的转发。可选的，处理器1501可包括一个或多个处理单元，处理器1501可集成应用处理器和调制解调处理器，其中，应用处理器主要处理操作系统、定位目标界面和应用程序等，调制解调处理器主要处理无线通信。可以理解的是，上述调制解调处理器也可以不集成到处理器1501中。在一些实施例中，处理器1501和存储器1502可以在同一芯片上实现，在一些实施例中，它们也可以在独立的芯片上分别实现。
处理器1501可以是通用处理器,例如中央处理器(CPU)、数字信号处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
存储器1502作为一种非易失性计算机可读存储介质，可用于存储非易失性软件程序、非易失性计算机可执行程序以及模块。存储器1502可以包括至少一种类型的存储介质，例如可以包括闪存、硬盘、多媒体卡、卡型存储器、随机访问存储器（Random Access Memory，RAM）、静态随机访问存储器（Static Random Access Memory，SRAM）、可编程只读存储器（Programmable Read Only Memory，PROM）、只读存储器（Read-Only Memory，ROM）、带电可擦除可编程只读存储器（Electrically Erasable Programmable Read-Only Memory，EEPROM）、磁性存储器、磁盘、光盘等等。存储器1502是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质，但不限于此。本申请实施例中的存储器1502还可以是电路或者其它任意能够实现存储功能的装置，用于存储程序指令和/或数据。
在一些可能的实施方式中，本申请提供的数据报文的转发方法的各个方面还可以实现为一种程序产品的形式，其包括计算机程序，当程序产品在电子设备上运行时，计算机程序用于使电子设备执行本说明书上述描述的根据本申请各种示例性实施方式的数据报文的转发方法中的步骤，例如，电子设备可以执行如图6中所示的步骤。
程序产品可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以是但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。可读存储介质的更具体的例子（非穷举的列表）包括：具有一个或多个导线的电连接、便携式盘、硬盘、随机存取存储器（RAM）、只读存储器（ROM）、可擦式可编程只读存储器（EPROM或闪存）、光纤、便携式紧凑盘只读存储器（CD-ROM）、光存储器件、磁存储器件、或者上述的任意合适的组合。
本申请的实施方式的程序产品可以采用便携式紧凑盘只读存储器（CD-ROM）并包括计算机程序，并可以在计算装置上运行。然而，本申请的程序产品不限于此，在本文件中，可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了可读计算机程序。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。可读信号介质还可以是可读存储介质以外的任何可读介质，该可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
可读介质上包含的计算机程序可以用任何适当的介质传输,包括但不限于无线、有线、光缆、RF等等,或者上述的任意合适的组合。
应当注意,尽管在上文详细描述中提及了装置的若干单元或子单元,但是这种划分仅仅是示例性的并非强制性的。实际上,根据本申请的实施方式,上文描述的两个或更多单元的特征和功能可以在一个单元中具体化。反之,上文描述的一个单元的特征和功能可以进一步划分为由多个单元来具体化。
此外,尽管在附图中以特定顺序描述了本申请方法的操作,但是,这并非要求或者暗示必须按照该特定顺序来执行这些操作,或是必须执行全部所示的操作才能实现期望的结果。附加地或备选地,可以省略某些步骤,将多个步骤合并为一个步骤执行,和/或将一个步骤分解为多个步骤执行。
本领域内的技术人员应明白，本申请实施例可提供为方法、系统、或计算机程序产品。因此，本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且，本申请可采用在一个或多个其中包含有计算机可用计算机程序的计算机可用存储介质（包括但不限于磁盘存储器、CD-ROM、光学存储器等）上实施的计算机程序产品的形式。
尽管已描述了本申请的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请范围的所有变更和修改。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的精神和范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (10)

  1. 一种数据报文的转发方法,其中,所述方法包括:
    监测当前接收到的目标数据报文对应的报文队列的存储状态;所述报文队列用于存储与所述目标数据报文访问的地址信息一致的数据报文;
    在所述存储状态为第一状态的情况下,将所述目标数据报文存储至低延迟队列对应的第一类块存储空间;其中,所述第一状态用于表征所述报文队列存储的数据报文的总流量小于流量阈值;
    当满足第一条件时,将所述第一类块存储空间中存储的数据报文进行转发。
  2. 如权利要求1所述的方法,其中,所述监测当前接收到的目标数据报文对应的报文队列的存储状态,包括:
    监测所述报文队列的目标信用点,并基于所述目标信用点,确定所述目标数据报文对应的报文队列的存储状态;其中,所述目标信用点用于表征转发数据报文的转发速率。
  3. 如权利要求2所述的方法,其中,所述监测所述报文队列的目标信用点,包括:
    针对所述报文队列,确定分配给所述报文队列的第一信用点,以及转发所述报文队列对应的数据报文时,基于转发的数据报文的总流量确定扣减的第二信用点;
    基于所述第一信用点,以及所述第二信用点,确定所述目标信用点。
  4. 如权利要求1所述的方法,其中,所述监测当前接收到的目标数据报文对应的报文队列的存储状态,包括:
    监测所述报文队列的剩余队列长度,并基于所述剩余队列长度,确定所述目标数据报文对应的报文队列的存储状态。
  5. 如权利要求1所述的方法,其中,所述当满足第一条件时,将所述第一类块存储空间中存储的数据报文进行转发,包括:
    当所述第一类块存储空间存储的数据报文的总量达到第一阈值时,将所述第一类块存储空间中存储的数据报文进行转发;或
    定时检测所述低延迟队列对应的第一缓存描述符BD信息,确定所述第一BD信息有效后,将所述第一类块存储空间中存储的数据报文转发。
  6. 如权利要求1-5中任一项所述的方法,其中,所述方法还包括:
    在所述存储状态为第二状态的情况下,将所述目标数据报文存储至所述报文队列对应的第二类块存储空间,其中,所述第二状态用于表征所述报文队列存储的数据报文的总流量大于等于流量阈值;
    当满足第二条件时,转发所述第二类块存储空间中存储的数据报文。
  7. 如权利要求6所述的方法,其中,所述当满足第二条件时,转发所述第二类块存储空间中存储的数据报文,包括:
    当所述第二类块存储空间存储的数据报文的总量达到第二阈值时,转发所述第二类块存储空间中存储的数据报文;或
    确定所述报文队列的存储状态由所述第二状态切换至所述第一状态时,转发所述第二类块存储空间中存储的数据报文。
  8. 如权利要求6所述的方法,其中,所述第一类块存储空间不大于所述第二类块存储空间。
  9. 一种电子设备,包括处理器和存储器,其中,所述存储器存储有程序代码,当所述程序代码被所述处理器执行时,使得所述处理器执行权利要求1~8中任一所述方法的步骤。
  10. 一种计算机可读存储介质,其中,包括程序代码,当所述程序代码在电子设备上运行时,所述程序代码用于使所述电子设备执行权利要求1~8中任一所述方法的步骤。
PCT/CN2022/134231 2021-12-20 2022-11-25 一种数据报文的转发方法及装置 WO2023116340A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/317,969 US20230283578A1 (en) 2021-12-20 2023-05-16 Method for forwarding data packet, electronic device, and storage medium for the same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111558739.0 2021-12-20
CN202111558739.0A CN114257559B (zh) 2021-12-20 2021-12-20 一种数据报文的转发方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/317,969 Continuation US20230283578A1 (en) 2021-12-20 2023-05-16 Method for forwarding data packet, electronic device, and storage medium for the same

Publications (1)

Publication Number Publication Date
WO2023116340A1 true WO2023116340A1 (zh) 2023-06-29

Family

ID=80795840

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/134231 WO2023116340A1 (zh) 2021-12-20 2022-11-25 一种数据报文的转发方法及装置

Country Status (3)

Country Link
US (1) US20230283578A1 (zh)
CN (1) CN114257559B (zh)
WO (1) WO2023116340A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114257559B (zh) * 2021-12-20 2023-08-18 锐捷网络股份有限公司 一种数据报文的转发方法及装置
CN114827288B (zh) * 2022-03-30 2024-06-25 阿里云计算有限公司 数据转发设备、数据处理方法及计算机可读存储介质
CN116055420A (zh) * 2022-12-07 2023-05-02 蔚来汽车科技(安徽)有限公司 整合办公网络与工业网络后的信息传输方法及控制装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102571559A (zh) * 2011-12-12 2012-07-11 北京交控科技有限公司 基于时间触发的网络报文发送方法
WO2021128104A1 (zh) * 2019-12-25 2021-07-01 华为技术有限公司 一种报文缓存方法、集成电路系统及存储介质
CN113382442A (zh) * 2020-03-09 2021-09-10 中国移动通信有限公司研究院 报文传输方法、装置、网络节点及存储介质
CN114257559A (zh) * 2021-12-20 2022-03-29 锐捷网络股份有限公司 一种数据报文的转发方法及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020176430A1 (en) * 2001-01-25 2002-11-28 Sangha Onkar S. Buffer management for communication systems
US6850999B1 (en) * 2002-11-27 2005-02-01 Cisco Technology, Inc. Coherency coverage of data across multiple packets varying in sizes
US9485200B2 (en) * 2010-05-18 2016-11-01 Intel Corporation Network switch with external buffering via looparound path
US10419370B2 (en) * 2015-07-04 2019-09-17 Avago Technologies International Sales Pte. Limited Hierarchical packet buffer system
US10715441B2 (en) * 2015-09-04 2020-07-14 Arista Networks, Inc. System and method of a high buffered high bandwidth network element
JP6980689B2 (ja) * 2016-03-31 2021-12-15 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. 撮像システム及び撮像システムの複数のノード間における通信のための通信プラットフォーム
CN113454957B (zh) * 2019-02-22 2023-04-25 华为技术有限公司 一种存储器的管理方法及装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102571559A (zh) * 2011-12-12 2012-07-11 北京交控科技有限公司 基于时间触发的网络报文发送方法
WO2021128104A1 (zh) * 2019-12-25 2021-07-01 华为技术有限公司 一种报文缓存方法、集成电路系统及存储介质
CN113382442A (zh) * 2020-03-09 2021-09-10 中国移动通信有限公司研究院 报文传输方法、装置、网络节点及存储介质
CN114257559A (zh) * 2021-12-20 2022-03-29 锐捷网络股份有限公司 一种数据报文的转发方法及装置

Also Published As

Publication number Publication date
CN114257559A (zh) 2022-03-29
US20230283578A1 (en) 2023-09-07
CN114257559B (zh) 2023-08-18

Similar Documents

Publication Publication Date Title
WO2023116340A1 (zh) 一种数据报文的转发方法及装置
US11916781B2 (en) System and method for facilitating efficient utilization of an output buffer in a network interface controller (NIC)
USRE47756E1 (en) High performance memory based communications interface
US9344490B2 (en) Cross-channel network operation offloading for collective operations
US7505410B2 (en) Method and apparatus to support efficient check-point and role-back operations for flow-controlled queues in network devices
KR102082020B1 (ko) 다수의 링크된 메모리 리스트들을 사용하기 위한 방법 및 장치
US8149708B2 (en) Dynamically switching streams of packets among dedicated and shared queues
US20190044879A1 (en) Technologies for reordering network packets on egress
CN108023829B (zh) 报文处理方法及装置、存储介质、电子设备
EP4002119A1 (en) System, apparatus, and method for streaming input/output data
CN111290979B (zh) 数据传输方法、装置及系统
US7466716B2 (en) Reducing latency in a channel adapter by accelerated I/O control block processing
US10616116B1 (en) Network traffic load balancing using rotating hash
US11283723B2 (en) Technologies for managing single-producer and single consumer rings
US9268621B2 (en) Reducing latency in multicast traffic reception
US11552907B2 (en) Efficient packet queueing for computer networks
US9288163B2 (en) Low-latency packet receive method for networking devices
CN115955441A (zh) 一种基于tsn队列的管理调度方法、装置
US10284501B2 (en) Technologies for multi-core wireless network data transmission
CN115705303A (zh) 数据访问技术
US11210089B2 (en) Vector send operation for message-based communication
US20060140203A1 (en) System and method for packet queuing
US20230396561A1 (en) CONTEXT-AWARE NVMe PROCESSING IN VIRTUALIZED ENVIRONMENTS
CN118057792A (zh) 一种传输数据的方法和装置
CN116074273A (zh) 一种数据传输方法以及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909655

Country of ref document: EP

Kind code of ref document: A1