WO2020125652A1 - Packet forwarding method, device, network device and computer-readable medium - Google Patents

Packet forwarding method, device, network device and computer-readable medium

Info

Publication number
WO2020125652A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory block
packet receiving
packet
message
queue
Prior art date
Application number
PCT/CN2019/126079
Other languages
English (en)
French (fr)
Inventor
冯仰忠
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date
Filing date
Publication date
Application filed by ZTE Corporation
Publication of WO2020125652A1 publication Critical patent/WO2020125652A1/zh

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/54 — Interprogram communication

Definitions

  • The embodiments of the present application relate to the field of communication technologies, and in particular to a packet forwarding method and device, a network device, and a computer-readable medium.
  • Embodiments of the present application provide a packet forwarding method and device, a network device, and a computer-readable medium, which can increase the rate at which packets are transferred inside a network device.
  • An embodiment of the present application provides a packet forwarding method, including: taking a piece of memory block information out of a memory block address pool, storing a packet received by the input/output hardware in the memory block indicated by that memory block information, deriving the packet's description information from the packet's storage location within the memory block, and putting the description information into a first packet receiving queue;
  • reading, through a packet receiving thread, the description information from the first packet receiving queue; through the packet receiving thread, storing one piece of memory block information that is marked as idle in a second packet receiving queue into the memory block address pool, and putting the description information read from the first packet receiving queue into the second packet receiving queue;
  • reading, by the application process corresponding to the second packet receiving queue, the description information from the second packet receiving queue, obtaining the packet according to that description information, and marking as idle the memory block information in the second packet receiving queue that indicates the memory block where the packet is located;
  • wherein the memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue do not overlap.
  • An embodiment of the present application provides a packet forwarding device, including: a first packet receiving module, configured to take memory block information out of a memory block address pool, store a packet received by the input/output hardware in the memory block indicated by that memory block information, derive the packet's description information from its storage location within the memory block, and put the description information into a first packet receiving queue;
  • a second packet receiving module, configured to read the description information from the first packet receiving queue through a packet receiving thread, store one piece of memory block information marked as idle in a second packet receiving queue into the memory block address pool through the packet receiving thread, and put the description information read from the first packet receiving queue into the second packet receiving queue;
  • a third packet receiving module, configured to read, through the application process corresponding to the second packet receiving queue, the description information from the second packet receiving queue, obtain a packet according to that description information, and mark as idle the memory block information in the second packet receiving queue that indicates the memory block where the packet is located.
  • An embodiment of the present application provides a network device, including input/output hardware, a processor, and a memory. The input/output hardware is configured to receive or send packets; the memory is configured to store a packet forwarding program which, when executed by the processor, implements the above packet forwarding method.
  • An embodiment of the present application provides a computer-readable medium storing a packet forwarding program which, when executed, implements the above packet forwarding method.
  • FIG. 1 is a schematic diagram of a Linux kernel socket (Socket) packet receiving technique.
  • FIG. 2 is a schematic diagram of a zero-copy packet receiving technique.
  • FIG. 3 is a flowchart of a packet forwarding method provided by an embodiment of the present application.
  • FIG. 4 is an exemplary schematic diagram of a packet forwarding method provided by an embodiment of the present application.
  • FIG. 5 is an exemplary schematic diagram of another packet forwarding method provided by an embodiment of the present application.
  • FIG. 6 is an exemplary schematic diagram of another packet forwarding method provided by an embodiment of the present application.
  • FIG. 7 is an exemplary schematic diagram of another packet forwarding method provided by an embodiment of the present application.
  • FIG. 8 is an exemplary schematic diagram of another packet forwarding method provided by an embodiment of the present application.
  • FIG. 9 is an exemplary schematic diagram of another packet forwarding method provided by an embodiment of the present application.
  • FIG. 10 is an exemplary schematic diagram of another packet forwarding method provided by an embodiment of the present application.
  • FIG. 11 is an exemplary schematic diagram of another packet forwarding method provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a packet forwarding device provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of another packet forwarding device provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a network device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of the Linux kernel Socket packet receiving technique.
  • The Linux kernel Socket packet receiving flow may include: a packet enters the network card driver from the network card; the network card driver notifies a kernel thread, via an interrupt, to process the packet in the network protocol stack, which requires traversing the Internet Protocol (IP) layer and the Transmission Control Protocol (TCP) / User Datagram Protocol (UDP) layer; after processing the packet, the network protocol stack notifies the application layer (for example, application processes P1 through Pn) to receive it.
  • The Socket packet receiving technique shown in FIG. 1 is highly general and supports packet reception by an unrestricted number of processes, but it has the following disadvantages: on the path from the kernel to the application layer, a packet must traverse the IP layer and the TCP/UDP layer, each of which adds a copy of the packet, and the added copies seriously degrade receive performance. When an application process inside a container needs to receive packets, it is further constrained by the namespace (NameSpace) and similar mechanisms, and packet delivery depends on the container network, which adds still more copies. Packet copying in the Linux kernel protocol stack is therefore an important factor limiting the packet transfer rate.
  • FIG. 2 is a schematic diagram of a zero-copy packet receiving technology.
  • The zero-copy packet receiving flow may include: a packet arrives from the network card and is handed to frame management; frame management parses, classifies, or hashes the packet and then delivers it to a specific queue; queue management is responsible for allocating queues to application processes (for example, allocating queue 1 to application process P1 and queue n to application process Pn), where each application process needs at least one queue of its own to avoid concurrency problems; each application process then receives and processes packets from its designated queue.
  • The zero-copy technique shown in FIG. 2 maps the network card driver directly into the application process so that the process can access the packet queues directly, thereby achieving zero copies of the packet.
  • the network card driver can be placed in the kernel or directly in the application process.
  • In this scheme the application process interacts directly with the driver queues. To do so it must settle a series of questions: which queue number, which buffer pool (Pool) number, and which priority scheduling strategy to use. If several application processes need to receive packets, every one of them must map and manage the network card driver and determine its own queue number, pool number, and priority scheduling strategy. Since different application processes are generally maintained by different users, this undoubtedly multiplies the workload and wastes manpower. Moreover, the scheme runs into problems in scenarios where multiple application processes or containers send and receive packets.
  • The network card's hardware resources are limited, so the number of application processes is capped; some network cards do not support priority scheduling, or schedule inflexibly; a process in a container is constrained by the namespace and similar mechanisms when receiving packets, and packet delivery depends on the container network, which increases packet copies; and having every application process drive the user-mode driver directly creates unnecessary work.
  • Embodiments of the present application provide a packet forwarding method and device, a network device, and a computer-readable medium, in which a packet receiving thread passes memory block addresses between a memory block address pool, a first packet receiving queue, and a second packet receiving queue to achieve zero-copy delivery: no copy is added while a packet travels inside the network device, which increases the internal packet transfer rate.
  • The embodiments of the present application let multiple application processes focus on their application logic without considering the details of the underlying hardware driver, improving generality and working efficiency and reducing maintenance costs without sacrificing performance.
  • The number of application processes receiving packets can be grown by adding second packet receiving queues and memory, overcoming the limit on process count; priorities can be distinguished by adding second packet receiving queues, enabling priority scheduling of packets; and adding packet receiving threads, together with CPU affinity and exclusive, fast packet reception, works around limited hardware resources and hardware whose priority scheduling is missing or inflexible, reducing indiscriminate packet loss.
  • FIG. 3 is a flowchart of a message forwarding method provided by an embodiment of the present application.
  • The packet forwarding method provided in this embodiment is applied to a network device and implements packet delivery from the device's input/output hardware (for example, a network card) to an application process inside the device.
  • The method is applicable to network devices, such as routers and switches, that make heavy use of multi-processing or multi-threading and containerization and that have high demands on generality and on packet sending and receiving rates. However, the present application is not limited to this.
  • the packet forwarding method provided in this embodiment includes the following steps:
  • Step S1010: Take a piece of memory block information out of the memory block address pool, store a packet received by the input/output hardware in the memory block indicated by that information, derive the packet's description information from the packet's storage location within the memory block, and put the description information into the first packet receiving queue.
  • Step S1020: Read the description information from the first packet receiving queue through the packet receiving thread.
  • Step S1030: Through the packet receiving thread, store one piece of memory block information that is marked as idle in the second packet receiving queue into the memory block address pool, and put the description information read from the first packet receiving queue into the second packet receiving queue.
  • Step S1040: The application process corresponding to the second packet receiving queue reads the description information from the second packet receiving queue, obtains the packet according to that description information, and marks as idle the memory block information in the second packet receiving queue that indicates the memory block where the obtained packet is located.
  • The memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue do not overlap.
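  • The pool/queue handoff in steps S1010 through S1040 can be modeled in a few lines of Python (a simplified, single-threaded sketch, not the patent's implementation; all names are illustrative, and packet payloads stand in for physical memory blocks):

```python
from collections import deque

POOL_SIZE = 4

# Memory block address pool: holds the information (here, just IDs) of free blocks.
pool = deque(range(POOL_SIZE))
# First packet receiving queue: holds packet descriptions from the hardware side.
first_queue = deque()
# Second packet receiving queue: pre-loaded with a *different* set of block IDs,
# all marked idle (the pool's and the queue's block sets must not overlap).
second_queue = deque(("idle", blk) for blk in range(POOL_SIZE, 2 * POOL_SIZE))
memory = {}  # block ID -> cached packet payload

def hardware_receive(packet):
    """Step S1010: take a block from the pool, cache the packet, queue its description."""
    blk = pool.popleft()
    memory[blk] = packet
    desc = {"block": blk, "offset": 0, "length": len(packet)}
    first_queue.append(desc)

def receive_thread_step():
    """Steps S1020/S1030: move one description from the first to the second queue,
    refilling the pool with an idle block taken out of the second queue."""
    desc = first_queue.popleft()
    for i, (state, blk) in enumerate(second_queue):
        if state == "idle":
            del second_queue[i]      # an idle block's information leaves the queue...
            pool.append(blk)         # ...and refills the memory block address pool
            second_queue.append(("desc", desc))
            return
    pool.append(desc["block"])       # queue full: reclaim the block, drop the packet

def application_read():
    """Step S1040: read a description, fetch the packet, mark its block idle."""
    for i, (state, item) in enumerate(second_queue):
        if state == "desc":
            del second_queue[i]
            packet = memory.pop(item["block"])
            second_queue.append(("idle", item["block"]))  # block returns as idle
            return packet
    return None
```

Note how the total number of blocks is conserved: a block's information is always in exactly one place (the pool, the second queue marked idle, or in flight as a description), which is precisely what keeps the pool and the second queue non-overlapping.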
  • The memory block information stored in the memory block address pool and in the second packet receiving queue may include the first address of a memory block or a memory block identifier (ID); a memory block is a piece of physical memory with contiguous addresses, used to buffer packets received by the input/output hardware.
  • Pre-allocated memory block first addresses can be injected into the memory block address pool and the second packet receiving queue, and the first addresses injected into the pool must not overlap those injected into the queue.
  • Likewise, pre-allocated memory block IDs may be injected into the memory block address pool and the second packet receiving queue, with no overlap between the two sets of IDs.
  • The description information of a packet may include: the first address of the memory block caching the packet, the packet length, and the packet's offset from that first address.
  • this application is not limited to this.
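  • As a sketch, the description information above could be represented by a small structure (the class and field names are illustrative, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class PacketDescriptor:
    """Description information for a cached packet (illustrative field names)."""
    block_addr: int   # first (physical) address of the memory block holding the packet
    length: int       # packet length in bytes
    offset: int       # offset of the packet from the block's first address

    def packet_range(self):
        # Physical address range occupied by the packet inside the block.
        start = self.block_addr + self.offset
        return start, start + self.length
```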
  • The second packet receiving queue may be a ring queue; making each ring queue a lock-free queue avoids locking altogether.
  • this application is not limited to this.
  • The second packet receiving queue may also be a first-in, first-out (FIFO) queue.
  • In an exemplary embodiment, the packet forwarding method may further include: after receiving an application process's packet receiving request, allocating to the process at least one memory slice with contiguous physical addresses, cutting multiple memory blocks out of the slice, storing the corresponding memory block information (for example, each block's first address or ID) into the memory block address pool and into the second packet receiving queue corresponding to the process, and marking the information stored in the second packet receiving queue as idle; alternatively, at least one physically contiguous memory slice may be reserved in advance, with the blocks cut and their information distributed in the same way once the application process's packet receiving request arrives.
  • The memory block information injected into the memory block address pool and the memory block information injected into the second packet receiving queue must not overlap.
  • The allocation of memory slices and the injection of block first addresses or IDs into the memory block address pool and the second packet receiving queue may be carried out by the packet receiving process. However, the present application is not limited to this.
  • Each memory block cut from a physically contiguous memory slice can be used to cache packets, and the physical addresses inside each block are contiguous.
  • If one memory slice cannot supply enough contiguous physical memory, a sufficient number of blocks can be cut from several slices, as long as the physical addresses inside each individual block remain contiguous.
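  • Cutting blocks out of physically contiguous memory slices, as described above, can be sketched as follows (the addresses, sizes, and helper name are illustrative; a real implementation would carve DMA-capable physical memory):

```python
def carve_blocks(slices, block_size):
    """Cut as many whole blocks as possible out of each contiguous slice.

    `slices` is a list of (first_address, slice_length) pairs. Each block lies
    entirely inside one slice, so its internal addresses stay contiguous.
    """
    blocks = []
    for first_addr, slice_len in slices:
        count = slice_len // block_size           # whole blocks only: a block
        for i in range(count):                    # may never straddle two slices
            blocks.append(first_addr + i * block_size)
    return blocks

# Two 8 KiB slices cut into 2 KiB blocks -> 8 block first-addresses.
blocks = carve_blocks([(0x10000, 8192), (0x40000, 8192)], 2048)
half = len(blocks) // 2
pool_blocks = blocks[:half]     # injected into the memory block address pool
queue_blocks = blocks[half:]    # injected, marked idle, into the second packet
                                # receiving queue; the two sets must not overlap
```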
  • In an exemplary embodiment, the method may further include: when the second packet receiving queue contains no memory block information marked as idle, the packet receiving thread puts the memory block information corresponding to the description information it read from the first packet receiving queue back into the memory block address pool. In other words, when the second packet receiving queue has no idle memory block information (that is, the queue is full of packet description information), the packet receiving thread reclaims the corresponding memory block information, thereby discarding the corresponding packet.
  • In an exemplary embodiment, the method may further include: through the packet receiving thread, reading the packet cached at the physical address indicated by the description information read from the first packet receiving queue, and determining the second packet receiving queue corresponding to the packet by parsing it. Accordingly, step S1030 may include: through the packet receiving thread, storing one piece of memory block information marked as idle from the second packet receiving queue corresponding to the packet into the memory block address pool, and putting the description information read from the first packet receiving queue into that second packet receiving queue.
  • In an exemplary embodiment, the method may further include: when the second packet receiving queue corresponding to the packet contains no memory block information marked as idle, returning the memory block information corresponding to the description information read from the first packet receiving queue to the memory block address pool through the packet receiving thread.
  • In this way the packet receiving thread reclaims the corresponding memory block information, and the corresponding packet is discarded.
  • In an exemplary embodiment, the method may further include: receiving an application process's packet receiving request; creating one or more second packet receiving queues for the process according to the request; and returning to the process the creation information of its second packet receiving queue(s).
  • One application process may correspond to one second packet receiving queue or to several (for example, a group of second packet receiving queues), but each second packet receiving queue corresponds to only one application process.
  • The reception of packet receiving requests and the creation of second packet receiving queues may be implemented by a packet receiving process; within that process, either the packet receiving thread or another thread (for example, a channel management thread) may do this work. However, the present application is not limited to this.
  • An application process's packet receiving request may carry information such as: the number of second packet receiving queues to create, the size of each queue, the maximum length of a received packet, and the characteristic information of the packets to receive.
  • the creation information of the second packet receiving queue corresponding to the application process may include information such as the number of the second packet receiving queue corresponding to the application process. However, this application is not limited to this.
  • the packet receiving thread reads the message buffered at the physical address indicated by the description information according to the description information read from the first packet receiving queue, and determines the result by parsing the read message
  • the second packet receiving queue corresponding to the read message may include: by mapping the description information read from the first packet receiving queue to a virtual address, reading and parsing the message, and obtaining characteristic information of the message; According to the parsed characteristic information of the message, determine the application process that receives the message; according to the application process that receives the message and the correspondence between the application process and the second packet receiving queue (for example, the application process and the second packet receiving The queue is in a one-to-one correspondence), and the second packet receiving queue corresponding to the message is determined.
  • Creating one or more second packet receiving queues for the application process according to its packet receiving request may include: creating, for the process, multiple second packet receiving queues that support priority scheduling, where each priority level of the packets the process is to receive corresponds to one or more second packet receiving queues.
  • For example, if the packets an application process is to receive have two priority levels, at least two second packet receiving queues (say, queue 1 and queue 2) can be created for it, with one priority mapped to at least one queue (for example, queue 1) and the other priority mapped to at least one other queue (for example, queue 2); in other words, packets of one priority are received through queue 1, and packets of the other priority through queue 2.
  • In the priority case, the reading and parsing performed by the packet receiving thread may include: mapping the description information read from the first packet receiving queue to a virtual address, reading and parsing the packet, and obtaining its characteristic information; determining, from that characteristic information, the application process that should receive the packet and the priority the packet belongs to; and determining the corresponding second packet receiving queue from the application process, the packet's priority, and the correspondence between application processes, second packet receiving queues, and priorities.
  • The application process may then receive packets from its second packet receiving queues in given proportions, implementing priority scheduling of packets; for example, it may preferentially receive higher-priority packets from the queue corresponding to the higher priority.
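  • Receiving from the per-priority second packet receiving queues "in given proportions" can be sketched as a weighted round-robin (the weights and the round-based scheme are an assumption for illustration; the patent does not fix a particular scheduling discipline):

```python
from collections import deque

def drain_by_priority(queues, weights, budget):
    """Weighted draining of per-priority second packet receiving queues.

    `queues` maps a priority level to its queue; `weights` maps a priority to
    how many packets it may take per round. Higher priorities are visited first.
    """
    out = []
    while budget > 0 and any(queues.values()):
        took = 0
        for prio in sorted(queues, reverse=True):      # highest priority first
            for _ in range(min(weights[prio], budget)):
                if queues[prio]:
                    out.append(queues[prio].popleft())
                    budget -= 1
                    took += 1
        if took == 0:           # every queue empty this round: stop early
            break
    return out

# Priority 2 gets three packets per round for every one that priority 1 gets.
queues = {2: deque("ABCD"), 1: deque("xyz")}
received = drain_by_priority(queues, {2: 3, 1: 1}, budget=6)
```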
  • In an exemplary embodiment, the method may further include: creating a corresponding memory block address pool for the application process after receiving its packet receiving request; or creating one or more memory block address pools according to the types of packets received by the input/output hardware.
  • That is, an independent memory block address pool can be created for each application process (for example, on receipt of its packet receiving request, to improve its receive performance), or multiple application processes can share one or more pools created in advance.
  • Multiple memory block address pools can also be created according to the type of packet received by the input/output hardware (for example, the packet size): two pools may be created, where the memory blocks indicated by one pool's memory block information cache packets smaller than a preset size, and the memory blocks indicated by the other pool's information cache packets of at least that size.
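  • The size-based choice between two memory block address pools might look like this (pool shapes, block sizes, and the threshold are all illustrative):

```python
# Two illustrative pools: small blocks for short packets, large blocks for the rest.
SMALL_BLOCK_POOL = {"block_size": 2048, "free": list(range(0, 64))}
LARGE_BLOCK_POOL = {"block_size": 9216, "free": list(range(64, 96))}
SIZE_THRESHOLD = 2048   # illustrative cut-off between "small" and "large" packets

def pick_pool(packet_len):
    """Choose the memory block address pool whose blocks fit the packet."""
    if packet_len < SIZE_THRESHOLD:
        return SMALL_BLOCK_POOL
    return LARGE_BLOCK_POOL
```

Keeping short packets in small blocks avoids wasting large buffers on the common case, a design choice also seen in mainstream user-space packet frameworks.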
  • the memory block address pool can be created through the packet receiving process. However, this application is not limited to this.
  • In an exemplary embodiment, the method may further include: creating a corresponding first packet receiving queue for the application process after receiving its packet receiving request; or creating one or more first packet receiving queues according to the type of input/output hardware.
  • An independent first packet receiving queue can be created for each application process (for example, on receipt of its packet receiving request, to improve its receive performance), or multiple application processes may share one or more first packet receiving queues created in advance.
  • The first packet receiving queue can also be created according to the type of input/output hardware (network card); for example, one first packet receiving queue may be created alongside multiple second packet receiving queues.
  • The first packet receiving queue can be created through the packet receiving process. However, the present application is not limited to this.
  • In an exemplary embodiment, the method may further include: creating a corresponding packet receiving thread for the application process after receiving its packet receiving request; or, after receiving the request, selecting one of the already created packet receiving threads as the thread corresponding to the process.
  • A separate packet receiving thread can be created for each application process, or multiple application processes can share a packet receiving thread.
  • Alternatively, one of the packet receiving threads created for other application processes can be selected as the thread for this process; for example, a default packet receiving thread can be set up to serve multiple application processes.
  • The creation of the packet receiving thread can be carried out by the packet receiving process.
  • this application is not limited to this.
  • Multiple application processes may correspond to a single packet receiving thread or to multiple packet receiving threads.
  • That is, packets can be delivered to multiple application processes through one packet receiving thread, or through several; for example, five application processes can be served by two packet receiving threads, with one thread delivering packets to three of the processes and the other to the remaining two.
  • one or more application processes may be located in the container.
  • The packet forwarding method provided in this embodiment is applicable to the scenario where an application process inside a container needs to receive packets. In that case, the packet receiving thread may run on the host (Host), and a piece of physical memory with contiguous addresses is needed to create the second packet receiving queue.
  • The application process and the packet receiving thread may also both be located in the container, making the method suitable for receiving packets directly from the input/output hardware inside a container.
  • In an exemplary embodiment, the method may further include: setting the packet receiving thread's affinity to, or exclusive occupation of, Central Processing Unit (CPU) resources.
  • For example, the CPU affinity of the packet receiving thread can be set so that the thread exclusively occupies one CPU, reducing the probability of indiscriminate packet loss.
  • this application is not limited to this.
  • the CPU affinity of the packet receiving thread can also be set to improve packet receiving performance.
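  • On Linux, setting the packet receiving thread's CPU affinity might be done as follows (a sketch using the OS scheduling API; the CPU number is illustrative, and `os.sched_setaffinity` is Linux-specific):

```python
import os

def pin_current_thread(cpu):
    """Try to pin the calling thread to one CPU so a packet receiving thread
    can own that core exclusively. Linux-only; returns False where unsupported."""
    if not hasattr(os, "sched_setaffinity"):
        return False                     # e.g. macOS/Windows: no such API
    try:
        os.sched_setaffinity(0, {cpu})   # pid 0 means "the calling thread"
    except OSError:                      # CPU not available to this process
        return False
    return os.sched_getaffinity(0) == {cpu}
```

In a real deployment the chosen core would typically also be isolated from the general scheduler (for example via kernel boot parameters) so the packet receiving thread truly has it to itself.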
  • In an exemplary embodiment, the method may further include: after reading a packet according to the description information read from the first packet receiving queue through the packet receiving thread, updating the flow statistics count of the service flow the packet belongs to; discarding the packet when the flow statistics count within the speed limit duration meets a set condition; and resetting the count to its initial value each time the speed limit duration elapses.
  • the packet forwarding method provided in this embodiment may be applicable to a scenario where the traffic of the service flow is excessive.
  • the initial value of the flow statistics count may be 0.
• Each time a packet is received, the flow statistics count of the service flow to which the packet belongs is increased by one; if the flow statistics count within the speed limit duration (for example, one second) meets the set condition (for example, it is greater than the rate limit value of the service flow), the packet is discarded; and after each speed limit duration elapses, the flow statistics count is reset to the initial value (here, 0).
• Alternatively, the initial value of the flow statistics count of the service flow may be the speed limit value of the service flow. Each time a packet is received, the flow statistics count of the service flow to which the packet belongs is decremented by one; if the flow statistics count within the speed limit duration (for example, one second) meets the set condition (for example, the flow statistics count within the speed limit duration is 0), the packet is discarded; and after each speed limit duration elapses, the flow statistics count is reset to the initial value (here, the speed limit value).
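The increment-style counting variant described above can be sketched as a small Python model. The class name and structure are assumptions for illustration, not taken from the patent.

```python
import time

class FlowRateLimiter:
    """Per-flow rate limiter using the increment-style flow statistics
    count: the count starts at 0, is incremented once per packet, and
    packets are dropped once it exceeds the flow's rate limit value;
    the count is reset each time a speed limit duration elapses."""

    def __init__(self, limit, period=1.0):
        self.limit = limit              # rate limit value per period
        self.period = period            # speed limit duration, e.g. 1 second
        self.count = 0                  # flow statistics count (initial 0)
        self.window_start = time.monotonic()

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.period:
            self.count = 0              # reset to initial value each period
            self.window_start = now
        self.count += 1
        return self.count <= self.limit  # False => discard the packet

limiter = FlowRateLimiter(limit=3, period=1.0)
results = [limiter.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The decrement-style variant is equivalent: start the count at the speed limit value, subtract one per packet, and drop packets once it reaches 0.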
• the message forwarding method of this embodiment may further include: removing from the first packet sending queue the information of a memory block marked as idle (for example, the first address or ID of the memory block), storing the message to be sent by the application process into the memory block indicated by the memory block information, obtaining the description information of the message according to the storage location of the message in the memory block, and putting the description information into the first packet sending queue; the description information is then read from the first packet sending queue through the packet sending thread, and after the message is sent, the memory block information (for example, the first address or ID of the memory block) stored in the memory block address pool is marked as idle.
• the packet sending thread can send the message by transferring memory addresses among the memory block address pool, the first packet sending queue, and the second packet sending queue.
  • the message sending process may not use the above manner.
  • FIG. 4 is an exemplary schematic diagram of a packet forwarding method provided by an embodiment of the present application.
  • This exemplary embodiment illustrates a receiving process of implementing packet zero copy transmission through a packet receiving thread, a memory block address pool, a first packet receiving queue, and a second packet receiving queue using memory address replacement.
• the second packet receiving queue is described by taking a ring queue as an example; that is, each second packet receiving queue is a ring queue (hereinafter referred to as a Ring), and each ring queue is a lock-free queue.
  • one application process corresponds to a group of second packet receiving queues (that is, one ring group).
• A piece of memory chip A with continuous physical addresses is reserved, and multiple memory blocks (Block) can be cut out of memory chip A for buffering messages. The size of memory chip A is greater than or equal to the total number of Blocks (for example, n in FIG. 4, where n is an integer greater than 1) multiplied by the maximum allowed message length (for example, 10K bytes (Byte)). Each Block represents a segment of continuous physical memory, and the first address of the Block means the first address of this segment of continuous physical memory.
• In other embodiments, multiple memory slices may be reserved, and multiple memory blocks may be cut out of these memory slices, as long as the physical addresses within each memory block cut from them are continuous.
• Assign a memory block address pool (hereinafter referred to as Pool) B and a first packet receiving queue (hereinafter referred to as Queue) C to a hardware driver (for example, a network card driver).
  • Pool B is used to store the first address of the memory block.
  • Pool B can be a FIFO queue, a linked list, an array, or a circular queue.
  • Queue C may be a FIFO structure or a ring queue structure, however, this application is not limited to this.
• Allocate a Ring group D (that is, the above-mentioned multiple second packet receiving queues).
  • the Ring group D may include m+1 Rings, and m may be an integer greater than or equal to 0.
• Create a packet receiving thread (hereinafter referred to as Thread) E for receiving packets from the hardware driver.
  • Thread E can map the first address of the memory chip A with continuous physical addresses to a virtual address for use in parsing packets.
  • the process of implementing zero-copy message transfer is performed between Pool B, Queue C, Ring group D, and Thread E, and the memory address replacement action occurs between Pool B and Ring group D.
• For example, the first addresses of Block k+1 to Block n (n-k in total) may be injected into Pool B; the first addresses of Block 1 to Block i (i in total) may be placed in Ring 0; and the first addresses of Block j to Block k (k-j+1 in total) may be placed in Ring m. For the other Rings in Ring group D, the first addresses of Blocks are injected in the same way as for Ring 0 and Ring m.
  • the usage state of all the first addresses of the blocks injected in Ring group D is idle.
  • i, j, k are all integers greater than 1.
• the number of Block first addresses injected into each Ring and into Pool B may be the same or different, which is not limited in this application. The sum of the number of all Block first addresses in Ring group D plus the number of all Block first addresses in Pool B can be n.
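A minimal sketch of this initial distribution of Block first addresses between Pool B and Ring group D might look as follows. All concrete values (block size, block count, base address, split points) are illustrative; the patent fixes none of them.

```python
from collections import deque

BLOCK_SIZE = 10 * 1024   # maximum allowed message length, e.g. 10K bytes
NUM_BLOCKS = 8           # n in the text (illustrative value)

# Memory chip A: a contiguous region; Block first addresses are offsets
# into it at BLOCK_SIZE strides.
chip_a_base = 0x1000_0000
block_addrs = [chip_a_base + b * BLOCK_SIZE for b in range(NUM_BLOCKS)]

# Split the first addresses between Pool B and the Rings of Ring group D.
pool_b = deque(block_addrs[4:])              # e.g. Blocks 5..8 -> Pool B
ring_group_d = [deque(block_addrs[0:2]),     # Ring 0 gets Blocks 1..2
                deque(block_addrs[2:4])]     # Ring m gets Blocks 3..4

# Every injected address lands in exactly one place, totalling n.
total = len(pool_b) + sum(len(r) for r in ring_group_d)
print(total)  # 8
```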
  • the packet forwarding method of this exemplary embodiment may include steps 1010 to 1090.
  • Step 1010 The network card sends the received message to the frame management.
  • Step 1020 The frame management parses, classifies/hashes the message, and takes a Block first address from Pool B to store the message.
• Step 1030: the frame management fills in a descriptor (corresponding to the above description information) with information such as the Block first address, the message length, and the offset of the message relative to the Block first address, and puts this descriptor into Queue C.
• The number of Queue C can be one or more. When multiple Queues are used, the frame management can choose, according to the characteristics of the message, which Queue C to put the descriptor of the message into, thereby supporting priority scheduling. In FIG. 4, a single Queue C is used as an example for description.
• Alternatively, a separate thread can be set up to take a Block first address from Pool B to store the message, fill the descriptor with information such as the Block first address, the message length, and the offset of the message relative to the Block first address, and place this descriptor in Queue C.
• Step 1040: Thread E polls the descriptor from Queue C, takes out information such as the Block first address of the message, the message length, and the offset of the message relative to the Block first address, and obtains the virtual address of the message through a simple offset operation: the virtual address of the message is equal to the Block first address of the message, minus the first address of the continuous memory chip A, plus the virtual address mapped from the first address of the continuous memory chip A.
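The offset operation described above can be illustrated with a short, self-contained sketch. The base addresses below are made-up example values, not from the patent.

```python
# Virtual-address computation as given in the text:
#   virt = block_first_addr - chip_a_phys_base + chip_a_virt_base
chip_a_phys_base = 0x4000_0000   # physical first address of memory chip A
chip_a_virt_base = 0x7f00_0000   # virtual address chip A is mapped to

def msg_virtual_address(block_first_addr, offset_in_block=0):
    """Translate a Block's physical first address (plus the message's
    offset inside the Block) into the packet thread's virtual address."""
    return block_first_addr - chip_a_phys_base + chip_a_virt_base + offset_in_block

# A message stored at physical 0x4000_5000, 0x40 bytes into its Block:
print(hex(msg_virtual_address(0x4000_5000, 0x40)))  # 0x7f005040
```

Because the mapping is a single subtraction and addition, no per-message lookup table is needed; this is what makes the offset operation "simple".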
  • Thread E can read and parse the message, and according to the message's characteristic information (for example, from the message's characteristic field), it can determine the application process to which the message is to be forwarded and the corresponding Ring.
  • Thread E can put the first block address of the packet, the length of the packet, and the offset of the packet based on the first block address into the corresponding ring by replacing the first block address in steps 1050 to 1060.
• In the following, the Ring corresponding to the message is taken as an example.
• Step 1050: Thread E pops a free Block first address from the Ring and returns it to Pool B.
• Step 1060: Thread E puts information such as the Block first address of the message, the message length, and the offset of the message relative to the Block first address into the corresponding position in the Ring for reading by the process P11.
• For example, the information such as the Block first address of the message, the message length, and the offset of the message relative to the Block first address can be put into the position in the Ring freed by popping the idle Block first address.
• Otherwise, step 1070 is executed, that is, Thread E returns the Block first address corresponding to the received message to Pool B; this implements the discard operation when the message cannot be delivered, and the Block first address is recovered.
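Steps 1050 to 1070 — swapping an idle Block first address out of the Ring back into Pool B, putting the descriptor into the freed slot, and recycling the address on the discard path — can be sketched as follows. This is an illustrative model only; the names and tuple layout are assumptions.

```python
from collections import deque

pool_b = deque([0xA000])                  # Pool B: spare Block first addresses
ring = deque([("FREE", 0xB000)])          # Ring: one idle Block first address

def deliver(descriptor):
    """Swap an idle Block address out of the Ring into Pool B, then put
    the message's descriptor (Block first address, length, offset) into
    the freed Ring slot. With no idle slot, the message's Block address
    is returned to Pool B instead (the discard path of step 1070)."""
    for i, (state, addr) in enumerate(ring):
        if state == "FREE":
            pool_b.append(addr)            # step 1050: idle address -> Pool B
            ring[i] = ("MSG", descriptor)  # step 1060: descriptor -> Ring
            return True
    pool_b.append(descriptor[0])           # step 1070: drop, recycle address
    return False

ok = deliver((0xC000, 128, 0))             # (Block addr, msg length, offset)
print(ok, list(pool_b))  # True [40960, 45056]
```

Because only addresses move between Pool B and the Ring, the message body itself is never copied, which is the zero-copy property the text describes.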
• In other embodiments, after Thread E polls the descriptor from Queue C and takes out information such as the Block first address of the message, the message length, and the offset of the message relative to the Block first address, it may directly perform step 1050 and step 1060 without reading and parsing the message.
• Step 1080: the application process P11 may take out information such as the Block first address of the message, the message length, and the offset of the message relative to the Block first address from the Ring, and then read the message from the Block that stores the message.
• In FIG. 4, the application process P11 is placed in container 1; however, this application is not limited to this, and in other embodiments the application process P11 may not be placed in a container.
• Step 1090: after processing the message, the application process P11 may set the first address of the Block corresponding to the message in the Ring to the idle state, so that Thread E can continue to use it.
• This enables the subsequent replacement of the first address of the Block that stores a message, so as to achieve zero-copy delivery of messages to the application process.
• Moreover, it can encapsulate the access of the application process to the network card, shielding the direct interaction between the application process and the network card driver, so that the application process does not need to consider the details of the underlying hardware driver when receiving packets; this improves versatility and work efficiency and reduces maintenance costs without affecting transfer performance.
• When the network card does not support priority scheduling, that is, as shown in FIG. 4, there is only one Queue C, all packets sent from the network card must enter Queue C.
• The packet forwarding method is applicable to scenarios where the network card does not support priority scheduling or where scheduling is not flexible enough.
  • a cgroup or other exclusive technology may be used to make the packet receiving thread monopolize a CPU resource.
  • FIG. 5 is an exemplary schematic diagram of another message forwarding method provided by an embodiment of the present application.
  • This exemplary embodiment illustrates a process of creating a packet receiving channel for multiple application processes, where a set of second packet receiving queues (eg, a ring group) that supports priority scheduling can be created for each application process.
  • the packet forwarding method provided by this exemplary embodiment includes the following steps:
• Step 2010: an application process with a packet receiving requirement sends a packet receiving request to the packet receiving process P0;
• For example, the application processes P11 to P1n in container 1 and the application processes Pn1 to Pnn in container n all have packet receiving requirements, and each can send a packet receiving request to the packet receiving process P0.
  • the request information of the application process may include: the number and size of rings requested to be created, the maximum length of the received message, the feature information of the received message, and so on.
  • a packet receiving process P0 is used as an example for description, however, this application is not limited to this. In other implementations, multiple packet receiving processes may also be used.
• Step 2020: the task (Job) of the packet receiving process P0 may create a packet receiving channel for multiple application processes according to the packet receiving requests of the multiple application processes, where the Job is specifically responsible for distributing and managing packet receiving requests carrying packet receiving demand information. However, this application is not limited to this; in other embodiments, the packet receiving process P0 may open a channel management thread to manage the packet receiving requests and create the packet receiving channels.
• The Job can reserve a piece of physical memory with continuous addresses, create a memory block address pool and a first packet receiving queue, and create a packet receiving thread; and, according to the packet receiving requirements of each application process, create for each application process a corresponding Ring group that supports priority scheduling.
• Any application process can correspond to one Ring group; for example, as shown in FIG. 5, application process P11 corresponds to Ring group D11, and application process Pnn corresponds to Ring group Dnn, where the number of Rings in each Ring group can be the same (for example, m+1, where m is an integer greater than or equal to 0) or different.
  • this application is not limited to this.
• Take the Ring group D11 supporting priority scheduling corresponding to the application process P11 as an example.
• Any Ring in Ring group D11 can correspond to one priority level, so that the subsequent packet receiving thread can parse the priority of a packet and put the packet's description information into the Ring corresponding to that priority.
  • this application is not limited to this.
• multiple Rings in a Ring group that supports priority scheduling may correspond to one priority level.
• Step 2030: after the Job of the packet receiving process P0 creates the Ring group D11 supporting priority for the application process P11, it returns the creation information of Ring group D11 to the application process P11. Similarly, each time the Job creates a Ring group supporting priority for any other application process, it returns the creation information of the corresponding Ring group to that application process.
  • the creation information may include queue management information of the Ring group supporting priority corresponding to the application process (for example, the correspondence between Ring and priority in the Ring group), etc. In this way, a packet receiving channel is created for each application process.
• The application process can read packets of different priorities from the corresponding Rings according to a certain ratio, for example, reading packets first from the Ring corresponding to a higher priority.
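The ratio-based reading described above can be sketched as a weighted polling loop. This is illustrative only; the Ring contents, weights, and function name are assumptions.

```python
from collections import deque

# Ring group for one application process: Ring index == priority level
# (0 = highest). Weights give higher-priority Rings more reads per round.
ring_group = [deque(["h1", "h2", "h3"]),   # priority 0
              deque(["l1", "l2", "l3"])]   # priority 1
weights = [2, 1]  # read ratio: 2 high-priority packets per 1 low-priority

def poll_round():
    """One weighted polling round over the Ring group."""
    batch = []
    for ring, weight in zip(ring_group, weights):
        for _ in range(weight):
            if ring:
                batch.append(ring.popleft())
    return batch

print(poll_round())  # ['h1', 'h2', 'l1']
```

With a weight ratio of 2:1, high-priority packets are drained twice as fast, but low-priority Rings are never starved completely.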
  • the number of containers may be multiple, such as 1 to n; the application process in each container may also be multiple, such as Pn1 to Pnn.
• Although the number of application processes is limited by the memory chip A with continuous addresses (as shown in FIG. 4), as long as memory chip A is expanded and the number of packet receiving processes is increased, the number of application processes is no longer limited by the hardware resources of the network card.
• Each Ring group corresponds to one packet receiving application process. By adding Ring groups and memory, the number of packet receiving application processes can be increased, thereby overcoming the limitation that the number of packet receiving application processes is restricted by the hardware resources of the network card.
  • FIG. 6 is an exemplary schematic diagram of another message forwarding method provided by an embodiment of the present application.
  • This exemplary embodiment illustrates that a single packet receiving thread uniformly delivers messages to multiple application processes of multiple containers.
• In FIG. 6, the application processes P11 to P1n in container 1 and the application processes Pn1 to Pnn in container n all have packet receiving requirements.
  • the packet forwarding method provided by this exemplary embodiment includes the following processes:
• Step 3010: each application process determines whether it needs to support priority scheduling, the maximum message buffer size supported by each priority, the maximum length of received messages, and the characteristic information of received messages, and then sends a packet receiving request, where the packet receiving request may carry the above information.
• Step 3020: the Job of the packet receiving process P0 creates a packet receiving channel for each application process according to the packet receiving request of each application process; the creation process of the packet receiving channel can refer to the description in FIG. 5, so it will not be repeated here.
  • the application process P11 corresponds to the ring group D11
  • the application process P1n corresponds to the ring group D1n
  • the application process Pn1 corresponds to the ring group Dn1
• the application process Pnn corresponds to the Ring group Dnn, where the number of Rings in each Ring group can be the same (for example, m+1, where m is an integer greater than or equal to 0) or different. However, this application is not limited to this.
  • Step 3030 When a packet is sent from the network card, it is sent to the first packet receiving queue through the frame management.
• The packet receiving thread in the packet receiving process P0 can poll this packet and, after parsing its characteristic information, put the description information of the message, by replacement, into the Ring of the application process corresponding to this message.
• Step 3040: the application processes P11 to P1n in container 1 and the application processes Pn1 to Pnn in container n may each poll the corresponding Ring group to obtain messages.
• Step 3050: after processing a message according to business requirements, each application process may set the first address, in the corresponding Ring, of the memory block storing the message to the idle state, so that it can continue to be used.
  • FIG. 7 is an exemplary schematic diagram of another packet forwarding method provided by an embodiment of the present application.
  • This exemplary embodiment illustrates that a single packet receiving thread uniformly delivers messages to multiple application processes.
• Since the packet receiving process P0 and the application processes P1 to Pn are on the same Host, application processes with lower performance requirements can also use shared memory to create Rings.
  • the packet forwarding method provided by this exemplary embodiment includes the following processes:
  • Step 4010 Each application process (for example, application process P1 to application process Pn) sends a packet receiving request according to its own packet receiving requirements.
  • the packet receiving request may carry the following information: the number and size of rings requested to be created, the maximum length of the received message, and the characteristic information of the received message, etc.
• Step 4020: the Job of the packet receiving process P0 creates a packet receiving channel for each application process according to the packet receiving request of each application process; the process of creating the packet receiving channel can refer to the description in FIG. 5, so it will not be repeated here.
  • the application process P1 corresponds to the ring group D1
  • the application process Pm corresponds to the ring group Dm
• the application process Pn corresponds to the Ring group Dn, where the number of Rings in each Ring group can be the same (for example, m+1, where m is an integer greater than or equal to 0) or different.
  • this application is not limited to this.
  • Step 4030 When a message is sent from the network card, it is sent to the first packet receiving queue through the frame management.
• The packet receiving thread in the packet receiving process P0 can poll this message and, after parsing its characteristic information, put the description information of the message, by replacement, into the Ring of the application process corresponding to this message.
  • Step 4040 The application process P1 to the application process Pn may each poll the corresponding Ring group to obtain a message.
• Step 4050: after processing a message according to business requirements, each application process may set the first address, in the corresponding Ring, of the memory block storing the message to the idle state, so that it can continue to be used.
  • FIG. 8 is a schematic diagram of another message forwarding method provided by an embodiment of the present application.
  • This exemplary embodiment illustrates that multiple packet receiving threads uniformly receive packets for multiple application processes in multiple containers.
• A single packet receiving thread cannot meet the needs of some services, such as sampling and Network Address Translation (NAT), which have very high packet receiving performance requirements. Based on this, the number of packet receiving threads can be increased to meet the packet receiving performance requirements of such services.
  • the packet forwarding method provided by this exemplary embodiment includes the following processes:
  • Step 5010 Each application process (for example, the application processes P11 to P1n in the container 1 and the application processes Pn1 to Pnn in the container n) sends a packet receiving request according to its own packet receiving requirements.
  • the packet receiving request may carry the following information: the number and size of rings requested to be created, the maximum length of the received message, and the characteristic information of the received message, etc.
  • Step 5020 The Job of the packet receiving process P0 creates a packet receiving channel for each application process according to the packet receiving request of each application process.
• Among them, the correspondence between packet receiving threads and application processes can be specified.
• For example, packet receiving thread 1 can be used to receive packets for the application processes P11 to P1n and the application process Pn1, and packet receiving thread s can be used to receive packets for the application process Pnn.
  • s may be an integer greater than or equal to 1.
  • the application process P11 corresponds to the Ring group D11
  • the application process P1n corresponds to the Ring group D1n
  • the application process Pn1 corresponds to the Ring group Dn1
• the application process Pnn corresponds to the Ring group Dnn, where the number of Rings in each Ring group can be the same (for example, m+1, where m is an integer greater than or equal to 0) or different. However, this application is not limited to this.
  • Step 5030 Based on the created packet receiving channel, the packet receiving thread (for example, packet receiving threads 1 to s) may receive packets for the corresponding application process.
• For the relevant description of this step, please refer to step 1010 to step 1070 in FIG. 4, so it will not be repeated here.
  • Step 5040 Each application process can poll the corresponding Ring group to obtain the message.
• Step 5050: after processing a message according to business requirements, each application process may set the first address, in the corresponding Ring, of the memory block storing the message to the idle state, so that it can continue to be used.
  • FIG. 9 is a schematic diagram of another message forwarding method provided by an embodiment of the present application.
• This exemplary embodiment illustrates a process in which multiple packet receiving threads receive packets for multiple application processes on the Host and multiple application processes in multiple containers.
• An application process that needs to receive packets may be on the Host or in a container, so there are scenarios where application processes on both the Host and in containers need to receive packets.
  • the packet forwarding method provided by this exemplary embodiment includes the following processes:
  • Step 6010 The multiple application processes Pi to Pk and the application processes Pn1 to Pnn in the container n send a packet receiving request according to their packet receiving requirements.
  • the packet receiving request may carry the following information: the number and size of rings requested to be created, the maximum length of the received message, and the characteristic information of the received message, etc.
  • Step 6020 The Job of the packet receiving process P0 creates a packet receiving channel for each application process according to the packet receiving request of each application process.
• Among them, the correspondence between packet receiving threads and application processes can be specified.
• For example, packet receiving thread 1 can be used to receive packets for the application processes Pi to Pk and the application process Pn1, and packet receiving thread s can be used to receive packets for the application process Pnn.
  • s may be an integer greater than or equal to 1.
  • the application process on the host can use either shared memory or reserved physical memory with continuous addresses to create the ring group, but the process in the container can only use the reserved physical memory with continuous addresses to create the ring group.
  • the application process Pi corresponds to the Ring group Di
  • the application process Pk corresponds to the Ring group Dk
  • the application process Pn1 corresponds to the Ring group Dn1
• the application process Pnn corresponds to the Ring group Dnn, where the number of Rings in each Ring group can be the same (for example, m+1, where m is an integer greater than or equal to 0) or different.
  • this application is not limited to this.
  • Step 6030 Based on the created packet receiving channel, the packet receiving thread (for example, packet receiving threads 1 to s) may receive packets for the corresponding application process.
• For the relevant description of this step, please refer to step 1010 to step 1070 in FIG. 4, so it will not be repeated here.
  • Step 6040 Each application process can poll the corresponding Ring group to obtain a message.
• Step 6050: after processing a message according to business requirements, each application process may set the first address, in the corresponding Ring, of the memory block storing the message to the idle state, so that it can continue to be used.
  • FIG. 10 is a schematic diagram of another message forwarding method provided by an embodiment of the present application.
  • This exemplary embodiment illustrates the implementation of unified packet reception for multiple application processes in a physical memory replacement manner in a container.
• The hardware may already support virtualization technology; by virtualizing the hardware network port into individual objects, packets can be received directly from the network port's Media Access Control (MAC) interface in a container. For this scenario, a packet receiving thread can reside in the container to receive packets for each application process.
  • the packet forwarding method provided by this exemplary embodiment includes the following processes:
  • Step 7010 The multiple application processes P1 to Pm in the container send a packet receiving request according to their packet receiving requirements.
  • the packet receiving request may carry the following information: the number and size of rings requested to be created, the maximum length of the received message, and the characteristic information of the received message, etc.
  • Step 7020 The Job of the packet receiving process P0 creates a packet receiving channel for each application process according to the packet receiving request of each application process.
  • the creation process of the packet receiving channel can refer to the description of FIG.
  • the application process P1 corresponds to the ring group D1
• the application process Pm corresponds to the Ring group Dm, where the number of Rings in each Ring group can be the same (for example, a+1, where a is an integer greater than or equal to 0) or different.
  • this application is not limited to this.
  • Step 7030 Based on the created packet receiving channel, the packet receiving thread can receive packets for the corresponding application process.
• For the relevant description of this step, please refer to step 1010 to step 1070 in FIG. 4, so it will not be repeated here.
  • Step 7040 Each application process may poll the corresponding ring group to obtain a message.
• Step 7050: after processing a message according to business requirements, each application process may set the first address, in the corresponding Ring, of the memory block storing the message to the idle state, so that it can continue to be used.
  • a rate limiting process for each business flow may be added at the packet receiving thread.
  • Step 8010 Each application process sends a packet receiving request according to its own packet receiving requirements.
• the packet receiving request can carry the following information: the number and size of the Rings requested to be created, the maximum length of received messages, the characteristic information of received messages, and the rate limit value of the received service flow within the speed limit duration (for example, a per-second speed limit value).
  • Step 8020 The Job of the packet receiving process creates a packet receiving channel for each application process according to the packet receiving request of each application process, and records the rate limit value of each type of service flow.
  • the creation process of the packet receiving channel can refer to the description of FIG.
  • Step 8030 Based on the created packet receiving channel, the packet receiving thread can receive packets for the corresponding application process.
  • Each time the packet receiving thread receives a packet, it updates the flow statistics count of the service flow to which the packet belongs. For example, the flow statistics count corresponding to the service flow to which the packet belongs is incremented by one (the initial value of the flow statistics count is 0); if the flow statistics count of the service flow within the rate limit duration is greater than the rate limit value of the service flow, packet loss processing is performed. Each time the rate limit duration elapses (for example, after one second), the packet receiving thread resets the flow statistics count of the business flow to 0, thereby completing the rate limiting flow for the business flow.
  • Alternatively, each time the packet receiving thread receives a packet, the flow statistics count corresponding to the service flow to which the packet belongs is decremented by one (the initial value of the flow statistics count is the rate limit value); if the flow statistics count of the service flow within the rate limit duration is equal to 0, packet loss processing is performed. Each time the rate limit duration elapses, the packet receiving thread resets the flow statistics count of the business flow to the rate limit value, thereby completing the rate limiting flow for the business flow.
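The count-up scheme above can be sketched as follows; this is a minimal illustration with names chosen for clarity (the text does not prescribe a data structure), and the count-down variant differs only in starting at the rate limit value and dropping when the count reaches zero.

```c
#include <assert.h>
#include <stdint.h>

/* Per-flow rate limiting state: the count-up variant described above.
 * The count starts at 0, is incremented for every received packet, and
 * packets are dropped once it exceeds the per-window rate limit value. */
typedef struct {
    uint32_t count; /* packets seen in the current rate limit window */
    uint32_t limit; /* rate limit value within the window            */
} flow_limiter;

/* Returns 1 if the packet is accepted, 0 if it must be dropped. */
int flow_accept(flow_limiter *f)
{
    f->count++;
    return f->count <= f->limit;
}

/* Called each time the rate limit duration (e.g. one second) elapses. */
void flow_window_reset(flow_limiter *f)
{
    f->count = 0;
}
```
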
  • For the description of the packet receiving process in this step, please refer to step 1010 to step 1070 in FIG. 4, so it will not be repeated here.
  • Step 8040 Each application process may poll the corresponding ring group to obtain the message.
  • Step 8050: After processing the message according to the business requirements, each application process may mark the first address of the corresponding memory block in the corresponding Ring as idle so that it can continue to be used.
  • FIG. 11 is a schematic diagram of another message forwarding method provided by an embodiment of the present application. This exemplary embodiment illustrates a transmission process of implementing packet zero copy transmission through a packet sending thread, a memory block address pool, a first packet sending queue, and a second packet sending queue using memory address replacement.
  • Each Block represents a segment of physically contiguous memory, and the Block first address is the first address of that contiguous segment.
  • Multiple memory slices may be reserved, and multiple memory blocks may be cut out of these memory slices, as long as the physical addresses inside each memory block cut out of them are contiguous.
  • The memory block address pool is used to store the first addresses of memory blocks; the memory block address pool can be a FIFO queue, linked list, array, or ring queue; however, this application is not limited to this.
  • The second packet sending queue may be a FIFO structure or a ring queue structure; however, this application is not limited to this.
  • A ring group that supports priority scheduling is created (that is, the above-mentioned multiple first packet sending queues); the ring group may include v Rings, and v may be an integer greater than or equal to 1.
  • the memory address replacement process occurs between the memory block address pool and the ring group used to send packets.
  • A total of n-k Block first addresses, for parts k+1 to n, may be injected into the memory block address pool.
  • Put Block 1 to Block i (a total of i Block first addresses) in Ring 0; put Block j to Block k (a total of k-j+1 Block first addresses) in Ring v; the Block first addresses in the other Rings of the ring group are injected in the same way as those in Ring 0 and Ring v. Initially, the use state of all the Block first addresses injected into the ring group is idle.
  • Here i, j, and k are all integers greater than 1.
  • The number of Block first addresses injected into each Ring and into the memory block address pool may be the same or different, which is not limited in this application. The sum of the number of all Block first addresses in the ring group plus the number of all Block first addresses in the memory block address pool can be n.
  • the packet forwarding method of this exemplary embodiment may include steps 9010 to 9060.
  • Step 9010: The application process P11 in container 1 takes a Block first address marked as idle from a ring queue (for example, ring v) in the corresponding ring group, stores the packet to be sent by application process P11 into the memory block indicated by that Block first address, and puts information such as the Block first address caching the message, the message length, and the offset information of the message relative to the Block first address into the ring queue (that is, the Ring).
  • Step 9020: The packet sending thread polls ring v and reads information such as the Block first address, the packet length, and the offset information of the packet relative to the Block first address from ring v.
  • Step 9030: After reading information such as the Block first address, the packet length, and the offset information of the packet relative to the Block first address from ring v, the packet sending thread takes an idle Block first address from the memory block address pool and puts it into ring v.
  • Step 9040 The packet sending thread puts information such as the first address of the block buffering the message, the length of the message, and the offset information of the message based on the first address of the block into the second packet sending queue.
  • Step 9050: Frame management reads information such as the Block first address, the packet length, and the offset information of the packet relative to the Block first address from the second packet sending queue, and then reads the message from the corresponding Block according to the above information.
  • Step 9060 Send the message externally through the network card.
  • Step 9070: After frame management sends the message, the first address of the Block storing the message is returned to the memory block address pool for subsequent use.
  • In an exemplary embodiment, the packet sending thread may combine the Block first address of the packet, the packet length, the offset information of the packet relative to the Block first address, the queue identifier of the second packet sending queue (for example, QueueID), and the pool identifier (for example, PoolID) of the memory block address pool to which the Block first address needs to be released after the packet is sent into a descriptor, which can be sent by calling the network card driver interface.
  • After the message is sent, the network card driver returns the Block first address corresponding to the physical address of the cached message to the memory block address pool.
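A minimal sketch of the descriptor described above; the field names, widths, and the constructor are illustrative assumptions, since the text only lists which pieces of information the descriptor combines before it is handed to the network card driver interface.

```c
#include <stdint.h>

/* Send descriptor combining the fields listed above. The field layout
 * is illustrative; the text does not fix a binary format. */
typedef struct {
    uint64_t block_addr; /* first address of the Block caching the packet    */
    uint32_t pkt_len;    /* packet length                                    */
    uint32_t pkt_off;    /* packet offset relative to the Block first address */
    uint16_t queue_id;   /* QueueID of the second packet sending queue       */
    uint16_t pool_id;    /* PoolID of the pool the Block address is released to */
} send_desc;

/* Builds a descriptor from the values the packet sending thread has cached. */
send_desc make_send_desc(uint64_t addr, uint32_t len, uint32_t off,
                         uint16_t queue, uint16_t pool)
{
    send_desc d = { addr, len, off, queue, pool };
    return d;
}
```
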
  • The packet forwarding device provided in this embodiment includes: a first packet receiving module 1201, configured to take the memory block information stored in the memory block address pool out of the memory block address pool, store the message received by the input and output hardware (for example, the network card) in the memory block indicated by the memory block information, obtain the description information of the message according to the storage location of the message in the memory block, and put the description information of the message into the first packet receiving queue;
  • the second packet receiving module 1202 is configured to read the description information from the first packet receiving queue through the packet receiving thread, put a piece of memory block information marked as idle stored in the second packet receiving queue into the memory block address pool, and put the description information read from the first packet receiving queue into the second packet receiving queue;
  • the third packet receiving module 1203 is configured to read the description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtain the message according to the description information read from the second packet receiving queue, and mark the memory block information in the second packet receiving queue indicating the memory block where the message is located as idle.
  • The second packet receiving module 1202 may also be configured such that, when there is no memory block information marked as idle in the second packet receiving queue, the packet receiving thread puts the memory block information corresponding to the description information read from the first packet receiving queue back into the memory block address pool.
  • The second packet receiving module 1202 may be further configured to read, through the packet receiving thread and according to the description information read from the first packet receiving queue, the message cached at the physical address indicated by the description information, and to determine, by parsing the read message, the second packet receiving queue corresponding to the read message.
  • the second packet receiving module 1202 may include a packet receiving thread and a Job (or a channel management thread).
  • FIG. 13 is a schematic diagram of another message forwarding apparatus provided by an embodiment of the present application.
  • the packet forwarding apparatus provided in this embodiment may further include: a second packet receiving queue creation and management module 1204 configured to receive a packet receiving request of an application process; according to the application The packet receiving request of the process creates one or more second packet receiving queues for the application process; and returns the creation information of the second packet receiving queue corresponding to the application process to the application process.
  • The second packet receiving queue creation and management module 1204 may be configured to create second packet receiving queues and provide interfaces for reading, writing, freeing, and replacing. If the application process is in a container, due to differences such as the NameSpace, a segment of physically contiguous memory can be used to create a second packet receiving queue group. If the application process is not in a container, either a segment of physically contiguous memory or Linux shared memory can be used to create a second packet receiving queue group (for example, a Ring group). In addition, each ring group corresponds to one application process, so that packet receiving by application processes can be added by adding ring groups and memory.
  • The message forwarding apparatus of this embodiment may further include: a memory block address pool creation module 1205, which is configured to, after receiving a packet receiving request from an application process, create a corresponding memory block address pool; or, create one or more memory block address pools based on the types of packets received by the input and output hardware (network card). Multiple memory block address pools can be created according to business requirements.
  • The packet forwarding apparatus of this embodiment may further include: a first packet receiving queue creation and management module 1206, which is configured to, after receiving a packet receiving request from an application process, create a corresponding first packet receiving queue for the application process; or, according to the type of the input and output hardware (network card), create one or more first packet receiving queues.
  • The packet forwarding apparatus of this embodiment may further include: a physical memory allocation management module 1207, configured to, after receiving a packet receiving request from an application process, allocate to the application process at least one memory slice with contiguous physical addresses, cut multiple memory blocks out of the memory slice, inject the memory block information corresponding to the multiple memory blocks into the memory block address pool and the second packet receiving queue corresponding to the application process respectively, and mark the memory block information stored in the second packet receiving queue as idle; or, reserve at least one memory slice with contiguous physical addresses, and after receiving the packet receiving request of the application process, cut multiple memory blocks out of the memory slice, inject the memory block information corresponding to the multiple memory blocks into the memory block address pool and the second packet receiving queue corresponding to the application process respectively, and mark the memory block information stored in the second packet receiving queue as idle.
  • the memory block information injected into the memory block address pool (for example, the first address or identifier of the memory block) and the memory block information injected into the second packet receiving queue are not duplicated.
  • The physical memory allocation management module 1207 can be configured to allocate a segment of memory with contiguous physical addresses to the application processes and the driver; when there are many application processes, segmented management can be supported.
  • The network device 1400 (e.g., router, switch, etc.) provided in this embodiment includes: input/output hardware (e.g., network card) 1403, a processor 1402, and a memory 1401; the input/output hardware 1403 is configured to receive or send messages;
  • the memory 1401 is configured to store a message forwarding program, which is executed by the processor 1402 to implement the steps of the above message forwarding method, such as the steps shown in FIG. 3.
  • The structure shown in FIG. 14 is only a schematic diagram of a partial structure related to the solution of the present application, and does not constitute a limitation on the network device 1400 to which the solution of the present application is applied.
  • The network device 1400 may include more or fewer components, or combine some components, or have different component arrangements.
  • The memory 1401 may be configured to store software programs and modules of application software, such as program instructions or modules corresponding to the message forwarding method in this embodiment, and the processor 1402 runs the software programs and modules stored in the memory 1401 to perform various functional applications and data processing, such as implementing the packet forwarding method provided in this embodiment.
  • the memory 1401 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • an embodiment of the present application further provides a computer-readable medium that stores a message forwarding program.
  • the message forwarding program When executed, the steps of the above message forwarding method are implemented, such as the steps shown in FIG. 3.
  • All or some of the steps, systems, and functional modules/units in the method disclosed above may be implemented as software, firmware, hardware, and appropriate combinations thereof.
  • the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be composed of several physical The components are executed in cooperation.
  • Some or all components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit.
  • Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storing information such as computer readable instructions, data structures, program modules, or other data.
  • Computer storage media include, but are not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired information and can be accessed by a computer.
  • communication media typically contains computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism, and may include any information delivery media.


Abstract

Disclosed herein are a message forwarding method and apparatus, a network device, and a computer-readable medium. The message forwarding method includes: taking memory block information stored in a memory block address pool out of the memory block address pool, storing a message received by input/output hardware into the memory block indicated by the memory block information, obtaining description information of the message according to the storage location of the message within the memory block, and putting the description information of the message into a first packet receiving queue; reading the description information from the first packet receiving queue through a packet receiving thread; storing, through the packet receiving thread, one piece of memory block information marked as idle in a second packet receiving queue into the memory block address pool, and putting the description information read from the first packet receiving queue into the second packet receiving queue; and reading the description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtaining the message according to the description information read from the second packet receiving queue, and marking the memory block information in the second packet receiving queue that indicates the memory block where the message is located as idle.

Description

Message forwarding method, apparatus, network device, and computer-readable medium
This application claims priority to Chinese patent application No. 201811546772.X filed with the China Patent Office on December 18, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of communication technologies, and for example to a message forwarding method, apparatus, network device, and computer-readable medium.
Background
With the arrival of the fifth-generation mobile communication system (5G) era, higher requirements are placed on the transmission rate and performance of communication networks. This requires network nodes to process messages at an ever faster rate during network data transmission; reflected on devices such as routers and switches, messages must be transmitted and processed quickly within the device's internal network.
Summary
Embodiments of the present application provide a message forwarding method, apparatus, network device, and computer-readable medium, which can increase the transmission rate of messages within a network device.
An embodiment of the present application provides a message forwarding method, including: taking memory block information stored in a memory block address pool out of the memory block address pool, storing a message received by input/output hardware into the memory block indicated by the memory block information, obtaining description information of the message according to the storage location of the message within the memory block, and putting the description information into a first packet receiving queue; reading the description information from the first packet receiving queue through a packet receiving thread; storing, through the packet receiving thread, one piece of memory block information marked as idle in a second packet receiving queue into the memory block address pool, and putting the description information read from the first packet receiving queue into the second packet receiving queue; and reading the description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtaining the message according to the description information read from the second packet receiving queue, and marking the memory block information in the second packet receiving queue that indicates the memory block where the message is located as idle; wherein the memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue do not overlap.
An embodiment of the present application provides a message forwarding apparatus, including: a first packet receiving module, configured to take the memory block information stored in the memory block address pool out of the memory block address pool, store the message received by the input/output hardware into the memory block indicated by the memory block information, obtain the description information of the message according to the storage location of the message within the memory block, and put the description information of the message into the first packet receiving queue; a second packet receiving module, configured to read the description information from the first packet receiving queue through the packet receiving thread, store one piece of memory block information marked as idle in the second packet receiving queue into the memory block address pool through the packet receiving thread, and put the description information read from the first packet receiving queue into the second packet receiving queue; and a third packet receiving module, configured to read the description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtain the message according to the description information read from the second packet receiving queue, and mark the memory block information in the second packet receiving queue that indicates the memory block where the message is located as idle; wherein the memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue do not overlap.
An embodiment of the present application provides a network device, including: input/output hardware, a processor, and a memory; the input/output hardware is configured to receive or send messages; the memory is configured to store a message forwarding program, and when the message forwarding program is executed by the processor, the above message forwarding method is implemented.
An embodiment of the present application provides a computer-readable medium storing a message forwarding program, and when the message forwarding program is executed, the above message forwarding method is implemented.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a Linux kernel socket (Socket) packet receiving technique;
FIG. 2 is a schematic diagram of a zero-copy packet receiving technique;
FIG. 3 is a flowchart of a message forwarding method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an example of a message forwarding method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an example of another message forwarding method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an example of another message forwarding method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of an example of another message forwarding method provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of an example of another message forwarding method provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an example of another message forwarding method provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of an example of another message forwarding method provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of an example of another message forwarding method provided by an embodiment of the present application;
FIG. 12 is a schematic diagram of a message forwarding apparatus provided by an embodiment of the present application;
FIG. 13 is a schematic diagram of another message forwarding apparatus provided by an embodiment of the present application;
FIG. 14 is a schematic diagram of a network device provided by an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the accompanying drawings.
The steps shown in the flowcharts of the drawings may be executed in a computer system such as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one herein.
FIG. 1 is a schematic diagram of a Linux kernel Socket packet receiving technique. As shown in FIG. 1, the Linux kernel Socket packet receiving process may include: a message enters the network card driver from the network card; the network card driver notifies a kernel thread via an interrupt to process the message in the network protocol stack, a flow that passes through the Internet Protocol (IP) layer and the Transmission Control Protocol (TCP)/User Datagram Protocol (UDP) layer; and after processing the message, the network protocol stack notifies the application layer (for example, application processes P1 and Pn) to receive the packet.
Although the Socket packet receiving technique shown in FIG. 1 has good generality and supports multi-process packet receiving without restriction, it has the following drawbacks: the path from the kernel to the application layer passes through the IP layer and the TCP/UDP layer, which adds message copies, and additional message copies severely degrade packet receiving performance; and when an application process inside a container needs to receive packets, constrained by restrictions such as the namespace (NameSpace), message transmission must rely on the container network, which also adds message copies. It can thus be seen that message copying in the Linux kernel protocol stack is an important factor limiting the message transmission rate.
FIG. 2 is a schematic diagram of a zero-copy packet receiving technique. As shown in FIG. 2, the zero-copy packet receiving process may include: a message arrives from the network card and is delivered to frame management; frame management parses, classifies, or hashes (Hash) the message and then delivers it to a specific queue; queue management is responsible for allocating queues to application processes (for example, allocating queue 1 to application process P1 and queue n to application process Pn), where each application process must be allocated at least one queue to avoid concurrency problems; and each application process receives and processes messages from its designated queue.
The message zero-copy technique shown in FIG. 2 can map the network card driver directly into the application process, so that the application process can directly access the message queues and thereby achieve zero-copy of messages. The network card driver may reside in the kernel or directly in the application process; the application process interacts directly with the driver queues, and during this interaction a series of issues must be settled, such as the queue number, pool (Pool) number, and priority scheduling policy used by the application process. If multiple application processes all need to receive packets, each of them must map and manage the network card driver and determine the queue number, Pool number, and priority scheduling policy; since different application processes are generally maintained by different users, this approach undoubtedly increases workload and wastes manpower. Moreover, this scheme has problems in scenarios where multiple application processes or containers send and receive messages. For example, when there are many application processes, the hardware resources of the network card are insufficient, so the number of application processes is limited; some network cards do not support priority scheduling, or their scheduling is not flexible enough; when a process inside a container receives packets, constrained by restrictions such as the NameSpace, message transmission must rely on the container network, adding message copies; and every application process has to operate the user-mode driver directly, which brings unnecessary workload.
The embodiments of the present application provide a message forwarding method, apparatus, network device, and computer-readable medium, which implement message zero-copy by having a packet receiving thread pass memory addresses among a memory block address pool, a first packet receiving queue, and a second packet receiving queue; no copy is added during message transfer inside the network device, thereby increasing the message transmission rate inside the network device. Moreover, the embodiments of the present application allow multiple application processes to focus on the application when receiving packets, without considering the details of the underlying hardware driver, improving generality and work efficiency and reducing maintenance costs without affecting performance. Different second packet receiving queues correspond to different application processes, and packet receiving by application processes can be added simply by adding second packet receiving queues and memory, overcoming the limit on the number of application processes; priorities can be distinguished by adding second packet receiving queues, thereby implementing priority scheduling of messages; and adding packet receiving threads with affinity or exclusivity configured for fast packet receiving can resolve problems such as indiscriminate packet loss caused by limited hardware resources or by hardware that does not support priority scheduling or schedules inflexibly.
FIG. 3 is a flowchart of a message forwarding method provided by an embodiment of the present application. As shown in FIG. 3, the message forwarding method provided by this embodiment is applied to a network device and is used to implement message transmission inside the network device from the input/output hardware of the network device (for example, the network card) to application processes. The message forwarding method provided by this embodiment can be applied to network devices with high requirements for multi-process or multi-thread operation, containerization, generality, and message transmit/receive rates, such as routers and switches. However, this application is not limited to this.
As shown in FIG. 3, the message forwarding method provided by this embodiment includes the following steps:
Step S1010: Take the memory block information stored in the memory block address pool out of the memory block address pool, store the message received by the input/output hardware into the memory block indicated by the memory block information, obtain the description information of the message according to the storage location of the message within the memory block, and put the description information of the message into the first packet receiving queue.
Step S1020: Read the description information from the first packet receiving queue through the packet receiving thread.
Step S1030: Store one piece of memory block information marked as idle in the second packet receiving queue into the memory block address pool through the packet receiving thread, and put the description information read from the first packet receiving queue into the second packet receiving queue.
Step S1040: Read the description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtain the message according to the description information read from the second packet receiving queue, and mark the memory block information in the second packet receiving queue that indicates the memory block where the obtained message is located as idle.
In this embodiment, the memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue do not overlap.
In an exemplary embodiment, the memory block information stored in the memory block address pool and the second packet receiving queue may include a memory block first address or a memory block identifier (Identifier, ID); a memory block is a segment of physically contiguous memory used to cache messages received by the input/output hardware. For example, pre-allocated memory block first addresses may be injected into the memory block address pool and the second packet receiving queue, with no duplication between the memory block first addresses injected into the memory block address pool and those injected into the second packet receiving queue. Alternatively, pre-allocated memory block IDs may be injected into the memory block address pool and the second packet receiving queue, again with no duplication between the two.
In an exemplary embodiment, the description information of a message may include: the memory block first address of the memory block caching the message, the message length, and the offset information of the message relative to the memory block first address. However, this application is not limited to this.
In an exemplary embodiment, the second packet receiving queue may be a ring queue; every ring queue is a lock-free queue, thereby achieving lock-free operation. However, this application is not limited to this. In other embodiments, the second packet receiving queue may be a first-in first-out (First Input First Output, FIFO) queue.
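Since each second packet receiving queue has a single writer (the packet receiving thread) and a single reader (the corresponding application process), it can be realized as a single-producer/single-consumer lock-free ring. The following is a minimal sketch under that assumption; the capacity and slot type are illustrative choices, not prescribed by the text.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Minimal single-producer/single-consumer lock-free ring. Capacity must
 * be a power of two so the index mask works; RING_SIZE is illustrative. */
#define RING_SIZE 8u

typedef struct {
    _Atomic uint32_t head;     /* next slot the consumer reads  */
    _Atomic uint32_t tail;     /* next slot the producer writes */
    uintptr_t slot[RING_SIZE]; /* e.g. memory block first addresses */
} spsc_ring;

/* Returns 1 on success, 0 when the ring is full (caller drops/recycles). */
int ring_push(spsc_ring *r, uintptr_t v)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SIZE)
        return 0;
    r->slot[tail & (RING_SIZE - 1)] = v;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 1;
}

/* Returns 1 and stores the value in *v on success, 0 when empty. */
int ring_pop(spsc_ring *r, uintptr_t *v)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return 0;
    *v = r->slot[head & (RING_SIZE - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}
```
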
In an exemplary embodiment, before step S1010, the message forwarding method of this embodiment may further include: after receiving a packet receiving request from an application process, allocating to the application process at least one memory slice with contiguous physical addresses, cutting multiple memory blocks out of the memory slice, storing the memory block information (for example, memory block first addresses or IDs) corresponding to the multiple memory blocks into the memory block address pool and the second packet receiving queue corresponding to the application process respectively, and marking the memory block information stored in the second packet receiving queue as idle; or, reserving at least one memory slice with contiguous physical addresses, and after receiving the packet receiving request of the application process, cutting multiple memory blocks out of the memory slice, storing the memory block information (for example, memory block first addresses or IDs) corresponding to the multiple memory blocks into the memory block address pool and the second packet receiving queue corresponding to the application process respectively, and marking the memory block information stored in the second packet receiving queue as idle. The memory block information injected into the memory block address pool and the memory block information injected into the second packet receiving queue do not overlap. Exemplarily, the address allocation of the memory slice and the injection of memory block first addresses or IDs into the memory block address pool and the second packet receiving queue may be implemented by the packet receiving process. However, this application is not limited to this.
In this embodiment, every memory block cut out of a physically contiguous memory slice can be used to cache a message, and the physical addresses inside every memory block are contiguous. When one memory slice does not provide enough contiguous physical addresses, a sufficient number of memory blocks can be cut out of multiple memory slices, as long as the physical addresses inside each memory block cut out of a slice are contiguous.
In an exemplary embodiment, after step S1020, the message forwarding method of this embodiment may further include: when there is no memory block information marked as idle in the second packet receiving queue, putting the memory block information corresponding to the description information read from the first packet receiving queue back into the memory block address pool through the packet receiving thread. When there is no idle memory block information in the second packet receiving queue (that is, the second packet receiving queue is full of message description information), the packet receiving thread can reclaim the corresponding memory block information and thereby discard the corresponding message.
In an exemplary embodiment, after step S1020, the message forwarding method of this embodiment may further include: reading, through the packet receiving thread and according to the description information read from the first packet receiving queue, the message cached at the physical address indicated by the description information, and determining, by parsing the read message, the second packet receiving queue corresponding to the read message; accordingly, step S1030 may include: putting one piece of memory block information marked as idle in the second packet receiving queue corresponding to the read message into the memory block address pool through the packet receiving thread, and putting the description information read from the first packet receiving queue into that second packet receiving queue.
In an exemplary embodiment, after reading the message cached at the physical address indicated by the description information through the packet receiving thread according to the description information read from the first packet receiving queue, and determining the second packet receiving queue corresponding to the read message by parsing it, the message forwarding method of this embodiment may further include: when there is no memory block information marked as idle in the second packet receiving queue corresponding to the message read by the packet receiving thread, putting the memory block information corresponding to the description information read from the first packet receiving queue back into the memory block address pool through the packet receiving thread. When there is no idle memory block information in the second packet receiving queue (that is, the second packet receiving queue is full of message description information), the packet receiving thread can reclaim the corresponding memory block information and thereby discard the corresponding message.
In an exemplary embodiment, before step S1010, the message forwarding method of this embodiment may further include: receiving a packet receiving request from an application process; creating one or more second packet receiving queues for the application process according to its packet receiving request; and returning to the application process the creation information of the second packet receiving queues corresponding to it. One application process may correspond to one second packet receiving queue or to multiple second packet receiving queues (for example, a group of second packet receiving queues), while one second packet receiving queue corresponds to only one application process. The reception of packet receiving requests and the creation of second packet receiving queues may be implemented by the packet receiving process, for example by the packet receiving thread inside the packet receiving process. However, this application is not limited to this. In other embodiments, they may be implemented by another thread inside the packet receiving process (for example, a channel management thread).
In an exemplary embodiment, the packet receiving request of an application process may carry the following information: the number of second packet receiving queues requested to be created, the size of the second packet receiving queues, the maximum length of received messages, the feature information of received messages, and so on. The creation information of the second packet receiving queues corresponding to an application process may include information such as the numbers of those second packet receiving queues. However, this application is not limited to this.
In an exemplary embodiment, reading the message cached at the physical address indicated by the description information through the packet receiving thread according to the description information read from the first packet receiving queue, and determining the second packet receiving queue corresponding to the read message by parsing it, may include: mapping the description information read from the first packet receiving queue to a virtual address, and reading and parsing the message to obtain its feature information; determining, according to the parsed feature information of the message, the application process that is to receive the message; and determining the second packet receiving queue corresponding to the message according to the application process receiving the message and the correspondence between application processes and second packet receiving queues (for example, a one-to-one correspondence).
In an exemplary embodiment, creating one or more second packet receiving queues for an application process according to its packet receiving request may include: creating, according to the packet receiving request, multiple second packet receiving queues that support priority scheduling for the application process, where each priority level of the messages to be received by the application process corresponds to one or more second packet receiving queues. For example, if the messages to be received by an application process have two priority levels, at least two second packet receiving queues (for example, queue 1 and queue 2) can be created for it, with one priority level corresponding to at least one second packet receiving queue (for example, queue 1) and the other corresponding to at least one other queue (for example, queue 2); in other words, messages belonging to one priority level can be received through at least one second packet receiving queue (for example, queue 1), and messages belonging to the other priority level through at least one other second packet receiving queue (for example, queue 2).
In an exemplary embodiment, reading the message cached at the physical address indicated by the description information through the packet receiving thread, and determining the second packet receiving queue corresponding to the read message by parsing it, may include: mapping the description information read from the first packet receiving queue to a virtual address, and reading and parsing the message to obtain its feature information; determining, according to the parsed feature information, the application process receiving the message and the priority level to which the message belongs; and determining the second packet receiving queue corresponding to the message according to the application process receiving the message, the priority level of the message, and the correspondence between the application process's second packet receiving queues and the priority levels. When the second packet receiving queues corresponding to an application process support priority scheduling, the application process can receive messages from its queues in a certain ratio, thereby implementing priority scheduling of messages; for example, it can preferentially receive higher-priority messages from the second packet receiving queue corresponding to the higher priority.
In an exemplary embodiment, before step S1010, the message forwarding method of this embodiment may further include: after receiving a packet receiving request from an application process, creating a corresponding memory block address pool for the application process; or creating one or more memory block address pools according to the types of messages received by the input/output hardware. An independent memory block address pool may be created for each application process, for example according to its packet receiving request, to improve that process's packet receiving performance; alternatively, multiple application processes may share one or more memory block address pools, which may be created in advance. Exemplarily, multiple memory block address pools may be created according to the types of messages received by the input/output hardware (for example, message size); for instance, two memory block address pools may be created, where the memory blocks indicated by the memory block information in one pool cache messages smaller than a preset size, and those in the other pool cache messages whose size is greater than or equal to the preset size. The creation of memory block address pools may be implemented by the packet receiving process. However, this application is not limited to this.
In an exemplary embodiment, before step S1010, the message forwarding method of this embodiment may further include: after receiving a packet receiving request from an application process, creating a corresponding first packet receiving queue for the application process; or creating one or more first packet receiving queues according to the type of the input/output hardware. An independent first packet receiving queue may be created for each application process, for example according to its packet receiving request, to improve that process's packet receiving performance; alternatively, multiple application processes may share one or more first packet receiving queues, which may be created in advance. Exemplarily, the first packet receiving queues may be created according to the type of input/output hardware (network card); for example, one first packet receiving queue may be created when the network card does not support priority scheduling, and multiple first packet receiving queues may be created when it does. The creation of first packet receiving queues may be implemented by the packet receiving process. However, this application is not limited to this.
In an exemplary embodiment, before step S1010, the message forwarding method of this embodiment may further include: after receiving a packet receiving request from an application process, creating a corresponding packet receiving thread for the application process; or, after receiving the packet receiving request, selecting one of the already created packet receiving threads as the packet receiving thread corresponding to the application process. A separate packet receiving thread may be created for each application process, or multiple application processes may share one packet receiving thread. For example, after receiving a packet receiving request from an application process, if the process can share a packet receiving thread with other application processes, one of the packet receiving threads created for other processes can be selected as its packet receiving thread; for instance, a default packet receiving thread can be provided to multiple application processes. The creation of packet receiving threads may be implemented by the packet receiving process. However, this application is not limited to this.
In an exemplary embodiment, multiple application processes may correspond to only one packet receiving thread, or to multiple packet receiving threads. Messages may be delivered to multiple application processes by a single packet receiving thread, or by multiple packet receiving threads; for example, two packet receiving threads may deliver messages to five application processes, with one thread serving three of them and the other serving the remaining two.
In an exemplary embodiment, one or more application processes may be located inside containers. The message forwarding method provided by this embodiment is applicable to scenarios where application processes inside containers need to receive packets. When a packet receiving thread on the host (Host) receives packets for an application process inside a container, because of differences such as the NameSpace, a segment of physically contiguous memory must be used to create the second packet receiving queues.
In an exemplary embodiment, both the application process and the packet receiving thread may be located inside a container. The message forwarding method provided by this embodiment is applicable to scenarios where packets are received directly from the input/output hardware inside a container.
In an exemplary embodiment, the message forwarding method of this embodiment may further include: setting the affinity or exclusivity of the packet receiving thread with respect to central processing unit (Central Processing Unit, CPU) resources. The CPU affinity of the packet receiving thread can be set, or a CPU resource can be exclusively occupied via control groups (Control Groups, cgroup) or other exclusion techniques, thereby improving packet receiving performance. For example, when the network card does not support priority scheduling and only one first packet receiving queue is created, the CPU affinity of the packet receiving thread can be set so that it exclusively occupies a CPU resource, reducing the probability of indiscriminate packet loss. However, this application is not limited to this. When there are multiple first packet receiving queues, the CPU affinity of the packet receiving thread can also be set to improve packet receiving performance.
In an exemplary embodiment, the message forwarding method of this embodiment may further include: after the packet receiving thread reads a message according to the description information read from the first packet receiving queue, updating the flow statistics count of the service flow to which the message belongs, and discarding the message when the flow statistics count within the rate limit duration satisfies a set condition; and resetting the flow statistics count to its initial value each time the rate limit duration elapses. The message forwarding method provided by this embodiment is applicable to scenarios where the traffic of a service flow is excessive.
In an exemplary embodiment, the initial value of the flow statistics count may be 0; after the packet receiving thread reads a message, the flow statistics count of the service flow to which the message belongs may be incremented by one, and the message is discarded when the flow statistics count within the rate limit duration (for example, one second) satisfies the set condition (for example, exceeds the rate limit value of the service flow); each time the rate limit duration elapses, the flow statistics count is reset to its initial value (here, 0).
In an exemplary embodiment, the initial value of the flow statistics count of a service flow may be the rate limit value of that service flow; after the packet receiving thread reads a message, the flow statistics count of the service flow to which the message belongs may be decremented by one, and the message is discarded when the flow statistics count within the rate limit duration (for example, one second) satisfies the set condition (for example, the flow statistics count within the rate limit duration is 0); each time the rate limit duration elapses, the flow statistics count is reset to its initial value (here, the rate limit value).
In an exemplary embodiment, the message forwarding method of this embodiment may further include: taking memory block information marked as idle (for example, a memory block first address or ID) out of the first packet sending queue, storing the message to be sent by the application process into the memory block indicated by the memory block information, obtaining the description information of the message according to the storage location of the message within the memory block, and putting the description information of the message into the first packet sending queue; reading the description information from the first packet sending queue through a packet sending thread, putting one piece of memory block information marked as idle (for example, a memory block first address or ID) stored in the memory block address pool into the first packet sending queue, and putting the description information read from the first packet sending queue into the second packet sending queue; and reading the description information from the second packet sending queue, obtaining the message according to the description information read from the second packet sending queue, sending the obtained message through the input/output hardware (for example, the network card), and after sending it, putting the memory block information indicating the memory block where the obtained message is located back into the memory block address pool. In this embodiment, messages can be sent by having the packet sending thread pass memory addresses among the memory block address pool, the first packet sending queue, and the second packet sending queue. However, this application is not limited to this. In other embodiments, the message sending process may not adopt the above manner.
FIG. 4 is a schematic diagram of an example of a message forwarding method provided by an embodiment of the present application. This exemplary embodiment illustrates a receiving flow that achieves zero-copy message delivery through a packet receiving thread, a memory block address pool, a first packet receiving queue, and second packet receiving queues using memory address replacement. The second packet receiving queue is described using a ring queue as an example; that is, one second packet receiving queue is one ring queue (hereinafter referred to as a Ring), and each ring queue is a lock-free queue. In this example, one application process corresponds to one group of second packet receiving queues (that is, one Ring group).
In this exemplary embodiment, before messages are received, the following work is performed first:
1) Reserve a physically contiguous memory slice A, from which multiple memory blocks (Blocks) can be cut out for caching messages; the size of memory slice A is greater than or equal to the total number of Blocks (for example, n in FIG. 4, where n is an integer greater than 1) multiplied by the maximum supported message length (for example, 10K bytes (Byte)); each Block represents a segment of physically contiguous memory, and the Block first address is the first address of that contiguous segment.
In other embodiments, multiple memory slices may be reserved and multiple memory blocks cut out of them, as long as the physical addresses inside each memory block cut out of a slice are contiguous.
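Cutting Blocks out of a contiguous slice amounts to simple address arithmetic: Block i starts at the slice base plus i times the block size, so every Block is itself contiguous. The sketch below illustrates this, with the slice base, block size, and index ranges as illustrative parameters.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* First address of Block i within a physically contiguous slice. */
uintptr_t block_first_addr(uintptr_t slice_base, size_t block_size, size_t i)
{
    return slice_base + i * block_size;
}

/* Inject the first addresses of Blocks [lo, hi) into a destination array
 * (standing in for a Ring or the memory block address pool). Returns the
 * number of addresses injected. */
size_t inject_blocks(uintptr_t *dst, uintptr_t slice_base,
                     size_t block_size, size_t lo, size_t hi)
{
    size_t n = 0;
    for (size_t i = lo; i < hi; i++)
        dst[n++] = block_first_addr(slice_base, block_size, i);
    return n;
}
```

Disjoint index ranges for the pool and each Ring guarantee the non-overlap requirement stated above.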
2) Allocate a memory block address pool (hereinafter referred to as Pool) B and a first packet receiving queue (hereinafter referred to as Queue) C to the hardware driver (for example, the network card driver). Pool B is used to store memory block first addresses, and may be a FIFO queue, a linked list, an array, or a ring queue; however, this application is not limited to this. Queue C may be a FIFO structure or a ring queue structure; however, this application is not limited to this.
3) Create a Ring group D that supports priority scheduling (that is, the above-mentioned multiple second packet receiving queues); in this example, Ring group D may include m+1 Rings, where m may be an integer greater than or equal to 0.
4) Create a packet receiving thread (hereinafter referred to as Thread) E for receiving packets from the hardware driver; Thread E can map the first address of the physically contiguous memory slice A to a virtual address for use when parsing messages.
In this embodiment, the zero-copy message delivery process takes place among Pool B, Queue C, Ring group D, and Thread E, and the memory address replacement action occurs between Pool B and Ring group D.
In this exemplary embodiment, as shown in FIG. 4, a total of n-k Block first addresses, for parts k+1 to n, can be injected into Pool B. Block 1 to Block i (a total of i Block first addresses) are placed in Ring 0; Block j to Block k (a total of k-j+1 Block first addresses) are placed in Ring m; the Block first addresses in the other Rings of Ring group D are injected in the same way as those in Ring 0 and Ring m. Initially, the use state of all Block first addresses injected into Ring group D is idle. Here i, j, and k are all integers greater than 1. Throughout the injection of Block first addresses, it is guaranteed that the Block first addresses injected into all the Rings and Pool B do not overlap. The number of Block first addresses injected into each Ring and into Pool B may be the same or different, which is not limited in this application. The sum of the number of all Block first addresses in Ring group D plus the number of all Block first addresses in Pool B may be n.
Based on Pool B, Queue C, Ring group D, and Thread E as set up above, the message forwarding method of this exemplary embodiment may include steps 1010 to 1090.
Step 1010: The network card delivers the received message to frame management.
Step 1020: Frame management parses and classifies/hashes the message, and takes one Block first address out of Pool B to store the message.
Step 1030: Frame management fills the Block first address, the message length, the offset information of the message relative to the Block first address, and other information into a descriptor (corresponding to the above description information), and puts this descriptor into Queue C. There may be one or more Queue C instances; when there are multiple (that is, multiple queues are used), frame management can choose which Queue C to put the message's descriptor into according to the feature information of the message, thereby supporting priority scheduling. This exemplary embodiment is described using a single Queue C as an example.
In other implementations, a separate thread may be provided to take a Block first address out of Pool B to store the message, fill the Block first address, the message length, the offset information of the message relative to the Block first address, and other information into a descriptor, and put this descriptor into Queue C.
Step 1040: Thread E polls a descriptor out of Queue C, takes out the Block first address of the message, the message length, the offset information of the message relative to the Block first address, and other information, and obtains the virtual address of the message through a simple offset calculation. The virtual address can be calculated as: the virtual address of the message equals the Block first address of the message minus the first address of contiguous memory slice A, plus the virtual address to which the first address of contiguous memory slice A is mapped. Thread E can then read and parse the message and, according to the message's feature information (for example, obtained from the message's feature fields), determine the application process to which the message is to be forwarded and the corresponding Ring. Thread E can then, through steps 1050 to 1060, put the Block first address of the message, the message length, the offset of the message relative to the Block first address, and other information into the corresponding Ring by replacing a Block first address. In this example, the Ring corresponding to the message is taken to be Ring m.
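The offset calculation in step 1040 can be written directly; variable names are illustrative, and the packet offset within the Block is added on top of the mapped Block address.

```c
#include <assert.h>
#include <stdint.h>

/* Virtual address of a packet, per step 1040: the Block first (physical)
 * address minus the slice's physical first address, plus the virtual
 * address the slice's first address is mapped to, plus the packet's
 * offset within the Block. */
uintptr_t pkt_virt_addr(uintptr_t block_phys, uintptr_t slice_phys_base,
                        uintptr_t slice_virt_base, uint32_t pkt_offset)
{
    return block_phys - slice_phys_base + slice_virt_base + pkt_offset;
}
```
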
Step 1050: Thread E pops an idle Block first address from Ring m and returns it to Pool B.
Step 1060: Thread E puts the Block first address of the message, the message length, the offset information of the message relative to the Block first address, and other information into the corresponding position in Ring m for application process P11 to read. The position following the popped idle Block first address in Ring m can hold the Block first address of the message, the message length, the offset information of the message relative to the Block first address, and other information.
If there is no idle Block first address in Ring m available for replacement, that is, Ring m holds nothing but message description information (meaning the description information stored in Ring m is full), step 1070 is executed: Thread E can return the Block first address corresponding to this received message to Pool B. This case implements the discard operation when a message cannot be delivered upward, while reclaiming the Block first address.
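Steps 1050 to 1070 can be sketched as follows; the pool is modeled as a simple array-backed stack for illustration, and `idle_slot` stands for the Ring position holding an idle Block first address (NULL when none is free). These names are illustrative, not part of the described design.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of Pool B as a stack of Block first addresses. */
typedef struct { uintptr_t addr[64]; size_t n; } addr_pool;

static void pool_put(addr_pool *p, uintptr_t a) { p->addr[p->n++] = a; }

/* Returns 1 when the packet was handed over to the Ring, 0 when it was
 * dropped because the Ring had no idle Block first address. */
int swap_or_drop(addr_pool *pool, uintptr_t *idle_slot, uintptr_t pkt_block)
{
    if (idle_slot == NULL) {       /* step 1070: no idle address, drop  */
        pool_put(pool, pkt_block); /* recycle the packet's own Block    */
        return 0;
    }
    pool_put(pool, *idle_slot);    /* step 1050: idle address -> Pool B */
    *idle_slot = pkt_block;        /* step 1060: descriptor into Ring   */
    return 1;
}
```
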
When only one application process has a packet receiving requirement, and that application process corresponds to a single Ring or its Ring group has no priorities, after Thread E polls a descriptor out of Queue C and takes out the Block first address of the message, the message length, the offset of the message relative to the Block first address, and other information, it can skip reading and parsing the message and directly execute steps 1050 and 1060.
Step 1080: Application process P11 can take the Block first address of the message, the message length, the offset information of the message relative to the Block first address, and other information out of Ring m, and then read the message from the Block storing it. In this exemplary embodiment, application process P11 is placed in container 1. However, this application is not limited to this. In other embodiments, application process P11 need not be placed in a container.
Step 1090: After processing the message, application process P11 can set the Block first address corresponding to the message in Ring m to the idle state so that Thread E can continue to use it.
In this exemplary embodiment, after frame management puts a message into a Block, the message is subsequently delivered to the application process with zero copies by replacing the first address of the Block storing it. In this way, the application process's access to the network card can be encapsulated, shielding the direct interaction between the application process and the network card driver, so that the application process need not consider the details of the underlying hardware driver when receiving packets; generality and work efficiency are improved without affecting delivery performance, and maintenance costs are reduced.
In an exemplary embodiment, when the network card does not support priority scheduling, that is, there is only one Queue C as shown in FIG. 4, all messages delivered from the network card enter Queue C. To keep Queue C from producing indiscriminate packet loss, the CPU affinity of the packet receiving thread can be set so that it exclusively occupies a CPU resource; in this way, the packet receiving thread can drain the messages in Queue C as completely as possible, reducing the probability of indiscriminate packet loss. This addresses message forwarding in scenarios where the network card does not support priority scheduling or schedules inflexibly. However, this application is not limited to this. In other embodiments, the packet receiving thread can be made to exclusively occupy a CPU resource through cgroup or other exclusion techniques.
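On Linux, setting the CPU affinity of the packet receiving thread as described above can be done with `pthread_setaffinity_np`; the sketch below shows only that one step (making the CPU fully exclusive to the thread additionally requires system-level measures such as cgroups or CPU isolation, which are outside this snippet).

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread (e.g. the packet receiving thread) to a single
 * CPU. Returns 0 on success, a non-zero error number otherwise. */
int pin_current_thread_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}
```
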
图5为本申请实施例提供的另一种报文转发方法的示例示意图。本示例性实施例说明给多个应用进程创建收包通道的过程,其中,可以给每个应用进程创建支持优先级调度的一组第二收包队列(比如,一个Ring组)。
如图5所示,本示例性实施例提供的报文转发方法包括以下步骤:
步骤2010、应用进程有收发包需求,向收包进程P0发出收包请求;在本示例性实施例中,容器1中的应用进程P11至应用进程P1n、容器n中的应用进程Pn1至应用进程Pnn都有收包需求,则都可以向收包进程P0发出收包请求。其中,请求的方式有多种多样,可以是消息、保留内存等。示例性地,应用进程的请求信息可以包括:请求创建的Ring的数目及大小、接收报文的最大长度、接收报文的特征信息等。本示例性实施例中,以一个收包进程P0为例进行说明,然而,本申请对此并不限定。在其他实现方式中,也可以采用多个收包进程。
步骤2020、收包进程P0的任务(Job)可以根据多个应用进程的收包请求,分别给多个应用进程创建收包通道。其中,Job专门负责分配、管理携带收包需求信息的收包请求。然而,本申请对此并不限定。在其他实施例中,收包进程P0可以开启通道管理线程负责管理收包请求,及创建收包通道。
在本示例性实施例中,Job可以预留一段物理地址连续的内存片,创建内存块地址池和第一收包队列,创建收包线程;以及根据每个应用进程的收包需求,给每个应用进程创建对应的支持优先级调度的Ring组。其中,关于内存片、内存块地址池、第一收包队列、收包线程、Ring组的说明可以参照图4中的相关描述,故于此不再赘述。
本实施例中,任一应用进程可以对应一个Ring组;比如,如图5所示,应用进程P11对应Ring组D11,应用进程Pnn对应Ring组Dnn,其中,每个Ring组中Ring的数目可以相同(比如,m+1个,m为大于或等于0的整数)或不同。然而,本申请对此并不限定。
以应用进程P11对应的Ring组D11支持优先级调度为例,Ring组D11中的任一个Ring可以对应一级优先级,后续收包线程通过解析报文的优先级,可以将该报文的描述信息放入对应该级优先级的Ring中。然而,本申请对此并不限定。在其他实施例中,支持优先级调度的Ring组中的多个Ring可以对应一级优先级。
步骤2030、收包进程P0的Job给应用进程P11创建支持优先级的Ring组D11后,向应用进程P11返回其对应的支持优先级的Ring组D11的创建信息。同样地,收包进程P0的Job给其余的任一应用进程创建支持优先级的Ring组后,会向该应用进程返回对应的Ring组的创建信息。其中,Job每给一个应用进程创建对应的Ring组后,会给该应用进程返回对应的Ring组的创建信息。其中,创建信息可以包括应用进程对应的支持优先级的Ring组的队列管理信息(比如,Ring组中的Ring与优先级的对应关系)等。如此一来,即给每个应用进程创建好了收包通道。
在本示例性实施例中,应用进程可以按照一定比例从对应的Ring读取不同优先级的报文,比如,优先从较高优先级对应的Ring读取报文。
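应用进程从对应Ring组按优先级读取报文的过程，可以用如下Python草图示意（此处以严格优先级、即总是先读较高优先级Ring为例；函数名与数据结构均为说明用的假设，实际也可按一定比例加权轮询）：

```python
def poll_ring_group(rings):
    """按优先级从高到低轮询一个Ring组，优先取出较高优先级Ring中的报文。
    rings按优先级降序排列，每个Ring用列表模拟；组内无报文时返回None。"""
    for ring in rings:
        if ring:
            return ring.pop(0)  # 取出该优先级Ring中最早的报文
    return None
```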
在本示例性实施例中,如图5所示,容器个数可以为多个,比如1到n;每个容器中的应用进程也可以为多个,比如Pn1到Pnn。当应用进程的个数受地址连续的内存片A(如图4所示)的限制时,只要将内存片A扩大,并增加收包进程的个数,就可以使得应用进程的个数不再受到网卡硬件资源的限制。其中,每一个Ring组对应一个应用进程进行收包,通过增加Ring组和内存就可以增加收包的应用进程,从而克服收包的应用进程的个数受到网卡硬件资源限制的情况。
本示例性实施例中每个应用进程的收包过程可以参照图4的相关说明,故于此不再赘述。
图6是本申请实施例提供的另一种报文转发方法的示例示意图。本示例性实施例说明单个收包线程给多个容器的多个应用进程统一传递报文。其中,容器1中的应用进程P11至P1n、容器n中的应用进程Pn1至Pnn都有收包需求。
如图6所示,本示例性实施例提供的报文转发方法包括如下过程:
步骤3010、每个应用进程确定自身是否需要支持优先级调度、每种优先级所支持的最大报文缓冲区大小、接收报文的最大长度、接收报文的特征信息等信息,并发送收包请求,其中,收包请求可以携带上述信息。
步骤3020、收包进程P0的Job根据每个应用进程的收包请求,给每个应用进程创建收包通道;其中,收包通道的创建过程可以参照图5的说明,故于此不再赘述。本示例中,如图6所示,应用进程P11对应Ring组D11,应用进程P1n对应Ring组D1n,应用进程Pn1对应Ring组Dn1,应用进程Pnn对应Ring组Dnn,其中,每个Ring组中Ring的数目可以相同(比如,m+1个,m为大于或等于0的整数)或不同。然而,本申请对此并不限定。
步骤3030、当有报文从网卡上送,通过帧管理上送到第一收包队列,收包进程P0中的收包线程可以轮询到这个报文,并经过特征信息解析后将存储这个报文的描述信息置换到这个报文对应的应用进程的Ring中。关于本步骤的相关说明可以参照图4中的步骤1010至步骤1070,故于此不再赘述。
步骤3040、容器1中的应用进程P11至应用进程P1n、容器n中的应用进程Pn1至应用进程Pnn可以各自轮询对应的Ring组,取到报文。
步骤3050、每个应用进程根据业务需求处理完报文后,可以将对应的Ring中用于指示存储报文的内存块的内存块首地址置为空闲状态,以便继续使用。
图7为本申请实施例提供的另一种报文转发方法的示例示意图。本示例性实施例说明单个收包线程给多个应用进程统一传递报文。其中,由于收包进程P0与应用进程P1至Pn都在同一个Host上,因此,性能要求较低的应用进程也可以使用共享内存来创建Ring。
如图7所示,本示例性实施例提供的报文转发方法包括如下过程:
步骤4010、每个应用进程(比如,应用进程P1至应用进程Pn)根据自身的收包需求,发送收包请求。其中,收包请求可以携带以下信息:请求创建的Ring的数量及大小、接收报文的最大长度、接收报文的特征信息等。
步骤4020、收包进程P0的Job根据每个应用进程的收包请求，给每个应用进程创建收包通道；其中，收包通道的创建过程可以参照图5的说明，故于此不再赘述。本示例中，如图7所示，应用进程P1对应Ring组D1，应用进程Pm对应Ring组Dm，应用进程Pn对应Ring组Dn，其中，每个Ring组中Ring的数目可以相同（比如，m+1个，m为大于或等于0的整数）或不同。然而，本申请对此并不限定。
步骤4030、当有报文从网卡上送,通过帧管理上送到第一收包队列,收包进程P0中的收包线程可以轮询到这个报文,并经过特征信息解析后将存储这个报文的描述信息置换到这个报文对应的应用进程的Ring中。关于本步骤的相关说明可以参照图4中的步骤1010至步骤1070,故于此不再赘述。
步骤4040、应用进程P1至应用进程Pn可以各自轮询对应的Ring组,取到报文。
步骤4050、每个应用进程根据业务需求处理完报文后,可以将对应的Ring中用于指示存储报文的内存块的内存块首地址置为空闲状态,以便继续使用。
图8为本申请实施例提供的另一种报文转发方法的示例示意图。本示例性实施例说明多个收包线程给多个容器内的多个应用进程统一收包。在一些场景下,单个收包线程无法满足一些业务的需要,比如采样、网络地址转换(Network Address Translation,NAT)等业务,其对收包性能要求非常高,基于此可以通过增加收包线程的方式来满足一些对收包性能要求高的业务。
如图8所示,本示例性实施例提供的报文转发方法包括如下过程:
步骤5010、每个应用进程(比如,容器1中的应用进程P11至P1n、容器n中的应用进程Pn1至Pnn)根据自身的收包需求,发送收包请求。其中,收包请求可以携带以下信息:请求创建的Ring的数量及大小、接收报文的最大长度、接收报文的特征信息等。
步骤5020、收包进程P0的Job根据每个应用进程的收包请求，给每个应用进程创建收包通道。本实施例中，在给每个应用进程创建收包通道时，可以区分收包线程与应用进程的对应关系。比如，图8所示，收包线程1可以用于给应用进程P11至P1n以及应用进程Pn1收包，收包线程s可以用于给应用进程Pnn收包。其中，s可以为大于或等于1的整数。
本示例中,如图8所示,应用进程P11对应Ring组D11,应用进程P1n对应Ring组D1n,应用进程Pn1对应Ring组Dn1,应用进程Pnn对应Ring组Dnn,其中,每个Ring组中Ring的数目可以相同(比如,m+1个,m为大于或等于0的整数)或不同。然而,本申请对此并不限定。
其中,收包通道的其余创建过程可以参照图5的说明,故于此不再赘述。
步骤5030、基于创建后的收包通道,收包线程(比如,收包线程1至s)可以为对应的应用进程收包。关于本步骤的相关说明可以参照图4中的步骤1010至步骤1070,故于此不再赘述。
步骤5040、每个应用进程可以轮询对应的Ring组,取到报文。
步骤5050、每个应用进程根据业务需求处理完报文后,可以将对应的Ring中用于指示存储报文的内存块的内存块首地址置为空闲状态,以便继续使用。
图9为本申请实施例提供的另一种报文转发方法的示例示意图。本示例性实施例说明多个收包线程给多个应用进程以及多个容器中的多个应用进程收包的过程。在一些场景下,有收包需求的应用进程可能在Host上,也可能在容器中,这样就存在Host和容器中都有应用进程需要收包的场景。
如图9所示,本示例性实施例提供的报文转发方法包括如下过程:
步骤6010、多个应用进程Pi至Pk,以及容器n中的应用进程Pn1至Pnn根据自身的收包需求,发送收包请求。其中,收包请求可以携带以下信息:请求创建的Ring的数量及大小、接收报文的最大长度、接收报文的特征信息等。
步骤6020、收包进程P0的Job根据每个应用进程的收包请求，给每个应用进程创建收包通道。本实施例中，在给每个应用进程创建收包通道时，可以区分收包线程与应用进程的对应关系。比如，图9所示，收包线程1可以用于给应用进程Pi至Pk以及应用进程Pn1收包，收包线程s可以用于给应用进程Pnn收包。其中，s可以为大于或等于1的整数。
本实施例中,在Host上的应用进程既可以使用共享内存也可以使用保留的地址连续的物理内存来创建Ring组,但是在容器中的进程只能使用保留的物理地址连续的内存来创建Ring组。
本示例中,如图9所示,应用进程Pi对应Ring组Di,应用进程Pk对应Ring组Dk,应用进程Pn1对应Ring组Dn1,应用进程Pnn对应Ring组Dnn,其中,每个Ring组中Ring的数目可以相同(比如,m+1个,m为大于或等于0的整数)或不同。然而,本申请对此并不限定。
其中,收包通道的其余创建过程可以参照图5的说明,故于此不再赘述。
步骤6030、基于创建后的收包通道,收包线程(比如,收包线程1至s)可以为对应的应用进程收包。关于本步骤的相关说明可以参照图4中的步骤1010至步骤1070,故于此不再赘述。
步骤6040、每个应用进程可以轮询对应的Ring组,取到报文。
步骤6050、每个应用进程根据业务需求处理完报文后,可以将对应的Ring中用于指示存储报文的内存块的内存块首地址置为空闲状态,以便继续使用。
图10为本申请实施例提供的另一种报文转发方法的示例示意图。本示例性实施例说明在容器中以物理内存置换方式给多个应用进程实现统一收包。对于一些CPU子卡芯片，其硬件已经可以支持虚拟化技术，通过将硬件网络虚拟成一个个的对象，就可以在容器中实现直接从网口介质访问控制（Media Access Control，MAC）收包。对于这种场景，可以在容器中驻留一个收包线程来为每个应用进程收包。
如图10所示,本示例性实施例提供的报文转发方法包括如下过程:
步骤7010、容器内的多个应用进程P1至Pm根据自身的收包需求,发送收包请求。其中,收包请求可以携带以下信息:请求创建的Ring的数量及大小、接收报文的最大长度、接收报文的特征信息等。
步骤7020、收包进程P0的Job根据每个应用进程的收包请求,给每个应用进程创建收包通道。其中,收包通道的创建过程可以参照图5的说明,故于此不再赘述。
本示例中,如图10所示,应用进程P1对应Ring组D1,应用进程Pm对应Ring组Dm;其中,每个Ring组中Ring的数目可以相同(比如,a+1个,a为大于或等于0的整数)或不同。然而,本申请对此并不限定。
步骤7030、基于创建后的收包通道,收包线程可以为对应的应用进程收包。关于本步骤的相关说明可以参照图4中的步骤1010至步骤1070,故于此不再赘述。
步骤7040、每个应用进程可以轮询对应的Ring组,取到报文。
步骤7050、每个应用进程根据业务需求处理完报文后,可以将对应的Ring中用于指示存储报文的内存块的内存块首地址置为空闲状态,以便继续使用。
在一示例性实施例中,在一些情况下,为了解决一些业务流的流量过大问题,可以在收包线程处添加针对每一种业务流的限速处理。
本示例性实施例提供的报文转发方法可以包括以下过程:
步骤8010、每个应用进程根据自身的收包需求,发送收包请求。其中,收包请求可以携带以下信息:请求创建的Ring的数量及大小、接收报文的最大长度、接收报文的特征信息、接收业务流在限速时长内的限速值(比如,每秒限速值)。
步骤8020、收包进程的Job根据每个应用进程的收包请求,给每个应用进程创建收包通道,并记录每一种业务流的限速值。其中,收包通道的创建过程可以参照图5的说明,故于此不再赘述。
步骤8030、基于创建的收包通道,收包线程可以给对应的应用进程收包。
本实施例中，收包线程每收到一个报文，更新该报文所属业务流的流统计计数，比如，将该报文所属业务流对应的流统计计数加一（流统计计数的初始值为0）；如果在限速时长内该业务流的流统计计数大于该业务流的限速值，就做丢包处理。在每次达到限速时长后（比如，一秒之后），收包线程将该业务流的流统计计数置为0，从而完成业务流的限速处理流程。或者，收包线程每收到一个报文，将该报文所属业务流对应的流统计计数减一（流统计计数的初始值为限速值）；如果在限速时长内该业务流的流统计计数等于0，就做丢包处理。在每次达到限速时长后（比如，一秒之后），收包线程将该业务流的流统计计数置为限速值，从而完成业务流的限速处理流程。
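上述按业务流计数限速的流程，可以用如下Python草图示意（采用"计数递增、到限即丢、时长到点清零"的第一种方式；类名与参数名均为说明用的假设，时钟可注入以便于说明）：

```python
import time

class FlowLimiter:
    """单个业务流的限速计数器草图：
    限速时长window内计数达到限速值rate_limit后做丢包处理；
    每次达到限速时长后，计数重新置0。"""
    def __init__(self, rate_limit, window=1.0, clock=time.monotonic):
        self.rate_limit = rate_limit
        self.window = window
        self.clock = clock
        self.count = 0
        self.window_start = clock()

    def allow(self):
        now = self.clock()
        if now - self.window_start >= self.window:  # 限速时长结束，计数置0
            self.count = 0
            self.window_start = now
        if self.count >= self.rate_limit:           # 超过限速值，做丢包处理
            return False
        self.count += 1                             # 流统计计数加一
        return True
```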
关于本步骤的收包过程说明可以参照图4中的步骤1010至步骤1070,故于此不再赘述。
步骤8040、每个应用进程可以轮询对应的Ring组,取到报文。
步骤8050、每个应用进程根据业务需求处理完报文后,可以将对应的Ring中相应的内存块首地址置为空闲状态,以便继续使用。
图11为本申请实施例提供的另一种报文转发方法的示例示意图。本示例性实施例说明通过发包线程、内存块地址池、第一发包队列和第二发包队列采用内存地址置换方式实现报文零拷贝传递的发送流程。
在本示例性实施例中,在进行报文发送之前,先进行以下工作:
1)预留一段物理地址连续的内存片A,在内存片A中可以切割出多个Block,用于缓存报文;其中,内存片的大小大于或等于Block的总个数(比如,图11中的n,n为大于1的整数)乘以允许支持的报文最大长度(比如10K Byte);每个Block都表示一段地址连续的物理内存,Block首地址就表示这段地址连续的物理内存的首地址。
在其他实施例中,可以预留多个内存片,从这些内存片中切割出多个内存块,只要保证从中切割出的内存块内部的物理地址连续即可。
2)给硬件驱动(比如,网卡驱动)分配内存块地址池以及发包队列(即上述的第二发包队列);其中,内存块地址池用于存放内存块首地址,内存块地址池可以为FIFO队列、链表、数组、或者环形队列,然而,本申请对此并不限定。第二发包队列可以为FIFO结构或者环形队列结构,然而,本申请对此并不限定。
3)创建支持优先级调度的环形队列组(以下简称为Ring组)(即上述的多个第一发包队列)。本示例中,环组可以包括v个Ring,v可以为大于或等于1的整数。
4)创建用来向硬件驱动发包的发包线程。
本实施例中,内存地址置换过程发生在内存块地址池和用于发包的环组之间。
在本示例性实施例中,如图11所示,可以将k+1到n部分共n-k个Block首地址注入到内存块地址池中。将Block 1到Block i,总共i个Block首地址放在Ring 0中;将Block j到Block k,总共k-j+1个Block首地址放在Ring v中;Ring组中其它Ring内的Block首地址的注入方式和Ring 0、Ring v内Block首地址的注入方式一样,初始时,Ring组中注入的所有Block首地址的使用状态均为空闲状态。其中,i,j,k均为大于1的整数。整个Block首地址的注入过程中保证所有的Ring、内存块地址池中注入的Block首地址均不重复。其中,注入每个Ring和内存块地址池中的Block首地址的数目可以相同或不同,本申请对此并不限定。其中,Ring组中所有Block首地址个数加上内存块地址池中所有Block首地址个数之和可以为n。
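上述Block首地址的注入过程可以用如下Python草图示意（保证注入各Ring与内存块地址池的首地址互不重复、总数为n，注入各Ring的地址初始均标记为空闲状态；函数名、基地址与Block大小均为说明用的假设）：

```python
def inject_addresses(n_blocks, ring_sizes, base=0x10000, block_size=10 * 1024):
    """把n个互不重复的Block首地址分别注入各Ring和内存块地址池：
    按ring_sizes把前若干地址分给各Ring（初始均为空闲状态），
    其余地址全部注入地址池。返回(各Ring, 地址池)。"""
    addrs = [base + i * block_size for i in range(n_blocks)]  # 地址连续的Block首地址
    rings, cursor = [], 0
    for size in ring_sizes:
        rings.append([("free", a) for a in addrs[cursor:cursor + size]])
        cursor += size
    pool = addrs[cursor:]  # 剩余首地址注入内存块地址池
    return rings, pool
```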
基于上述设置的内存块地址池、第二发包队列、发包线程以及环组,本示例性实施例的报文转发方法可以包括步骤9010至步骤9060。
步骤9010、容器1中的应用进程P11从对应的环组中的环形队列(比如,环v)取出其中存放的标记为空闲状态的Block首地址,将应用进程P11待发送的报文存放到该Block首地址所指示的内存块,并将缓存该报文的Block首地址、报文长度、报文基于该Block首地址的偏移信息等信息放入该环形队列(即Ring v)。
步骤9020、发包线程轮询环v,并从环v读取缓存该报文的Block首地址、报文长度、报文基于该Block首地址的偏移信息等信息。
步骤9030、发包线程从环v读取缓存该报文的Block首地址、报文长度、报文基于该Block首地址的偏移信息等信息后,将内存块地址池内的一个空闲状态的Block首地址放入环v。
步骤9040、发包线程将缓存该报文的Block首地址、报文长度、报文基于该Block首地址的偏移信息等信息放入第二发包队列。
步骤9050、帧管理从第二发包队列读取缓存该报文的Block首地址、报文长度、报文基于该Block首地址的偏移信息等信息,并根据报文的上述信息从对应的Block获取该报文。
步骤9060、通过网卡对外发送该报文。
步骤9070、帧管理发送完报文后,将存储该报文的Block的Block首地址放回内存块地址池,以便后续继续使用。
上述发包过程仅为一种示例，本申请对此并不限定。不同类型的网卡的发包流程不同。比如，在步骤9030之后，发包线程可以将缓存该报文的Block首地址、报文长度、报文基于该Block首地址的偏移信息、第二发包队列的队列标识（比如，Queue ID）以及报文发送后相应的内存块首地址需要释放到的内存块地址池的池标识（比如，Pool ID）等信息组成描述符，调用网卡驱动接口发送即可。报文发送完成后，网卡驱动会将缓存报文的物理地址对应的Block首地址还回内存块地址池。
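步骤9010至步骤9070的发包置换流程可以串成如下Python草图（Ring槽位、内存块地址池与第二发包队列均用Python数据结构模拟，仅为便于理解的假设性示例，并非实际帧管理或网卡驱动实现）：

```python
from collections import deque

def send_packet(ring, idx, payload, pool, tx_queue):
    """发包置换草图：应用进程把待发报文写入Ring槽位idx上空闲Block指示的内存；
    发包线程读取描述信息后，用地址池中一个空闲Block首地址回填该槽位，
    并将描述信息放入第二发包队列；帧管理发送完后把Block首地址放回地址池。"""
    state, addr = ring[idx]
    assert state == "free"                  # 槽位须处于空闲状态
    ring[idx] = ("desc", addr, len(payload))  # 步骤9010：描述信息放入Ring
    _, block_addr, length = ring[idx]       # 步骤9020：发包线程轮询读取
    ring[idx] = ("free", pool.popleft())    # 步骤9030：地址池空闲首地址换入Ring
    tx_queue.append((block_addr, length))   # 步骤9040：描述信息放入第二发包队列
    sent = tx_queue.popleft()               # 步骤9050/9060：帧管理取出并发送
    pool.append(sent[0])                    # 步骤9070：Block首地址放回地址池
    return sent
```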
图12为本申请实施例提供的一种报文转发装置的示意图。如图12所示,本实施例提供的报文转发装置,包括:第一收包模块1201,设置为从内存块地址池选取出该内存块地址池中存放的内存块信息,将输入输出硬件(比如,网卡)接收到的报文存放到该内存块信息指示的内存块,根据该报文在内存块中的存放位置得到该报文的描述信息,并将该报文的描述信息放入第一收包队列;第二收包模块1202,设置为通过收包线程从第一收包队列读取描述信息;将第二收包队列内存放的一个标记为空闲状态的内存块信息放入内存块地址池,并将从第一收包队列读取的描述信息放入第二收包队列;第三收包模块1203,设置为通过与该第二收包队列对应的应用进程从第二收包队列读取描述信息,根据从第二收包队列读取的描述信息获取报文,并将第二收包队列中用于指示所获取的报文所在内存块的内存块信息标记为空闲状态;其中,内存块地址池中存放的内存块信息与第二收包队列中存放的内存块信息不重复。
在一示例性实施例中，第二收包模块1202还可以设置为在第二收包队列中没有标记为空闲状态的内存块信息的情况下，通过收包线程将从第一收包队列读取的描述信息所对应的内存块信息放回内存块地址池。
在一示例性实施例中,第二收包模块1202还可以设置为通过收包线程根据从第一收包队列读取的描述信息,读取在该描述信息指示的物理地址缓存的报文,通过解析读取到的报文,确定读取到的报文对应的第二收包队列。
在一示例性实施例中,第二收包模块1202可以包括收包线程和Job(或者,通道管理线程)。
图13为本申请实施例提供的另一种报文转发装置的示意图。在一示例性实施例中,如图13所示,本实施例提供的报文转发装置还可以包括:第二收包队列创建和管理模块1204,设置为接收应用进程的收包请求;根据应用进程的收包请求,给应用进程创建对应的一个或多个第二收包队列;向应用进程返回该应用进程对应的第二收包队列的创建信息。
本实施例中，第二收包队列创建和管理模块1204可以设置为创建第二收包队列，并提供报文的读、写、释放（free）、置换等接口。如果应用进程在容器内，由于有NameSpace等差异，可以使用一段连续的物理内存来创建第二收包队列组。如果应用进程不在容器中，则既可以使用一段连续的物理内存，也可以使用Linux共享内存等来创建第二收包队列组（比如，Ring组）。另外，每一个Ring组对应一个应用进程，从而通过增加Ring组和内存即可增加收包的应用进程。
在一示例性实施例中,如图13所示,本实施例的报文转发装置还可以包括:内存块地址池创建模块1205,设置为在接收到应用进程的收包请求后,给应用进程创建对应的内存块地址池;或者,根据输入输出硬件(网卡)接收的报文类型,创建一个或多个内存块地址池。其中,内存块地址池可以根据业务需求创建多个。比如,可以规划一些Block的大小为1K Byte用来存放短报文,并将这些Block对应的内存块首地址放到一个内存块地址池中,而另外再规划一些Block的大小为10K Byte用来存放长报文,并将这些Block对应的内存块首地址放到另外一个内存块地址池中。
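按报文长短创建多个内存块地址池的做法可以用如下Python草图示意（1K Byte与10K Byte的划分沿用上文示例；函数名与数据结构均为说明用的假设）：

```python
def create_pools(blocks, short_size=1024):
    """按Block大小创建两个内存块地址池：
    不超过short_size（如1K Byte）的Block首地址放入短报文池，
    其余（如10K Byte）的Block首地址放入长报文池。
    blocks为(首地址, Block大小)的序列。"""
    pools = {"short": [], "long": []}
    for addr, size in blocks:
        pools["short" if size <= short_size else "long"].append(addr)
    return pools
```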
在一示例性实施例中,如图13所示,本实施例的报文转发装置还可以包括:第一收包队列创建和管理模块1206,设置为在接收到应用进程的收包请求后,给应用进程创建对应的第一收包队列;或者,根据输入输出硬件(网卡)的类型,创建一个或多个第一收包队列。
在一示例性实施例中,如图13所示,本实施例的报文转发装置还可以包括:物理内存分配管理模块1207,设置为在接收到应用进程的收包请求后,给应用进程分配至少一个物理地址连续的内存片,从内存片切割出多个内存块,将多个内存块对应的内存块信息分别注入内存块地址池和应用进程对应的第二收包队列,并标记存放到第二收包队列的内存块信息为空闲状态;或者,预留至少一个物理地址连续的内存片,接收到应用进程的收包请求后,从内存片切割出多个内存块,将多个内存块对应的内存块信息分别注入内存块地址池和应用进程对应的第二收包队列,并标记存放到第二收包队列的内存块信息为空闲状态。其中,内存块地址池中注入的内存块信息(比如,内存块首地址或标识)与第二收包队列中注入的内存块信息不重复。其中,物理内存分配管理模块1207可以设置为分配一段物理地址连续的内存片给应用进程和驱动使用,当应用进程比较多时可以支持提供分段管理等。
另外,关于本实施例提供的报文转发装置的相关说明可以参照上述方法实施例的描述,故于此不再赘述。
图14为本申请实施例提供的一种网络设备的示意图。如图14所示，本实施例提供的网络设备1400（比如，路由器、交换机等）包括：输入输出硬件（比如，网卡）1403、处理器1402以及存储器1401；输入输出硬件1403设置为接收或发送报文；存储器1401设置为存储报文转发程序，该报文转发程序被处理器1402执行时实现上述报文转发方法的步骤，比如图3所示的步骤。图14中示出的结构，仅仅是与本申请方案相关的部分结构的示意图，并不构成对本申请方案所应用于其上的网络设备1400的限定，网络设备1400可以包括比图中所示更多或更少的部件，或者组合一些部件，或者具有不同的部件布置。
本实施例中,存储器1401可设置为存储应用软件的软件程序以及模块,如本实施例中的报文转发方法对应的程序指令或模块,处理器1402通过运行存储在存储器1401内的软件程序以及模块,从而执行多种功能应用以及数据处理,比如实现本实施例提供的报文转发方法。存储器1401可包括高速随机存储器,还可包括非易失性存储器,如一个或者多个磁性存储装置、闪存、或者其他非易失性固态存储器。
另外,关于本实施例提供的网络设备的相关实施过程说明可以参照上述报文转发方法及装置的相关描述,故于此不再赘述。
此外,本申请实施例还提供一种计算机可读介质,存储有报文转发程序,该报文转发程序被执行时实现上述报文转发方法的步骤,比如图3所示的步骤。
上文中所公开方法中的全部或一些步骤、系统、装置中的功能模块/单元可以被实施为软件、固件、硬件及其适当的组合。在硬件实施方式中,在以上描述中提及的功能模块/单元之间的划分不一定对应于物理组件的划分;例如,一个物理组件可以具有多个功能,或者一个功能或步骤可以由若干物理组件合作执行。一些组件或所有组件可以被实施为由处理器,如数字信号处理器或微处理器执行的软件,或者被实施为硬件,或者被实施为集成电路,如专用集成电路。这样的软件可以分布在计算机可读介质上,计算机可读介质可以包括计算机存储介质(或非暂时性介质)和通信介质(或暂时性介质)。如本领域普通技术人员公知的,术语计算机存储介质包括在用于存储信息(诸如计算机可读指令、数据结构、程序模块或其他数据)的任何方法或技术中实施的易失性和非易失性、可移除和不可移除介质。计算机存储介质包括但不限于随机存取存储器(Random Access Memory,RAM)、只读存储器(Read-Only Memory,ROM)、带电可擦可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM)、闪存或其他存储器技术、光盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能盘(Digital Video Disc,DVD)或其他光盘存储、磁盒、磁带、磁盘存储或其他磁存储装置、或者可以用于存储期望的信息并且可以被计算机访问的任何其他的介质。此外,通信介质通常包含计算机可读指令、数据结构、程序模块或者诸如载波或其他传输机制之类的调制数据信号中的其他数据,并且可包括任何信息递送介质。

Claims (22)

  1. 一种报文转发方法,包括:
    从内存块地址池取出所述内存块地址池中存放的内存块信息,将输入输出硬件接收到的报文存放到所述内存块信息指示的内存块,根据所述报文在所述内存块中的存放位置得到所述报文的描述信息,将所述报文的描述信息放入第一收包队列;
    通过收包线程从所述第一收包队列读取所述描述信息;
    通过所述收包线程将第二收包队列内存放的一个标记为空闲状态的内存块信息存放到所述内存块地址池,并将从所述第一收包队列读取的描述信息放入所述第二收包队列;
    通过与所述第二收包队列对应的应用进程从所述第二收包队列读取描述信息,根据从所述第二收包队列读取的描述信息获取报文,并将所述第二收包队列中用于指示所述报文所在内存块的内存块信息标记为空闲状态;
    其中,所述内存块地址池内存放的内存块信息与所述第二收包队列中存放的内存块信息不重复。
  2. 根据权利要求1所述的方法,其中,所述内存块信息包括内存块首地址或者内存块标识;所述内存块为一段地址连续的物理内存,用于缓存所述输入输出硬件接收到的报文。
  3. 根据权利要求2所述的方法,在所述从内存块地址池取出所述内存块地址池中存放的内存块信息之前,还包括:
    在接收到所述应用进程的收包请求后,给所述应用进程分配至少一个物理地址连续的内存片,从所述至少一个物理地址连续的内存片切割出多个内存块,将所述多个内存块对应的内存块信息中的一部分内存块信息存放到所述内存块地址池并将所述多个内存块对应的内存块信息中的另一部分内存块信息存放到所述第二收包队列,并标记存放到所述第二收包队列的内存块信息为空闲状态;
    或者,
    预留至少一个物理地址连续的内存片,在接收到所述应用进程的收包请求后,从所述至少一个物理地址连续的内存片切割出多个内存块,将所述多个内存块对应的内存块信息中的一部分内存块信息存放到所述内存块地址池并将所述多个内存块对应的内存块信息中的另一部分内存块信息存放到所述第二收包队列,并标记存放到所述第二收包队列的内存块信息为空闲状态。
  4. 根据权利要求1所述的方法，其中，所述报文的描述信息包括：缓存所述报文的内存块的内存块首地址、所述报文的长度以及所述报文基于所述内存块首地址的偏移信息。
  5. 根据权利要求1所述的方法,在所述通过收包线程从所述第一收包队列读取所述描述信息之后,还包括:
    在所述第二收包队列中没有标记为空闲状态的内存块信息的情况下,通过所述收包线程将所述描述信息对应的内存块信息放回所述内存块地址池。
  6. 根据权利要求1所述的方法,在所述通过收包线程从所述第一收包队列读取所述描述信息之后,还包括:
    通过所述收包线程根据从所述第一收包队列读取的描述信息,读取在所述描述信息指示的物理地址中缓存的所述报文,通过解析读取到的报文,确定所述读取到的报文对应的第二收包队列;
    所述通过所述收包线程将第二收包队列内存放的一个标记为空闲状态的内存块信息存放到所述内存块地址池,并将从所述第一收包队列读取的描述信息放入所述第二收包队列,包括:
    通过所述收包线程将所述读取到的报文对应的第二收包队列内存放的一个标记为空闲状态的内存块信息存放到所述内存块地址池,并将从所述第一收包队列读取的描述信息放入所述第二收包队列。
  7. 根据权利要求6所述的方法,其中,所述通过所述收包线程根据从所述第一收包队列读取的描述信息,读取在所述描述信息指示的物理地址中缓存的所述报文,通过解析读取到的报文,确定所述读取到的报文对应的第二收包队列,包括:
    通过所述收包线程将从所述第一收包队列读取的描述信息映射到虚拟地址,读取并解析所述报文,得到所述报文的特征信息;根据解析出的报文的特征信息,确定接收所述报文的应用进程以及所述报文所属的优先级;
    根据接收所述报文的应用进程、所述报文所属的优先级、以及所述应用进程对应的第二收包队列与优先级的对应关系,确定所述报文对应的第二收包队列。
  8. 根据权利要求1所述的方法,在所述从内存块地址池取出所述内存块地址池中存放的内存块信息之前,还包括:
    接收所述应用进程的收包请求;
    根据所述应用进程的收包请求,给所述应用进程创建对应的一个或多个第二收包队列;
    向所述应用进程返回所述应用进程对应的所述一个或多个第二收包队列的创建信息。
  9. 根据权利要求8所述的方法,其中,所述根据所述应用进程的收包请求,给所述应用进程创建对应的一个或多个第二收包队列,包括:
    根据所述应用进程的收包请求,给所述应用进程创建支持优先级调度的多个第二收包队列,其中,所述应用进程待接收的报文所属的一级优先级对应所述多个第二收包队列中的一个或多个第二收包队列。
  10. 根据权利要求1所述的方法,在所述从内存块地址池取出所述内存块地址池中存放的内存块信息之前,还包括:
    在接收到所述应用进程的收包请求后,给所述应用进程创建对应的内存块地址池;或者,根据所述输入输出硬件接收的报文类型,创建一个或多个内存块地址池。
  11. 根据权利要求1所述的方法,在所述从内存块地址池取出所述内存块地址池中存放的内存块信息之前,还包括:
    在接收到所述应用进程的收包请求后,给所述应用进程创建对应的第一收包队列;或者,根据所述输入输出硬件的类型,创建一个或多个第一收包队列。
  12. 根据权利要求1所述的方法,在所述从内存块地址池取出所述内存块地址池中存放的内存块信息之前,还包括:
    在接收到所述应用进程的收包请求后,给所述应用进程创建对应的收包线程;或者,在接收到所述应用进程的收包请求后,从已创建的收包线程中选择一个作为所述应用进程对应的收包线程。
  13. 根据权利要求1所述的方法,还包括:
    设置所述收包线程对中央处理器CPU资源的亲和性或排他性。
  14. 根据权利要求1所述的方法,在所述通过收包线程从所述第一收包队列读取所述描述信息之后,还包括:
    在通过所述收包线程根据从所述第一收包队列读取的描述信息读取到所述报文后,更新所述报文所属业务流的流统计计数,在限速时长内的所述流统计计数满足设定条件的情况下,丢弃所述报文;
    在限速时长结束时,将所述流统计计数置为初始值。
  15. 根据权利要求1至14中任一项所述的方法,其中,多个应用进程对应一个收包线程,或者多个应用进程对应多个收包线程。
  16. 根据权利要求1至14中任一项所述的方法，其中，一个或多个应用进程位于容器内。
  17. 根据权利要求1至14中任一项所述的方法,其中,所述收包线程和所述应用进程均位于容器内。
  18. 根据权利要求1至14中任一项所述的方法,其中,所述第二收包队列为环形队列。
  19. 根据权利要求1所述的方法,还包括:
    从第一发包队列中取出所述第一发包队列中存放的标记为空闲状态的内存块信息,将所述应用进程待发送的报文存放到所述内存块信息所指示的内存块,根据所述待发送的报文在所述内存块中的存放位置得到所述待发送的报文的描述信息,并将所述待发送的报文的描述信息放入所述第一发包队列;
    通过发包线程从所述第一发包队列读取所述待发送的报文的描述信息,将内存块地址池内存放的一个标记为空闲状态的内存块信息存放到所述第一发包队列,并将从所述第一发包队列读取的描述信息放入第二发包队列;
    从所述第二发包队列读取描述信息,并根据从所述第二发包队列读取的描述信息获取报文,通过所述输入输出硬件发送所述获取到的报文,并在发送所述获取到的报文后,将用于指示所述待发送的报文所在内存块的内存块信息放回所述内存块地址池。
  20. 一种报文转发装置,包括:
    第一收包模块,设置为从内存块地址池取出所述内存块地址池中存放的内存块信息,将输入输出硬件接收到的报文存放到所述内存块信息指示的内存块,根据所述报文在所述内存块中的存放位置得到所述报文的描述信息,将所述报文的描述信息放入第一收包队列;
    第二收包模块,设置为通过收包线程从所述第一收包队列读取所述描述信息;通过所述收包线程将第二收包队列内存放的一个标记为空闲状态的内存块信息存放到所述内存块地址池,并将从所述第一收包队列读取的描述信息放入所述第二收包队列;
    第三收包模块，设置为通过与所述第二收包队列对应的应用进程从所述第二收包队列读取描述信息，根据从所述第二收包队列读取的描述信息获取报文，并将所述第二收包队列中用于指示所述报文所在内存块的内存块信息标记为空闲状态；
    其中,所述内存块地址池内存放的内存块信息与所述第二收包队列中存放的内存块信息不重复。
  21. 一种网络设备,包括:输入输出硬件、处理器以及存储器;所述输入输出硬件设置为接收或发送报文;所述存储器设置为存储报文转发程序,所述报文转发程序被所述处理器执行时实现如权利要求1至19中任一项所述的报文转发方法。
  22. 一种计算机可读介质,存储有报文转发程序,所述报文转发程序被执行时实现如权利要求1至19中任一项所述的报文转发方法。
PCT/CN2019/126079 2018-12-18 2019-12-17 报文转发方法、装置、网络设备及计算机可读介质 WO2020125652A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811546772.XA CN109783250B (zh) 2018-12-18 2018-12-18 一种报文转发方法及网络设备
CN201811546772.X 2018-12-18

Publications (1)

Publication Number Publication Date
WO2020125652A1 true WO2020125652A1 (zh) 2020-06-25

Family

ID=66497153

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/126079 WO2020125652A1 (zh) 2018-12-18 2019-12-17 报文转发方法、装置、网络设备及计算机可读介质

Country Status (2)

Country Link
CN (1) CN109783250B (zh)
WO (1) WO2020125652A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783250B (zh) * 2018-12-18 2021-04-09 中兴通讯股份有限公司 一种报文转发方法及网络设备
CN110336702B (zh) * 2019-07-11 2022-08-26 上海金融期货信息技术有限公司 一种消息中间件的系统和实现方法
CN112491979B (zh) * 2020-11-12 2022-12-02 苏州浪潮智能科技有限公司 一种网卡数据包缓存管理方法、装置、终端及存储介质
CN113259006B (zh) * 2021-07-14 2021-11-26 北京国科天迅科技有限公司 一种光纤网络通信系统、方法及装置
CN114024923A (zh) * 2021-10-30 2022-02-08 江苏信而泰智能装备有限公司 一种多线程报文捕获方法、电子设备及计算机存储介质
CN114003366B (zh) * 2021-11-09 2024-04-16 京东科技信息技术有限公司 一种网卡收包处理方法及装置
CN114500400B (zh) * 2022-01-04 2023-09-08 西安电子科技大学 基于容器技术的大规模网络实时仿真方法
CN115801629B (zh) * 2023-02-03 2023-06-23 天翼云科技有限公司 双向转发侦测方法、装置、电子设备及可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104796337A (zh) * 2015-04-10 2015-07-22 京信通信系统(广州)有限公司 一种转发报文的方法及装置
CN108132889A (zh) * 2017-12-20 2018-06-08 东软集团股份有限公司 内存管理方法、装置、计算机可读存储介质及电子设备
CN108243118A (zh) * 2016-12-27 2018-07-03 华为技术有限公司 转发报文的方法和物理主机
CN109783250A (zh) * 2018-12-18 2019-05-21 中兴通讯股份有限公司 一种报文转发方法及网络设备

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150488B (zh) * 2007-11-15 2012-01-25 曙光信息产业(北京)有限公司 一种零拷贝网络报文接收方法
CN101719872B (zh) * 2009-12-11 2012-06-06 曙光信息产业(北京)有限公司 基于零拷贝方式的多队列报文发送和接收方法和装置
US9575796B2 (en) * 2015-02-16 2017-02-21 Red Hat Isreal, Ltd. Virtual device timeout by memory offlining
CN105591979A (zh) * 2015-12-15 2016-05-18 曙光信息产业(北京)有限公司 报文的处理系统和方法
CN106789617B (zh) * 2016-12-22 2020-03-06 东软集团股份有限公司 一种报文转发方法及装置
CN106850565B (zh) * 2016-12-29 2019-06-18 河北远东通信系统工程有限公司 一种高速的网络数据传输方法
CN108566387B (zh) * 2018-03-27 2021-08-20 中国工商银行股份有限公司 基于udp协议进行数据分发的方法、设备以及系统


Also Published As

Publication number Publication date
CN109783250B (zh) 2021-04-09
CN109783250A (zh) 2019-05-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19901340

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.11.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19901340

Country of ref document: EP

Kind code of ref document: A1