CN109783250B - Message forwarding method and network equipment


Info

Publication number
CN109783250B
Authority
CN
China
Prior art keywords
packet receiving
memory block
packet
message
queue
Prior art date
Legal status
Active
Application number
CN201811546772.XA
Other languages
Chinese (zh)
Other versions
CN109783250A (en)
Inventor
冯仰忠
Current Assignee
ZTE Corp
Original Assignee
ZTE Corp
Application filed by ZTE Corp
Priority to CN201811546772.XA
Publication of CN109783250A
Priority to PCT/CN2019/126079 (WO2020125652A1)
Application granted
Publication of CN109783250B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/54: Interprogram communication


Abstract

The application discloses a message forwarding method and a network device. The message forwarding method includes the following steps: taking memory block information out of a memory block address pool, storing a message received by input/output hardware into the memory block indicated by the memory block information, obtaining description information of the message, and placing the description information into a first packet receiving queue; reading the description information from the first packet receiving queue through a packet receiving thread; storing, through the packet receiving thread, memory block information that is stored in a second packet receiving queue and marked as idle back into the memory block address pool, and placing the description information read from the first packet receiving queue into the second packet receiving queue; and reading the description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtaining the message according to that description information, and marking as idle the memory block information in the second packet receiving queue that indicates the memory block holding the obtained message.

Description

Message forwarding method and network equipment
Technical Field
The present invention relates to, but not limited to, the field of communications technologies, and in particular, to a message forwarding method and a network device.
Background
With the advent of fifth-generation mobile communication technology (5G), higher requirements are placed on the transmission rate and performance of communication networks: network nodes must process packets at ever higher rates during data transmission. For devices such as routers and switches, this means that packets must be transmitted and processed rapidly within the device's internal network.
Disclosure of Invention
The embodiment of the application provides a message forwarding method and network equipment, which can improve the transmission rate of messages in the network equipment.
In one aspect, an embodiment of the present application provides a packet forwarding method, including: taking memory block information out of a memory block address pool, storing a message received by input/output hardware into the memory block indicated by the memory block information, obtaining description information of the message, and placing the description information into a first packet receiving queue; reading the description information from the first packet receiving queue through a packet receiving thread; storing, through the packet receiving thread, one piece of memory block information that is stored in a second packet receiving queue and marked as idle into the memory block address pool, and placing the description information read from the first packet receiving queue into the second packet receiving queue; and reading the description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtaining the message according to that description information, and marking as idle the memory block information in the second packet receiving queue that indicates the memory block holding the obtained message. The memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue do not overlap.
In another aspect, an embodiment of the present application provides a network device, including: input-output hardware, a processor, and a memory; the input and output hardware is suitable for receiving or sending messages; the memory is adapted to store a message forwarding program, which when executed by the processor implements the steps of the message forwarding method described above.
In another aspect, an embodiment of the present application provides a computer-readable medium, which stores a message forwarding program, and when the message forwarding program is executed, the message forwarding program implements the steps of the message forwarding method.
In the embodiment of the application, the zero-copy transmission of the message is realized by adopting a memory address replacement mode between the memory block address pool and the second packet receiving queue through the packet receiving thread, and no copy is added in the whole message transmission process, so that the message transmission rate in the network equipment is improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
FIG. 1 is a schematic diagram of the Linux kernel Socket packet receiving technique;
FIG. 2 is a schematic diagram of a zero-copy packet receiving technique;
fig. 3 is a flowchart of a message forwarding method provided in the embodiment of the present application;
fig. 4 is a schematic diagram illustrating an example of a message forwarding method according to an embodiment of the present application;
fig. 5 is another exemplary schematic diagram of a packet forwarding method provided in the embodiment of the present application;
fig. 6 is another exemplary schematic diagram of a packet forwarding method provided in the embodiment of the present application;
fig. 7 is another exemplary schematic diagram of a packet forwarding method provided in the embodiment of the present application;
fig. 8 is another exemplary schematic diagram of a packet forwarding method provided in the embodiment of the present application;
fig. 9 is another exemplary schematic diagram of a packet forwarding method provided in the embodiment of the present application;
fig. 10 is another exemplary schematic diagram of a packet forwarding method provided in the embodiment of the present application;
fig. 11 is another exemplary schematic diagram of a packet forwarding method provided in the embodiment of the present application;
fig. 12 is a schematic diagram of a message forwarding apparatus according to an embodiment of the present application;
fig. 13 is another schematic diagram of a message forwarding apparatus according to an embodiment of the present application;
fig. 14 is a schematic diagram of a network device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings. It should be noted that the embodiments and features of the embodiments in the present application may be arbitrarily combined with each other without conflict.
The steps illustrated in the flow charts of the figures may be performed in a computer system such as a set of computer-executable instructions. Also, while a logical order is shown in the flow diagrams, in some cases, the steps shown or described may be performed in an order different than here.
Fig. 1 is a schematic diagram of the Linux kernel Socket packet receiving technique. As shown in fig. 1, the Linux kernel Socket packet receiving process may include: a message arrives from the network card and enters the network card driver; the network card driver notifies a kernel thread via an interrupt to process the message in the network protocol stack, which requires passing through the IP (Internet Protocol) layer and the TCP (Transmission Control Protocol)/UDP (User Datagram Protocol) layer; after the network protocol stack has processed the message, it notifies the application layer (e.g., application processes P1, Pn) to receive the packet.
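For orientation only, the kernel-mediated receive path can be exercised from user space with a few lines of Python. This sketch is ours, not part of the patent: a connected socket pair stands in for the NIC-to-application path, and each recv() copies the payload out of kernel buffers into user space, which is exactly the kind of copy the zero-copy scheme described later avoids.

```python
import socket

# A connected socket pair stands in for the network-to-application path;
# recv() below copies the payload out of kernel memory into user space.
parent, child = socket.socketpair()
child.send(b"hello")
data = parent.recv(2048)   # kernel-to-user copy happens here
child.close()
parent.close()
```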
Although the Socket packet receiving technique shown in fig. 1 offers good generality and supports multi-process packet receiving well without restriction, it has the following disadvantages: a message must pass through the IP layer and the TCP/UDP layer on its way from the kernel to the application layer, which introduces extra message copies, and these copies seriously degrade packet receiving performance; and when an application process inside a container needs to receive packets, it is constrained by the NameSpace and similar mechanisms, so message transmission depends on the container network, again adding message copies. Message copying in the Linux kernel protocol stack is therefore an important factor limiting the message transmission rate.
Fig. 2 is a schematic diagram of a zero-copy packet receiving technique. As shown in fig. 2, the zero-copy packet receiving process may include: a message arrives from the network card and is handed to frame management; frame management parses, classifies or hashes the message and then delivers it to a specific queue; queue management is responsible for allocating queues to application processes (for example, allocating queue 1 to application process P1 and queue n to application process Pn), where each application process must be allocated at least one queue to avoid concurrency problems; each application process then receives messages from its designated queue and processes them.
The message zero-copy technique shown in fig. 2 maps the network card driver directly into the application process, so that the application process can access the message queue directly, thereby achieving zero-copy of the message. The network card driver can be placed in the kernel or directly in the application process; the application process interacts directly with the driver queue, and during this interaction a series of parameters must be determined, such as the queue number, the Pool number and the priority scheduling policy used by the application process. If multiple application processes need to receive packets, each of them must map and manage the network card driver and determine a queue number, a Pool number and a priority scheduling policy; since different application processes may be maintained by different users, this approach undoubtedly increases workload and wastes manpower. Moreover, this scheme has problems in scenarios where multiple application processes or containers send and receive messages: with many application processes, the hardware resources of the network card may be insufficient, limiting the number of application processes; some network cards do not support priority scheduling, or their scheduling is not flexible enough; when a process in a container receives packets, it is constrained by the NameSpace and similar mechanisms, so message transmission depends on the container network and adds message copies; and each application process must operate the user-mode driver directly, which brings unnecessary workload.
The embodiments of the present application provide a message forwarding method and a network device, in which zero-copy of messages is achieved by having a packet receiving thread pass memory addresses between a memory block address pool, a first packet receiving queue and a second packet receiving queue; no copy is added during message transmission inside the network device, so the message transmission rate within the network device is improved. Moreover, with the embodiments of the present application, multiple application processes can concentrate on their application logic without considering low-level hardware driver details when receiving packets, which improves generality and working efficiency without affecting performance, and reduces maintenance cost. Different second packet receiving queues correspond to different application processes, and packet receiving capacity can be increased by adding second packet receiving queues and memory, which solves the problem of the number of application processes being limited. In addition, priorities can be distinguished by adding second packet receiving queues, thereby realizing priority scheduling of messages. By adding packet receiving threads and setting affinity and exclusivity for fast packet receiving, problems such as indiscriminate packet loss caused by limited hardware resources, or by hardware that cannot support priority scheduling or schedules inflexibly, can be solved.
Fig. 3 is a flowchart of a message forwarding method according to an embodiment of the present application. As shown in fig. 3, the message forwarding method provided in this embodiment is applied to a network device and is used to transmit messages from the input/output hardware (e.g., a network card) of the network device to an application process inside the network device. The method may be applied to network devices, such as routers and switches, that have high requirements on multi-process or multi-thread operation, containerization, generality, and packet receiving and sending rates. However, this application is not limited thereto.
As shown in fig. 3, the message forwarding method provided in this embodiment includes the following steps:
step S101, taking memory block information out of the memory block address pool, storing a message received by the input/output hardware into the memory block indicated by the memory block information, obtaining description information of the message, and placing the description information into a first packet receiving queue;
step S102, reading the description information from the first packet receiving queue through a packet receiving thread;
step S103, storing, through the packet receiving thread, memory block information that is stored in a second packet receiving queue and marked as idle into the memory block address pool, and placing the description information read from the first packet receiving queue into the second packet receiving queue;
step S104, reading the description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtaining the message according to that description information, and marking as idle the memory block information in the second packet receiving queue that indicates the memory block holding the obtained message;
wherein the memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue do not overlap.
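Steps S101 to S104 can be illustrated with a small simulation. The following is our sketch of the address-swap idea, not the patent's implementation: all names are hypothetical, integer block IDs stand in for memory block head addresses, and the point demonstrated is that forwarding a message moves only a block ID and a small descriptor between the pool and the queues, never the payload itself.

```python
from collections import deque

BLOCK_SIZE = 256
blocks = {i: bytearray(BLOCK_SIZE) for i in range(8)}  # id -> "physical" block

pool = deque([0, 1, 2, 3])   # memory block address pool (hardware-facing)
first_queue = deque()        # first packet receiving queue (descriptors)
# Second packet receiving queue: fixed slots, each holding either an idle
# block ID ("free") or a packet descriptor ("desc"). Its IDs never overlap
# with the IDs currently in the pool.
second_slots = [("free", 4), ("free", 5), ("free", 6), ("free", 7)]

def hw_receive(payload):
    """Step S101: hardware takes a block from the pool, writes the packet
    into it, and puts the packet's description info in the first queue."""
    bid = pool.popleft()
    blocks[bid][: len(payload)] = payload
    first_queue.append({"block": bid, "length": len(payload), "offset": 0})

def receive_thread_step():
    """Steps S102/S103: the packet receiving thread moves one idle block ID
    from the second queue back to the pool, then forwards the descriptor.
    Only addresses move between pool and queue; the payload is not copied."""
    desc = first_queue.popleft()
    for i, (tag, val) in enumerate(second_slots):
        if tag == "free":
            pool.append(val)                  # idle block ID replenishes pool
            second_slots[i] = ("desc", desc)  # descriptor handed to the app
            return

def app_process_step():
    """Step S104: the application reads a descriptor, consumes the packet,
    and marks that packet's block as idle again in the second queue."""
    for i, (tag, val) in enumerate(second_slots):
        if tag == "desc":
            bid = val["block"]
            payload = bytes(blocks[bid][val["offset"]: val["offset"] + val["length"]])
            second_slots[i] = ("free", bid)   # block available for a later swap
            return payload

hw_receive(b"packet-A")
receive_thread_step()
result = app_process_step()
```

Note how the non-overlap invariant is preserved: after the full cycle, the set of IDs in the pool and the set of idle IDs in the second queue are still disjoint.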
In an exemplary embodiment, the memory block information stored in the memory block address pool and the second packet receiving queue may include a memory block head address or a memory block Identifier (ID); the memory block is a segment of physical memory with continuous addresses and is used for caching messages received by input and output hardware. For example, the memory block address pool and the second packet receiving queue may inject a pre-allocated memory block head address, and the memory block head address injected in the memory block address pool and the memory block head address injected in the second packet receiving queue are not repeated. Alternatively, the memory block address pool and the second packet receiving queue may be injected with a pre-allocated memory block ID, and the memory block ID injected in the memory block address pool and the memory block ID injected in the second packet receiving queue are not duplicated.
In an exemplary embodiment, the description information of the packet may include: caching the memory block head address of the memory block of the message, the message length and the offset information of the message based on the memory block head address. However, this is not limited in this application.
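A minimal sketch of such a descriptor, with hypothetical field names of our choosing (the patent only specifies that it carries the block head address, the message length, and the offset from the head address):

```python
from dataclasses import dataclass

@dataclass
class PacketDesc:
    block_addr: int   # memory block head address (here: index into a table)
    length: int       # message length in bytes
    offset: int       # message offset from the block head address

# A "memory block": contiguous buffer with the message at some offset.
block = bytearray(64)
block[8:13] = b"hello"
desc = PacketDesc(block_addr=0, length=5, offset=8)

# A reader recovers the message from the descriptor alone, without copying
# it anywhere during forwarding.
payload = bytes(block[desc.offset: desc.offset + desc.length])
```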
In an exemplary embodiment, the second packet receiving queue may be a ring queue; each ring queue is a lock-free queue, so locking is avoided. However, this is not limited in this application. In other embodiments, the second packet receiving queue may be a FIFO (First In, First Out) queue.
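A single-producer/single-consumer ring queue can be sketched as follows. This is an illustrative model of ours: with exactly one reader and one writer, the head index is written only by the consumer and the tail index only by the producer, which is the discipline that makes a lock-free ring possible (a real implementation relies on atomic operations and memory barriers, which plain Python does not provide).

```python
class SPSCRing:
    """Single-producer/single-consumer ring queue; one slot is kept empty
    to distinguish the full state from the empty state."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # next slot to read  (consumer-owned)
        self.tail = 0   # next slot to write (producer-owned)

    def push(self, item):
        if (self.tail + 1) % self.capacity == self.head:
            return False                 # full
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % self.capacity
        return True

    def pop(self):
        if self.head == self.tail:
            return None                  # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        return item

q = SPSCRing(4)                 # capacity 4 -> 3 usable slots
ok = q.push("d1") and q.push("d2") and q.push("d3")
full = q.push("d4")             # rejected: ring is full
first = q.pop()
```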
In an exemplary embodiment, before step S101, the message forwarding method of this embodiment may further include: after a packet receiving request of an application process is received, allocating at least one memory slice with continuous physical addresses to the application process, cutting a plurality of memory blocks from the memory slice, respectively storing memory block information (such as a memory block head address or an ID) corresponding to the memory blocks into a memory block address pool and a second packet receiving queue corresponding to the application process, and marking the memory block information stored into the second packet receiving queue to be in an idle state; or, reserving at least one memory slice with continuous physical addresses, cutting a plurality of memory blocks from the memory slice after receiving a packet receiving request of an application process, respectively storing memory block information (for example, a memory block head address or ID) corresponding to the plurality of memory blocks into a memory block address pool and a second packet receiving queue corresponding to the application process, and marking the memory block information stored into the second packet receiving queue as an idle state. And the memory block information injected into the memory block address pool and the memory block information injected into the second packet receiving queue are not repeated. For example, address allocation of the memory slices and injection of the memory block head address or ID in the memory block address pool and the second packet receiving queue may be implemented by a packet receiving process. However, this is not limited in this application.
Each memory block cut from the memory slice with continuous physical addresses can be used for caching messages, and the physical addresses in each memory block are continuous. When the continuous physical addresses provided by one memory chip are insufficient, a sufficient number of memory blocks can be cut from a plurality of memory chips as long as the physical addresses inside the memory blocks cut from the memory chips are continuous.
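The cutting and injection described above can be sketched in a few lines. The sizes and the half-and-half split below are our illustrative choices, not the patent's: a physically contiguous slice is cut into equal blocks, and the block head addresses are injected into the pool and the second packet receiving queue with no duplication between the two.

```python
SLICE_SIZE = 4096
BLOCK_SIZE = 512

# One "memory slice" with contiguous addresses, cut into equal blocks.
memory_slice = bytearray(SLICE_SIZE)
block_addrs = list(range(0, SLICE_SIZE, BLOCK_SIZE))  # 8 block head addresses

# Inject the addresses with no overlap: some feed the hardware-facing pool,
# the rest start as idle entries in the application-facing second queue.
pool = block_addrs[: len(block_addrs) // 2]
second_queue_free = block_addrs[len(block_addrs) // 2:]
```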
In an exemplary embodiment, after step S102, the message forwarding method of this embodiment may further include: when there is no memory block information marked as idle in the second packet receiving queue, returning, through the packet receiving thread, the memory block information corresponding to the description information read from the first packet receiving queue to the memory block address pool. When there is no idle memory block information in the second packet receiving queue (that is, the queue is full of message description information), the packet receiving thread recovers the corresponding memory block information, thereby discarding the corresponding message.
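The drop path can be sketched as follows (our illustrative model, continuing the earlier hypothetical names): when no slot in the second queue is marked idle, the thread returns the message's block straight to the pool, so the message is dropped without ever being copied and the pool does not leak blocks.

```python
from collections import deque

pool = deque([10, 11])
first_queue = deque([{"block": 12, "length": 5, "offset": 0}])
second_slots = [("desc", {"block": 13})]   # queue full: no "free" entry left

def receive_thread_step():
    """Forward the descriptor if an idle slot exists; otherwise recycle the
    message's block back to the pool, discarding the message."""
    desc = first_queue.popleft()
    for i, (tag, val) in enumerate(second_slots):
        if tag == "free":
            pool.append(val)
            second_slots[i] = ("desc", desc)
            return True                     # forwarded
    pool.append(desc["block"])              # recycle block; message dropped
    return False

forwarded = receive_thread_step()
```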
In an exemplary embodiment, after step S102, the message forwarding method of this embodiment may further include: reading a message cached at a physical address indicated by the description information according to the description information read from the first packet receiving queue through a packet receiving thread, and determining a second packet receiving queue corresponding to the read message by analyzing the read message;
accordingly, step S103 may include: and putting one piece of memory block information marked as an idle state stored in a second packet receiving queue corresponding to the read message into a memory block address pool through a packet receiving thread, and putting the description information read from the first packet receiving queue into the second packet receiving queue.
In an exemplary embodiment, after reading, by a packet receiving thread, a packet cached at a physical address indicated by description information according to the description information read from a first packet receiving queue, and determining, by analyzing the read packet, a second packet receiving queue corresponding to the read packet, the packet forwarding method of this embodiment may further include: and when the memory block information marked as the idle state does not exist in the second packet receiving queue corresponding to the message read by the packet receiving thread, the memory block information corresponding to the description information read from the first packet receiving queue is returned to the memory block address pool by the packet receiving thread. When there is no memory block information in an idle state in the second packet receiving queue (that is, the description information of the packet in the second packet receiving queue is full), the packet receiving thread may recover the corresponding memory block information, so as to discard the corresponding packet.
In an exemplary embodiment, before step S101, the message forwarding method of this embodiment may further include: receiving a packet receiving request of an application process; according to the packet receiving request of the application process, one or more corresponding second packet receiving queues are created for the application process; and returning the creation information of the second packet receiving queue corresponding to the application process. One application process may correspond to one second packet receiving queue, or correspond to multiple second packet receiving queues (for example, one second packet receiving queue group); a second receive queue corresponds to only one application process. The receiving of the packet receiving request and the creating process of the second packet receiving queue can be realized by a packet receiving process. For example, the receiving of the receive packet request and the creating of the second receive packet queue may be implemented by a receive packet thread in the receive packet process. However, this is not limited in this application. In other embodiments, the receiving of the receive packet request and the creating of the second receive packet queue may be implemented by other threads (e.g., a channel management thread) within the receive packet process.
In an exemplary embodiment, the package receiving request of the application process may carry the following information: the number of the second packet receiving queues requested to be created, the size of the second packet receiving queues, the maximum length of the received messages, the characteristic information of the received messages and the like. The creating information of the second packet receiving queue corresponding to the application process may include: and the number and other information of the second packet receiving queue corresponding to the application process. However, this is not limited in this application.
In an exemplary embodiment, reading, by a packet receiving thread, a packet cached at a physical address indicated by description information according to the description information read from a first packet receiving queue, and determining, by analyzing the read packet, a second packet receiving queue corresponding to the read packet, may include: mapping the description information read from the first packet receiving queue to a virtual address, reading and analyzing a message to obtain the characteristic information of the message; determining an application process for receiving the message according to the analyzed feature information of the message; and determining a second packet receiving queue corresponding to the message according to the application process receiving the message and the corresponding relationship between the application process and the second packet receiving queue (for example, the application process and the second packet receiving queue are in a one-to-one corresponding relationship).
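The classification step can be sketched as a lookup from parsed message features to the owning process's second queue. The header layout, port numbers and names below are entirely our assumptions for illustration; the patent only requires that feature information parsed from the message selects the receiving application process and, through the one-to-one mapping, its second packet receiving queue.

```python
# Hypothetical feature tables: a 2-byte big-endian "port" field selects the
# application process, and each process owns exactly one second queue.
process_of_port = {5000: "P1", 6000: "P2"}
queue_of_process = {"P1": "queue_1", "P2": "queue_2"}

def classify(packet: bytes) -> str:
    """Parse the feature field from the message and return the name of the
    second packet receiving queue it should be forwarded to."""
    port = int.from_bytes(packet[0:2], "big")
    proc = process_of_port[port]
    return queue_of_process[proc]

target = classify((5000).to_bytes(2, "big") + b"payload")
```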
In an exemplary embodiment, creating one or more corresponding second packet receiving queues for the application process according to the packet receiving request of the application process may include: and according to a packet receiving request of the application process, creating a plurality of second packet receiving queues supporting priority scheduling for the application process, wherein any priority of a message to be received by the application process corresponds to one or more second packet receiving queues. For example, if a message to be received by an application process corresponds to two priorities, at least two second packet receiving queues (e.g., queue 1 and queue 2) may be created for the application process, where one priority may correspond to at least one second packet receiving queue (e.g., queue 1) and another priority may correspond to at least one second packet receiving queue (e.g., queue 2); in other words, a packet belonging to one of the priority levels may be received through at least one second packet receiving queue (e.g., queue 1), and a packet belonging to the other priority level may be received through another at least one second packet receiving queue (e.g., queue 2).
In an exemplary embodiment, reading, by a packet receiving thread, a packet cached at a physical address indicated by description information according to the description information read from a first packet receiving queue, and determining, by analyzing the read packet, a second packet receiving queue corresponding to the read packet, may include: mapping the description information read from the first packet receiving queue to a virtual address, reading and analyzing a message to obtain the characteristic information of the message; determining an application process for receiving the message and the priority of the message according to the analyzed feature information of the message; and determining a second packet receiving queue corresponding to the message according to the application process for receiving the message, the priority of the message and the corresponding relation between the second packet receiving queue corresponding to the application process and the priority. When the second packet receiving queue corresponding to the application process supports priority scheduling, the application process can receive the packet from the corresponding second packet receiving queue according to a certain proportion, so that the priority scheduling of the packet is realized. For example, the application process may preferentially receive the higher-priority packet from the second packet receiving queue corresponding to the higher priority.
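Receiving "according to a certain proportion" can be sketched as a weighted round-robin over per-priority second queues. The 3:1 ratio and the names are our illustrative choices: high priority is favoured without starving low priority.

```python
from collections import deque

# Two second packet receiving queues for one application process.
queue_hi = deque(["h1", "h2", "h3", "h4"])   # high-priority messages
queue_lo = deque(["l1", "l2"])               # low-priority messages

def drain(hi_per_round=3, lo_per_round=1):
    """Weighted round-robin receive: up to 3 high-priority messages for
    every low-priority message until both queues are empty."""
    out = []
    while queue_hi or queue_lo:
        for _ in range(hi_per_round):
            if queue_hi:
                out.append(queue_hi.popleft())
        for _ in range(lo_per_round):
            if queue_lo:
                out.append(queue_lo.popleft())
    return out

order = drain()
```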
In an exemplary embodiment, before step S101, the message forwarding method of this embodiment may further include: after receiving a packet receiving request of an application process, establishing a corresponding memory block address pool for the application process; or, one or more memory block address pools are created according to the message types received by the input and output hardware. For example, an independent memory block address pool may be created for an application process according to a packet receiving request of the application process, so as to improve the packet receiving performance of the application process; alternatively, multiple application processes may share one or more memory block address pools, for example, one or more memory block address pools may be created in advance. For example, two memory block address pools may be created, where a memory block indicated by memory block information stored in one memory block address pool may be used to cache a packet with a packet size smaller than a preset value, and a memory block indicated by memory block information stored in the other memory block address pool may be used to cache a packet with a packet size greater than or equal to a preset value. The creation of the memory block address pool can be realized through a packet receiving process. However, this is not limited in this application.
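The two-pool arrangement can be sketched as a size-based selection. The block sizes and threshold below are our illustrative values: short messages draw from the small-block pool so that large blocks are not wasted on them.

```python
SMALL_BLOCK = 256
LARGE_BLOCK = 2048
THRESHOLD = 256     # illustrative cut-off between the two pools

small_pool = list(range(0, 10))    # block IDs backing 256-byte blocks
large_pool = list(range(10, 14))   # block IDs backing 2048-byte blocks

def take_block(packet_len):
    """Take a block from the pool whose block size matches the message."""
    if packet_len < THRESHOLD:
        return small_pool.pop(), SMALL_BLOCK
    return large_pool.pop(), LARGE_BLOCK

bid1, cap1 = take_block(64)     # short message -> small-block pool
bid2, cap2 = take_block(1500)   # long message  -> large-block pool
```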
In an exemplary embodiment, before step S101, the message forwarding method of this embodiment may further include: after receiving a packet receiving request of an application process, creating a corresponding first packet receiving queue for the application process; or creating one or more first packet receiving queues according to the type of the input/output hardware. An independent first packet receiving queue can be created for each application process, for example according to the packet receiving request of that application process, so as to improve its packet receiving performance; alternatively, multiple application processes may share one or more first packet receiving queues, for example created in advance. Illustratively, the first packet receiving queue may be created according to the type of input/output hardware (network card): when the network card does not support priority scheduling, one first packet receiving queue may be created, and when the network card supports priority scheduling, a plurality of first packet receiving queues may be created. The creation of the first packet receiving queue can be performed by the packet receiving process. However, this is not limited in this application.
In an exemplary embodiment, before step S101, the message forwarding method of this embodiment may further include: after receiving a packet receiving request of an application process, creating a corresponding packet receiving thread for the application process; or after receiving a packet receiving request of the application process, selecting one of the created packet receiving threads as the packet receiving thread corresponding to the application process. Wherein, a separate packet receiving thread can be created for each application process, or a plurality of application processes can share one packet receiving thread. For example, after receiving a packet receiving request of an application process, if the application process can share a packet receiving thread with other application processes, one of the packet receiving threads created for other application processes may be selected as a packet receiving thread corresponding to the application process, for example, a default packet receiving thread may be set to be provided for a plurality of application processes. The creation of the packet receiving thread can be realized through the packet receiving process. However, this is not limited in this application.
In an exemplary embodiment, the plurality of application processes may correspond to only one packet receiving thread, or the plurality of application processes may correspond to a plurality of packet receiving threads. Wherein, the message can be transmitted to a plurality of application processes only through one packet receiving thread; alternatively, the packet may be transmitted to the plurality of application processes through a plurality of packet receiving threads, for example, the packet may be transmitted to five application processes through two packet receiving threads, one of the packet receiving threads may transmit the packet to three application processes, and the other packet receiving thread may transmit the packet to the other two application processes.
In an exemplary embodiment, one or more application processes may be located within a container. The message forwarding method provided by this embodiment can be applied to a scenario in which an application process in a container needs to receive packets. When a packet receiving thread on the Host receives packets for an application process in a container, a segment of physical memory with continuous addresses is needed to create the second packet receiving queue, because the Host and the container have different namespaces, among other reasons.
In an exemplary embodiment, the application process and the receive thread may both be located within the container. The packet forwarding method provided by this embodiment may be applied to a scenario where a packet is directly received from input/output hardware in a container.
In an exemplary embodiment, the message forwarding method of this embodiment may further include: the affinity or exclusivity of the receive thread to Central Processing Unit (CPU) resources is set. The CPU affinity of the packet receiving thread may be set, or a CPU resource may be monopolized by a cgroup or an exclusive technique, so as to improve the packet receiving performance. For example, when the network card does not support priority scheduling and only creates one first packet receiving queue, the CPU affinity of the packet receiving thread may be set so that the packet receiving thread exclusively monopolizes a certain CPU resource, thereby reducing the probability of indiscriminate packet loss. However, this is not limited in this application. When the number of the first packet receiving queues is multiple, the CPU affinity of the packet receiving thread may also be set to improve the packet receiving performance.
In an exemplary embodiment, the message forwarding method of this embodiment may further include: after reading a message according to the description information read from the first packet receiving queue through the packet receiving thread, updating the flow statistical count of the service flow to which the message belongs, and discarding the message when the flow statistical count in the speed limit duration meets the set condition; and setting the flow statistic count as an initial value after the speed limit duration is reached each time. The message forwarding method provided by this embodiment may be applicable to a scenario where the traffic of the service flow is too large.
In an exemplary embodiment, the initial value of the flow statistic count may be 0, after a packet is read by the packet receiving thread, the flow statistic count of the service flow to which the packet belongs may be incremented by one, and when the flow statistic count within the speed limit duration (e.g., one second) meets a set condition (e.g., is greater than the speed limit value of the service flow), the packet is discarded; and sets the flow statistic count to an initial value (here, 0) after each time the speed limit duration is reached.
In an exemplary embodiment, an initial value of a flow statistic count of a traffic flow may be a speed limit value of the traffic flow, after a packet is read by a packet receiving thread, the flow statistic count of the traffic flow to which the packet belongs may be decremented by one, and when the flow statistic count within a speed limit duration (for example, one second) meets a set condition (for example, the flow statistic count within the speed limit duration is 0), the packet is discarded; and setting the flow statistic count as an initial value (here, a speed limit value) after the speed limit duration is reached each time.
In an exemplary embodiment, the message forwarding method of this embodiment may further include: taking out memory block information (such as a memory block first address or an ID) which is stored in the first packet sending queue and marked as an idle state from the first packet sending queue, storing a message to be sent by an application process into the memory block indicated by that memory block information to obtain description information of the message, and putting the description information of the message into the first packet sending queue; reading the description information from the first packet sending queue through the packet sending thread, putting memory block information (such as a memory block first address or an ID) which is stored in a memory block address pool and marked as an idle state into the first packet sending queue, and putting the description information read from the first packet sending queue into a second packet sending queue; reading the description information from the second packet sending queue, obtaining a message according to the description information read from the second packet sending queue, sending the obtained message through input/output hardware (such as a network card), and after sending the obtained message, storing the memory block information indicating the memory block in which the obtained message is located back into the memory block address pool. In this embodiment, the packet sending thread may send packets by transferring memory addresses among the memory block address pool, the first packet sending queue, and the second packet sending queue. However, this is not limited in this application. In other embodiments, the message sending process may not adopt the above-mentioned method.
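The send path above can be sketched as a small, deliberately simplified pipeline (single message in flight, one queue slot). Addresses, queue contents, and function names are hypothetical; the point is the address hand-off: the application swaps an idle Block address for a descriptor, the send thread refills the first queue from the pool, and the hardware step returns the Block address to the pool after transmission.

```python
from collections import deque

pool = deque([0xA000, 0xB000])  # memory block address pool (free addresses)
send_q1 = deque([0xC000])       # first packet sending queue: one idle address parked
send_q2 = deque()               # second packet sending queue
sent = []                       # record of "transmitted" (address, length) pairs

def app_enqueue(msg_len):
    """Application side: take an idle Block address from queue 1, 'store'
    the message in it, and put the message's descriptor into queue 1."""
    addr = send_q1.popleft()
    send_q1.append(("DESC", addr, msg_len))

def send_thread_step():
    """Packet sending thread: move the descriptor from queue 1 to queue 2
    and park a fresh idle address from the pool in queue 1."""
    desc = send_q1.popleft()
    send_q1.append(pool.popleft())
    send_q2.append(desc)

def hw_send_step():
    """Hardware side: read the descriptor from queue 2, transmit, and
    store the Block address back into the pool."""
    _, addr, msg_len = send_q2.popleft()
    sent.append((addr, msg_len))
    pool.append(addr)
```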
Fig. 4 is a schematic diagram illustrating an example of a packet forwarding method according to an embodiment of the present application. The present exemplary embodiment describes a receiving process for implementing zero-copy packet transmission by using a memory address replacement manner through a packet receiving thread, a memory block address pool, a first packet receiving queue, and a second packet receiving queue. The second packet receiving queues are exemplified by circular queues, that is, one second packet receiving queue is a circular queue (hereinafter, referred to as a Ring), and each circular queue is a lock-free queue. In this example, one application process corresponds to one set of second packet receiving queues (i.e., one Ring set).
In the present exemplary embodiment, before performing message reception, the following operations are performed:
1) Reserving a memory slice A with continuous physical addresses, and cutting a plurality of memory blocks (Blocks) out of memory slice A for caching messages; the size of memory slice A is greater than or equal to the total number of memory blocks (for example, n in fig. 4, an integer greater than 1) multiplied by the maximum supported message length (for example, 10K Bytes). Each Block represents a physical memory with continuous addresses, and the Block first address is the first address of that physical memory.
In other embodiments, a plurality of memory slices may be reserved, and a plurality of memory blocks are cut out from the memory slices as long as the internal physical addresses of the memory blocks cut out from the memory slices are continuous.
2) A hardware driver (e.g., a network card driver) is allocated a memory block address Pool (hereinafter, Pool) B and a first packet receiving Queue (hereinafter, Queue) C. Pool B is used for storing the first addresses of memory blocks, and may be a FIFO queue, a linked list, an array, or a ring queue; however, the present application is not limited thereto. Queue C may be a FIFO structure or a ring queue structure; however, this application is not limited thereto.
3) Creating a Ring group D (namely the plurality of second packet receiving queues) supporting priority scheduling; in this example, Ring group D may include m rings, and m may be an integer greater than or equal to 1.
4) Creating a packet receiving Thread (hereinafter referred to as Thread) E for receiving packets from a hardware driver, where the Thread E can map the first address of the memory chip a with continuous physical addresses into a virtual address for use in resolving a packet.
The process of realizing message zero copy transmission is carried out among Pool B, Queue C, Ring group D and Thread E, and the specific memory address replacement action is carried out between Pool B and Ring group D.
In the present exemplary embodiment, as shown in fig. 4, the n-k Block first addresses of Block k+1 to Block n may be injected into Pool B; the i Block first addresses of Block 1 to Block i may be put into Ring 0; and the k-j+1 Block first addresses of Block j to Block k may be put into Ring m. The Block first addresses of the other Rings in Ring group D are injected in the same way as those of Ring 0 and Ring m, and all Block first addresses injected into Ring group D are initially in an idle state. Here, i, j, and k are integers greater than 1. Throughout the injection process, it is ensured that no Block first address injected into any Ring or into Pool B is repeated. The number of Block first addresses injected into each Ring and into Pool B may be the same or different, which is not limited in this application. The sum of the number of Block first addresses in Ring group D and the number of Block first addresses in Pool B may be n.
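The injection above can be modeled as partitioning a contiguous run of Block first addresses between the Rings and Pool B, which makes the no-duplication invariant easy to check. The function name, base address, and block size are hypothetical.

```python
def inject(n_blocks, ring_sizes, base=0x10000, block_size=0x2800):
    """Split n_blocks Block first addresses between the Rings and Pool B
    so that no address appears twice (the invariant stated in the text).
    ring_sizes[r] is the number of free addresses injected into Ring r;
    the remainder goes into Pool B."""
    addrs = [base + i * block_size for i in range(n_blocks)]
    rings, cursor = [], 0
    for size in ring_sizes:
        rings.append(addrs[cursor:cursor + size])
        cursor += size
    pool_b = addrs[cursor:]  # Blocks k+1 .. n go into Pool B
    return rings, pool_b
```

Because every address is taken from one non-overlapping slice of the same list, the union of all Rings and Pool B covers the n Blocks exactly once.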
Based on Pool B, Queue C, Ring group D, and Thread E set as above, the message forwarding method of the present exemplary embodiment may include steps 101 to 109.
Step 101, the network card sends the received message to frame management.
Step 102, frame management parses and classifies/hashes the message, and takes out a Block first address from Pool B to store the message.
Step 103, frame management fills the information (corresponding to the above description information) such as the Block first address, the message length, and the offset information of the message based on the Block first address into a descriptor, and puts the descriptor into Queue C. Wherein, the number of Queue C can be one or more; when the number of Queue C is multiple, that is, multiple queues are adopted, frame management can select which Queue C the descriptor of the packet is put into according to the characteristic information of the packet, thereby supporting priority scheduling. In the present exemplary embodiment, one Queue C is exemplified.
It should be noted that in other implementation manners, an independent thread may be set, and is configured to take out a Block header address from Pool B to store a packet, fill information such as the Block header address, a packet length, and offset information of the packet based on the Block header address into a descriptor, and place the descriptor into Queue C.
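Steps 102-103 above can be sketched as a minimal Python model of frame management. The descriptor fields are the ones named in the text (Block first address, message length, offset of the message relative to the Block first address); the addresses and function name are hypothetical.

```python
from collections import namedtuple, deque

# Descriptor carrying the fields named in step 103.
Descriptor = namedtuple("Descriptor", "block_addr msg_len offset")

pool_b = deque([0x2000, 0x4800, 0x7000])  # Pool B: free Block first addresses
queue_c = deque()                          # Queue C: first packet receiving queue

def frame_management_receive(msg: bytes, offset=0):
    """Take a free Block first address from Pool B to 'store' the message,
    fill its descriptor, and put the descriptor into Queue C."""
    if not pool_b:
        return None                        # no free Block available: drop
    addr = pool_b.popleft()                # step 102: take a Block first address
    desc = Descriptor(addr, len(msg), offset)
    queue_c.append(desc)                   # step 103: descriptor into Queue C
    return desc
```

With multiple Queue C instances, the same routine would additionally select the target queue from the message's characteristic information to support priority scheduling.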
Step 104, Thread E polls a descriptor from Queue C, takes out the Block first address, the message length, the offset of the message relative to the Block first address, and other information, and obtains the virtual address of the message through a simple offset operation. The virtual address may be calculated as follows: the virtual address of the message equals the virtual first address to which memory slice A is mapped, plus the difference between the Block first address of the message and the physical first address of memory slice A, plus the offset of the message within the Block. Thread E may then read and parse the message, and determine, according to the characteristic information of the message (e.g., obtained from a characteristic field of the message), the application process to which the message is to be forwarded and the corresponding Ring. Then, Thread E may put the Block first address, the message length, the offset of the message relative to the Block first address, and other information into the corresponding Ring by replacing a Block first address, as in steps 105 to 106. In this example, the Ring corresponding to the message is Ring m.
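The offset operation in step 104 works because mapping the physically contiguous slice A preserves each Block's position within the slice. A one-line sketch (function name and addresses are hypothetical):

```python
def msg_virtual_addr(block_phys_addr, offset, slice_phys_base, slice_virt_base):
    """Virtual address of a message: the virtual base of memory slice A,
    plus the Block's displacement inside slice A, plus the message's
    offset within the Block."""
    return slice_virt_base + (block_phys_addr - slice_phys_base) + offset
```

No per-message mapping or copying is required; Thread E only needs the single base mapping of slice A established at setup time.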
Step 105, Thread E takes a free Block first address out of Ring m and returns it to Pool B.
Step 106, Thread E puts the Block first address, the message length, the offset of the message relative to the Block first address, and other information into the corresponding position in Ring m for the application process P11 to read. This information may be put into the position in Ring m previously occupied by the free Block first address.
It should be noted that, if there is no free Block first address in Ring m available for replacement, that is, all the entries stored in Ring m are description information of messages (indicating that Ring m is full), step 107 is executed: Thread E returns the Block first address corresponding to the received message to Pool B. This implements a discard operation when the message cannot be delivered upward, while recycling the Block first address.
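Steps 105-107 together form the address-swap that makes the transfer zero-copy; they can be sketched as one function. Ring slots are modeled as ("FREE", block_addr) or ("MSG", descriptor) pairs, and all names and addresses are hypothetical.

```python
from collections import deque

def deliver_to_ring(ring, desc, pool_b):
    """Steps 105-107: swap a free Block first address out of the Ring into
    Pool B and publish the message descriptor in its place. When the Ring
    holds no free address, return the message's Block to Pool B (drop)."""
    for i, (state, payload) in enumerate(ring):
        if state == "FREE":
            pool_b.append(payload)   # step 105: free address back to Pool B
            ring[i] = ("MSG", desc)  # step 106: descriptor into that slot
            return True
    pool_b.append(desc[0])           # step 107: drop; recycle the Block address
    return False
```

The message bytes never move: only the Block first address changes hands between Pool B and Ring m.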
It should be noted that, when only one application process has a packet receiving requirement, and that application process corresponds to one Ring or there is no priority distinction in its Ring group, Thread E may, after polling a descriptor from Queue C and taking out the Block first address, the message length, the offset of the message relative to the Block first address, and other information, skip reading and parsing the message and directly perform steps 105 and 106.
Step 108, the application process P11 may take out information such as the Block first address, the message length, and the offset information of the message based on the Block first address from Ring m, and then read the message from the Block storing the message. In the present exemplary embodiment, the application process P11 is placed in the container 1. However, this is not limited in this application. In other embodiments, the application process P11 may not be placed in a container.
Step 109, after the application process P11 finishes processing the message, the Block first address corresponding to the message in Ring m may be set to an idle state, so that Thread E can continue to use the Block.
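The consumer side of steps 108-109 can be sketched as a scan of the application's Ring: each message descriptor is taken out, the message is handled, and the slot is returned to the idle state carrying its Block first address for reuse. Slot layout and names follow the same hypothetical model as above.

```python
def app_poll_and_process(ring):
    """Steps 108-109: the application process scans its Ring, reads each
    descriptor (Block first address, message length, offset), and after
    handling the message marks the slot's Block address idle so the
    packet receiving thread can reuse it."""
    handled = []
    for i, (state, payload) in enumerate(ring):
        if state == "MSG":
            block_addr, msg_len, offset = payload
            # ... the real process would read the message from the Block here ...
            handled.append((block_addr, msg_len, offset))
            ring[i] = ("FREE", block_addr)  # step 109: back to idle state
    return handled
```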
In the exemplary embodiment, after frame management puts a message into a Block, the Block first address of the stored message is subsequently replaced, so that the message is transmitted to the application process with zero copy. In this way, the application process accesses the network card through an encapsulated interface, direct interaction between the application process and the network card driver is shielded, and the application process does not need to consider the details of the underlying hardware driver when receiving packets. This improves generality and working efficiency without affecting transmission performance, and reduces maintenance cost.
In an exemplary embodiment, when the network card does not support priority scheduling, that is, there is only one Queue C as shown in fig. 4, all messages sent from the network card enter Queue C. To prevent Queue C from producing indiscriminate packet loss, the CPU affinity of the packet receiving thread can be set so that the thread exclusively occupies a certain CPU resource; the packet receiving thread can then receive as many of the messages in Queue C as possible, reducing the probability of indiscriminate packet loss. This addresses message forwarding in scenarios where the network card does not support priority scheduling or schedules inflexibly. However, this is not limited in this application. In other embodiments, the packet receiving thread may be made to exclusively occupy a certain CPU resource through a cgroup (control groups) or another exclusivity technique.
Fig. 5 is a schematic diagram illustrating an example of a packet forwarding method according to an embodiment of the present application. The present exemplary embodiment illustrates a process of creating a packet reception channel for a plurality of application processes, wherein a set of second packet reception queues (e.g., a Ring set) supporting priority scheduling may be created for each application process.
As shown in fig. 5, the message forwarding method provided in this exemplary embodiment includes the following steps:
Step 201, an application process that has a packet receiving requirement sends a packet receiving request to the packet receiving process P0. In the present exemplary embodiment, if the application processes P11 to P1n in container 1 and the application processes Pn1 to Pnn in container n all have packet receiving requirements, all of them may send packet receiving requests to the packet receiving process P0. The request may be delivered in various ways, such as a message or a reserved memory. Exemplary request information of an application process may include: the number and size of Rings requested to be created, the maximum length of received messages, the characteristic information of received messages, and so on. In this exemplary embodiment, a single packet receiving process P0 is taken as an example for explanation; however, the present application is not limited to this. In other implementations, multiple packet receiving processes may also be employed.
In step 202, the task (Job) of the packet receiving process P0 may create a packet receiving channel for each application process according to its packet receiving request. Job is responsible for distributing and managing packet receiving requests that carry packet receiving requirement information. However, this is not limited in this application. In other embodiments, the packet receiving process P0 may start a channel management thread responsible for managing packet receiving requests and creating packet receiving channels.
In this exemplary embodiment, the Job may reserve a segment of memory slices with continuous physical addresses, create a memory block address pool and a first packet receiving queue, and create a packet receiving thread; and according to the packet receiving requirement of each application process, creating a corresponding Ring group supporting priority scheduling for each application process. The descriptions of the memory chip, the memory block address pool, the first packet receiving queue, the packet receiving thread, and the Ring group may refer to the related description in fig. 4, and therefore are not described herein again.
Wherein, any application process can correspond to a Ring group; for example, as shown in fig. 5, the application process P11 corresponds to Ring group D11, and the application process Pnn corresponds to Ring group Dnn, wherein the number of rings in each Ring group may be the same (e.g., m +1, m being an integer greater than or equal to 0) or different. However, this is not limited in this application.
Taking the example that the Ring group D11 corresponding to the application process P11 supports priority scheduling, any Ring in the Ring group D11 may correspond to a first-level priority, and the following packet receiving thread may put the description information of the packet into the Ring corresponding to the first-level priority by analyzing the priority of the packet. However, this is not limited in this application. In other embodiments, multiple rings in a Ring group supporting priority scheduling may correspond to a first level of priority.
Step 203, after the Job of the packet receiving process P0 creates the priority-supporting Ring group D11 for the application process P11, it returns the creation information of Ring group D11 to the application process P11; similarly, after the Job creates a priority-supporting Ring group for any other application process, it returns the creation information of that Ring group to the application process. The creation information may include queue management information of the priority-supporting Ring group corresponding to the application process (for example, the correspondence between each Ring in the group and its priority), and so on. In this way, a packet receiving channel is created for each application process.
In this exemplary embodiment, the application process may read packets of different priorities from corresponding rings according to a certain proportion, for example, preferentially read packets from rings corresponding to higher priorities.
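Reading "in a certain proportion" from per-priority Rings can be sketched as a weighted poll: higher-priority Rings are visited first and drained in proportion to their weights. This is one possible policy under assumed weights, not the patent's mandated scheme; names are hypothetical.

```python
def weighted_poll(rings_by_prio, weights):
    """Take up to weights[p] messages from the Ring of priority p per pass,
    highest priority first. rings_by_prio is ordered high-to-low."""
    batch = []
    for ring, w in zip(rings_by_prio, weights):
        for _ in range(w):
            if ring:
                batch.append(ring.pop(0))  # oldest message in this Ring
    return batch
```

With weights (2, 1), each pass reads up to two high-priority messages for every low-priority one, so the high-priority Ring is preferentially served without starving the lower one.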
In the present exemplary embodiment, as shown in fig. 5, there may be a plurality of containers, such as 1 to n, and a plurality of application processes in each container, such as Pn1 to Pnn. The number of application processes is limited only by the address-contiguous memory slice A (as shown in fig. 4): as long as memory slice A is expanded and the number of packet receiving processes is increased, the number of application processes is no longer limited by network card hardware resources. Each Ring group corresponds to one application process for packet reception, so by adding Ring groups and memory, the number of packet receiving application processes is no longer limited by network card hardware resources.
The specific packet receiving process of each application process in the exemplary embodiment may refer to the related description of fig. 4, and therefore, the detailed description thereof is omitted here.
Fig. 6 is a schematic diagram illustrating an example of a packet forwarding method according to an embodiment of the present application. The exemplary embodiment illustrates a single packet receiving thread delivering packets to multiple application processes in multiple containers in a unified manner. Here, the application processes P11 to P1n in container 1 and the application processes Pn1 to Pnn in container n all have packet receiving requirements.
As shown in fig. 6, the message forwarding method provided in this exemplary embodiment includes the following processes:
Step 301, each application process determines information such as whether it needs to support priority scheduling, the maximum message buffer size supported by each priority, the maximum length of received messages, and the characteristic information of received messages, and sends a packet receiving request that may carry this information.
Step 302, the Job of the package receiving process P0 creates a package receiving channel for each application process according to the package receiving request of each application process; the process of creating the packet receiving channel can refer to the description of fig. 5, and therefore, the description thereof is omitted. In this example, as shown in fig. 6, the application process P11 corresponds to the Ring group D11, the application process P1n corresponds to the Ring group D1n, the application process Pn1 corresponds to the Ring group Dn1, and the application process Pnn corresponds to the Ring group Dnn, where the number of rings in each Ring group may be the same (e.g., m +1, m is an integer greater than or equal to 0) or different. However, this is not limited in this application.
Step 303, when a packet is sent from the network card, the packet is sent to the first packet receiving queue through frame management, and a packet receiving thread in the packet receiving process P0 can poll the packet, and replace the description information of the stored packet into the Ring of the application process corresponding to the packet after feature information analysis. The relevant description of this step can refer to steps 101 to 107 in fig. 4, and therefore, the description thereof is omitted.
In step 304, the application processes P11 through P1n in the container 1 and the application processes Pn1 through Pnn in the container n may poll the corresponding Ring groups to obtain messages.
Step 305, after each application process finishes processing the message according to the service requirement, the first address of the memory block used for indicating to store the message in the corresponding Ring may be set to be in an idle state so as to be used continuously.
Fig. 7 is a schematic diagram illustrating an example of a packet forwarding method according to an embodiment of the present application. The exemplary embodiment illustrates a single packet receiving thread delivering packets to multiple application processes in a unified manner. Since the packet receiving process P0 and the application processes P1 to Pn are on the same Host, application processes with lower performance requirements may also create Rings in shared memory.
As shown in fig. 7, the message forwarding method provided in this exemplary embodiment includes the following processes:
step 401, each application process (for example, application process P1 to application process Pn) sends a package receiving request according to its package receiving requirement; the packet receiving request may carry the following information: the number and size of rings requested to be created, the maximum length of the received message, the characteristic information of the received message, etc.
Step 402, the Job of the package receiving process P0 creates a package receiving channel for each application process according to the package receiving request of each application process; the process of creating the packet receiving channel can refer to the description of fig. 5, and therefore, the description thereof is omitted. In this example, as shown in fig. 7, the application process P1 corresponds to the Ring group D1, the application process Pm corresponds to the Ring group Dm, and the application process Pn corresponds to the Ring group Dn, where the number of rings in each Ring group may be the same (e.g., m +1, m is an integer greater than or equal to 0) or different. However, this is not limited in this application.
Step 403, when there is a packet sent from the network card, the packet is sent to the first packet receiving queue through frame management, and the packet receiving thread in the packet receiving process P0 can poll the packet, and after the feature information is analyzed, the description information of the stored packet is replaced in the Ring of the application process corresponding to the packet. The relevant description of this step can refer to steps 101 to 107 in fig. 4, and therefore, the description thereof is omitted.
In step 404, the application process P1 to the application process Pn may poll the corresponding Ring group to obtain a message.
Step 405, after each application process finishes processing the message according to the service requirement, the first address of the memory block used for indicating to store the message in the corresponding Ring may be set to be in an idle state so as to be used continuously.
Fig. 8 is a schematic diagram illustrating an example of a packet forwarding method according to an embodiment of the present application. This exemplary embodiment illustrates a plurality of packet receiving threads delivering packets to a plurality of application processes within a plurality of containers in a unified manner. In some scenarios, a single packet receiving thread cannot meet the requirements of certain services, such as sampling and NAT (Network Address Translation), which place very high demands on packet receiving performance; such services can be satisfied by adding packet receiving threads.
As shown in fig. 8, the message forwarding method provided in this exemplary embodiment includes the following processes:
step 501, each application process (for example, application processes P11 to P1n in the container 1 and application processes Pn1 to Pnn in the container n) sends a packet receiving request according to its own packet receiving requirement; the packet receiving request may carry the following information: the number and size of rings requested to be created, the maximum length of the received message, the characteristic information of the received message, etc.
Step 502, the Job of the packet receiving process P0 creates a packet receiving channel for each application process according to its packet receiving request. In this embodiment, when a packet receiving channel is created for each application process, the correspondence between packet receiving threads and application processes can be distinguished. For example, as shown in fig. 8, packet receiving thread 1 may be used to receive packets for the application processes P11 to P1n and the application process Pn1, and packet receiving thread s may be used to receive packets for the application process Pnn, where s may be an integer greater than or equal to 1.
In this example, as shown in fig. 8, the application process P11 corresponds to the Ring group D11, the application process P1n corresponds to the Ring group D1n, the application process Pn1 corresponds to the Ring group Dn1, and the application process Pnn corresponds to the Ring group Dnn, where the number of rings in each Ring group may be the same (e.g., m +1, m is an integer greater than or equal to 0) or different. However, this is not limited in this application.
The rest of the creation process of the packet receiving channel can refer to the description of fig. 5, and therefore, the description thereof is omitted.
In step 503, based on the created packet receiving channel, the packet receiving threads (for example, the packet receiving threads 1 to s) may receive packets for the corresponding application processes. The relevant description of this step can refer to steps 101 to 107 in fig. 4, and therefore, the description thereof is omitted.
In step 504, each application process may poll the corresponding Ring group to obtain a message.
Step 505, after each application process has processed the message according to the service requirement, the memory block first address in the corresponding Ring that indicates the memory block storing the message may be marked as being in an idle state, so that it can be reused.
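The poll-and-release cycle of steps 504 and 505 can be sketched as a small simulation. The following Python sketch models Ring entries as dictionaries; the field names, the FREE/USED markers, and the integer addresses are illustrative assumptions, not the patent's actual data layout:

```python
# Minimal simulation of an application process polling its Ring group
# (step 504) and releasing Block first addresses after use (step 505).
FREE, USED = 0, 1  # illustrative use-state markers

def poll_ring_group(ring_group):
    """Scan every Ring in the group and process any entry marked USED."""
    messages = []
    for ring in ring_group:
        for entry in ring:
            if entry["state"] == USED:
                # Obtain the message via its description information.
                messages.append((entry["block_addr"], entry["length"]))
                # Step 505: after processing, mark the Block first address
                # idle so the packet receiving thread can reuse it.
                entry["state"] = FREE
    return messages

# Example Ring group with two Rings; addresses and lengths are arbitrary.
ring_group = [
    [{"block_addr": 0x1000, "length": 64, "state": USED},
     {"block_addr": 0x3800, "length": 0, "state": FREE}],
    [{"block_addr": 0x6000, "length": 128, "state": USED}],
]
got = poll_ring_group(ring_group)
```

After the poll, every entry is back in the idle state, so the packet receiving thread can refill the same slots without any memory copy.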
Fig. 9 is a schematic diagram illustrating an example of a packet forwarding method according to an embodiment of the present application. This exemplary embodiment illustrates a process in which a plurality of packet receiving threads receive packets for a plurality of application processes on a host and a plurality of application processes in a plurality of containers. In some scenarios, an application process requiring packet reception may be on a Host or in a container, so there are scenarios in which both the host and the containers have application processes requiring packet reception.
As shown in fig. 9, the message forwarding method provided in this exemplary embodiment includes the following processes:
Step 601, each of the application processes Pi to Pk on the host and the application processes Pn1 to Pnn in container n sends a packet receiving request according to its own packet receiving requirement; the packet receiving request may carry the following information: the number and size of Rings requested to be created, the maximum length of the received message, the characteristic information of the received message, etc.
Step 602, the Job of the packet receiving process P0 creates a packet receiving channel for each application process according to the packet receiving request of each application process. In this embodiment, when a packet receiving channel is created for each application process, the correspondence between packet receiving threads and application processes can be distinguished. For example, as shown in fig. 9, packet receiving thread 1 may be used to receive packets for the application processes Pi to Pk and the application process Pn1, and packet receiving thread s may be used to receive packets for the application process Pnn, where s may be an integer greater than or equal to 1.
In this embodiment, the application process on the Host may use the shared memory or the reserved physical memory with continuous addresses to create the Ring group, but the process in the container may only use the reserved physical memory with continuous addresses to create the Ring group.
In this example, as shown in fig. 9, the application process Pi corresponds to the Ring group Di, the application process Pk corresponds to the Ring group Dk, the application process Pn1 corresponds to the Ring group Dn1, and the application process Pnn corresponds to the Ring group Dnn, where the number of Rings in each Ring group may be the same (e.g., m+1, where m is an integer greater than or equal to 0) or different. However, the present application is not limited in this respect.
The rest of the creation process of the packet receiving channel can refer to the description of fig. 5, and therefore, the description thereof is omitted.
Step 603, based on the created packet receiving channel, the packet receiving threads (for example, the packet receiving threads 1 to s) may receive packets for the corresponding application processes. The relevant description of this step can refer to steps 101 to 107 in fig. 4, and therefore, the description thereof is omitted.
In step 604, each application process may poll the corresponding Ring group to obtain a message.
Step 605, after each application process finishes processing the message according to the service requirement, the memory block first address in the corresponding Ring that indicates the memory block storing the message may be marked as being in an idle state, so that it can be reused.
Fig. 10 is a schematic diagram illustrating an example of a packet forwarding method according to an embodiment of the present application. This exemplary embodiment illustrates unified packet receiving for multiple application processes in a container by means of physical memory address replacement. Some CPU daughter-card chips already support virtualization in hardware; by virtualizing the hardware network card into individual objects, packets can be received directly from the Media Access Control (MAC) layer inside the container. For this scenario, a packet receiving thread may reside in the container to receive packets for each application process.
As shown in fig. 10, the message forwarding method provided in this exemplary embodiment includes the following processes:
Step 701, each of the application processes P1 to Pm in the container sends a packet receiving request according to its own packet receiving requirement. The packet receiving request may carry the following information: the number and size of Rings requested to be created, the maximum length of the received message, the characteristic information of the received message, etc.
Step 702, the Job of the package receiving process P0 creates a package receiving channel for each application process according to the package receiving request of each application process. The process of creating the packet receiving channel can refer to the description of fig. 5, and therefore, the description thereof is omitted.
In this example, as shown in fig. 10, the application process P1 corresponds to the Ring group D1, and the application process Pm corresponds to the Ring group Dm, where the number of Rings in each Ring group may be the same (e.g., a+1, where a is an integer greater than or equal to 0) or different. However, the present application is not limited in this respect.
Step 703, based on the created packet receiving channel, the packet receiving thread may receive a packet for the corresponding application process. The relevant description of this step can refer to steps 101 to 107 in fig. 4, and therefore, the description thereof is omitted.
In step 704, each application process may poll the corresponding Ring group to obtain a message.
Step 705, after each application process has processed the message according to the service requirement, the memory block first address in the corresponding Ring that indicates the memory block storing the message may be marked as being in an idle state, so that it can be reused.
In an exemplary embodiment, in order to address the problem of excessive traffic on some service flows, rate limiting of each service flow may be added in the packet receiving thread.
The packet forwarding method provided in this exemplary embodiment may include the following processes:
Step 801, each application process sends a packet receiving request according to its own packet receiving requirement; the packet receiving request may carry the following information: the number and size of Rings requested to be created, the maximum length of the received message, the characteristic information of the received message, and the rate limit value of the received service flow within the rate limit duration (for example, a per-second limit).
Step 802, the Job of the packet receiving process creates a packet receiving channel for each application process according to the packet receiving request of each application process, and records the rate limit value of each service flow. The process of creating the packet receiving channel can refer to the description of fig. 5, and is therefore not repeated here.
Step 803, based on the created packet receiving channel, the packet receiving thread may receive the packet for the corresponding application process.
Each time the packet receiving thread receives a packet, it updates the traffic statistics count of the service flow to which the packet belongs. For example, the count corresponding to the service flow is incremented by one (the initial value of the count being 0); if, within the rate limit duration, the count exceeds the rate limit value of the service flow, the packet is dropped. Once the rate limit duration has elapsed (for example, after one second), the packet receiving thread resets the count of the service flow to 0, completing one rate limiting cycle for that flow. Alternatively, each time the packet receiving thread receives a packet, the count corresponding to the service flow is decremented by one (the initial value of the count being the rate limit value); if the count reaches 0 within the rate limit duration, subsequent packets are dropped. Once the rate limit duration has elapsed (for example, after one second), the packet receiving thread resets the count of the service flow to the rate limit value, completing one rate limiting cycle for that flow.
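The two counting schemes described above can be illustrated with a small sketch. The following Python class implements the first (count-up) scheme; the class and method names are illustrative assumptions, and the expiry of the rate limit duration is driven manually rather than by a timer:

```python
class FlowRateLimiter:
    """Per-flow rate limiter: count packets within a rate-limit window
    and drop once the limit is exceeded (count-up variant)."""

    def __init__(self, limit):
        self.limit = limit  # rate limit value per window (e.g. per second)
        self.count = 0      # traffic statistics count, initial value 0

    def on_packet(self):
        """Return True if the packet is accepted, False if dropped."""
        self.count += 1
        return self.count <= self.limit

    def on_window_expired(self):
        # Once the rate limit duration has elapsed (e.g. one second),
        # reset the count to 0, completing one rate limiting cycle.
        self.count = 0

lim = FlowRateLimiter(limit=3)
results = [lim.on_packet() for _ in range(5)]  # 3 accepted, then 2 dropped
lim.on_window_expired()                        # new window begins
```

The count-down variant is symmetric: start the count at the limit, decrement per packet, drop at zero, and reset to the limit when the window expires.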
The packet receiving process in this step can refer to steps 101 to 107 in fig. 4, and therefore, the description thereof is omitted.
Step 804, each application process can poll the corresponding Ring group to obtain the message.
Step 805, after each application process finishes processing the message according to the service requirement, the corresponding memory block first address in the corresponding Ring may be marked as being in an idle state, so that it can be reused.
Fig. 11 is a schematic diagram illustrating an example of a packet forwarding method according to an embodiment of the present application. This exemplary embodiment describes a transmission process that implements zero-copy packet sending by means of memory address replacement, using a packet sending thread, a memory block address pool, a first packet sending queue, and a second packet sending queue.
In the present exemplary embodiment, before the message transmission, the following operations are performed:
1) Reserving a memory slice with continuous physical addresses, and cutting a plurality of memory blocks (Blocks) from memory slice A for caching messages; the size of the memory slice is greater than or equal to the total number of Blocks (for example, n in fig. 11, where n is an integer greater than 1) multiplied by the maximum supported message length (for example, 10K bytes); each Block represents a segment of physical memory with continuous addresses, and the Block first address is the first address of that segment of physically continuous memory;
In other embodiments, a plurality of memory slices may be reserved and the memory blocks cut from them, as long as the physical addresses within each memory block cut from a memory slice are continuous.
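The preparation in 1) amounts to simple address arithmetic over one physically contiguous slice. A minimal Python sketch, with arbitrary example values for the base address and sizes:

```python
def cut_blocks(slice_base, slice_size, block_size, n_blocks):
    """Cut n_blocks contiguous memory Blocks out of one physically
    contiguous memory slice and return their Block first addresses."""
    # The slice must hold all Blocks: size >= n_blocks * block_size.
    assert slice_size >= n_blocks * block_size
    return [slice_base + i * block_size for i in range(n_blocks)]

# Example: a slice big enough for 4 Blocks, each sized to the maximum
# supported message length (10K bytes in the text; the base address
# 0x80000000 is an illustrative assumption).
MAX_MSG = 10 * 1024
addrs = cut_blocks(slice_base=0x80000000, slice_size=4 * MAX_MSG,
                   block_size=MAX_MSG, n_blocks=4)
```

Because every Block is carved from one contiguous slice, each Block first address directly identifies a physically contiguous buffer that hardware can fill or drain without copying.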
2) Allocating a memory block address pool and a packet sending queue (i.e., the second packet sending queue) to the hardware driver (e.g., the network card driver). The memory block address pool is used for storing memory Block first addresses and may be a FIFO queue, a linked list, an array, or a ring queue; the second packet sending queue may be a FIFO structure or a ring queue structure. However, the present application is not limited thereto.
3) Creating a Ring queue group (hereinafter referred to as a Ring group) supporting priority scheduling (i.e., the plurality of first packet sending queues described above); in this example, the Ring group may include v Rings, where v may be an integer greater than or equal to 1.
4) Creating a packet sending thread to send packets to the hardware driver.
The memory address replacement process occurs between the memory block address pool and the Ring group used for packet sending.
In this exemplary embodiment, as shown in fig. 11, the n−k Block first addresses of Blocks k+1 to n may be injected into the memory Block address pool. Blocks 1 to i, a total of i Block first addresses, are put into Ring 0; Blocks j to k, a total of k−j+1 Block first addresses, are put into Ring v; the Block first addresses in the other Rings of the group are injected in the same way as those in Ring 0 and Ring v, and the use states of all Block first addresses injected into the Ring group are initially idle. Here i, j, and k are integers greater than 1. Throughout the injection process, the Block first addresses injected into the Rings and into the memory Block address pool are not repeated. The number of Block first addresses injected into each Ring and into the memory Block address pool may be the same or different, which the present application does not limit. The sum of the number of Block first addresses in the Ring group and the number of Block first addresses in the memory Block address pool may be n.
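The injection invariant described above (every Block first address lives in exactly one Ring or in the address pool, and all Ring entries start idle) can be sketched as follows; the function name and the dictionary layout are illustrative assumptions:

```python
def inject_addresses(block_addrs, ring_sizes):
    """Distribute Block first addresses among the Rings and the memory
    Block address pool so that no address appears twice.
    ring_sizes[i] gives how many addresses Ring i receives."""
    rings, it = [], iter(block_addrs)
    for size in ring_sizes:
        # Each address injected into a Ring starts in the idle state.
        rings.append([{"block_addr": next(it), "state": "FREE"}
                      for _ in range(size)])
    pool = list(it)  # all remaining addresses go to the address pool
    return rings, pool

addrs = [0x1000 + i * 0x400 for i in range(10)]  # 10 example Blocks
rings, pool = inject_addresses(addrs, ring_sizes=[3, 4])
```

Here 3 + 4 addresses land in the two Rings and the remaining 3 in the pool, so the sum of Ring entries and pool entries equals the total Block count, as the text requires.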
Based on the memory block address pool, the second packet sending queue, the packet sending thread, and the Ring group set up as described above, the message forwarding method of this exemplary embodiment may include steps 901 to 907.
Step 901, the application process P11 in container 1 takes out a Block first address marked as idle from the Ring queue (e.g., Ring v) in its corresponding Ring group, stores the message to be sent into the memory Block indicated by that Block first address, and puts the Block first address of the cached message, the message length, the offset of the message relative to the Block first address, and other such information into the Ring queue (i.e., Ring v).
Step 902, the packet sending thread polls Ring v and reads from it the Block first address, the message length, the offset of the message relative to the Block first address, and other such information.
Step 903, after the packet sending thread has read the Block first address of the cached message, the message length, the offset of the message relative to the Block first address, and other such information from Ring v, it puts a Block first address in an idle state from the memory Block address pool into Ring v.
Step 904, the packet sending thread puts the Block first address of the cached message, the message length, the offset of the message relative to the Block first address, and other such information into the second packet sending queue.
Step 905, frame management reads the Block first address of the cached message, the message length, the offset of the message relative to the Block first address, and other such information from the second packet sending queue, and obtains the message from the corresponding Block according to this information.
Step 906, the message is sent to the outside through the network card.
Step 907, after frame management finishes sending the message, the Block first address of the Block storing the message is put back into the memory Block address pool for subsequent use.
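Steps 902 to 907 can be condensed into a small simulation of the address swap between the Ring, the address pool, and the second packet sending queue. A minimal Python sketch with illustrative names and addresses:

```python
from collections import deque

def send_one(ring, pool, send_queue):
    """One iteration of the packet sending thread (steps 902-904):
    read a cached-message descriptor from the Ring, refill the Ring
    with an idle Block address from the pool, and forward the
    descriptor to the second packet sending queue."""
    desc = ring.popleft()                 # step 902: addr, length, offset
    ring.append({"block_addr": pool.popleft(), "state": "FREE"})  # step 903
    send_queue.append(desc)               # step 904

def frame_send(send_queue, pool):
    """Frame management (steps 905-907): consume the descriptor, send
    the message, then return the Block address to the pool for reuse."""
    desc = send_queue.popleft()
    # ... hardware would read the message from desc["block_addr"] here ...
    pool.append(desc["block_addr"])       # step 907

pool = deque([0x5000, 0x6000])
ring = deque([{"block_addr": 0x1000, "length": 200, "offset": 0}])
send_queue = deque()
send_one(ring, pool, send_queue)
frame_send(send_queue, pool)
```

Throughout the cycle only descriptors move; the message bytes stay in the Block they were written to, which is what makes the send path zero-copy.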
It should be noted that the above packet sending process is only an example, and the present application is not limited thereto; the packet sending processes of different types of network cards differ. For example, after step 903, the packet sending thread may compose a descriptor from information such as the Block first address caching the message, the message length, the offset of the message relative to the Block first address, the queue identifier (e.g., Queue ID) of the second packet sending queue, and the pool identifier (e.g., Pool ID) of the memory Block address pool to which the corresponding Block first address needs to be released after the message is sent, and then call the network card driver interface to send the descriptor. After the message is sent, the network card driver returns the Block first address corresponding to the physical address of the cached message to the memory Block address pool.
Fig. 12 is a schematic diagram of a message forwarding apparatus according to an embodiment of the present application. As shown in fig. 12, the message forwarding apparatus provided in this embodiment includes: a first packet receiving module 1201, adapted to select memory block information stored in a memory block address pool, store a message received by input/output hardware (e.g., a network card) into the memory block indicated by the memory block information, obtain description information of the message, and place the description information of the message into a first packet receiving queue; a second packet receiving module 1202, adapted to read the description information from the first packet receiving queue through a packet receiving thread, put memory block information that is stored in a second packet receiving queue and marked as being in an idle state into the memory block address pool, and put the description information read from the first packet receiving queue into the second packet receiving queue; a third packet receiving module 1203, adapted to read description information from the second packet receiving queue through the application process corresponding to the second packet receiving queue, obtain the message according to the description information read from the second packet receiving queue, and mark, in the second packet receiving queue, the memory block information of the memory block in which the obtained message resides as being in an idle state. The memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue are not repeated.
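The address replacement performed by the second packet receiving module 1202 can be sketched as follows. This Python sketch models the swap that keeps the total set of Block addresses in circulation constant; the function name and the list-based second queue are illustrative assumptions:

```python
from collections import deque

def receive_one(first_q, second_q, pool):
    """One pass of the packet receiving thread: move a message
    descriptor from the first packet receiving queue into the second,
    swapping an idle Block address back into the address pool so the
    set of Block addresses in circulation never changes."""
    desc = first_q.popleft()
    # Find an entry in the second queue whose Block is marked idle.
    for entry in second_q:
        if entry["state"] == "FREE":
            pool.append(entry["block_addr"])  # idle Block back to the pool
            entry.update(block_addr=desc["block_addr"],
                         length=desc["length"], state="USED")
            return True
    # No idle slot: return the message's Block to the pool (drop).
    pool.append(desc["block_addr"])
    return False

pool = deque()
first_q = deque([{"block_addr": 0x2000, "length": 80}])
second_q = [{"block_addr": 0x9000, "length": 0, "state": "FREE"}]
ok = receive_one(first_q, second_q, pool)
```

The fallback branch, returning the Block to the pool when no idle slot exists, corresponds to the behavior the module exhibits when the application process has not yet released any entries.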
In an exemplary embodiment, the second packet retrieving module 1202 may be further adapted to, when there is no memory block information marked as an idle state in the second packet retrieving queue, store, by the packet retrieving thread, the memory block information corresponding to the description information read from the first packet retrieving queue back to the memory block address pool.
In an exemplary embodiment, the second packet receiving module 1202 may be further adapted to read, by the packet receiving thread, the packet cached at the physical address indicated by the description information according to the description information read from the first packet receiving queue, and determine, by analyzing the read packet, the second packet receiving queue corresponding to the read packet.
In an exemplary embodiment, the second packet receiving module 1202 may include a packet receiving thread and a task (Job) (or a channel management thread).
Fig. 13 is a schematic diagram of an example of a message forwarding apparatus according to an embodiment of the present application. In an exemplary embodiment, as shown in fig. 13, the message forwarding apparatus provided in this embodiment may further include: a second packet receiving queue creating and managing module 1204, adapted to receive a packet receiving request of an application process, create one or more corresponding second packet receiving queues for the application process according to the packet receiving request, and return the creation information of the second packet receiving queues corresponding to the application process.
The second packet receiving queue creating and managing module 1204 may be configured to create the second packet receiving queues and to provide interfaces for reading, writing, releasing (free), and replacing messages, among others. If the application process is in a container, a contiguous segment of physical memory may be used to create the second packet receiving queue group, owing to differences in namespaces (NameSpace) and the like. If the application process is not in a container, the second packet receiving queue group (e.g., a Ring group) may be created using a contiguous segment of physical memory, Linux shared memory, or the like. In addition, since each Ring group corresponds to one application process, packet receiving capacity for application processes can be scaled up by adding Ring groups and memory.
In an exemplary embodiment, as shown in fig. 13, the message forwarding apparatus of this embodiment may further include: a memory block address pool creating module 1205, adapted to create a corresponding memory block address pool for the application process after receiving the packet receiving request of the application process; or to create one or more memory block address pools according to the message types received by the input/output hardware (network card). Multiple memory block address pools can be created according to service requirements. For example, some Blocks may be planned as 1K bytes for storing short messages, with their memory block first addresses placed in one memory block address pool, while other Blocks may be planned as 10K bytes for storing long messages, with their memory block first addresses placed in another memory block address pool.
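The short/long Block planning in this example reduces to choosing a pool by message length. A minimal Python sketch using the 1K/10K Block sizes from the text (the function name and the pool contents are illustrative assumptions):

```python
SHORT_BLOCK, LONG_BLOCK = 1024, 10 * 1024  # 1K and 10K byte Blocks

def pick_pool(msg_len, short_pool, long_pool):
    """Choose the memory block address pool by message length, matching
    the short-message / long-message Block planning described above."""
    return short_pool if msg_len <= SHORT_BLOCK else long_pool

short_pool, long_pool = [0x1000], [0x8000]  # example Block first addresses
a = pick_pool(100, short_pool, long_pool)    # short message -> 1K pool
b = pick_pool(4000, short_pool, long_pool)   # long message -> 10K pool
```

Splitting the pools this way avoids wasting a 10K Block on every short message while still guaranteeing that any message up to the maximum supported length fits in some Block.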
In an exemplary embodiment, as shown in fig. 13, the message forwarding apparatus of this embodiment may further include: the first packet receiving queue creating and managing module 1206 is adapted to create a corresponding first packet receiving queue for the application process after receiving a packet receiving request of the application process; or, one or more first packet receiving queues are created according to the type of input and output hardware (network card).
In an exemplary embodiment, as shown in fig. 13, the message forwarding apparatus of this embodiment may further include: a physical memory allocation management module 1207, adapted to allocate, after receiving a packet receiving request of an application process, at least one memory slice with continuous physical addresses to the application process, cut a plurality of memory blocks from the memory slice, respectively inject the memory block information corresponding to the plurality of memory blocks into the memory block address pool and the second packet receiving queue corresponding to the application process, and mark the memory block information stored in the second packet receiving queue as being in an idle state; or to reserve at least one memory slice with continuous physical addresses, cut a plurality of memory blocks from the memory slice after receiving a packet receiving request of the application process, respectively inject the memory block information corresponding to the memory blocks into the memory block address pool and the second packet receiving queue corresponding to the application process, and mark the memory block information stored in the second packet receiving queue as being in an idle state. The memory block information (for example, a memory block first address or an identifier) injected into the memory block address pool does not repeat the memory block information injected into the second packet receiving queue. The physical memory allocation management module 1207 may be configured to allocate segments of memory with continuous physical addresses to application processes and drivers, and may support segment management when there are many application processes.
In addition, for the related description of the message forwarding apparatus provided in this embodiment, reference may be made to the description of the foregoing method embodiment, and therefore, the description is not repeated here.
Fig. 14 is a schematic diagram of a network device according to an embodiment of the present application. As shown in fig. 14, the network device 1400 (e.g., a router, a switch, etc.) provided in this embodiment includes: input/output hardware (e.g., a network card) 1403, a processor 1402, and a memory 1401. The input/output hardware 1403 is adapted to receive or send messages; the memory 1401 is adapted to store a message forwarding program which, when executed by the processor 1402, performs the steps of the message forwarding method described above, such as the steps shown in fig. 3. Those skilled in the art will appreciate that the structure shown in fig. 14 is merely a schematic diagram of a portion of the structure associated with the present application and does not constitute a limitation on the network device 1400 to which the present application is applied; the network device 1400 may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
The memory 1401 may be used to store software programs and modules of application software, such as program instructions or modules corresponding to the message forwarding method in this embodiment, and the processor 1402 executes various functional applications and data processing by running the software programs and modules stored in the memory 1401, for example, implementing the message forwarding method provided in this embodiment. The memory 1401 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
In addition, for the description of the related implementation process of the network device provided in this embodiment, reference may be made to the related description of the message forwarding method and apparatus, and therefore, no further description is given here.
In addition, an embodiment of the present application further provides a computer-readable medium, which stores a message forwarding program, and when the message forwarding program is executed, the message forwarding program implements the steps of the message forwarding method, such as the steps shown in fig. 3.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media as known to those skilled in the art.

Claims (21)

1. A message forwarding method comprises the following steps:
taking out memory block information stored in a memory block address pool, storing a message received by input/output hardware into the memory block indicated by the memory block information, obtaining description information of the message, and placing the description information of the message into a first packet receiving queue;
reading description information from the first packet receiving queue through a packet receiving thread;
storing one memory block information marked as an idle state stored in a second packet receiving queue into the memory block address pool through the packet receiving thread, and putting the description information read from the first packet receiving queue into the second packet receiving queue;
reading description information from the second packet receiving queue through an application process corresponding to the second packet receiving queue, acquiring a message according to the description information read from the second packet receiving queue, and marking memory block information in the second packet receiving queue, which is used for indicating a memory block in which the acquired message is located, as an idle state;
and the memory block information stored in the memory block address pool and the memory block information stored in the second packet receiving queue are not repeated.
2. The method according to claim 1, wherein the memory block information includes a memory block header address or a memory block identifier; the memory block is a segment of physical memory with continuous addresses and is used for caching messages received by the input and output hardware.
3. The method according to claim 2, wherein before the retrieving the memory block information stored therein from the memory block address pool, the method further comprises:
after receiving a packet receiving request of the application process, allocating at least one memory slice with continuous physical addresses to the application process, cutting a plurality of memory blocks from the memory slice, respectively storing memory block information corresponding to the plurality of memory blocks into the memory block address pool and the second packet receiving queue, and marking the memory block information stored in the second packet receiving queue as being in an idle state; or,
reserving at least one memory slice with continuous physical addresses, cutting a plurality of memory blocks from the memory slice after receiving a packet receiving request of the application process, respectively storing memory block information corresponding to the memory blocks into the memory block address pool and the second packet receiving queue, and marking the memory block information stored in the second packet receiving queue as an idle state.
4. The method of claim 1, wherein the description information of the packet comprises: caching the memory block head address of the memory block of the message, the length of the message and the offset information of the message based on the memory block head address.
5. The method of claim 1, wherein after the reading the description information from the first receive queue by the receive thread, the method further comprises:
and when there is no memory block information marked as an idle state in the second packet receiving queue, returning, through the packet receiving thread, the memory block information corresponding to the description information to the memory block address pool.
6. The method of claim 1, wherein after the reading, by the packet receiving thread, of the description information from the first packet receiving queue, the method further comprises:
reading, by the packet receiving thread according to the description information read from the first packet receiving queue, the message cached at the physical address indicated by the description information, and determining a second packet receiving queue corresponding to the read message by parsing the read message;
the storing, by the packet receiving thread, of memory block information that is stored in a second packet receiving queue and marked as being in an idle state into the memory block address pool, and the placing of the description information read from the first packet receiving queue into the second packet receiving queue comprise:
storing, by the packet receiving thread, one piece of memory block information marked as being in an idle state and stored in the second packet receiving queue corresponding to the read message into the memory block address pool, and placing the description information read from the first packet receiving queue into that second packet receiving queue.
7. The method according to claim 6, wherein the reading, by the packet receiving thread according to the description information read from the first packet receiving queue, of the message cached at the physical address indicated by the description information, and the determining of the second packet receiving queue corresponding to the read message by parsing the read message comprise:
mapping the description information read from the first packet receiving queue to a virtual address, and reading and parsing the message to obtain feature information of the message;
determining, according to the parsed feature information of the message, the application process that is to receive the message and the priority of the message;
and determining the second packet receiving queue corresponding to the message according to the application process that is to receive the message, the priority of the message, and the correspondence between the second packet receiving queues of that application process and the priorities.
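The dispatch logic of claim 7 is essentially a parse followed by two table lookups. The sketch below assumes a toy header (destination port plus a priority byte) and hypothetical mapping tables; the patent does not specify the header layout or which fields constitute the feature information.

```python
import struct

# Assumed toy header: 2-byte destination port, 1-byte priority (network order)
HDR = struct.Struct("!HB")

# Feature information -> receiving application process (assumed mapping)
PORT_TO_PROC = {5000: "proc_a", 6000: "proc_b"}

# (process, priority) -> second packet receiving queue (assumed mapping)
QUEUE_MAP = {("proc_a", 0): "q_a_hi", ("proc_a", 1): "q_a_lo",
             ("proc_b", 0): "q_b_hi"}

def select_second_queue(raw):
    """Parse the message and pick the second packet receiving queue."""
    port, prio = HDR.unpack_from(raw)   # parse feature information
    proc = PORT_TO_PROC[port]           # which process is to receive it
    return QUEUE_MAP[(proc, prio)]      # queue for (process, priority)
```

Since the packet receiving thread runs in user space, the physical address in the description information must first be mapped to a virtual address (claim 7's first step) before the header bytes can be read at all.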
8. The method according to claim 1, wherein before the retrieving of the memory block information stored in the memory block address pool, the method further comprises:
receiving a packet receiving request of the application process; creating, according to the packet receiving request, one or more corresponding second packet receiving queues for the application process; and returning creation information of the second packet receiving queue(s) corresponding to the application process.
9. The method according to claim 8, wherein the creating of the one or more corresponding second packet receiving queues for the application process according to the packet receiving request of the application process comprises:
creating, for the application process according to the packet receiving request of the application process, a plurality of second packet receiving queues supporting priority scheduling, wherein each priority of the messages to be received by the application process corresponds to one or more of the second packet receiving queues.
10. The method according to claim 1, wherein before the retrieving of the memory block information stored in the memory block address pool, the method further comprises:
after receiving a packet receiving request of the application process, creating a corresponding memory block address pool for the application process; or, creating one or more memory block address pools according to the message type received by the input/output hardware.
11. The method according to claim 1, wherein before the retrieving of the memory block information stored in the memory block address pool, the method further comprises:
after receiving a packet receiving request of the application process, creating a corresponding first packet receiving queue for the application process; or creating one or more first packet receiving queues according to the type of the input/output hardware.
12. The method according to claim 1, wherein before the retrieving of the memory block information stored in the memory block address pool, the method further comprises:
after receiving a packet receiving request of the application process, creating a corresponding packet receiving thread for the application process; or after receiving a packet receiving request of the application process, selecting one of the created packet receiving threads as a packet receiving thread corresponding to the application process.
13. The method of claim 1, further comprising: setting affinity or exclusivity of the packet receiving thread with respect to a central processing unit (CPU) resource; wherein both the affinity and the exclusivity characterize that the packet receiving thread monopolizes the CPU resource.
14. The method of claim 1, further comprising: after reading a message, by the packet receiving thread, according to the description information read from the first packet receiving queue, updating a flow statistics count of the service flow to which the message belongs, and discarding the message when the flow statistics count within a rate-limit duration meets a set condition; and resetting the flow statistics count to an initial value each time the rate-limit duration elapses.
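Claim 14 describes what amounts to a fixed-window rate limiter: count per flow, drop above a threshold, reset when the window elapses. A minimal sketch under assumed names (`FlowLimiter`, a count-threshold "set condition"); the patent leaves the exact condition and counter semantics open.

```python
import time

class FlowLimiter:
    def __init__(self, max_per_window, window_s):
        self.max = max_per_window      # set condition: drop above this count
        self.window = window_s         # rate-limit duration
        self.counts = {}               # flow id -> (window start, count)

    def accept(self, flow, now=None):
        """Count one message; return False when it should be discarded."""
        now = time.monotonic() if now is None else now
        start, n = self.counts.get(flow, (now, 0))
        if now - start >= self.window:         # duration elapsed: reset count
            start, n = now, 0
        n += 1
        self.counts[flow] = (start, n)
        return n <= self.max

lim = FlowLimiter(max_per_window=2, window_s=1.0)
```

Doing this inside the packet receiving thread, before any descriptor reaches the second queue, means over-limit traffic never consumes a slot in the per-process queues.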
15. The method according to any one of claims 1 to 14, wherein a plurality of the application processes correspond to only one packet receiving thread, or a plurality of the application processes correspond to a plurality of packet receiving threads.
16. The method of any one of claims 1 to 14, wherein one or more of the application processes are located within a container.
17. The method of any of claims 1 to 14, wherein the packet receiving thread and the application process are both located within a container.
18. The method of any of claims 1 to 14, wherein the second packet receiving queue is a circular queue.
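A circular queue, as in claim 18, lets the producer and consumer advance independent head and tail indices over a fixed buffer without allocation. A minimal single-producer/single-consumer sketch (the capacity and the keep-one-slot-empty full/empty convention are assumptions, not the patent's specification):

```python
class RingQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0    # next slot to read
        self.tail = 0    # next slot to write

    def put(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:                 # full: one slot is kept empty
            return False
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def get(self):
        if self.head == self.tail:           # empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

With one writer (the packet receiving thread) and one reader (the application process), the two indices need no lock, which suits the zero-copy hand-off the claims describe.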
19. The method of claim 1, further comprising:
taking out, from a first packet sending queue, memory block information that is stored therein and marked as being in an idle state, storing a message to be sent by the application process into the memory block indicated by the memory block information, obtaining description information of the message, and placing the description information of the message into the first packet sending queue;
reading the description information from the first packet sending queue through a packet sending thread, storing memory block information that is stored in the memory block address pool and marked as being in an idle state into the first packet sending queue, and placing the description information read from the first packet sending queue into a second packet sending queue;
reading the description information from the second packet sending queue, obtaining the message according to the description information read from the second packet sending queue, sending the obtained message through the input/output hardware, and after the obtained message is sent, returning the memory block information indicating the memory block in which the obtained message is located to the memory block address pool.
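The three hand-offs of claim 19's send path can be walked end to end with plain queues. This is a purely illustrative trace of who holds which block at each stage, with assumed addresses and entry tags, not ZTE's code:

```python
from collections import deque

pool = deque([0x3000, 0x4000])        # memory block address pool
first_q = deque([("IDLE", 0x1000)])   # first packet sending queue
second_q = deque()                    # second packet sending queue

# Stage 1: the application swaps the idle block for a message descriptor
tag, block = first_q.popleft()
first_q.append(("DESC", {"block": block, "len": 64}))

# Stage 2: the packet sending thread refills the first queue from the
# pool and moves the descriptor to the second packet sending queue
tag, desc = first_q.popleft()
first_q.append(("IDLE", pool.popleft()))
second_q.append(desc)

# Stage 3: the I/O hardware "sends" and returns the block to the pool
sent = second_q.popleft()
pool.append(sent["block"])
```

Note the symmetry with the receive path: only block addresses and small descriptors move between queues; the message payload is written once into the block and never copied.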
20. A network device, comprising: input/output hardware, a processor, and a memory; wherein the input/output hardware is adapted to receive or send messages; and the memory is adapted to store a message forwarding program which, when executed by the processor, implements the steps of the message forwarding method of any of claims 1 to 19.
21. A computer-readable medium, in which a message forwarding program is stored, which when executed performs the steps of the message forwarding method according to any one of claims 1 to 19.
CN201811546772.XA 2018-12-18 2018-12-18 Message forwarding method and network equipment Active CN109783250B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811546772.XA CN109783250B (en) 2018-12-18 2018-12-18 Message forwarding method and network equipment
PCT/CN2019/126079 WO2020125652A1 (en) 2018-12-18 2019-12-17 Packet forwarding method and apparatus, network device, and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811546772.XA CN109783250B (en) 2018-12-18 2018-12-18 Message forwarding method and network equipment

Publications (2)

Publication Number Publication Date
CN109783250A CN109783250A (en) 2019-05-21
CN109783250B true CN109783250B (en) 2021-04-09

Family

ID=66497153

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811546772.XA Active CN109783250B (en) 2018-12-18 2018-12-18 Message forwarding method and network equipment

Country Status (2)

Country Link
CN (1) CN109783250B (en)
WO (1) WO2020125652A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783250B (en) * 2018-12-18 2021-04-09 中兴通讯股份有限公司 Message forwarding method and network equipment
CN110336702B (en) * 2019-07-11 2022-08-26 上海金融期货信息技术有限公司 System and implementation method of message middleware
CN112491979B (en) 2020-11-12 2022-12-02 苏州浪潮智能科技有限公司 Network card data packet cache management method, device, terminal and storage medium
CN113259006B (en) * 2021-07-14 2021-11-26 北京国科天迅科技有限公司 Optical fiber network communication system, method and device
CN114024923A (en) * 2021-10-30 2022-02-08 江苏信而泰智能装备有限公司 Multithreading message capturing method, electronic equipment and computer storage medium
CN114003366B (en) * 2021-11-09 2024-04-16 京东科技信息技术有限公司 Network card packet receiving processing method and device
CN114500400B (en) * 2022-01-04 2023-09-08 西安电子科技大学 Large-scale network real-time simulation method based on container technology
CN115801629B (en) * 2023-02-03 2023-06-23 天翼云科技有限公司 Bidirectional forwarding detection method and device, electronic equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101150488A (en) * 2007-11-15 2008-03-26 曙光信息产业(北京)有限公司 A receiving method for zero copy network packet
CN101719872A (en) * 2009-12-11 2010-06-02 曙光信息产业(北京)有限公司 Zero-copy mode based method and device for sending and receiving multi-queue messages
CN105591979A (en) * 2015-12-15 2016-05-18 曙光信息产业(北京)有限公司 Message processing system and method
CN106789617A (en) * 2016-12-22 2017-05-31 东软集团股份有限公司 A kind of message forwarding method and device
CN106850565A (en) * 2016-12-29 2017-06-13 河北远东通信系统工程有限公司 A kind of network data transmission method of high speed
CN108243118A (en) * 2016-12-27 2018-07-03 华为技术有限公司 The method and physical host to E-Packet
CN108566387A (en) * 2018-03-27 2018-09-21 中国工商银行股份有限公司 Method, equipment and the system of data distribution are carried out based on udp protocol

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9575796B2 * 2015-02-16 2017-02-21 Red Hat Israel, Ltd. Virtual device timeout by memory offlining
CN104796337A (en) * 2015-04-10 2015-07-22 京信通信系统(广州)有限公司 Method and device for forwarding message
CN108132889B (en) * 2017-12-20 2020-07-10 东软集团股份有限公司 Memory management method and device, computer readable storage medium and electronic equipment
CN109783250B (en) * 2018-12-18 2021-04-09 中兴通讯股份有限公司 Message forwarding method and network equipment

Also Published As

Publication number Publication date
WO2020125652A1 (en) 2020-06-25
CN109783250A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN109783250B (en) Message forwarding method and network equipment
US11916781B2 (en) System and method for facilitating efficient utilization of an output buffer in a network interface controller (NIC)
KR101583325B1 (en) Network interface apparatus and method for processing virtual packets
CN111327391B (en) Time division multiplexing method, device, system and storage medium
US8392565B2 (en) Network memory pools for packet destinations and virtual machines
CN105991470B (en) method and device for caching message by Ethernet equipment
CN107306232B (en) Network device, controller, queue management method and flow management chip
EP3286966B1 (en) Resource reallocation
CN110851371B (en) Message processing method and related equipment
JP5892500B2 (en) Message processing method and apparatus
KR101639797B1 (en) Network interface apparatus and method for processing virtual machine packets
CN106571978B (en) Data packet capturing method and device
US9292466B1 (en) Traffic control for prioritized virtual machines
CN115174490B (en) Data transmission method and network application terminal
US7760736B2 (en) Method, system, and computer program product for ethernet virtualization using an elastic FIFO memory to facilitate flow of broadcast traffic to virtual hosts
WO2020119682A1 (en) Load sharing method, control plane entity, and repeater
US9584446B2 (en) Memory buffer management method and system having multiple receive ring buffers
US20160364145A1 (en) System and Method for Managing a Non-Volatile Storage Resource as a Shared Resource in a Distributed System
CN111831403A (en) Service processing method and device
CN109167740B (en) Data transmission method and device
US7751400B2 (en) Method, system, and computer program product for ethernet virtualization using an elastic FIFO memory to facilitate flow of unknown traffic to virtual hosts
CN110932998A (en) Message processing method and device
CN113055493B (en) Data packet processing method, device, system, scheduling device and storage medium
CN110708255B (en) Message control method and node equipment
KR101773528B1 (en) Network interface apparatus and method for processing virtual machine packets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant