WO2022151475A1 - Message buffering method, memory allocator, and message forwarding system - Google Patents


Info

Publication number
WO2022151475A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
message
data structure
target
field
Application number
PCT/CN2021/072495
Other languages
French (fr)
Chinese (zh)
Inventor
曹雷
曲吉亮
王心力
敬勇
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2021/072495 (WO2022151475A1)
Priority to CN202180003831.2A (CN115176453A)
Publication of WO2022151475A1

Definitions

  • the present application relates to the field of communication technologies, and in particular, to a message caching method, a memory allocator, and a message forwarding system.
  • the SKB structure includes a message field management data structure (sk_buff), a message sharing information data structure (skb_shared_info), and a packet buffer (packet buffer).
  • the message field management data structure stores message management information.
  • the message sharing information data structure stores the fragmentation information of the message.
  • the packet buffer is used to store the packet content.
  • if the packet is directly forwarded by hardware and is not transmitted through the TCP/IP protocol stack, the packet does not need to be fragmented. If the SKB structure is still used to transmit the message, many fields in the message sharing information data structure (such as the fields that carry fragmentation information) are not required, but memory still needs to be allocated for these unnecessary fields, resulting in wasted memory resources.
  • the embodiments of the present application provide a message caching method, a memory allocator, and a message forwarding system, which can save memory resources.
  • an embodiment of the present application provides a message buffering method, and the execution body of the method may be a memory allocator of a message forwarding system, or may be a chip applied to a memory allocator of a message forwarding system.
  • the message forwarding system also includes a modem and memory.
  • the method includes: the memory allocator receives the target message from the modem. Then, the memory allocator stores the target message in the data packet buffer area of the first memory slice. Wherein, the first memory slice further includes a first data structure.
  • the first data structure includes a first field and a second field; the first field indicates that the target message is not fragmented; the second field carries the second data structure; the second field is the field that carries fragmentation information when the target message is fragmented; and the second data structure at least indicates the first address of the data packet buffer area.
  • the first memory slice is provided by the memory.
  • the first memory slice is used to store the target message, which is not fragmented. The second field is the field that carries fragmentation information when the first field indicates that the packet is fragmented, and it is the field that carries the second data structure when the first field indicates that the target packet is not fragmented. That is to say, when the target packet is not fragmented, there is no fragmentation information, and the second field is reused to carry the second data structure.
  • because the second data structure at least indicates the first address of the data packet buffer area in the first memory slice, there is no need to apply for a separate memory slice for the second data structure, thereby saving memory resources. An illustrative layout sketch is given below.
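  • a minimal C layout sketch of this first memory slice (illustrative only; all type names, field names, and sizes, such as first_memory_slice and PKT_BUF_SIZE, are assumptions and are not taken from the application):

```c
/* Illustrative layout of the first memory slice described above.
 * All type names, field names, and sizes are assumptions made for
 * explanation; they are not taken from the application. */

#include <stdint.h>

#define PKT_BUF_SIZE   2048   /* assumed size of the data packet buffer area */
#define FRAG_INFO_SIZE 320    /* assumed size of the fragmentation information field(s) */

/* Second data structure: records at least the first (head) address of the
 * data packet buffer area, and optionally the header addresses. */
struct pkt_mgmt_info {
    uint8_t *buf_head;   /* first address of the data packet buffer area */
    uint8_t *mac_hdr;    /* first address of the MAC header information */
    uint8_t *ip_hdr;     /* first address of the IP header information */
    uint8_t *tcp_hdr;    /* first address of the TCP header information */
};

/* First data structure: a first field plus a second field whose meaning
 * depends on the first field. */
struct first_data_struct {
    uint32_t frag_status;                      /* first field: 0 = not fragmented */
    union {
        uint8_t frag_info[FRAG_INFO_SIZE];     /* second field when fragmented */
        struct pkt_mgmt_info mgmt;             /* second field reused when not fragmented */
    } u;
};

/* First memory slice: data packet buffer area plus the first data structure
 * (placed after the buffer in this sketch; it could equally precede it). */
struct first_memory_slice {
    uint8_t packet_buffer[PKT_BUF_SIZE];
    struct first_data_struct meta;
};
```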
  • the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area. That is, the position of the first data structure in the first memory slice can be flexibly set.
  • the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
  • the unit of the preset value may be bits, bytes, or the like.
  • the preset value can be any number of bytes from 50 to 100 bytes, or a certain number of bits, to support the subsequent evolution of the first data structure; for example, when a new field is added to the first data structure, the new field can be stored in the above-mentioned "storage space spaced between the data packet buffer area and the first data structure" (see the sketch below).
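  • a minimal sizing sketch for the reserved gap (illustrative; it reuses struct first_data_struct from the previous sketch, and the 64-byte gap is an assumed value inside the 50 to 100 byte range mentioned above):

```c
#define RESERVED_GAP 64   /* assumed preset value: room for future fields */

/* Variant of the first memory slice that leaves a reserved gap between the
 * data packet buffer area and the first data structure, so that new fields
 * added to the first data structure later can be placed in the gap. */
struct first_memory_slice_gapped {
    uint8_t packet_buffer[PKT_BUF_SIZE];
    uint8_t reserved[RESERVED_GAP];
    struct first_data_struct meta;
};
```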
  • the packet caching method further includes: the memory allocator receives the first indication information from the network interface card NIC.
  • the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes a NIC. Then, the memory allocator releases the first memory slice according to the first indication information.
  • the memory allocator can also recover the first memory slice according to the first indication information to store other messages, thereby improving the utilization rate of memory resources.
  • the message caching method further includes: the memory allocator receives the second indication information from the central processing unit CPU.
  • the second indication information indicates that the first data structure is restored to the message sharing information data structure; the CPU is used to apply for the target memory slice; the target memory slice is used to store the message field management data structure; and the message field management data structure at least indicates the first address of the data packet buffer area.
  • the packet forwarding system also includes the CPU. Then, the memory allocator deletes the second data structure according to the second indication information.
  • the memory allocator can also delete the second data structure according to the second indication information, so that the first data structure is restored to the message sharing information data structure to store the fragmented message.
  • the message buffering method further includes: the memory allocator receives third indication information from the CPU.
  • the third indication information indicates releasing the first memory slice; the CPU is used to apply for the target memory slice; the target memory slice is determined based on the socket buffer (SKB) structure; and the SKB structure is used to store the information stored in the first memory slice. The message forwarding system also includes the CPU. Then, the memory allocator releases the first memory slice according to the third indication information.
  • the memory allocator can also release the first memory slice, so that the recovered memory slice can store other messages, thereby improving the utilization of memory resources.
  • an embodiment of the present application provides a message buffering device, where the message buffering device is located in a message forwarding system.
  • the message forwarding system also includes a modem and memory.
  • the message buffering device includes: a communication unit and a processing unit. Among them, the communication unit is used to receive the target message from the modem.
  • the processing unit is used to store the target message in the data packet buffer area of the first memory slice, wherein the first memory slice further includes a first data structure; the first data structure includes a first field and a second field; the first field indicates that the target message is not fragmented; the second field carries a second data structure; the second field is the field that carries fragmentation information in the fragmented state of the target message; and the second data structure at least indicates the first address of the data packet buffer area;
  • the first memory slice is provided by the memory.
  • the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
  • the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
  • the communication unit is further configured to receive the first indication information from the network interface card NIC.
  • the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes a NIC.
  • the processing unit is further configured to release the first memory slice according to the first indication information.
  • the communication unit is further configured to receive the second indication information from the central processing unit CPU.
  • the second indication information indicates that the first data structure is restored to the message sharing information data structure; the CPU is used to apply for the target memory slice; the target memory slice is used to store the message field management data structure; and the message field management data structure at least indicates the first address of the data packet buffer area.
  • the packet forwarding system also includes the CPU.
  • the processing unit is further configured to delete the second data structure according to the second indication information.
  • the communication unit is further configured to receive third indication information from the CPU.
  • the third indication information indicates releasing the first memory slice; the CPU is used to apply for the target memory slice; the target memory slice is determined based on the socket buffer (SKB) structure; the SKB structure is used to store the information stored in the first memory slice; and the message forwarding system also includes the CPU.
  • the processing unit is further configured to release the first memory slice according to the third indication information.
  • an embodiment of the present application provides a message buffering device, including a processor and an interface circuit, where the processor is configured to communicate with other devices through the interface circuit and to execute the message caching method of the first aspect or any one of the first aspects.
  • the processor includes one or more.
  • an embodiment of the present application provides a message buffering device, including a processor that is connected to a memory and is used to call a program stored in the memory to execute the message caching method of the first aspect or any one of the first aspects.
  • the memory may be located within the message buffering device, or may be located outside the message buffering device.
  • the processor includes one or more.
  • an embodiment of the present application provides a message caching device, including at least one processor and at least one memory, where the at least one processor is configured to execute the first aspect or the message caching method of any one of the first aspects.
  • an embodiment of the present application provides a memory allocator, which is applied to a message forwarding system.
  • the message forwarding system also includes a modem and memory.
  • the memory allocator is used to receive the target message from the modem.
  • the memory allocator is further configured to store the target message in the data packet buffer area of the first memory slice, wherein the first memory slice further includes a first data structure; the first data structure includes a first field and a second field; the first field indicates that the target packet is not fragmented; the second field carries the second data structure; the second field is the field that carries fragmentation information in the fragmented state of the target packet; and the second data structure at least indicates the first address of the data packet buffer area;
  • the first memory slice is provided by the memory.
  • the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
  • the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
  • the memory allocator is further configured to receive first indication information from the network interface card NIC, where the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes the NIC.
  • the memory allocator is further configured to release the first memory slice according to the first indication information.
  • the memory allocator is further configured to receive second indication information from the central processing unit CPU, wherein the second indication information indicates that the first data structure is restored to the message sharing information data structure; the CPU is used to apply for the target memory slice; the target memory slice is used to store the message field management data structure; and the message field management data structure at least indicates the first address of the data packet buffer area;
  • the message forwarding system further includes a CPU.
  • the memory allocator is further configured to delete the second data structure according to the second indication information.
  • the memory allocator is further configured to receive third indication information from the CPU, where the third indication information indicates to release the first memory slice; the CPU is used to apply for the target memory slice; the target memory slice is determined based on the socket buffer (SKB) structure; the SKB structure is used to store the information stored in the first memory slice; and the message forwarding system further includes the CPU.
  • the memory allocator is further configured to release the first memory slice according to the third indication information.
  • embodiments of the present application provide a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a computer, the computer is enabled to execute the message caching method of the first aspect or any one of the first aspects.
  • the embodiments of the present application provide a computer program product including instructions, which, when running on a computer, enables the computer to execute the first aspect or the message caching method of any one of the first aspects.
  • an embodiment of the present application provides a circuit system, where the circuit system includes a processing circuit, and the processing circuit is configured to execute the message caching method according to any one of the first aspect or the first aspect.
  • an embodiment of the present application provides a chip, where the chip includes a logic circuit and an input and output interface.
  • the input and output interfaces are used for communication with modules other than the chip.
  • the chip may be a chip that implements the function of the memory allocator in the first aspect or any possible design of the first aspect.
  • the input and output interface is used to input the target packet.
  • the logic circuit is used to run a computer program or instructions to implement the message buffering method in the first aspect or any possible design of the first aspect.
  • an embodiment of the present application provides a message forwarding system, where the system includes a modem, a memory allocator, and a memory.
  • the memory allocator is used to receive the target message from the modem.
  • the memory allocator is also used for storing the target message in the data packet buffer area of the first memory slice.
  • the first memory slice further includes a first data structure; the first data structure includes a first field and a second field; the first field indicates that the target message is not fragmented; the second field carries the second data structure; the second field is the field that carries fragmentation information in the fragmented state of the target packet; the second data structure at least indicates the first address of the data packet buffer area; and the first memory slice is provided by the memory.
  • the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
  • the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
  • the memory allocator is further configured to receive first indication information from the network interface card NIC, where the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes the NIC.
  • the memory allocator is further configured to release the first memory slice according to the first indication information.
  • the memory allocator is further configured to receive second indication information from the central processing unit CPU, wherein the second indication information indicates that the first data structure is restored to the message sharing information data structure; the CPU is used to apply for the target memory slice; the target memory slice is used to store the message field management data structure; and the message field management data structure at least indicates the first address of the data packet buffer area;
  • the message forwarding system further includes a CPU.
  • the memory allocator is further configured to delete the second data structure according to the second indication information.
  • the memory allocator is further configured to receive third indication information from the CPU, where the third indication information indicates to release the first memory slice; the CPU is used to apply for the target memory slice; the target memory slice is determined based on the socket buffer (SKB) structure; the SKB structure is used to store the information stored in the first memory slice; and the message forwarding system further includes the CPU.
  • the memory allocator is further configured to release the first memory slice according to the third indication information.
  • FIG. 1 is a schematic diagram of a socket buffer SKB structure provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a message forwarding process according to an embodiment of the present application
  • FIG. 3a is a schematic diagram of the hardware architecture of a message forwarding system provided by an embodiment of the present application.
  • FIG. 3b is a schematic diagram of the hardware architecture of still another message forwarding system provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a memory slice according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a message caching method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a process of packet buffering provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a message forwarding process according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of still another packet forwarding process provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a message buffering apparatus provided by an embodiment of the present application.
  • when a packet transmitted through the TCP/IP protocol stack is too large, it is divided into multiple segments, and each segment is called a packet fragment.
  • if the packet is directly forwarded by hardware without being transmitted through the TCP/IP protocol stack, the packet does not need to be fragmented.
  • the buffer area is the storage space for buffering packets.
  • the cache area may be a storage space in a network interface card (NIC), or may be a storage space divided in other memories in the device where the network interface card is located, and is used to implement the function of buffering messages.
  • the SKB structure is a data structure in the transmission control protocol/internet protocol (TCP/IP) stack.
  • An SKB structure includes a message field management data structure (sk_buff), a message shared information data structure (skb_shared_info), and a data packet buffer area (packet buff), as shown in Figure 1.
  • the message field management data structure is the main data structure, and the size is usually 512 bytes.
  • the fields in the message field management data structure can be, for example, but not limited to, the following fields:
  • the next (next) data structure pointer field indicates the address of the message field management data structure that follows the current message field management data structure.
  • the previous (prev) data structure pointer field indicates the address of the message field management data structure that precedes the current message field management data structure.
  • the header field is used to store the first address of the data packet buffer area.
  • the tail (tail) field is used to store the tail address of the actually stored packet content in the data packet buffer area.
  • the end field is used to store the end address of the data packet buffer area.
  • the message field management data structure also indicates the first address of the MAC header information, the first address of the IP header information, and the first address of the TCP header information.
  • the data packet buffer area is used to store the content of the message. Among them, the size of the packet buffer is determined based on the data volume of the packet content.
  • a media access control (media access control, MAC) header field is used to store control information of the MAC layer.
  • IP header information field is used to store information such as a protocol version (version) number and an Internet header length (IHL).
  • the TCP header information field is used to store information such as the source port number and the destination port number.
  • the payload part is used to store the content of the message.
  • the message sharing information data structure stores the fragmentation information of the message, and the size is usually 360 bytes.
  • the introduction of the message sharing information data structure is as follows:
  • the fragmentation status indication field indicates the fragmentation status of the packet.
  • the fragmentation information field carries fragmentation information of the packet.
  • if the fragmentation status indication field indicates that the packet in the data packet buffer area is a fragmented packet, the fragmentation information field carries the fragmentation information of that packet.
  • the SKB structure further includes an MAA head space (headroom), as shown in FIG. 1.
  • the MAA head space is used to store the information of the MAA.
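  • a simplified C sketch of the conventional SKB composition described above (it mirrors the fields listed in the text and is not the exact Linux kernel definition of sk_buff or skb_shared_info; the fragment-descriptor layout and the fragment count are assumptions):

```c
#include <stdint.h>

/* Message field management data structure (sk_buff), roughly 512 bytes. */
struct sk_buff_simplified {
    struct sk_buff_simplified *next;   /* next data structure pointer field */
    struct sk_buff_simplified *prev;   /* previous data structure pointer field */
    uint8_t *head;                     /* first address of the data packet buffer area */
    uint8_t *tail;                     /* tail address of the stored packet content */
    uint8_t *end;                      /* end address of the data packet buffer area */
    uint8_t *mac_header;               /* first address of the MAC header information */
    uint8_t *ip_header;                /* first address of the IP header information */
    uint8_t *tcp_header;               /* first address of the TCP header information */
};

/* Message sharing information data structure (skb_shared_info), roughly 360 bytes. */
struct skb_shared_info_simplified {
    uint32_t frag_status;              /* fragmentation status indication field */
    struct {
        uint8_t  *frag_addr;           /* where the fragment content is stored */
        uint32_t  offset;              /* offset of the fragment within its storage */
        uint32_t  len;                 /* length of the fragment */
    } frags[17];                       /* fragmentation information fields (count assumed) */
};
```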
  • after the modem receives a message, the modem applies for a memory slice to buffer the message. Then, the modem sends the buffered message to the central processing unit to realize local forwarding of the message.
  • the specific process is shown by the dotted arrow in FIG. 2 .
  • the modem sends buffered messages to other devices through a universal serial bus (USB) interface, so as to realize the transmission of messages between different devices.
  • the specific process is shown by the solid arrows in FIG. 2 .
  • the packets transmitted in Fig. 2 all adopt the SKB structure.
  • the size of the packet buffer in the SKB structure can be 1024KB or 2048KB.
  • the modem needs more than 50,000 message field management data structures.
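  • as an illustrative calculation based only on the sizes quoted above (the figures are not stated together in the application): buffering 50,000 packets with the SKB structure consumes roughly 50,000 × 512 bytes ≈ 25.6 MB for the message field management data structures plus 50,000 × 360 bytes ≈ 18 MB for the message sharing information data structures, so the fragmentation-related metadata alone costs tens of megabytes even when none of the packets is fragmented.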
  • an embodiment of the present application provides a message caching method, and the message caching method of the embodiment of the present application is applicable to various devices, such as mobile phones, tablet computers, desktop computers, laptop computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), servers, network equipment, and the like.
  • FIG. 3a is a schematic diagram of a hardware architecture of a message forwarding system 300 according to an embodiment of the present application.
  • the message forwarding system 300 may include a memory 301 and a memory allocator 302 . There is a communication connection between the memory 301 and the memory allocator 302 .
  • the memory 301 is mainly used to provide local memory, such as a memory slice for storing messages.
  • the memory 301 may be a read-only memory (read only memory, ROM) or a random access memory (random access memory, RAM).
  • the RAM may be synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), or the like.
  • the memory allocator 302 manages local memory mainly by running or executing software programs and/or application modules. For example, after the modem receives the message, the memory allocator 302 determines the memory slice in which to store the message. For another example, after the network interface card forwards the message, the memory allocator 302 reclaims the memory slice that stores the message. Illustratively, memory allocator 302 may be implemented as a MAA.
  • the above-mentioned memory 301 and memory allocator 302 may be separate devices, or may be combined.
  • the software program and/or application module for managing the local memory may run in the memory 301 to implement the function of managing the local memory.
  • in the following, the case in which "the memory 301 and the memory allocator 302 are separate devices" is used as an example for introduction.
  • FIG. 3b shows another schematic diagram of the hardware architecture of the message forwarding system 300 according to the embodiment of the present application.
  • the message forwarding system 300 further includes a forwarding engine 303 , a network interface card 304 , a modem 305 , a central processing unit 306 and a bus 307 .
  • the forwarding engine 303 is mainly used for forwarding packets. For example, the forwarding engine 303 forwards the packet to the network interface card 304, so as to realize the transmission of the packet between devices. Alternatively, the forwarding engine 303 forwards the message to the central processing unit 306, so as to realize the transmission of the message inside the device. Exemplarily, the forwarding engine 303 may be implemented as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or a network processor (NP).
  • the network interface card 304 is mainly used to convert the transmitted message into a format that can be recognized by other devices in the network, and then transmit the message to the corresponding device through the network medium.
  • the network interface card can also be described as a network card, a network interface controller (network interface controller, NIC), and the like.
  • the modem 305 is mainly used for sending and receiving various messages.
  • the modem 305 may be a modem that supports long term evolution (LTE) and new radio (NR) communication standards.
  • the central processing unit 306 is mainly used to run the operating system layer and the application layer in the software layer.
  • the operating system layer includes operating system program codes and protocol stacks.
  • the operating system may be a Linux operating system.
  • a protocol stack refers to a collection of program codes that are divided according to different levels involved in a communication protocol and that process data at the corresponding level.
  • the protocol stack may be a TCP/IP protocol stack.
  • the data structure handled by the TCP/IP protocol stack is the SKB structure.
  • the application layer includes at least one application.
  • the bus 307 is mainly used to connect the memory 301, the memory allocator 302, the forwarding engine 303, the network interface card 304, the modem 305, and the central processing unit 306.
  • the bus 307 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • the bus 307 can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 3b, but this does not mean that there is only one bus or only one type of bus.
  • FIG. 3a and FIG. 3b are only for illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation to the technical solutions provided by the embodiments of the present application.
  • Those of ordinary skill in the art know that with the evolution of the hardware architecture and the emergence of new business scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • the embodiment of the present application provides a message caching method, and the method uses a first memory slice to store an unfragmented target message.
  • the first memory slice includes a data packet buffer area, a first data structure and a second data structure.
  • the packet buffer is used to carry the target packet.
  • the first data structure includes a first field and a second field, the first field indicates that the target packet is not fragmented, and the second field carries the second data structure.
  • the second data structure at least indicates the first address of the data packet buffer in the first memory slice.
  • the first field can be described as a "fragmentation status indication field", and the second field can be described as a "fragmentation information field". The second field is the field that carries fragmentation information when the first field indicates that the packet is fragmented, as shown in FIG. 1.
  • the second field is also a field that carries the second data structure when the first field indicates that the target packet is not fragmented, as shown in FIG. 4 . That is, when the target packet is not fragmented, there is no fragmentation information.
  • the second field is used to carry the second data structure, and no separate memory slice needs to be applied for the second data structure, so as to save memory resources.
  • the second data structure may be a message field management data structure. For details, please refer to the introduction in the "SKB structure" section, which will not be repeated here.
  • the packet buffering method according to the embodiment of the present application is applied in the packet forwarding process.
  • the method includes the following steps:
  • the modem receives the target message.
  • the modem receives the target message from the access network device.
  • the target message received by the modem is a message that satisfies the LTE message format.
  • the target message received by the modem is a message that satisfies the NR message format.
  • the modem sends a target message to the memory allocator. Accordingly, the memory allocator receives the target message from the modem.
  • the target message is the message received in S501.
  • the memory allocator determines the first memory slice.
  • the first memory slice includes a data packet buffer area, a first data structure and a second data structure.
  • the first data structure includes a first field and a second field, the first field indicates the fragmentation state of the packet in the data packet buffer, and the second field is used to carry the second data structure, as shown in FIG. 4 .
  • the second field is a field that carries fragmentation information in the packet fragmentation state. The number of the second field may be one or more.
  • the location of the first data structure in the first memory slice is described as follows: the location of the first data structure in the first memory slice can be flexibly set.
  • the location of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area (not shown in FIG. 4 and FIG. 6 ), or after the data packet buffer area.
  • the first data structure may also be located in other locations of the first memory slice, which is not limited in this embodiment of the present application.
  • the storage space occupied by the data packet buffer area and the storage space occupied by the first data structure may be continuous or discontinuous.
  • the storage space between the data packet buffer area and the first data structure is greater than or equal to a preset value to support the subsequent evolution of the first data structure.
  • for example, when a new field is added to the first data structure, the new field can be stored in the above-mentioned "storage space spaced between the data packet buffer area and the first data structure".
  • the unit of the preset value may be bits, bytes, or the like.
  • the preset value may be any number of bytes from 50 to 100 bytes, or may be a certain number of bits.
  • the memory allocator determines the first memory slice in the local memory of the memory according to the message format of the target message and the data size of the target message.
  • the size of the data packet buffer in the first memory slice may be 1024KB.
  • the memory allocator fills the first memory slice with the target message.
  • the memory allocator fills the data packet buffer area of the first memory slice with the target message.
  • the first field of the first memory slice indicates that the target packet is not fragmented.
  • the second field carries a second data structure, and the second data structure at least indicates the first address of the data packet buffer area in the first memory slice.
  • the second data structure also indicates the first addresses of the header information of the message, such as the first address of the MAC header information, the first address of the IP header information, and the first address of the TCP header information, as shown by the slash-filled box in the memory 301 of FIG. 7. An illustrative sketch of this fill step is given below.
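  • the following C sketch (illustrative only, building on the earlier struct first_memory_slice sketch) shows how a memory allocator could determine a first memory slice and fill it with an unfragmented target message; the names maa_store_packet() and memory_get_free_slice() and the offset parameters are assumptions:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FRAG_STATUS_NOT_FRAGMENTED 0u

/* Assumed helper provided by the memory: returns a free first memory slice. */
extern struct first_memory_slice *memory_get_free_slice(size_t pkt_len);

/* Store an unfragmented target message in a first memory slice. */
struct first_memory_slice *maa_store_packet(const uint8_t *pkt, size_t pkt_len,
                                            size_t mac_off, size_t ip_off,
                                            size_t tcp_off)
{
    struct first_memory_slice *slice;

    if (pkt_len > PKT_BUF_SIZE)
        return NULL;

    slice = memory_get_free_slice(pkt_len);
    if (slice == NULL)
        return NULL;

    /* Fill the data packet buffer area with the target message. */
    memcpy(slice->packet_buffer, pkt, pkt_len);

    /* First field: the target message is not fragmented. */
    slice->meta.frag_status = FRAG_STATUS_NOT_FRAGMENTED;

    /* Second field: reused to carry the second data structure, which records the
     * first address of the data packet buffer area and the header addresses. */
    slice->meta.u.mgmt.buf_head = slice->packet_buffer;
    slice->meta.u.mgmt.mac_hdr  = slice->packet_buffer + mac_off;
    slice->meta.u.mgmt.ip_hdr   = slice->packet_buffer + ip_off;
    slice->meta.u.mgmt.tcp_hdr  = slice->packet_buffer + tcp_off;

    return slice;
}
```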
  • the memory allocator sends the first address of the header information to the forwarding engine.
  • the forwarding engine receives the first address of the header information from the memory allocator.
  • the header information may be, for example, but not limited to, MAC header information, IP header information, and TCP header information.
  • the first address of the header information may be indicated by the second data structure. For details, please refer to the relevant introduction in the "SKB structure" section, which will not be repeated here.
  • the forwarding engine determines the forwarding direction of the target packet according to the header information carried on the first address of the header information.
  • the forwarding engine reads the header information carried at the first address of the header information, for example, reads the MAC header information, and determines the forwarding direction of the target packet according to the MAC header information. If the target packet is a packet sent to another device, the forwarding engine determines that the forwarding direction is a port of the network interface card, such as a gigabit media access control (GMAC) port, and the forwarding engine executes S507. If the target packet is a packet sent to this device, the forwarding engine determines that the forwarding direction is the central processing unit, and the forwarding engine executes S511. An illustrative decision sketch is given below.
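  • a minimal C sketch of such a forwarding-direction decision (illustrative only; the application does not specify how the MAC header is evaluated, so the destination-MAC comparison, the enum values, and local_mac are assumptions):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum fwd_dir { FWD_TO_NIC_PORT, FWD_TO_CPU };

/* Assumed: the MAC address of this device, known to the forwarding engine. */
extern const uint8_t local_mac[6];

/* Decide the forwarding direction from the MAC header at the given first address. */
enum fwd_dir decide_forwarding(const uint8_t *mac_hdr_first_addr)
{
    /* In an Ethernet frame the destination MAC occupies the first 6 bytes. */
    bool to_this_device = (memcmp(mac_hdr_first_addr, local_mac, 6) == 0);

    /* Packet addressed to this device: hand it to the central processing unit;
     * otherwise forward it through a network interface card port (e.g. GMAC). */
    return to_this_device ? FWD_TO_CPU : FWD_TO_NIC_PORT;
}
```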
  • the forwarding engine sends the first address of the header information to the network interface card.
  • the network interface card receives the first address of the header information from the forwarding engine.
  • the network interface card obtains the target packet according to the first address of the header information.
  • the network interface card determines the storage address of the header information and the storage address of the data content according to the first address of the header information.
  • the network interface card reads the header information of the target message from the storage address of the header information, and reads the data content of the target message from the storage address of the data content. In this way, the network interface card acquires the target packet to be forwarded.
  • the network interface card sends the target message to the target device through the network medium.
  • the target device receives the target packet from the network interface card through the network medium.
  • the target device is the device corresponding to the destination address of the target packet.
  • this embodiment of the present application further includes S509 and S510:
  • the network interface card sends indication information 1 to the memory allocator. Accordingly, the memory allocator receives indication information 1 from the network interface card.
  • the indication information 1 indicates that the target packet has been forwarded.
  • the memory allocator releases the first memory slice according to the indication information 1.
  • the memory allocator deletes the information stored in the first memory slice according to the indication information 1, and reclaims the first memory slice, so that the reclaimed memory resource can store other messages, thereby realizing the management of memory resources by the memory allocator.
  • the first memory slice after the deletion of the target message is shown as a box without diagonal lines in the memory 301 in FIG. 7 .
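  • an illustrative release sketch (not part of the application; it reuses struct first_memory_slice from the earlier sketch, and memory_put_free_slice() is an assumed helper that returns the slice to the memory's free pool):

```c
#include <string.h>

/* Assumed helper provided by the memory: returns the slice to the free pool. */
extern void memory_put_free_slice(struct first_memory_slice *slice);

/* Handle indication information 1: the NIC reports that the target message
 * stored in this first memory slice has been forwarded. */
void maa_on_packet_forwarded(struct first_memory_slice *slice)
{
    /* Delete the information stored in the first memory slice ... */
    memset(slice, 0, sizeof(*slice));

    /* ... and reclaim the slice so that it can buffer other messages. */
    memory_put_free_slice(slice);
}
```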
  • the forwarding engine sends a request message to the central processing unit.
  • the central processor receives the request message from the forwarding engine.
  • the request message requests the target memory slice, so that the information stored in the first memory slice satisfies the SKB structure.
  • the request message can also carry the first address of the header information, so that after the CPU applies for the target memory slice, the CPU can copy the target message, or part of the information of the target message, according to the first address of the header information, so as to store the target message using the SKB structure.
  • for the introduction of the SKB structure, reference may be made to the relevant description of FIG. 1, which will not be repeated here.
  • the central processing unit determines the target memory slice according to the request message.
  • the central processing unit may determine that the target memory slice is the second memory slice.
  • the second memory slice is used to store the message field management data structure, and does not store the data packet buffer area and the message sharing information data structure.
  • the message field management data structure is used to store the information of the second data structure. That is to say, the target memory slice that satisfies the SKB structure is composed of two memory slices: the data packet buffer area and the message sharing information data structure are located in the first memory slice, and the message field management data structure is located in the second memory slice.
  • the central processing unit may determine that the target memory slice is the third memory slice.
  • the third memory slice includes a data packet buffer area, a message field management data structure, and a message sharing information data structure.
  • the message field management data structure is used to store information of the second data structure. That is to say, the target memory slice that satisfies the SKB structure is a memory slice, that is, the third memory slice.
  • the central processing unit fills the target memory slice with target information.
  • the target information is information stored in the second data structure.
  • the central processor reads the information stored in the second data structure (or the second field) in the first memory slice according to the header information carried in the request message, and then stores the information in the second memory slice.
  • the target information is the information stored in the first memory slice.
  • the central processing unit reads the information stored in the first memory slice according to the header information carried in the request message, and then stores the information in the third memory slice.
  • the information stored in the data packet buffer area of the first memory slice is copied to the data packet buffer area of the third memory slice, the first data structure is restored to the message sharing information data structure and stored in the third memory slice, and the information stored in the second data structure is copied to the message field management data structure of the third memory slice.
  • the embodiment of the present application further includes S514 and S515:
  • the central processing unit sends indication information 2 to the memory allocator.
  • the memory allocator receives the indication information 2 from the central processing unit.
  • after the central processing unit copies the information stored in the second data structure to the second memory slice, the central processing unit sends the indication information 2 to the memory allocator.
  • the indication information 2 indicates that the first data structure is restored to the message sharing information data structure.
  • after the central processing unit copies the information of the first memory slice to the third memory slice, the central processing unit sends the indication information 2 to the memory allocator.
  • the indication information 2 indicates to release the first memory slice.
  • the memory allocator processes the first memory slice according to the indication information 2.
  • the memory allocator deletes the second data structure according to the indication information 2, so that the second field carries the fragmentation information.
  • the first data structure is implemented as a message sharing information data structure.
  • the memory allocator releases the first memory slice according to the indication information 2, that is, deletes the information stored in the first memory slice and reclaims the first memory slice, so that the reclaimed memory resource can store other target messages, thereby realizing the management of memory resources by the memory allocator.
  • the data packet buffer area after the deletion of the target message is shown as a block without diagonal lines in the memory 301 in FIG. 8 .
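  • an illustrative C sketch of how the memory allocator could handle the indication information 2 (building on the earlier struct first_memory_slice sketch; the enum and helper names are assumptions, since the application only describes the two possible actions):

```c
#include <string.h>

/* The two actions described for indication information 2. */
enum ind2_action {
    IND2_RESTORE_SHARED_INFO,   /* CPU applied for a second memory slice */
    IND2_RELEASE_SLICE          /* CPU copied everything into a third memory slice */
};

extern void memory_put_free_slice(struct first_memory_slice *slice);

void maa_on_indication_2(struct first_memory_slice *slice, enum ind2_action action)
{
    if (action == IND2_RESTORE_SHARED_INFO) {
        /* Delete the second data structure: the second field becomes free to
         * carry fragmentation information again, so the first data structure
         * reverts to a message sharing information data structure. */
        memset(&slice->meta.u, 0, sizeof(slice->meta.u));
    } else {
        /* Release the first memory slice: delete its contents and reclaim it
         * so that it can store other target messages. */
        memset(slice, 0, sizeof(*slice));
        memory_put_free_slice(slice);
    }
}
```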
  • the first memory slice is used to store the unfragmented target message. The second field is the field that carries fragmentation information when the first field indicates that the packet is fragmented, and it is the field that carries the second data structure when the first field indicates that the target packet is not fragmented. That is to say, when the target packet is not fragmented, there is no fragmentation information, and the second field is reused to carry the second data structure.
  • because the second data structure at least indicates the first address of the data packet buffer area, there is no need to apply for a separate memory slice for the second data structure, thereby saving memory resources.
  • an embodiment of the present application further provides a message buffering device, and the message buffering device may be the memory allocator in the above method embodiments, or a component that can be used for the memory allocator.
  • the message buffering apparatus includes corresponding hardware structures and/or software modules for executing each function.
  • the present application can be implemented in hardware or a combination of hardware and computer software with the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • FIG. 9 shows a schematic block diagram of a packet buffering apparatus provided in an embodiment of the present application.
  • the message buffering apparatus 900 may exist in the form of software, or may be a device, or a component in a device (such as a chip system).
  • the message buffering device 900 includes: a communication unit 901 and a processing unit 902 .
  • the communication unit 901 is an interface circuit of the message buffer device 900, and is used for receiving signals from or sending signals to other devices.
  • the communication unit 901 is an interface circuit used by the chip to receive signals from other chips or devices, or an interface circuit used by the chip to send signals to other chips or devices.
  • the communication unit 901 may include a communication unit for communicating with the memory and a communication unit for communicating with other devices, and these communication units may be integrated together or independently implemented.
  • the communication unit 901 may be used to support the message buffering apparatus 900 in performing S502, S509, and S514 in FIG. 5, and/or other processes for the solutions described herein.
  • the processing unit 902 may be configured to support the message buffering apparatus 900 to perform S503, S504, S510, S515 in FIG. 5, and/or other processes for the solutions described herein.
  • the message buffering apparatus 900 may further include a storage unit for storing program codes and data of the message buffering apparatus 900, and the data may include, but is not limited to, original data or intermediate data.
  • the processing unit 902 may be a processor or a controller, for example, a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA, or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
  • a processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
  • the storage unit may be a memory.
  • the memory may be the above-mentioned memory that provides the first memory slice, or may be a memory different from the above-mentioned memory that provides the first memory slice.
  • when the processing unit 902 in the message buffering device 900 is implemented as a memory allocator, the storage unit in the message buffering device 900 is implemented as a memory, and the communication unit 901 in the message buffering device 900 is implemented as a communication interface, the involved message forwarding system may be as shown in FIG. 3a or in FIG. 3b.
  • the computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be downloaded from a website site, computer, server, or data center Transmission to another website site, computer, server, or data center by wire (eg, coaxial cable, optical fiber, digital subscriber line, DSL) or wireless (eg, infrared, wireless, microwave, etc.).
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • the available media may be magnetic media (e.g., a floppy disk, a hard disk, or a magnetic tape), optical media (e.g., a digital video disc (DVD)), semiconductor media (e.g., a solid state disk (SSD)), or the like.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division methods in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling, direct coupling, or communication connection may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network devices. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each functional unit may exist independently, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or can be implemented in the form of hardware plus software functional units.
  • the present application can be implemented by means of software plus necessary general-purpose hardware, and of course can also be implemented by hardware, but in many cases the former is a better implementation manner.
  • the technical solutions of the present application, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, a hard disk, or an optical disk of a computer, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present application.

Abstract

The present application relates to the technical field of communications, and provides a message buffering method, a memory allocator, and a message forwarding system, which are capable of saving memory resources. The method comprises: a memory allocator receives a target message from a modem, and then the memory allocator stores the target message in a data packet buffer region of a first memory slice, wherein the first memory slice further comprises a first data structure; the first data structure comprises a first field and a second field; the first field indicates that the target message is not fragmented; the second field carries a second data structure; the second field is the field that carries fragmentation information in a state where the target message is fragmented; the second data structure at least indicates a first address of the data packet buffer region; and the first memory slice is provided by a memory.

Description

报文缓存方法、内存分配器及报文转发系统Message caching method, memory allocator and message forwarding system 技术领域technical field
本申请涉及通信技术领域,尤其涉及一种报文缓存方法、内存分配器及报文转发系统。The present application relates to the field of communication technologies, and in particular, to a message caching method, a memory allocator, and a message forwarding system.
背景技术Background technique
在报文经过传输控制协议/网际协议(transfer control protocol/internet protocol,TCP/IP)栈传输的情况下,报文通常采用套接字缓存(socket buffer,SKB)结构传输。其中,SKB结构包括报文字段管理数据结构(sk_buff)、报文共享信息数据结构(skb_shared_info)和数据包缓存区(packet buffer)。报文字段管理数据结构存储报文管理信息。报文共享信息数据结构存储报文的分片信息。数据包缓存区用于存储报文内容。When a message is transmitted through a transfer control protocol/internet protocol (TCP/IP) stack, the message is usually transmitted using a socket buffer (SKB) structure. The SKB structure includes a message field management data structure (sk_buff), a message sharing information data structure (skb_shared_info), and a packet buffer (packet buffer). The message field management data structure stores message management information. The message sharing information data structure stores the fragmentation information of the message. The packet buffer is used to store the packet content.
然而,若报文直接通过硬件转发,不经过TCP/IP协议栈传输,则报文不需要分片处理。若仍采用SKB结构传输报文,则报文共享信息数据结构中很多字段(如承载分片信息的字段)是不需要的,但仍需为上述“不需要的字段”分配内存,导致内存资源浪费。However, if the packet is directly forwarded by hardware without going through the TCP/IP protocol stack, the packet does not need to be fragmented. If the SKB structure is still used to transmit the message, many fields in the message sharing information data structure (such as the fields that carry fragmentation information) are not required, but memory still needs to be allocated for the above "unnecessary fields", resulting in memory resources. waste.
发明内容SUMMARY OF THE INVENTION
本申请实施例提供一种报文缓存方法、内存分配器及报文转发系统,能够节省内存资源。The embodiments of the present application provide a message caching method, a memory allocator, and a message forwarding system, which can save memory resources.
为达到上述目的,本申请实施例采用如下技术方案:In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
第一方面,本申请实施例提供一种报文缓存方法,该方法的执行主体可以是报文转发系统的内存分配器,也可以是应用于报文转发系统的内存分配器中的芯片。报文转发系统还包括调制解调器和存储器。该方法包括:内存分配器接收来自调制解调器的目标报文。然后,内存分配器将目标报文存储于第一内存片的数据包缓存区。其中,第一内存片还包括第一数据结构。第一数据结构包括第一字段和第二字段,第一字段指示目标报文未分片,第二字段承载第二数据结构,第二字段是在目标报文分片状态下承载分片信息的字段,第二数据结构至少指示数据包缓存区的首地址。第一内存片由存储器提供。In a first aspect, an embodiment of the present application provides a message buffering method, and the execution body of the method may be a memory allocator of a message forwarding system, or may be a chip applied to a memory allocator of a message forwarding system. The message forwarding system also includes a modem and memory. The method includes: the memory allocator receives the target message from the modem. Then, the memory allocator stores the target message in the data packet buffer area of the first memory slice. Wherein, the first memory slice further includes a first data structure. The first data structure includes a first field and a second field, the first field indicates that the target message is not fragmented, the second field carries the second data structure, and the second field carries fragmentation information in the fragmented state of the target message field, the second data structure at least indicates the first address of the data packet buffer area. The first memory slice is provided by the memory.
本申请实施例提供的报文缓存方法,采用第一内存片存储未分片处理的目标报文。由于第二字段是在第一字段指示报文分片的情况下,承载分片信息的字段,第二字段也是在第一字段指示目标报文未分片的情况下,承载第二数据结构的字段。也就是说,在目标报文未分片的情况下,不存在分片信息,第二字段用于承载第二数据结构,通过第二数据结构至少指示数据包缓存区在第一内存片中的首地址,无需为第二数据结构单独申请内存片,从而节省内存资源。In the message caching method provided by the embodiment of the present application, the first memory slice is used to store the target message that is not fragmented and processed. Since the second field is a field that carries fragmentation information when the first field indicates packet fragmentation, the second field is also a field that carries the second data structure when the first field indicates that the target packet is not fragmented. field. That is to say, when the target packet is not fragmented, there is no fragmentation information, and the second field is used to carry the second data structure. The first address does not need to separately apply for a memory slice for the second data structure, thereby saving memory resources.
在一种可能的设计中,第一数据结构在第一内存片中的位置包括以下其中一项:数据包缓存区之前、或数据包缓存区之后。也就是说,第一数据结构在第一内存片中的位置可以灵活设置。In a possible design, the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area. That is, the position of the first data structure in the first memory slice can be flexibly set.
在一种可能的设计中,数据包缓存区与第一数据结构之间间隔的存储空间大于或等于预设值。其中,预设值的单位可以是比特、字节等。预设值可以是50~100字节中任意数量的字节,也可以是一定数量的比特,以支持第一数据结构后续演进,如第一数据结构增加新的字段的情况下,新的字段可以存储于上述“数据包缓存区与第一数据结构之间间隔的存储空间”。In a possible design, the storage space separating the data packet buffer area from the first data structure is greater than or equal to a preset value. The unit of the preset value may be bits, bytes, or the like. The preset value may be any number of bytes in the range of 50 to 100 bytes, or a certain number of bits, so as to support subsequent evolution of the first data structure. For example, if a new field is added to the first data structure, the new field can be stored in the above-mentioned storage space separating the data packet buffer area from the first data structure.
在一种可能的设计中,本申请实施例报文缓存方法还包括:内存分配器接收来自网络接口卡NIC的第一指示信息。其中,第一指示信息指示目标报文已转发,报文转发系统还包括NIC。然后,内存分配器根据第一指示信息,释放第一内存片。In a possible design, the packet caching method according to the embodiment of the present application further includes: the memory allocator receives the first indication information from the network interface card NIC. The first indication information indicates that the target message has been forwarded, and the message forwarding system further includes a NIC. Then, the memory allocator releases the first memory slice according to the first indication information.
也就是说,在目标报文转发之后,内存分配器还能够根据第一指示信息收回第一内存片,以存储其他的报文,从而提高内存资源的利用率。That is to say, after the target message has been forwarded, the memory allocator can reclaim the first memory slice according to the first indication information so that it can store other messages, thereby improving the utilization of memory resources.
在一种可能的设计中,本申请实施例报文缓存方法还包括:内存分配器接收来自中央处理器CPU的第二指示信息。其中,第二指示信息指示第一数据结构恢复为报文共享信息数据结构,CPU用于申请目标内存片,目标内存片用于存储报文字段管理数据结构,报文字段管理数据结构至少指示数据包缓存区的首地址,报文转发系统还包括CPU。然后,内存分配器根据第二指示信息,删除第二数据结构。In a possible design, the message caching method according to the embodiment of the present application further includes: the memory allocator receives second indication information from the central processing unit CPU. The second indication information indicates that the first data structure is restored to the message sharing information data structure; the CPU is configured to apply for a target memory slice; the target memory slice is used to store a message field management data structure; the message field management data structure at least indicates the first address of the data packet buffer area; and the message forwarding system further includes the CPU. Then, the memory allocator deletes the second data structure according to the second indication information.
也就是说,内存分配器还能够根据第二指示信息,来删除第二数据结构,从而使得第一数据结构恢复为报文共享信息数据结构,以存储分片的报文。That is to say, the memory allocator can also delete the second data structure according to the second indication information, so that the first data structure is restored to the message sharing information data structure to store the fragmented message.
在一种可能的设计中,本申请实施例报文缓存方法还包括:内存分配器接收来自CPU的第三指示信息。其中,第三指示信息指示释放第一内存片,CPU用于申请目标内存片,目标内存片是基于套接字缓存SKB结构确定的,SKB结构用于存储第一内存片存储的信息,报文转发系统还包括CPU。然后,内存分配器根据第三指示信息,释放第一内存片。In a possible design, the message buffering method according to the embodiment of the present application further includes: the memory allocator receives third indication information from the CPU. The third indication information indicates that the first memory slice is to be released; the CPU is configured to apply for a target memory slice; the target memory slice is determined based on the socket buffer SKB structure; the SKB structure is used to store the information stored in the first memory slice; and the message forwarding system further includes the CPU. Then, the memory allocator releases the first memory slice according to the third indication information.
也就是说,若CPU申请的目标内存片是SKB结构的内存片,则内存分配器还能够释放第一内存片,以使收回的内存片存储其他报文,从而提高内存资源的利用率。That is to say, if the target memory slice applied by the CPU is a memory slice of the SKB structure, the memory allocator can also release the first memory slice, so that the recovered memory slice can store other messages, thereby improving the utilization of memory resources.
第二方面,本申请实施例提供一种报文缓存装置,该报文缓存装置位于报文转发系统中。报文转发系统还包括调制解调器和存储器。该报文缓存装置包括:通信单元和处理单元。其中,通信单元,用于接收来自调制解调器的目标报文。处理单元,用于将目标报文存储于第一内存片的数据包缓存区,其中,第一内存片还包括第一数据结构;第一数据结构包括第一字段和第二字段,第一字段指示目标报文未分片,第二字段承载第二数据结构,第二字段是在目标报文分片状态下承载分片信息的字段,第二数据结构至少指示数据包缓存区的首地址;第一内存片由存储器提供。In a second aspect, an embodiment of the present application provides a message buffering device, where the message buffering device is located in a message forwarding system. The message forwarding system also includes a modem and memory. The message buffering device includes: a communication unit and a processing unit. Among them, the communication unit is used to receive the target message from the modem. The processing unit is used to store the target message in the data packet buffer area of the first memory slice, wherein the first memory slice further includes a first data structure; the first data structure includes a first field and a second field, and the first field Indicates that the target message is not fragmented, the second field carries a second data structure, the second field is a field that carries fragmentation information in a fragmented state of the target message, and the second data structure at least indicates the first address of the data packet buffer; The first memory slice is provided by the memory.
在一种可能的设计中,第一数据结构在第一内存片中的位置包括以下其中一项:数据包缓存区之前、或数据包缓存区之后。In a possible design, the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
在一种可能的设计中,数据包缓存区与第一数据结构之间间隔的存储空间大于或等于预设值。In a possible design, the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
在一种可能的设计中,通信单元,还用于接收来自网络接口卡NIC的第一指示信息。其中,第一指示信息指示目标报文已转发,报文转发系统还包括NIC。处理单元,还用于根据第一指示信息,释放第一内存片。In a possible design, the communication unit is further configured to receive the first indication information from the network interface card NIC. The first indication information indicates that the target message has been forwarded, and the message forwarding system further includes a NIC. The processing unit is further configured to release the first memory slice according to the first indication information.
在一种可能的设计中,通信单元,还用于接收来自中央处理器CPU的第二指示信息。其中,第二指示信息指示第一数据结构恢复为报文共享信息数据结构,CPU用于申请目标内存片,目标内存片用于存储报文字段管理数据结构,报文字段管理数据结构至少指示数据包缓存区的首地址,报文转发系统还包括CPU。处理单元,还用于根据第二指示信息,删除第二数据结构。In a possible design, the communication unit is further configured to receive second indication information from the central processing unit CPU. The second indication information indicates that the first data structure is restored to the message sharing information data structure; the CPU is configured to apply for a target memory slice; the target memory slice is used to store a message field management data structure; the message field management data structure at least indicates the first address of the data packet buffer area; and the message forwarding system further includes the CPU. The processing unit is further configured to delete the second data structure according to the second indication information.
在一种可能的设计中,通信单元,还用于接收来自CPU的第三指示信息。其中,第三指示信息指示释放第一内存片,CPU用于申请目标内存片,目标内存片是基于套接字缓存SKB结构确定的,SKB结构用于存储第一内存片存储的信息,报文转发系统还包括CPU。处理单元,还用于根据第三指示信息,释放第一内存片。In a possible design, the communication unit is further configured to receive third indication information from the CPU. The third indication information indicates that the first memory slice is to be released; the CPU is configured to apply for a target memory slice; the target memory slice is determined based on the socket buffer SKB structure; the SKB structure is used to store the information stored in the first memory slice; and the message forwarding system further includes the CPU. The processing unit is further configured to release the first memory slice according to the third indication information.
第三方面,本申请实施例提供一种报文缓存装置,包括处理器和接口电路,处理器用于通过接口电路与其它装置通信,并执行以上第一方面或第一方面中任一项的报文缓存方法。该处理器包括一个或多个。In a third aspect, an embodiment of the present application provides a message buffering device, including a processor and an interface circuit, where the processor is configured to communicate with other devices through the interface circuit and to execute the message caching method of the first aspect or any one of the first aspects. There may be one or more processors.
第四方面,本申请实施例提供一种报文缓存装置,包括处理器,用于与存储器相连,用于调用存储器中存储的程序,以执行第一方面或第一方面中任一项的报文缓存方法。该存储器可以位于该报文缓存装置之内,也可以位于该报文缓存装置之外。且该处理器包括一个或多个。In a fourth aspect, an embodiment of the present application provides a message buffering device, including a processor that is connected to a memory and is configured to invoke a program stored in the memory to execute the message caching method of the first aspect or any one of the first aspects. The memory may be located within the message buffering device, or may be located outside the message buffering device. There may be one or more processors.
第五方面,本申请实施例提供一种报文缓存装置,包括至少一个处理器和至少一个存储器,至少一个处理器用于执行以上第一方面或第一方面中任一项的报文缓存方法。In a fifth aspect, an embodiment of the present application provides a message caching device, including at least one processor and at least one memory, where the at least one processor is configured to execute the first aspect or the message caching method of any one of the first aspects.
第六方面,本申请实施例提供一种内存分配器,应用于报文转发系统。报文转发系统还包括调制解调器和存储器。内存分配器用于接收来自调制解调器的目标报文。内存分配器还用于将目标报文存储于第一内存片的数据包缓存区,其中,第一内存片还包括第一数据结构;第一数据结构包括第一字段和第二字段,第一字段指示目标报文未分片,第二字段承载第二数据结构,第二字段是在目标报文分片状态下承载分片信息的字段,第二数据结构至少指示数据包缓存区的首地址;第一内存片由存储器提供。In a sixth aspect, an embodiment of the present application provides a memory allocator applied to a message forwarding system. The message forwarding system further includes a modem and a memory. The memory allocator is configured to receive the target message from the modem. The memory allocator is further configured to store the target message in the data packet buffer area of a first memory slice, where the first memory slice further includes a first data structure; the first data structure includes a first field and a second field; the first field indicates that the target message is not fragmented; the second field carries a second data structure and is the field that carries fragmentation information when the target message is fragmented; the second data structure at least indicates the first address of the data packet buffer area; and the first memory slice is provided by the memory.
在一种可能的设计中,第一数据结构在第一内存片中的位置包括以下其中一项:数据包缓存区之前、或数据包缓存区之后。In a possible design, the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
在一种可能的设计中,数据包缓存区与第一数据结构之间间隔的存储空间大于或等于预设值。In a possible design, the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
在一种可能的设计中,内存分配器还用于接收来自网络接口卡NIC的第一指示信息,其中,第一指示信息指示目标报文已转发,报文转发系统还包括NIC。内存分配器还用于根据第一指示信息,释放第一内存片。In a possible design, the memory allocator is further configured to receive first indication information from the network interface card NIC, where the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes the NIC. The memory allocator is further configured to release the first memory slice according to the first indication information.
在一种可能的设计中,内存分配器还用于接收来自中央处理器CPU的第二指示信息,其中,第二指示信息指示第一数据结构恢复为报文共享信息数据结构,CPU用于申请目标内存片,目标内存片用于存储报文字段管理数据结构,报文字段管理数据结构至少指示数据包缓存区的首地址;报文转发系统还包括CPU。内存分配器还用于根据第二指示信息,删除第二数据结构。In a possible design, the memory allocator is further configured to receive second indication information from the central processing unit CPU, where the second indication information indicates that the first data structure is restored to the message sharing information data structure; the CPU is configured to apply for a target memory slice; the target memory slice is used to store a message field management data structure; the message field management data structure at least indicates the first address of the data packet buffer area; and the message forwarding system further includes the CPU. The memory allocator is further configured to delete the second data structure according to the second indication information.
在一种可能的设计中,内存分配器,还用于接收来自CPU的第三指示信息,其中,第三指示信息指示释放第一内存片,CPU用于申请目标内存片,目标内存片是基于套接字缓存SKB结构确定的;SKB结构用于存储第一内存片存储的信息;报文转发系统还包括CPU。内存分配器,还用于根据第三指示信息,释放第一内存片。In a possible design, the memory allocator is further configured to receive third indication information from the CPU, where the third indication information indicates that the first memory slice is to be released; the CPU is configured to apply for a target memory slice; the target memory slice is determined based on the socket buffer SKB structure; the SKB structure is used to store the information stored in the first memory slice; and the message forwarding system further includes the CPU. The memory allocator is further configured to release the first memory slice according to the third indication information.
第七方面,本申请实施例提供一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机可以执行上述第一方面或第一方面中任一项的报文缓存方法。In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores instructions that, when run on a computer, enable the computer to execute the message caching method of the first aspect or any one of the first aspects.
第八方面,本申请实施例提供一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机可以执行上述第一方面或第一方面中任一项的报文缓存方法。In an eighth aspect, the embodiments of the present application provide a computer program product including instructions, which, when running on a computer, enables the computer to execute the first aspect or the message caching method of any one of the first aspects.
第九方面,本申请实施例提供一种电路系统,电路系统包括处理电路,处理电路被配置为执行如上述第一方面或第一方面中任一项的报文缓存方法。In a ninth aspect, an embodiment of the present application provides a circuit system, where the circuit system includes a processing circuit, and the processing circuit is configured to execute the message caching method according to any one of the first aspect or the first aspect.
第十方面,本申请实施例提供一种芯片,芯片包括逻辑电路和输入输出接口。其中,输入输出接口用于与芯片之外的模块通信,例如,该芯片可以为实现上述第一方面或第一方面任一种可能的设计中的内存分配器功能的芯片。输入输出接口输入目标报文。逻辑电路用于运行计算机程序或指令,以实现上述第一方面或第一方面任一种可能的设计中的报文缓存方法。In a tenth aspect, an embodiment of the present application provides a chip, where the chip includes a logic circuit and an input and output interface. The input and output interfaces are used for communication with modules other than the chip. For example, the chip may be a chip that implements the function of the memory allocator in the first aspect or any possible design of the first aspect. The input and output interfaces input target packets. The logic circuit is used to run a computer program or instructions to implement the message buffering method in the first aspect or any possible design of the first aspect.
第十一方面,本申请实施例提供一种报文转发系统,该系统包括调制解调器、内存分配器和存储器。其中,内存分配器用于接收来自调制解调器的目标报文。内存分配器还用于将目标报文存储于第一内存片的数据包缓存区。其中,第一内存片还包括第一数据结构,第一数据结构包括第一字段和第二字段,第一字段指示目标报文未分片,第二字段承载第二数据结构,第二字段是在目标报文分片状态下承载分片信息的字段,第二数据结构至少指示数据包缓存区的首地址,第一内存片由存储器提供。In an eleventh aspect, an embodiment of the present application provides a message forwarding system, where the system includes a modem, a memory allocator, and a memory. Among them, the memory allocator is used to receive the target message from the modem. The memory allocator is also used for storing the target message in the data packet buffer area of the first memory slice. The first memory slice further includes a first data structure, the first data structure includes a first field and a second field, the first field indicates that the target message is not fragmented, the second field carries the second data structure, and the second field is A field carrying fragmentation information in the target packet fragmentation state, the second data structure at least indicates the first address of the data packet buffer area, and the first memory fragment is provided by the memory.
在一种可能的设计中,第一数据结构在第一内存片中的位置包括以下其中一项:数据包缓存区之前、或数据包缓存区之后。In a possible design, the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
在一种可能的设计中,数据包缓存区与第一数据结构之间间隔的存储空间大于或等于预设值。In a possible design, the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
在一种可能的设计中,内存分配器还用于接收来自网络接口卡NIC的第一指示信息,其中,第一指示信息指示目标报文已转发,报文转发系统还包括NIC。内存分配器还用于根据第一指示信息,释放第一内存片。In a possible design, the memory allocator is further configured to receive first indication information from the network interface card NIC, where the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes the NIC. The memory allocator is further configured to release the first memory slice according to the first indication information.
在一种可能的设计中,内存分配器还用于接收来自中央处理器CPU的第二指示信息,其中,第二指示信息指示第一数据结构恢复为报文共享信息数据结构,CPU用于申请目标内存片,目标内存片用于存储报文字段管理数据结构,报文字段管理数据结构至少指示数据包缓存区的首地址;报文转发系统还包括CPU。内存分配器还用于根据第二指示信息,删除第二数据结构。In a possible design, the memory allocator is further configured to receive second indication information from the central processing unit CPU, where the second indication information indicates that the first data structure is restored to the message sharing information data structure; the CPU is configured to apply for a target memory slice; the target memory slice is used to store a message field management data structure; the message field management data structure at least indicates the first address of the data packet buffer area; and the message forwarding system further includes the CPU. The memory allocator is further configured to delete the second data structure according to the second indication information.
在一种可能的设计中,内存分配器,还用于接收来自CPU的第三指示信息,其中,第三指示信息指示释放第一内存片,CPU用于申请目标内存片,目标内存片是基于套接字缓存SKB结构确定的;SKB结构用于存储第一内存片存储的信息;报文转发系统还包括CPU。内存分配器,还用于根据第三指示信息,释放第一内存片。In a possible design, the memory allocator is further configured to receive third indication information from the CPU, where the third indication information indicates that the first memory slice is to be released; the CPU is configured to apply for a target memory slice; the target memory slice is determined based on the socket buffer SKB structure; the SKB structure is used to store the information stored in the first memory slice; and the message forwarding system further includes the CPU. The memory allocator is further configured to release the first memory slice according to the third indication information.
其中,第二方面至第十一方面中任一种设计方式所带来的技术效果可参见第一方面中不同设计方式所带来的技术效果,此处不再赘述。Wherein, for the technical effect brought by any one of the design methods in the second aspect to the eleventh aspect, reference may be made to the technical effect brought by the different design methods in the first aspect, which will not be repeated here.
附图说明Description of drawings
图1为本申请实施例提供的一种套接字缓存SKB结构的示意图;1 is a schematic diagram of a socket cache SKB structure provided by an embodiment of the present application;
图2为本申请实施例提供的一种报文转发的过程示意图;FIG. 2 is a schematic diagram of a message forwarding process according to an embodiment of the present application;
图3a为本申请实施例提供的一种报文转发系统的硬件架构示意图;3a is a schematic diagram of the hardware architecture of a message forwarding system provided by an embodiment of the present application;
图3b为本申请实施例提供的再一种报文转发系统的硬件架构示意图;3b is a schematic diagram of the hardware architecture of still another message forwarding system provided by an embodiment of the application;
图4为本申请实施例提供的一种内存片的结构示意图;FIG. 4 is a schematic structural diagram of a memory chip according to an embodiment of the present application;
图5为本申请实施例提供的一种报文缓存方法的流程示意图;FIG. 5 is a schematic flowchart of a message caching method provided by an embodiment of the present application;
图6为本申请实施例提供的一种报文缓存的过程示意图;FIG. 6 is a schematic diagram of a process of packet buffering provided by an embodiment of the present application;
图7为本申请实施例提供的一种报文转发的过程示意图;FIG. 7 is a schematic diagram of a message forwarding process according to an embodiment of the present application;
图8为本申请实施例提供的再一种报文转发的过程示意图;FIG. 8 is a schematic diagram of still another packet forwarding process provided by an embodiment of the present application;
图9为本申请实施例提供的一种报文缓存装置的结构示意图。FIG. 9 is a schematic structural diagram of a message buffering apparatus provided by an embodiment of the present application.
具体实施方式Detailed Description of Embodiments
本申请的说明书以及附图中的术语“第一”和“第二”等是用于区别不同的对象,或者用于区别对同一对象的不同处理,而不是用于描述对象的特定顺序。此外,本申请的描述中所提到的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括其他没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。需要说明的是,本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。The terms "first" and "second" in the description and drawings of the present application are used to distinguish different objects, or to distinguish different processing of the same object, rather than to describe a specific order of the objects. Furthermore, references to the terms "comprising" and "having" in the description of this application, and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally also includes other unlisted steps or units, or optionally also Include other steps or units inherent to these processes, methods, products or devices. It should be noted that, in the embodiments of the present application, words such as "exemplary" or "for example" are used to represent examples, illustrations, or illustrations. Any embodiments or designs described in the embodiments of the present application as "exemplary" or "such as" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present the related concepts in a specific manner.
首先,介绍本申请实施例所涉及的技术术语:First, the technical terms involved in the embodiments of the present application are introduced:
1、报文分片1. Packet Fragmentation
将一个报文经过分片处理,切分成若干段之后,每段被称为一个报文分片。示例性的,在报文直接通过硬件转发,不经过TCP/IP协议栈传输的情况下,报文无需分片处理。After a packet is fragmented and divided into several segments, each segment is called a packet fragment. Exemplarily, in the case that the packet is directly forwarded by hardware without being transmitted through the TCP/IP protocol stack, the packet does not need to be fragmented.
2、缓存区(buffer)2, the buffer (buffer)
缓存区,用于缓存报文的存储空间。其中,缓存区可以是网络接口卡(network interface card,NIC)中的存储空间,也可以是网络接口卡所在设备中其它存储器中划分的存储空间,用于实现缓存报文的功能。The buffer area is the storage space for buffering packets. The cache area may be a storage space in a network interface card (NIC), or may be a storage space divided in other memories in the device where the network interface card is located, and is used to implement the function of buffering messages.
3、套接字缓存(socket buffer,SKB)结构3. Socket buffer (SKB) structure
SKB结构,是传输控制协议/网际协议(transfer control protocol/internet protocol,TCP/IP)栈中的一种数据结构。一个SKB结构包括报文字段管理数据结构(sk_buff)、报文共享信息数据结构(skb_shared_info)和数据包缓存区(packet buff),如图1所示。SKB结构中各部分的介绍如下:The SKB structure is a data structure in the transfer control protocol/internet protocol (TCP/IP) stack. An SKB structure includes a message field management data structure (sk_buff), a message shared information data structure (skb_shared_info), and a data packet buffer area (packet buff), as shown in Figure 1. The introduction of each part in the SKB structure is as follows:
3-1、报文字段管理数据结构是主数据结构,大小通常为512字节(byte)。报文字段管理数据结构中的字段可以例如但不限于如下字段:3-1. The message field management data structure is the main data structure, and the size is usually 512 bytes. The fields in the message field management data structure can be, for example, but not limited to, the following fields:
第一、后(next)数据结构指针字段。其中,后数据结构指针字段指示当前报文字段管理数据结构的后一个报文字段管理数据结构的地址。First, the next data structure pointer field. The next data structure pointer field indicates the address of the message field management data structure that follows the current message field management data structure.
第二、前(prev)数据结构指针字段。其中,前数据结构指针字段指示当前报文字段管理数据结构的前一个报文字段管理数据结构的地址。Second, the front (prev) data structure pointer field. Wherein, the previous data structure pointer field indicates the address of the previous message field management data structure of the current message field management data structure.
第三、头(head)字段。其中,头字段用于存储数据包缓存区的首地址。Third, the head (head) field. The header field is used to store the first address of the data packet buffer area.
第四、尾(tail)字段。其中,尾字段用于存储数据包缓存区中实际存储报文内容的尾地址。Fourth, the tail (tail) field. Among them, the tail field is used to store the tail address of the actually stored packet content in the data packet buffer area.
第五、结束(end)字段。其中,结束字段用于存储数据包缓存区的结束地址。Fifth, the end (end) field. The end field is used to store the end address of the data packet buffer area.
需要说明的是,头字段、尾字段和结束字段大约占用13个字节。报文字段管理数据结构还指示MAC头信息的首地址、IP头信息的首地址、TCP头信息的首地址。It should be noted that the header field, trailer field and end field occupy about 13 bytes. The message field management data structure also indicates the first address of the MAC header information, the first address of the IP header information, and the first address of the TCP header information.
3-2、数据包缓存区用于存储报文内容。其中,数据包缓存区的大小是基于报文内容的数据量确定的。3-2. The data packet buffer area is used to store the content of the message. Among them, the size of the packet buffer is determined based on the data volume of the packet content.
第一、媒体接入控制(media access control,MAC)头信息(header)字段。其中,MAC头信息字段用于存储MAC层的控制信息。First, a media access control (media access control, MAC) header field. The MAC header information field is used to store control information of the MAC layer.
第二、网际协议(internet protocol,IP)头信息(header)字段。其中,IP头信息字段用于存储协议版本(version)号、因特网报头长度(internet header length,IHL)等信息。Second, an internet protocol (IP) header field. The IP header information field is used to store information such as a protocol version (version) number and an Internet header length (IHL).
第三、传输控制(transfer control protocol,TCP)头信息(header)字段。其中,TCP头信息字段用于存储源端口号、目标端口号等信息。Third, the transfer control protocol (TCP) header field. The TCP header information field is used to store information such as the source port number and the destination port number.
第四、载荷(payload)部分。其中,载荷部分用于存储报文内容。Fourth, the payload part. Among them, the payload part is used to store the content of the message.
3-3、报文共享信息数据结构存储报文的分片信息,大小通常为360字节。报文共享信息数据结构的介绍如下:3-3. The message sharing information data structure stores the fragmentation information of the message, and the size is usually 360 bytes. The introduction of the message sharing information data structure is as follows:
第一、分片状态指示字段。其中,分片状态指示字段指示报文的分片状态。First, the fragmentation status indication field. The fragmentation status indication field indicates the fragmentation status of the packet.
第二、分片信息字段。其中,分片信息字段承载报文的分片信息。Second, the fragmentation information field. The fragmentation information field carries fragmentation information of the packet.
示例性的,在报文被分片的情况下,分片状态指示字段指示数据包缓存区中的报文是分片报文,分片信息字段承载报文的分片信息。Exemplarily, when the packet is fragmented, the fragmentation status indication field indicates that the packet in the data packet buffer is a fragmented packet, and the fragmentation information field carries fragmentation information of the packet.
可选的,在内存分配器实现为内存分配加速器(memory allocate accelerator,MAA)的情况下,SKB结构还包括MAA头空间(headroom),如图1所示。其中,MAA头空间用于存储MAA的信息。Optionally, when the memory allocator is implemented as a memory allocate accelerator (MAA), the SKB structure further includes a MAA headroom, as shown in FIG. 1 . Among them, the MAA header space is used to store the information of the MAA.
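For ease of understanding the SKB layout described in 3-1 to 3-3 above, a simplified C sketch is given below. The structure and field names are illustrative stand-ins chosen only to mirror the description (they are not the actual kernel definitions of sk_buff and skb_shared_info), and only the fields mentioned above are shown.

```c
/* Simplified sketch of the conventional SKB layout described above.
 * Names and sizes are illustrative stand-ins, not the kernel definitions. */
#include <stdint.h>

struct pkt_field_mgmt {              /* message field management data structure, ~512 bytes */
    struct pkt_field_mgmt *next;     /* address of the next management structure            */
    struct pkt_field_mgmt *prev;     /* address of the previous management structure        */
    uint8_t  *head;                  /* first address of the data packet buffer area        */
    uint8_t  *tail;                  /* tail address of the stored message content          */
    uint8_t  *end;                   /* end address of the data packet buffer area          */
    uint16_t  mac_header;            /* offset of the MAC header information                */
    uint16_t  network_header;        /* offset of the IP header information                 */
    uint16_t  transport_header;      /* offset of the TCP header information                */
    /* ... further management fields padded out to roughly 512 bytes ...                    */
};

struct frag_info {                   /* one fragment descriptor                             */
    void     *page;
    uint32_t  offset;
    uint32_t  size;
};

struct pkt_shared_info {             /* message sharing information data structure, ~360 bytes */
    uint8_t          nr_frags;       /* fragmentation status indication field                */
    struct frag_info frags[17];      /* fragmentation information fields                     */
};
```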
参见图2,调制解调器(modem)接收报文之后,调制解调器申请内存片,以缓存报文。然后,调制解调器向中央处理器发送缓存的报文,以实现报文的本地转发,具体过程如图2中的虚线箭头所示。或者,调制解调器通过通用串行总线(universal serial bus,USB)接口向其他设备发送缓存的报文,以实现报文在不同设备之间的传输,具体过程如图2中的实线箭头所示。其中,图2中传输的报文均采用SKB结构。SKB结构中数据包缓存区的大小可以是1024KB,也可以是2048KB。在调制解调器处理数据的峰值为8.1Gbps的情况下,以每个报文包括的数据量大小为1.5KB计算,调制解调器至少需要5万多个报文字段管理数据结构。Referring to FIG. 2, after the modem receives a message, the modem applies for a memory slice to buffer the message. Then, the modem sends the buffered message to the central processing unit to implement local forwarding of the message; the specific process is shown by the dotted arrows in FIG. 2. Alternatively, the modem sends the buffered message to another device through a universal serial bus (USB) interface to transmit the message between different devices; the specific process is shown by the solid arrows in FIG. 2. The messages transmitted in FIG. 2 all use the SKB structure. The size of the data packet buffer area in the SKB structure may be 1024 KB or 2048 KB. When the peak data rate handled by the modem is 8.1 Gbps and each message carries 1.5 KB of data, the modem needs more than 50,000 message field management data structures.
在引入转发引擎(forward engine)的情况下,大多数报文通过转发引擎转发至网络接口卡,再通过网络介质向其他设备传输。也就是说,在报文转发过程中,报文不经过中央处理器中的Linux操作系统、TCP/IP协议栈,报文不需要分片处理。因此,在采用SKB结构传输未分片处理的报文的情况下,报文共享信息数据结构中的部分字段(如承载分片信息的字段)是不需要的,导致“内存资源浪费”。With the introduction of a forwarding engine, most packets are forwarded to the network interface card through the forwarding engine, and then transmitted to other devices through the network medium. That is to say, in the process of packet forwarding, the packet does not pass through the Linux operating system and the TCP/IP protocol stack in the central processing unit, and the packet does not need to be fragmented. Therefore, in the case of using the SKB structure to transmit unfragmented packets, some fields in the packet sharing information data structure (such as fields carrying fragmentation information) are unnecessary, resulting in "waste of memory resources".
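The scale of this waste can be illustrated with the figures given above: for an unfragmented 1.5 KB message, the roughly 360-byte message sharing information data structure must still be allocated even though its fragmentation-related fields go unused; across the more than 50,000 buffered messages mentioned above, this corresponds to roughly 360 × 50,000 ≈ 18 MB of sharing-information structures, much of which carries no useful information (a rough illustrative estimate based on the sizes stated above, not a figure from the original description).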
有鉴于此,本申请实施例提供一种报文缓存方法,本申请实施例报文缓存方法适用于各种设备,如手机、平板电脑、桌面型、膝上型笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、手持计算机、上网本、个人数字助理(personal digital assistant,PDA)、服务器、网络设备等。In view of this, an embodiment of the present application provides a message caching method, and the message caching method of the embodiment of the present application is applicable to various devices, such as mobile phones, tablet computers, desktops, laptops, super mobile personal computers ( Ultra-mobile personal computer, UMPC), handheld computer, netbook, personal digital assistant (personal digital assistant, PDA), server, network equipment, etc.
参见图3a,图3a为本申请实施例报文转发系统300的一种硬件架构示意图。为了便于说明,图3a仅示出了报文转发系统300的主要部件。该报文转发系统300可以包括存储器301和内存分配器302。存储器301和内存分配器302之间通信连接。Referring to FIG. 3a, FIG. 3a is a schematic diagram of a hardware architecture of a message forwarding system 300 according to an embodiment of the present application. For ease of illustration, FIG. 3a only shows the main components of the message forwarding system 300 . The message forwarding system 300 may include a memory 301 and a memory allocator 302 . There is a communication connection between the memory 301 and the memory allocator 302 .
其中,存储器301,主要用于提供本地内存,如存储报文的内存片。示例性的, 存储器301可以是只读存储器(read only memory,ROM),也可以是随机存储器(random access memory,RAM)。其中,RAM可以是同步动态随机存储器(synchronous dynamic random access memory,SDRAM)、双倍速率同步动态随机存储器(double data rate synchronous dynamic random access memory,DDR SDRAM)等。Among them, the memory 301 is mainly used to provide local memory, such as a memory slice for storing messages. Exemplarily, the memory 301 may be a read-only memory (read only memory, ROM) or a random access memory (random access memory, RAM). The RAM may be synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), or the like.
内存分配器302,主要通过运行或执行软件程序和/或应用模块,来实现对本地内存的管理。例如,在调制解调器接收报文之后,内存分配器302确定存储报文的内存片。再如,在网络接口卡转发报文之后,内存分配器302收回存储报文的内存片。示例性的,内存分配器302可以实现为MAA。The memory allocator 302 manages local memory mainly by running or executing software programs and/or application modules. For example, after the modem receives the message, the memory allocator 302 determines the memory slice in which to store the message. For another example, after the network interface card forwards the message, the memory allocator 302 reclaims the memory slice that stores the message. Illustratively, memory allocator 302 may be implemented as a MAA.
需要说明的是,上述存储器301和内存分配器302可以是分立的器件,也可以合设。例如,在存储器301内部包括内存分配器302的情况下,管理本地内存的软件程序和/或应用模块可以在存储器301的内部运行,实现管理本地内存的功能。在本申请实施例中,以“存储器301和内存分配器302是分立的器件”为例,进行介绍。It should be noted that, the above-mentioned memory 301 and memory allocator 302 may be separate devices, or may be combined. For example, in the case where the memory allocator 302 is included in the memory 301, the software program and/or application module for managing the local memory may run in the memory 301 to implement the function of managing the local memory. In the embodiments of the present application, "the memory 301 and the memory allocator 302 are separate devices" are used as an example for introduction.
可选的,作为一种可能的实现形式,在报文转发系统300转发报文的情况下,图3b示出了本申请实施例报文转发系统300的另一种硬件架构示意图。该报文转发系统300还包括转发引擎303、网络接口卡304、调制解调器305、中央处理器306和总线307。Optionally, as a possible implementation form, in the case that the message forwarding system 300 forwards a message, FIG. 3b shows another schematic diagram of the hardware architecture of the message forwarding system 300 according to the embodiment of the present application. The message forwarding system 300 further includes a forwarding engine 303 , a network interface card 304 , a modem 305 , a central processing unit 306 and a bus 307 .
其中,转发引擎303,主要用于转发报文。例如,转发引擎303向网络接口卡304转发报文,以实现报文在设备之间的传输。或者,转发引擎303向中央处理器306转发报文,以实现报文在设备内部的传输。示例性的,转发引擎303可以实现为专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)、或网络处理器(network processor,NP)。The forwarding engine 303 is mainly used for forwarding packets. For example, the forwarding engine 303 forwards the packet to the network interface card 304, so as to realize the transmission of the packet between devices. Alternatively, the forwarding engine 303 forwards the message to the central processing unit 306, so as to realize the transmission of the message inside the device. Exemplarily, the forwarding engine 303 may be implemented as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or a network processor (NP).
网络接口卡304,主要用于将传输的报文转换为网络中其他设备能够识别的格式,再通过网络介质传输至相应的设备。其中,网络接口卡,也可以描述为网卡、网络接口控制器(network interface controller,NIC)等。The network interface card 304 is mainly used to convert the transmitted message into a format that can be recognized by other devices in the network, and then transmit the message to the corresponding device through the network medium. Among them, the network interface card can also be described as a network card, a network interface controller (network interface controller, NIC), and the like.
调制解调器305,主要用于发送和接收各种报文。示例性的,调制解调器305可以是支持长期演进(long term evolution,LTE)、新无线(new radio,NR)通信制式的调制解调器。The modem 305 is mainly used for sending and receiving various messages. Exemplarily, the modem 305 may be a modem that supports long term evolution (LTE) and new radio (NR) communication standards.
中央处理器306,主要用于运行软件层中的操作系统层和应用程序层。其中,操作系统层包括操作系统程序代码和协议栈。操作系统可以是Linux操作系统。协议栈是指按照通信协议所涉及的不同层级划分,并处理对应层级数据的程序代码的集合。协议栈可以是TCP/IP协议栈。TCP/IP协议栈处理的数据结构是SKB结构。应用程序层包括至少一个应用程序。The central processing unit 306 is mainly used to run the operating system layer and the application layer in the software layer. The operating system layer includes operating system program codes and protocol stacks. The operating system may be a Linux operating system. A protocol stack refers to a collection of program codes that are divided according to different levels involved in a communication protocol and that process data at the corresponding level. The protocol stack may be a TCP/IP protocol stack. The data structure handled by the TCP/IP protocol stack is the SKB structure. The application layer includes at least one application.
总线307,主要用于连接存储器301、内存分配器302、转发引擎303、网络接口卡304、调制解调器305和中央处理器306。总线307可以是外设部件互连标准(peripheral component interconnect,PCI)总线或扩展工业标准结构(extended industry standard architecture,EISA)总线等。所述总线307可以分为地址总线、数据总线、控制总线等。为了便于表示,图3b中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。The bus 307 is mainly used to connect the memory 301, the memory allocator 302, the forwarding engine 303, the network interface card 304, the modem 305, and the central processing unit 306. The bus 307 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus 307 may be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 3b, but this does not mean that there is only one bus or one type of bus.
可以理解的,上述图3a和图3b所示的硬件架构,仅仅是为了更加清楚的说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定。本领域普通技术人员可知,随着硬件架构的演变和新业务场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。It can be understood that the above-mentioned hardware architectures shown in FIG. 3a and FIG. 3b are only for illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation to the technical solutions provided by the embodiments of the present application. Those of ordinary skill in the art know that with the evolution of the hardware architecture and the emergence of new business scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
下面对本申请实施例提供的报文缓存方法进行具体阐述。The packet caching method provided by the embodiment of the present application is specifically described below.
需要说明的是,本申请下述实施例中各个元件之间的消息名字或消息中各参数的名字等只是一个示例,具体实现中也可以是其他的名字,在此统一说明,以下不再赘述。It should be noted that the names of the messages between the elements or the names of the parameters in the messages in the following embodiments of the present application are just an example, and other names may also be used in the specific implementation, which are described here in a unified manner, and will not be repeated below. .
本申请实施例提供一种报文缓存方法,该方法采用第一内存片存储未分片的目标报文。其中,第一内存片包括数据包缓存区、第一数据结构和第二数据结构。数据包缓存区用于承载目标报文。第一数据结构包括第一字段和第二字段,第一字段指示目标报文未分片,第二字段承载第二数据结构。第二数据结构至少指示数据包缓存区在第一内存片中的首地址。第一字段可以描述为“分片状态指示字段”,第二字段可以描述为“分片信息字段”,第二字段是在第一字段指示报文分片的情况下,承载分片信息的字段,如图1所示。第二字段也是在第一字段指示目标报文未分片的情况下,承载第二数据结构的字段,如图4所示。也就是说,在目标报文未分片的情况下,不存在分片信息。第二字段用于承载第二数据结构,不再为第二数据结构申请内存片,以节省内存资源。其中,第二数据结构可以是报文字段管理数据结构,具体参见“SKB结构”部分的介绍,此处不再赘述。An embodiment of the present application provides a message buffering method, in which a first memory slice is used to store an unfragmented target message. The first memory slice includes a data packet buffer area, a first data structure, and a second data structure. The data packet buffer area is used to carry the target message. The first data structure includes a first field and a second field; the first field indicates that the target message is not fragmented, and the second field carries the second data structure. The second data structure at least indicates the first address of the data packet buffer area within the first memory slice. The first field can be described as a "fragmentation status indication field" and the second field as a "fragmentation information field": when the first field indicates that the message is fragmented, the second field is the field that carries fragmentation information, as shown in FIG. 1; when the first field indicates that the target message is not fragmented, the second field is the field that carries the second data structure, as shown in FIG. 4. That is to say, when the target message is not fragmented, there is no fragmentation information. The second field is used to carry the second data structure, and no separate memory slice is applied for to hold the second data structure, thereby saving memory resources. The second data structure may be a message field management data structure; for details, refer to the introduction in the "SKB structure" section, which is not repeated here.
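As a concrete illustration of this reuse, the following C sketch shows one possible shape of the first data structure. All type and field names (first_data_structure, second_data_structure, and so on) are hypothetical and chosen only to mirror the description above; the fragment-array size of 17 is likewise an assumption.

```c
/* Sketch of the first data structure: the slot that would hold fragment
 * descriptors is reused to carry the second data structure when the first
 * field marks the target message as unfragmented. All names are hypothetical. */
#include <stdint.h>

struct second_data_structure {        /* management information for the unfragmented case */
    uint8_t  *buf_head;               /* first address of the data packet buffer area      */
    uint8_t  *data_tail;              /* tail address of the stored message content        */
    uint16_t  mac_header_off;         /* first address (offset) of the MAC header info     */
    uint16_t  ip_header_off;          /* first address (offset) of the IP header info      */
    uint16_t  tcp_header_off;         /* first address (offset) of the TCP header info     */
};

struct frag_desc {
    void     *page;
    uint32_t  offset;
    uint32_t  size;
};

struct first_data_structure {
    uint8_t first_field;              /* 0: target message not fragmented, >0: fragment count */
    union {                           /* second field: interpretation depends on first_field  */
        struct frag_desc             frags[17];  /* fragmented: fragmentation information      */
        struct second_data_structure mgmt;       /* unfragmented: second data structure        */
    } second_field;
};
```

Under such a layout, the space that would otherwise hold fragment descriptors is large enough to hold the management information in the unfragmented case, which is why no extra memory slice is needed.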
示例性的,本申请实施例报文缓存方法应用在报文转发过程中。参见图5,该方法包括如下步骤:Exemplarily, the packet buffering method according to the embodiment of the present application is applied in the packet forwarding process. Referring to Figure 5, the method includes the following steps:
S501、调制解调器接收目标报文。S501. The modem receives the target message.
示例性的,调制解调器接收来自接入网设备的目标报文。在调制解调器和接入网设备支持LTE通信制式的情况下,调制解调器接收的目标报文是满足LTE报文格式的报文。在调制解调器和接入网设备支持NR通信制式的情况下,调制解调器接收的目标报文是满足NR报文格式的报文。Exemplarily, the modem receives the target message from the access network device. In the case where the modem and the access network device support the LTE communication standard, the target message received by the modem is a message that satisfies the LTE message format. In the case where the modem and the access network device support the NR communication standard, the target message received by the modem is a message that satisfies the NR message format.
S502、调制解调器向内存分配器发送目标报文。相应的,内存分配器接收来自调制解调器的目标报文。S502, the modem sends a target message to the memory allocator. Accordingly, the memory allocator receives the target message from the modem.
其中,目标报文即S501中接收的报文。The target message is the message received in S501.
S503、内存分配器确定第一内存片。S503, the memory allocator determines the first memory slice.
其中,第一内存片包括数据包缓存区、第一数据结构和第二数据结构。第一数据结构包括第一字段和第二字段,第一字段指示数据包缓存区中报文的分片状态,第二字段用于承载第二数据结构,如图4所示。其中,第二字段是在报文分片状态下承载分片信息的字段。第二字段的数量可以是一个,也可以是多个。Wherein, the first memory slice includes a data packet buffer area, a first data structure and a second data structure. The first data structure includes a first field and a second field, the first field indicates the fragmentation state of the packet in the data packet buffer, and the second field is used to carry the second data structure, as shown in FIG. 4 . The second field is a field that carries fragmentation information in the packet fragmentation state. The number of the second field may be one or more.
其中,第一数据结构在第一内存片中的位置介绍如下:第一数据结构在第一内存片中的位置可以灵活设置。示例性的,第一数据结构在第一内存片中的位置包括以下其中一项:数据包缓存区之前(图4、图6未示出)、或数据包缓存区之后。当然,第一数据结构也可以在第一内存片的其他位置,本申请实施例对此不作限定。另外,数据包缓存区占用的存储空间与第一数据结构占用的存储空间之间可以是连续的,也可以是不连续的。例如,数据包缓存区与第一数据结构之间间隔的存储空间大于或等于预设值,以支持第一数据结构后续演进,如第一数据结构中增加新的字段的情况下,新的字段可以存储于上述“数据包缓存区与第一数据结构之间间隔的存储空间”。其中,预设值的单位可以是比特、字节等。预设值可以是50~100字节中任意数量的字节,也可以是一定数量的比特。The position of the first data structure in the first memory slice is described as follows. The position of the first data structure in the first memory slice can be set flexibly. Exemplarily, the position of the first data structure in the first memory slice is one of the following: before the data packet buffer area (not shown in FIG. 4 or FIG. 6), or after the data packet buffer area. Certainly, the first data structure may also be located elsewhere in the first memory slice, which is not limited in the embodiments of the present application. In addition, the storage space occupied by the data packet buffer area and the storage space occupied by the first data structure may be contiguous or non-contiguous. For example, the storage space separating the data packet buffer area from the first data structure is greater than or equal to a preset value, so as to support subsequent evolution of the first data structure: if a new field is added to the first data structure, the new field can be stored in the above-mentioned storage space separating the data packet buffer area from the first data structure. The unit of the preset value may be bits, bytes, or the like; the preset value may be any number of bytes in the range of 50 to 100 bytes, or a certain number of bits.
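The flexible placement and the reserved gap can be sketched as follows; GAP_MIN, the helper name, and the layout structure are assumptions used only to make the offset arithmetic concrete (here the first data structure is placed after the data packet buffer area, matching FIG. 4).

```c
/* Sketch of S503: choose a layout for the first memory slice in which the
 * packet buffer comes first and the first data structure is placed after it,
 * separated by a gap of at least GAP_MIN bytes reserved for future fields.
 * The constant, helper, and structure are assumptions for illustration. */
#include <stddef.h>

#define GAP_MIN 64u                   /* preset value, e.g. somewhere in the 50-100 byte range */

struct slice_layout {
    size_t buf_offset;                /* data packet buffer area starts at the slice head      */
    size_t buf_size;                  /* e.g. 1024 bytes, chosen from the target message size  */
    size_t meta_offset;               /* where the first data structure is placed              */
    size_t total_size;                /* total size requested for the first memory slice       */
};

static struct slice_layout plan_first_slice(size_t buf_size, size_t meta_size)
{
    struct slice_layout l;

    l.buf_offset  = 0;
    l.buf_size    = buf_size;
    l.meta_offset = buf_size + GAP_MIN;   /* gap >= preset value */
    l.total_size  = l.meta_offset + meta_size;
    return l;
}
```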
示例性的,参见图6,内存分配器根据目标报文的报文格式和目标报文的数据量大小,在存储器的本地内存中确定第一内存片。第一内存片中数据包缓存区的大小可以是1024KB。Exemplarily, referring to FIG. 6 , the memory allocator determines the first memory slice in the local memory of the memory according to the message format of the target message and the data size of the target message. The size of the data packet buffer in the first memory slice may be 1024KB.
S504、内存分配器采用目标报文填充第一内存片。S504, the memory allocator fills the first memory slice with the target message.
示例性的,参见图7的存储器301中斜线填充的方框,内存分配器采用目标报文填充第一内存片的数据包缓存区。第一内存片的第一字段指示目标报文未分片。第二字段上承载第二数据结构,第二数据结构至少指示数据包缓存区在第一内存片中的首地址。另外,第二数据结构还指示报文的头部信息的首地址,如MAC头信息的首地址、IP头信息的首地址和TCP头信息的首地址,如图7的存储器301中斜线填充的方框所示。Exemplarily, referring to the blocks filled with diagonal lines in the memory 301 in FIG. 7, the memory allocator fills the data packet buffer area of the first memory slice with the target message. The first field of the first memory slice indicates that the target message is not fragmented. The second field carries the second data structure, and the second data structure at least indicates the first address of the data packet buffer area within the first memory slice. In addition, the second data structure also indicates the first addresses of the header information of the message, such as the first address of the MAC header information, the first address of the IP header information, and the first address of the TCP header information, as shown by the diagonally filled blocks in the memory 301 in FIG. 7.
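Continuing the hypothetical types from the sketches above (struct first_data_structure and struct slice_layout), S504 can be pictured as copying the target message into the data packet buffer area and then filling in the first and second fields; the header-offset parameters are assumed to be known from parsing and are illustrative only.

```c
/* Sketch of S504, continuing the hypothetical types from the sketches above.
 * Alignment is ignored for brevity. */
#include <string.h>
#include <stdint.h>

static void fill_first_slice(uint8_t *slice, const struct slice_layout *l,
                             const uint8_t *pkt, size_t pkt_len,
                             uint16_t mac_off, uint16_t ip_off, uint16_t tcp_off)
{
    struct first_data_structure *meta =
        (struct first_data_structure *)(slice + l->meta_offset);

    memcpy(slice + l->buf_offset, pkt, pkt_len);         /* store the target message          */

    meta->first_field = 0;                               /* target message is not fragmented  */
    meta->second_field.mgmt.buf_head       = slice + l->buf_offset;
    meta->second_field.mgmt.data_tail      = slice + l->buf_offset + pkt_len;
    meta->second_field.mgmt.mac_header_off = mac_off;    /* first addresses of header info    */
    meta->second_field.mgmt.ip_header_off  = ip_off;
    meta->second_field.mgmt.tcp_header_off = tcp_off;
}
```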
S505、内存分配器向转发引擎发送头部信息的首地址。相应的,转发引擎接收来自内存分配器的头部信息的首地址。S505, the memory allocator sends the first address of the header information to the forwarding engine. Correspondingly, the forwarding engine receives the first address of the header information from the memory allocator.
其中,头部信息可以例如但不限于MAC头信息、IP头信息、TCP头信息。头部信息的首地址可由第二数据结构来指示,具体参见“SKB结构”部分的相关介绍,此处不再赘述。The header information may be, for example, but not limited to, MAC header information, IP header information, and TCP header information. The first address of the header information may be indicated by the second data structure. For details, please refer to the relevant introduction in the "SKB structure" section, which will not be repeated here.
S506、转发引擎根据头部信息的首地址上承载的头部信息,确定目标报文的转发方向。S506: The forwarding engine determines the forwarding direction of the target packet according to the header information carried on the first address of the header information.
示例性的,转发引擎读取头部信息的首地址上承载的头部信息,如读取MAC头信息,根据MAC头信息确定目标报文的转发方向。若目标报文是发送给其他设备的报文,则转发引擎确定转发方向是网络接口卡的端口,如千兆位媒体存取控制(gigabit media access control,GMAC)端口,转发引擎执行S507。若目标报文是发送给本设备的报文,则转发引擎确定转发方向是中央处理器,转发引擎执行S511。Exemplarily, the forwarding engine reads the header information carried on the first address of the header information, for example, reads the MAC header information, and determines the forwarding direction of the target packet according to the MAC header information. If the target packet is a packet sent to other devices, the forwarding engine determines that the forwarding direction is the port of the network interface card, such as a gigabit media access control (gigabit media access control, GMAC) port, and the forwarding engine executes S507. If the target packet is a packet sent to the device, the forwarding engine determines that the forwarding direction is the central processing unit, and the forwarding engine executes S511.
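One possible way the forwarding engine could make this decision is sketched below; comparing the destination MAC address against the local address is an assumed criterion used purely for illustration, since the text above only states that the decision is made from the MAC header information.

```c
/* Sketch of S506: an assumed forwarding-direction check based on the
 * destination MAC address read from the MAC header. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

enum fwd_dir { FWD_TO_NIC, FWD_TO_CPU };

static enum fwd_dir select_direction(const uint8_t *mac_header, const uint8_t local_mac[6])
{
    /* The destination MAC address occupies the first 6 bytes of the Ethernet header. */
    bool to_local = (memcmp(mac_header, local_mac, 6) == 0);

    return to_local ? FWD_TO_CPU      /* S511: forward to the central processing unit */
                    : FWD_TO_NIC;     /* S507: forward to the network interface card  */
}
```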
作为一种可能的示例,“采用网络接口卡转发目标报文”的情况如图5中“示例一”的虚线框和图7所示,各步骤的介绍如下:As a possible example, the situation of "using the network interface card to forward the target message" is shown in the dotted box of "Example 1" in Figure 5 and Figure 7, and the introduction of each step is as follows:
S507a、转发引擎向网络接口卡发送头部信息的首地址。相应的,网络接口卡接收来自转发引擎的头部信息的首地址。S507a, the forwarding engine sends the first address of the header information to the network interface card. Correspondingly, the network interface card receives the first address of the header information from the forwarding engine.
其中,头部信息的首地址的介绍可以参见S505的相关说明,此处不再赘述。For the introduction of the first address of the header information, reference may be made to the relevant description of S505, which will not be repeated here.
S507b、网络接口卡根据头部信息的首地址,获取目标报文。S507b, the network interface card obtains the target packet according to the first address of the header information.
示例性的,网络接口卡根据头部信息的首地址,确定头部信息的存储地址和数据内容的存储地址。网络接口卡在头部信息的存储地址上读取目标报文的头部信息,在数据内容的存储地址上读取目标报文的数据内容。如此,网络接口卡获取到待转发的目标报文。Exemplarily, the network interface card determines the storage address of the header information and the storage address of the data content according to the first address of the header information. The network interface card reads the header information of the target message from the storage address of the header information, and reads the data content of the target message from the storage address of the data content. In this way, the network interface card acquires the target packet to be forwarded.
S508、网络接口卡通过网络介质向目标设备发送目标报文。相应的,目标设备通过网络介质接收来自网络接口卡的目标报文。S508, the network interface card sends the target message to the target device through the network medium. Correspondingly, the target device receives the target packet from the network interface card through the network medium.
其中,目标设备是目标报文的目的地址对应的设备。The target device is the device corresponding to the destination address of the target packet.
可选的,在网络接口卡执行S508之后,为了进一步提高内存资源的重复利用率,本申请实施例还包括S509和S510:Optionally, after the network interface card executes S508, in order to further improve the repeated utilization of memory resources, this embodiment of the present application further includes S509 and S510:
S509、网络接口卡向内存分配器发送指示信息1。相应的,内存分配器接收来自网络接口卡的指示信息1。S509, the network interface card sends indication information 1 to the memory allocator. Accordingly, the memory allocator receives indication information 1 from the network interface card.
其中,指示信息1指示目标报文已转发。The indication information 1 indicates that the target packet has been forwarded.
S510、内存分配器根据指示信息1,释放第一内存片。S510, the memory allocator releases the first memory slice according to the indication information 1.
示例性的,内存分配器根据指示信息1,删除第一内存片存储的信息,收回第一内存片,以使收回的内存资源存储其他的报文,从而实现内存分配器对内存资源的管理。其中,目标报文删除后的第一内存片,如图7的存储器301中无斜线填充的方框所示。Exemplarily, the memory allocator deletes the information stored in the first memory slice according to the instruction information 1, and reclaims the first memory slice, so that the reclaimed memory resource stores other messages, so as to realize the management of the memory resource by the memory allocator. Wherein, the first memory slice after the deletion of the target message is shown as a box without diagonal lines in the memory 301 in FIG. 7 .
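A minimal sketch of this release path is given below, assuming the memory allocator manages fixed-size memory slices on a simple free list; the pool layout and names are assumptions, and a real allocator such as an MAA would track slices in its own hardware-specific structures.

```c
/* Sketch of S510: return the first memory slice to a free pool so that it
 * can store another message later. */
#include <stddef.h>
#include <string.h>

struct slice_node {
    struct slice_node *next;
};

static struct slice_node *free_pool;            /* head of the allocator's free slice pool  */

static void release_slice(void *slice, size_t slice_size)
{
    struct slice_node *node = (struct slice_node *)slice;

    memset(slice, 0, slice_size);               /* drop the stored message and metadata     */
    node->next = free_pool;                     /* push the slice back onto the pool        */
    free_pool  = node;
}
```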
作为另一种可能的示例,“向本设备的中央处理器转发目标报文”的情况如图5中“示例二”的虚线框和图8所示,各步骤的介绍如下:As another possible example, the situation of "forwarding the target message to the central processing unit of the device" is shown in the dotted box in "Example 2" in Figure 5 and Figure 8, and the steps are described as follows:
S511、转发引擎向中央处理器发送请求消息。相应的,中央处理器接收来自转发引擎的请求消息。S511. The forwarding engine sends a request message to the central processing unit. Correspondingly, the central processor receives the request message from the forwarding engine.
其中,请求消息请求目标内存片,以使第一内存片中存储的信息满足SKB结构。The request message requests the target memory slice, so that the information stored in the first memory slice satisfies the SKB structure.
可选的,请求消息还可以携带头部信息的首地址,以使CPU申请目标内存片之后,根据头部信息的首地址,拷贝目标报文或目标报文的部分信息,以采用SKB结构存储目标报文。其中,SKB结构的介绍可以参见图1的相关说明,此处不再赘述。Optionally, the request message may further carry the first address of the header information, so that after applying for the target memory slice, the CPU copies the target message or part of the information of the target message according to the first address of the header information, so as to store the target message in the SKB structure. For the introduction of the SKB structure, reference may be made to the related description of FIG. 1, which is not repeated here.
S512、中央处理器根据请求消息,确定目标内存片。S512, the central processing unit determines the target memory slice according to the request message.
例如,作为第一种可能的示例,由于第一内存片中包括了数据包缓存区、第一字段和第二字段,所以,中央处理器可以确定目标内存片为第二内存片。其中,第二内存片用于存储报文字段管理数据结构,且不存储数据包缓存区和报文共享信息数据结构。采用报文字段管理数据结构存储第二数据结构的信息。也就是说,满足SKB结构的目标内存片是由两个内存片构成的,即数据包缓存区和报文共享信息数据结构分布于第一内存片,报文字段管理数据结构分布于第二内存片。For example, as a first possible example, because the first memory slice already includes the data packet buffer area, the first field, and the second field, the central processing unit may determine that the target memory slice is a second memory slice. The second memory slice is used to store the message field management data structure, and does not store the data packet buffer area or the message sharing information data structure. The message field management data structure is used to store the information of the second data structure. That is to say, the target memory slices satisfying the SKB structure consist of two memory slices: the data packet buffer area and the message sharing information data structure reside in the first memory slice, and the message field management data structure resides in the second memory slice.
再如,作为第二种可能的示例,中央处理器可以确定目标内存片为第三内存片。其中,第三内存片包括数据包缓存区、报文字段管理数据结构和报文共享信息数据结构。报文字段管理数据结构用于存储第二数据结构的信息。也就是说,满足SKB结构的目标内存片是一个内存片,即第三内存片。For another example, as a second possible example, the central processing unit may determine that the target memory slice is the third memory slice. The third memory slice includes a data packet buffer area, a message field management data structure, and a message sharing information data structure. The message field management data structure is used to store information of the second data structure. That is to say, the target memory slice that satisfies the SKB structure is a memory slice, that is, the third memory slice.
S513、中央处理器采用目标信息填充目标内存片。S513, the central processing unit fills the target memory slice with target information.
例如,作为第一种可能的示例,在目标内存片实现为第二内存片的情况下,目标信息为第二数据结构存储的信息。此种情况下,中央处理器根据请求消息中携带的头部信息,读取第一内存片中第二数据结构(或第二字段)上存储的信息,再存储至第二内存片。For example, as a first possible example, when the target memory slice is implemented as a second memory slice, the target information is information stored in the second data structure. In this case, the central processor reads the information stored in the second data structure (or the second field) in the first memory slice according to the header information carried in the request message, and then stores the information in the second memory slice.
再如,作为第二种可能的示例,在目标内存片实现为第三内存片的情况下,目标信息为第一内存片存储的信息。此种情况下,中央处理器根据请求消息中携带的头部信息,读取第一内存片中存储的信息,再存储至第三内存片。例如,第一内存片的数据包缓存区存储的信息拷贝至第三内存片的数据包缓存区,第一数据结构恢复为报文共享信息数据结构,存储至第三内存片,第二数据结构存储的信息拷贝至第三内存片的报文字段管理数据结构。For another example, as a second possible example, when the target memory slice is implemented as a third memory slice, the target information is the information stored in the first memory slice. In this case, the central processing unit reads the information stored in the first memory slice according to the header information carried in the request message, and then stores it in the third memory slice. For example, the information stored in the data packet buffer area of the first memory slice is copied to the data packet buffer area of the third memory slice, the first data structure is restored to a message sharing information data structure and stored in the third memory slice, and the information stored in the second data structure is copied to the message field management data structure of the third memory slice.
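As an illustration of the first example of S513, the following sketch (reusing the hypothetical types from the earlier sketches: struct pkt_field_mgmt, struct first_data_structure, struct slice_layout) copies the second data structure out of the first memory slice into the separately applied-for management structure. In the second example, the data packet buffer area and the restored sharing-information structure would additionally be copied into the third memory slice. Field and type names remain assumptions.

```c
/* Sketch of the first example of S513: the CPU copies the second data
 * structure from the first memory slice into the management structure held
 * by the second memory slice. */
#include <stdint.h>

static void build_skb_from_first_slice(struct pkt_field_mgmt *mgmt_slice,
                                       uint8_t *first_slice,
                                       const struct slice_layout *l)
{
    const struct first_data_structure *meta =
        (const struct first_data_structure *)(first_slice + l->meta_offset);

    mgmt_slice->head = meta->second_field.mgmt.buf_head;          /* packet buffer head  */
    mgmt_slice->tail = meta->second_field.mgmt.data_tail;         /* stored content tail */
    mgmt_slice->end  = first_slice + l->buf_offset + l->buf_size; /* buffer end address  */
    mgmt_slice->mac_header       = meta->second_field.mgmt.mac_header_off;
    mgmt_slice->network_header   = meta->second_field.mgmt.ip_header_off;
    mgmt_slice->transport_header = meta->second_field.mgmt.tcp_header_off;

    /* Afterwards (S514/S515), the memory allocator deletes the second data
     * structure according to indication information 2, restoring the first
     * data structure to a message sharing information data structure. */
}
```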
In this way, when the central processing unit continues to use the TCP/IP protocol stack, it can still receive and process the message by using the SKB structure, without adapting to the data structure of the first memory slice. In the case where the forwarding engine forwards the target message to the central processing unit of the local device, to further improve the reuse of memory resources, this embodiment of the present application further includes S514 and S515.
S514, the central processing unit sends indication information 2 to the memory allocator. Correspondingly, the memory allocator receives the indication information 2 from the central processing unit.
For example, in the first possible example described above, after the central processing unit copies the information stored in the second data structure to the second memory slice, the central processing unit sends the indication information 2 to the memory allocator. The indication information 2 indicates that the first data structure is restored to the message sharing information data structure.
As another example, in the second possible example described above, after the central processing unit copies the information in the first memory slice to the third memory slice, the central processing unit sends the indication information 2 to the memory allocator. The indication information 2 indicates that the first memory slice is released.
S515, the memory allocator processes the first memory slice according to the indication information 2.
Exemplarily, in the first possible example described above, the memory allocator deletes the second data structure according to the indication information 2, so that the second field carries the fragmentation information. In this case, the first data structure is implemented as the message sharing information data structure.
Exemplarily, in the second possible example described above, the memory allocator releases the first memory slice according to the indication information 2, that is, deletes the information stored in the first memory slice, thereby reclaiming the first memory slice so that the reclaimed memory resource can store another target message. In this way, the memory allocator manages the memory resource. The data packet buffer area after the target message is deleted is shown as the box without diagonal hatching in the memory 301 in FIG. 8.
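The sketch below illustrates S515 under the same layout assumptions; the two branches correspond to the two examples just described, and slice_pool_put() is a hypothetical helper (not named in this application) that returns a slice to the allocator's pool.

```c
#include <string.h>

/* Hypothetical content of indication information 2. */
enum indication2 {
    IND2_RESTORE_SHARED_INFO,   /* first example: restore the first data structure */
    IND2_RELEASE_SLICE          /* second example: release the first memory slice */
};

void slice_pool_put(struct first_mem_slice *slice);   /* hypothetical pool helper */

static void handle_indication2(struct first_mem_slice *slice, enum indication2 ind)
{
    if (ind == IND2_RESTORE_SHARED_INFO) {
        /* Delete the second data structure so that the second field can carry
         * fragmentation information; the first data structure is then used as
         * the message sharing information data structure. */
        memset(&slice->info.u, 0, sizeof(slice->info.u));
    } else {
        /* Delete the information stored in the first memory slice and reclaim
         * the slice so the memory can store another target message. */
        memset(slice, 0, sizeof(*slice));
        slice_pool_put(slice);
    }
}
```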
In the message buffering method provided in this embodiment of the present application, the first memory slice is used to store the target message that is not fragmented. The second field is the field that carries the fragmentation information when the first field indicates that the message is fragmented, and is the field that carries the second data structure when the first field indicates that the target message is not fragmented. In other words, when the target message is not fragmented, no fragmentation information exists, and the second field is used to carry the second data structure, which indicates at least the first address of the data packet buffer area in the first memory slice. Therefore, no separate memory slice needs to be requested for the second data structure, which saves memory resources.
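To make the saving concrete, the sketch below (still under the assumed layouts above) shows the buffering of an unfragmented target message into a first memory slice; slice_pool_get() is a hypothetical helper, and the code is an illustration rather than the claimed implementation.

```c
struct first_mem_slice *slice_pool_get(void);   /* hypothetical: obtain a free slice from the memory */

static struct first_mem_slice *buffer_target_message(const uint8_t *msg, size_t len)
{
    struct first_mem_slice *slice = slice_pool_get();

    if (slice == NULL || len > sizeof(slice->pkt_buf))
        return NULL;

    /* Store the target message in the data packet buffer area. */
    memcpy(slice->pkt_buf, msg, len);

    /* First field: the target message is not fragmented. */
    slice->info.nr_frags = 0;

    /* Second field: no fragmentation information exists, so the field carries
     * the second data structure, which records at least the first address of
     * the data packet buffer area; no separate memory slice is requested. */
    slice->info.u.meta.buf_head = slice->pkt_buf;
    slice->info.u.meta.data_len = (uint32_t)len;

    return slice;
}
```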
The foregoing mainly describes the solutions provided in the embodiments of the present application from the perspective of interaction between the devices. Correspondingly, an embodiment of the present application further provides a message buffering apparatus. The message buffering apparatus may be the memory allocator in the foregoing method embodiments, or a component that can be used in the memory allocator. It can be understood that, to implement the foregoing functions, the message buffering apparatus includes corresponding hardware structures and/or software modules for performing the functions. A person skilled in the art should be readily aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented in this application in a form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
FIG. 9 is a schematic block diagram of a message buffering apparatus provided in an embodiment of the present application. The message buffering apparatus 900 may exist in the form of software, or may be a device or a component (such as a chip system) in a device. The message buffering apparatus 900 includes a communication unit 901 and a processing unit 902.
The communication unit 901 is an interface circuit of the message buffering apparatus 900, and is configured to receive signals from or send signals to other apparatuses. For example, when the message buffering apparatus 900 is implemented as a chip, the communication unit 901 is an interface circuit used by the chip to receive signals from other chips or apparatuses, or an interface circuit used by the chip to send signals to other chips or apparatuses.
The communication unit 901 may include a communication unit for communicating with the memory and a communication unit for communicating with other devices; these communication units may be integrated together or implemented independently.
When the message buffering apparatus 900 is configured to implement the memory resource management function, exemplarily, the communication unit 901 may be configured to support the message buffering apparatus 900 in performing S502, S509, and S514 in FIG. 5, and/or other processes of the solutions described herein; the processing unit 902 may be configured to support the message buffering apparatus 900 in performing S503, S504, S510, and S515 in FIG. 5, and/or other processes of the solutions described herein.
Optionally, the message buffering apparatus 900 may further include a storage unit, configured to store program code and data of the apparatus 900; the data may include, but is not limited to, original data, intermediate data, and the like.
The processing unit 902 may be a processor or a controller, for example, a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various example logical blocks, modules, and circuits described with reference to the disclosure of this application. The processor may alternatively be a combination that implements a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The storage unit may be a memory. The memory may be the memory that provides the first memory slice, or may be a memory different from the memory that provides the first memory slice.
When the processing unit 902 in the message buffering apparatus 900 is implemented as the memory allocator, the storage unit in the message buffering apparatus 900 is implemented as the memory, and the communication unit 901 in the message buffering apparatus 900 is implemented as a communication interface, the message forwarding system in the embodiments of this application may be the system shown in FIG. 3a or FIG. 3b.
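Purely as an illustration of one way the units of the apparatus 900 could be organized in software, the sketch below (building on the earlier sketches) groups a communication interface and a processing interface into one structure; the function-pointer interfaces are assumptions and are not prescribed by this embodiment.

```c
/* Communication unit 901: an interface for receiving signals from and sending
 * signals to other apparatuses (for example, the memory, the NIC, or the CPU). */
struct communication_unit {
    int  (*recv)(void *ctx, void *buf, size_t len);
    int  (*send)(void *ctx, const void *buf, size_t len);
    void *ctx;
};

/* Processing unit 902: operations such as storing a target message and
 * processing indication information (compare S503/S504 and S515 above). */
struct processing_unit {
    struct first_mem_slice *(*store_message)(const uint8_t *msg, size_t len);
    void (*handle_indication)(struct first_mem_slice *slice, enum indication2 ind);
};

/* Message buffering apparatus 900; an optional storage unit is omitted. */
struct message_buffering_apparatus {
    struct communication_unit comm;   /* communication unit 901 */
    struct processing_unit    proc;   /* processing unit 902 */
};
```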
A person of ordinary skill in the art can understand that all or some of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used for implementation, the embodiments may be implemented entirely or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of this application are entirely or partly generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, over a coaxial cable, an optical fiber, or a digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to the computer, or a data storage device, such as a server or a data center, that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples: the division into units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed on multiple network devices. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each of the functional units may exist independently, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus a software functional unit.
Based on the descriptions of the foregoing implementations, a person skilled in the art can clearly understand that this application may be implemented by software plus necessary general-purpose hardware, and certainly may also be implemented by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, a hard disk, or an optical disc of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of this application.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (18)

  1. A message buffering method, applied to a memory allocator of a message forwarding system, wherein the message forwarding system further comprises a modem and a memory, and the method comprises:
    receiving, by the memory allocator, a target message from the modem; and
    storing, by the memory allocator, the target message in a data packet buffer area of a first memory slice, wherein the first memory slice further comprises a first data structure; the first data structure comprises a first field and a second field, the first field indicates that the target message is not fragmented, the second field carries a second data structure, and the second field is a field that carries fragmentation information when the target message is in a fragmented state; the second data structure indicates at least a first address of the data packet buffer area; and the first memory slice is provided by the memory.
  2. The method according to claim 1, wherein a position of the first data structure in the first memory slice comprises one of the following:
    before the data packet buffer area, or after the data packet buffer area.
  3. The method according to claim 1 or 2, wherein a storage space of an interval between the data packet buffer area and the first data structure is greater than or equal to a preset value.
  4. The method according to any one of claims 1 to 3, wherein the method further comprises:
    receiving, by the memory allocator, first indication information from a network interface card (NIC), wherein the first indication information indicates that the target message has been forwarded, and the message forwarding system further comprises the NIC; and
    releasing, by the memory allocator, the first memory slice according to the first indication information.
  5. The method according to any one of claims 1 to 3, wherein the method further comprises:
    receiving, by the memory allocator, second indication information from a central processing unit (CPU), wherein the second indication information indicates that the first data structure is restored to a message sharing information data structure, the CPU is configured to apply for a target memory slice, the target memory slice is used to store a message field management data structure, the message field management data structure indicates at least the first address of the data packet buffer area, and the message forwarding system further comprises the CPU; and
    deleting, by the memory allocator, the second data structure according to the second indication information.
  6. The method according to any one of claims 1 to 3, wherein the method further comprises:
    receiving, by the memory allocator, third indication information from a CPU, wherein the third indication information indicates that the first memory slice is released, the CPU is configured to apply for a target memory slice, the target memory slice is determined based on a socket buffer (SKB) structure, the SKB structure is used to store information stored in the first memory slice, and the message forwarding system further comprises the CPU; and
    releasing, by the memory allocator, the first memory slice according to the third indication information.
  7. A memory allocator, applied to a message forwarding system, wherein the message forwarding system further comprises a modem and a memory;
    the memory allocator is configured to receive a target message from the modem; and
    the memory allocator is further configured to store the target message in a data packet buffer area of a first memory slice, wherein the first memory slice further comprises a first data structure; the first data structure comprises a first field and a second field, the first field indicates that the target message is not fragmented, the second field carries a second data structure, and the second field is a field that carries fragmentation information when the target message is in a fragmented state; the second data structure indicates at least a first address of the data packet buffer area; and the first memory slice is provided by the memory.
  8. The memory allocator according to claim 7, wherein a position of the first data structure in the first memory slice comprises one of the following:
    before the data packet buffer area, or after the data packet buffer area.
  9. The memory allocator according to claim 7 or 8, wherein a storage space of an interval between the data packet buffer area and the first data structure is greater than or equal to a preset value.
  10. The memory allocator according to any one of claims 7 to 9, wherein:
    the memory allocator is further configured to receive first indication information from a network interface card (NIC), wherein the first indication information indicates that the target message has been forwarded, and the message forwarding system further comprises the NIC; and
    the memory allocator is further configured to release the first memory slice according to the first indication information.
  11. The memory allocator according to any one of claims 7 to 9, wherein:
    the memory allocator is further configured to receive second indication information from a central processing unit (CPU), wherein the second indication information indicates that the first data structure is restored to a message sharing information data structure, the CPU is configured to apply for a target memory slice, the target memory slice is used to store a message field management data structure, the message field management data structure indicates at least the first address of the data packet buffer area, and the message forwarding system further comprises the CPU; and
    the memory allocator is further configured to delete the second data structure according to the second indication information.
  12. The memory allocator according to any one of claims 7 to 9, wherein:
    the memory allocator is further configured to receive third indication information from a CPU, wherein the third indication information indicates that the first memory slice is released, the CPU is configured to apply for a target memory slice, the target memory slice is determined based on a socket buffer (SKB) structure, the SKB structure is used to store information stored in the first memory slice, and the message forwarding system further comprises the CPU; and
    the memory allocator is further configured to release the first memory slice according to the third indication information.
  13. A message forwarding system, comprising a modem, a memory allocator, and a memory, wherein:
    the memory allocator is configured to receive a target message from the modem; and
    the memory allocator is further configured to store the target message in a data packet buffer area of a first memory slice, wherein the first memory slice further comprises a first data structure; the first data structure comprises a first field and a second field, the first field indicates that the target message is not fragmented, the second field carries a second data structure, and the second field is a field that carries fragmentation information when the target message is in a fragmented state; the second data structure indicates at least a first address of the data packet buffer area; and the first memory slice is provided by the memory.
  14. The message forwarding system according to claim 13, wherein a position of the first data structure in the first memory slice comprises one of the following:
    before the data packet buffer area, or after the data packet buffer area.
  15. The message forwarding system according to claim 13 or 14, wherein a storage space of an interval between the data packet buffer area and the first data structure is greater than or equal to a preset value.
  16. The message forwarding system according to any one of claims 13 to 15, wherein:
    the memory allocator is further configured to receive first indication information from a network interface card (NIC), wherein the first indication information indicates that the target message has been forwarded, and the message forwarding system further comprises the NIC; and
    the memory allocator is further configured to release the first memory slice according to the first indication information.
  17. The message forwarding system according to any one of claims 13 to 15, wherein:
    the memory allocator is further configured to receive second indication information from a central processing unit (CPU), wherein the second indication information indicates that the first data structure is restored to a message sharing information data structure, the CPU is configured to apply for a target memory slice, the target memory slice is used to store a message field management data structure, the message field management data structure indicates at least the first address of the data packet buffer area, and the message forwarding system further comprises the CPU; and
    the memory allocator is further configured to delete the second data structure according to the second indication information.
  18. The message forwarding system according to any one of claims 13 to 15, wherein:
    the memory allocator is further configured to receive third indication information from a CPU, wherein the third indication information indicates that the first memory slice is released, the CPU is configured to apply for a target memory slice, the target memory slice is determined based on a socket buffer (SKB) structure, the SKB structure is used to store information stored in the first memory slice, and the message forwarding system further comprises the CPU; and
    the memory allocator is further configured to release the first memory slice according to the third indication information.
PCT/CN2021/072495 2021-01-18 2021-01-18 Message buffering method, memory allocator, and message forwarding system WO2022151475A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/072495 WO2022151475A1 (en) 2021-01-18 2021-01-18 Message buffering method, memory allocator, and message forwarding system
CN202180003831.2A CN115176453A (en) 2021-01-18 2021-01-18 Message caching method, memory distributor and message forwarding system


Publications (1)

Publication Number Publication Date
WO2022151475A1 2022-07-21

Family

ID=82446823




Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200296059A1 (en) * 2016-03-11 2020-09-17 Purdue Research Foundation Computer remote indirect memory access system
CN110855610A (en) * 2019-09-30 2020-02-28 视联动力信息技术股份有限公司 Data packet processing method and device and storage medium
CN112231101A (en) * 2020-10-16 2021-01-15 北京中科网威信息技术有限公司 Memory allocation method and device and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116049122A (en) * 2022-08-12 2023-05-02 荣耀终端有限公司 Log information transmission control method, electronic device and storage medium
CN116049122B (en) * 2022-08-12 2023-11-21 荣耀终端有限公司 Log information transmission control method, electronic device and storage medium

Also Published As

Publication number Publication date
CN115176453A (en) 2022-10-11


Legal Events

Code Title Description
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 21918676; Country of ref document: EP; Kind code of ref document: A1)