WO2022151475A1 - Message buffering method, memory allocation device and message forwarding system - Google Patents


Info

Publication number
WO2022151475A1
Authority
WO
WIPO (PCT)
Application number
PCT/CN2021/072495
Other languages
English (en)
Chinese (zh)
Inventor
曹雷
曲吉亮
王心力
敬勇
Original Assignee
华为技术有限公司
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN202180003831.2A priority Critical patent/CN115176453A/zh
Priority to PCT/CN2021/072495 priority patent/WO2022151475A1/fr
Publication of WO2022151475A1 publication Critical patent/WO2022151475A1/fr

Definitions

  • the present application relates to the field of communication technologies, and in particular, to a message caching method, a memory allocator, and a message forwarding system.
  • the SKB structure includes a message field management data structure (sk_buff), a message sharing information data structure (skb_shared_info), and a packet buffer (packet buffer).
  • the message field management data structure stores message management information.
  • the message sharing information data structure stores the fragmentation information of the message.
  • the packet buffer is used to store the packet content.
  • when a packet does not need to be fragmented but the SKB structure is still used to transmit it, many fields in the message sharing information data structure (such as the fields that carry fragmentation information) are not required, yet memory still has to be allocated for these unnecessary fields, resulting in a waste of memory resources.
  • the embodiments of the present application provide a message caching method, a memory allocator, and a message forwarding system, which can save memory resources.
  • an embodiment of the present application provides a message buffering method. The method may be executed by a memory allocator of a message forwarding system, or by a chip applied to such a memory allocator.
  • the message forwarding system also includes a modem and memory.
  • the method includes: the memory allocator receives the target message from the modem. Then, the memory allocator stores the target message in the data packet buffer area of the first memory slice, where the first memory slice further includes a first data structure.
  • the first data structure includes a first field and a second field. The first field indicates that the target message is not fragmented, and the second field carries the second data structure; the second field is the field that would carry fragmentation information if the target message were fragmented. The second data structure at least indicates the first address of the data packet buffer area.
  • the first memory slice is provided by the memory.
  • the first memory slice is used to store the target message, which is not fragmented. Since the second field is the field that carries fragmentation information when the first field indicates packet fragmentation, the same second field can carry the second data structure when the first field indicates that the target packet is not fragmented. That is to say, when the target packet is not fragmented, there is no fragmentation information, and the second field is reused to carry the second data structure.
  • therefore, no separate memory slice needs to be requested for the second data structure, thereby saving memory resources.
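The field-reuse idea above can be illustrated with a small C sketch. All names (`pkt_mgmt`, `slice_meta`, `mem_slice`, `store_unfragmented`) and all sizes are hypothetical illustrations, not taken from the patent or from the Linux kernel:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* "Second data structure": minimal management record that at least
   indicates the first address of the data packet buffer area. */
struct pkt_mgmt {
    uint8_t  *buf_head;  /* first address of the packet buffer area */
    uint32_t  len;       /* bytes of packet content actually stored */
};

/* "First data structure": the first field flags fragmentation; the
   second field is a union reused as either fragment information (when
   fragmented) or the management record (when not fragmented). */
struct slice_meta {
    uint8_t fragmented;  /* first field: 0 = not fragmented */
    union {              /* second field */
        struct { void *page; uint32_t off, size; } frags[4];
        struct pkt_mgmt mgmt;
    } u;
};

/* One memory slice: the packet buffer area followed by the first data
   structure, so no separate slice is requested for pkt_mgmt. */
struct mem_slice {
    uint8_t pkt_buf[2048];
    struct slice_meta meta;
};

static struct pkt_mgmt *store_unfragmented(struct mem_slice *s,
                                           const uint8_t *data, uint32_t len)
{
    memcpy(s->pkt_buf, data, len);         /* target message into buffer */
    s->meta.fragmented = 0;                /* first field: not fragmented */
    s->meta.u.mgmt.buf_head = s->pkt_buf;  /* second field reused */
    s->meta.u.mgmt.len = len;
    return &s->meta.u.mgmt;
}
```

Because the management record lives inside the slice's own metadata, the separate allocation that the method aims to avoid never happens.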
  • the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area. That is, the position of the first data structure in the first memory slice can be flexibly set.
  • the storage space reserved between the data packet buffer area and the first data structure is greater than or equal to a preset value.
  • the unit of the preset value may be bits, bytes, or the like.
  • the preset value can be any number of bytes from 50 to 100, or a certain number of bits, to support the subsequent evolution of the first data structure. For example, when a new field is added to the first data structure, the new field can be stored in this reserved storage space.
  • the packet caching method further includes: the memory allocator receives the first indication information from the network interface card NIC.
  • the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes a NIC. Then, the memory allocator releases the first memory slice according to the first indication information.
  • the memory allocator can also recover the first memory slice according to the first indication information to store other messages, thereby improving the utilization rate of memory resources.
  • the message caching method further includes: the memory allocator receives the second indication information from the central processing unit CPU.
  • the second indication information indicates that the first data structure is restored to the message sharing information data structure. The CPU is used to apply for the target memory slice, the target memory slice is used to store the message field management data structure, and the message field management data structure at least indicates the first address of the data packet buffer area.
  • the packet forwarding system also includes the CPU. Then, the memory allocator deletes the second data structure according to the second indication information.
  • the memory allocator can also delete the second data structure according to the second indication information, so that the first data structure is restored to the message sharing information data structure to store the fragmented message.
  • the message buffering method further includes: the memory allocator receives third indication information from the CPU.
  • the third indication information indicates the release of the first memory slice. The CPU is used to apply for the target memory slice, the target memory slice is determined based on the socket buffer (SKB) structure, and the SKB structure is used to store the information stored in the first memory slice.
  • the message forwarding system also includes a CPU. Then, the memory allocator releases the first memory slice according to the third indication information.
  • the memory allocator can also release the first memory slice, so that the recovered memory slice can store other messages, thereby improving the utilization of memory resources.
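The three kinds of indication information above suggest a simple dispatch in the allocator. The sketch below is a hedged illustration; the enum values and action names are assumptions, not the patent's actual signaling:

```c
#include <assert.h>

/* Hypothetical indication codes (illustrative values only). */
enum indication {
    IND_FORWARDED = 1,  /* from NIC: target message has been forwarded */
    IND_RESTORE   = 2,  /* from CPU: restore first data structure */
    IND_RELEASE   = 3   /* from CPU: release the first memory slice */
};

enum action { ACT_FREE_SLICE, ACT_DROP_MGMT, ACT_NONE };

/* Map an indication to the allocator's action as described above:
   the first and third indications free the slice for reuse; the
   second deletes the second data structure so the metadata reverts
   to the message-sharing-information layout. */
static enum action handle_indication(enum indication ind)
{
    switch (ind) {
    case IND_FORWARDED:
    case IND_RELEASE:
        return ACT_FREE_SLICE;
    case IND_RESTORE:
        return ACT_DROP_MGMT;
    default:
        return ACT_NONE;
    }
}
```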
  • an embodiment of the present application provides a message buffering device, where the message buffering device is located in a message forwarding system.
  • the message forwarding system also includes a modem and memory.
  • the message buffering device includes: a communication unit and a processing unit. Among them, the communication unit is used to receive the target message from the modem.
  • the processing unit is used to store the target message in the data packet buffer area of the first memory slice, wherein the first memory slice further includes a first data structure. The first data structure includes a first field and a second field: the first field indicates that the target message is not fragmented, and the second field carries the second data structure; the second field is the field that would carry fragmentation information in the fragmented state of the target message. The second data structure at least indicates the first address of the data packet buffer area.
  • the first memory slice is provided by the memory.
  • the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
  • the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
  • the communication unit is further configured to receive the first indication information from the network interface card NIC.
  • the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes a NIC.
  • the processing unit is further configured to release the first memory slice according to the first indication information.
  • the communication unit is further configured to receive the second indication information from the central processing unit CPU.
  • the second indication information indicates that the first data structure is restored to the message sharing information data structure. The CPU is used to apply for the target memory slice, the target memory slice is used to store the message field management data structure, and the message field management data structure at least indicates the first address of the data packet buffer area.
  • the packet forwarding system also includes the CPU.
  • the processing unit is further configured to delete the second data structure according to the second indication information.
  • the communication unit is further configured to receive third indication information from the CPU.
  • the third indication information indicates the release of the first memory slice.
  • the CPU is used to apply for the target memory slice.
  • the target memory slice is determined based on the socket buffer (SKB) structure.
  • the SKB structure is used to store the information stored in the first memory slice.
  • the message forwarding system also includes a CPU.
  • the processing unit is further configured to release the first memory slice according to the third indication information.
  • an embodiment of the present application provides a message buffering device, including a processor and an interface circuit, where the processor is configured to communicate with other devices through the interface circuit and execute the message caching method of the first aspect or any one of the first aspect.
  • the processor includes one or more.
  • an embodiment of the present application provides a message buffering device, including a processor that is connected to a memory and used to call a program stored in the memory to execute the message caching method of the first aspect or any one of the first aspects.
  • the memory may be located within the message buffering device, or may be located outside the message buffering device.
  • the processor includes one or more.
  • an embodiment of the present application provides a message caching device, including at least one processor and at least one memory, where the at least one processor is configured to execute the first aspect or the message caching method of any one of the first aspects.
  • an embodiment of the present application provides a memory allocator, which is applied to a message forwarding system.
  • the message forwarding system also includes a modem and memory.
  • the memory allocator is used to receive the target message from the modem.
  • the memory allocator is further configured to store the target message in the data packet buffer area of the first memory slice, wherein the first memory slice further includes a first data structure. The first data structure includes a first field and a second field: the first field indicates that the target packet is not fragmented, and the second field carries the second data structure; the second field is the field that would carry fragmentation information in the fragmented state of the target packet. The second data structure at least indicates the first address of the data packet buffer area.
  • the first memory slice is provided by the memory.
  • the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
  • the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
  • the memory allocator is further configured to receive first indication information from the network interface card NIC, where the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes the NIC.
  • the memory allocator is further configured to release the first memory slice according to the first indication information.
  • the memory allocator is further configured to receive second indication information from the central processing unit CPU, wherein the second indication information indicates that the first data structure is restored to the message sharing information data structure, and the CPU is used to apply for the target memory slice;
  • the target memory slice is used to store the message field management data structure, and the message field management data structure at least indicates the first address of the data packet buffer area;
  • the message forwarding system further includes a CPU.
  • the memory allocator is further configured to delete the second data structure according to the second indication information.
  • the memory allocator is further configured to receive third indication information from the CPU, where the third indication information indicates to release the first memory slice, the CPU is used to apply for the target memory slice, and the target memory slice is determined based on the socket buffer (SKB) structure;
  • the SKB structure is used to store the information stored in the first memory slice; the message forwarding system further includes a CPU.
  • the memory allocator is further configured to release the first memory slice according to the third indication information.
  • embodiments of the present application provide a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions run on a computer, the computer can execute the message caching method of the first aspect or any one of the first aspects.
  • the embodiments of the present application provide a computer program product including instructions, which, when running on a computer, enables the computer to execute the first aspect or the message caching method of any one of the first aspects.
  • an embodiment of the present application provides a circuit system, where the circuit system includes a processing circuit, and the processing circuit is configured to execute the message caching method according to any one of the first aspect or the first aspect.
  • an embodiment of the present application provides a chip, where the chip includes a logic circuit and an input and output interface.
  • the input and output interfaces are used for communication with modules other than the chip.
  • the chip may be a chip that implements the function of the memory allocator in the first aspect or any possible design of the first aspect.
  • the input/output interface is used to input the target packet.
  • the logic circuit is used to run a computer program or instructions to implement the message buffering method in the first aspect or any possible design of the first aspect.
  • an embodiment of the present application provides a message forwarding system, where the system includes a modem, a memory allocator, and a memory.
  • the memory allocator is used to receive the target message from the modem.
  • the memory allocator is also used for storing the target message in the data packet buffer area of the first memory slice.
  • the first memory slice further includes a first data structure. The first data structure includes a first field and a second field: the first field indicates that the target message is not fragmented, and the second field carries the second data structure; the second field is the field that would carry fragmentation information in the target packet's fragmented state. The second data structure at least indicates the first address of the data packet buffer area, and the first memory slice is provided by the memory.
  • the position of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area, or after the data packet buffer area.
  • the storage space spaced between the data packet buffer and the first data structure is greater than or equal to a preset value.
  • the memory allocator is further configured to receive first indication information from the network interface card NIC, where the first indication information indicates that the target message has been forwarded, and the message forwarding system further includes the NIC.
  • the memory allocator is further configured to release the first memory slice according to the first indication information.
  • the memory allocator is further configured to receive second indication information from the central processing unit CPU, wherein the second indication information indicates that the first data structure is restored to the message sharing information data structure, and the CPU is used to apply for the target memory slice;
  • the target memory slice is used to store the message field management data structure, and the message field management data structure at least indicates the first address of the data packet buffer area;
  • the message forwarding system further includes a CPU.
  • the memory allocator is further configured to delete the second data structure according to the second indication information.
  • the memory allocator is further configured to receive third indication information from the CPU, where the third indication information indicates to release the first memory slice, the CPU is used to apply for the target memory slice, and the target memory slice is determined based on the socket buffer (SKB) structure;
  • the SKB structure is used to store the information stored in the first memory slice; the message forwarding system further includes a CPU.
  • the memory allocator is further configured to release the first memory slice according to the third indication information.
  • FIG. 1 is a schematic diagram of a socket buffer (SKB) structure provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a message forwarding process according to an embodiment of the present application
  • FIG. 3a is a schematic diagram of the hardware architecture of a message forwarding system provided by an embodiment of the present application.
  • FIG. 3b is a schematic diagram of the hardware architecture of still another message forwarding system provided by an embodiment of the application.
  • FIG. 4 is a schematic structural diagram of a memory slice according to an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a message caching method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a process of packet buffering provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a message forwarding process according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of still another packet forwarding process provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a message buffering apparatus provided by an embodiment of the present application.
  • after a packet is segmented, each segment is called a packet fragment.
  • when a packet is directly forwarded by hardware without being transmitted through the TCP/IP protocol stack, the packet does not need to be fragmented.
  • the buffer area is the storage space for buffering packets.
  • the buffer area may be a storage space in a network interface card (NIC), or may be a storage space divided from other memories in the device where the network interface card is located, and is used to implement the function of buffering messages.
  • the SKB structure is a data structure in the transmission control protocol/internet protocol (TCP/IP) stack.
  • an SKB structure includes a message field management data structure (sk_buff), a message shared information data structure (skb_shared_info), and a data packet buffer area (packet buffer), as shown in Figure 1.
  • the message field management data structure is the main data structure, and the size is usually 512 bytes.
  • the fields in the message field management data structure can be, for example, but not limited to, the following fields:
  • the next data structure pointer field indicates the address of the message field management data structure that follows the current one.
  • the previous (prev) data structure pointer field indicates the address of the message field management data structure that precedes the current one.
  • the header field is used to store the first address of the data packet buffer area.
  • the tail (tail) field is used to store the tail address of the actually stored packet content in the data packet buffer area.
  • the end field is used to store the end address of the data packet buffer area.
  • the message field management data structure also indicates the first address of the MAC header information, the first address of the IP header information, and the first address of the TCP header information.
  • the data packet buffer area is used to store the content of the message. Among them, the size of the packet buffer is determined based on the data volume of the packet content.
  • a media access control (media access control, MAC) header field is used to store control information of the MAC layer.
  • IP header information field is used to store information such as a protocol version (version) number and an Internet header length (IHL).
  • the TCP header information field is used to store information such as the source port number and the destination port number.
  • the payload part is used to store the content of the message.
  • the message sharing information data structure stores the fragmentation information of the message, and the size is usually 360 bytes.
  • the introduction of the message sharing information data structure is as follows:
  • the fragmentation status indication field indicates the fragmentation status of the packet.
  • the fragmentation information field carries fragmentation information of the packet.
  • when the fragmentation status indication field indicates that the packet in the data packet buffer area is a fragmented packet, the fragmentation information field carries the fragmentation information of the packet.
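The fields described above can be summarized in a simplified C sketch. The field names follow the description (next/prev pointers, head/tail/end, fragmentation status), but the types, the fragment-slot count, and the `tailroom` helper are illustrative assumptions, not the actual Linux definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified message field management data structure (sk_buff-like). */
struct skb_like {
    struct skb_like *next;   /* next management structure */
    struct skb_like *prev;   /* previous management structure */
    unsigned char *head;     /* first address of the packet buffer area */
    unsigned char *tail;     /* tail of the actually stored content */
    unsigned char *end;      /* end address of the packet buffer area */
    unsigned char *mac_header, *ip_header, *tcp_header;
};

/* Simplified message sharing information data structure. */
struct shared_info_like {
    unsigned int frag_status;                                /* status */
    struct { void *page; unsigned int off, size; } frags[4]; /* frag info */
};

/* Free space remaining after the stored content (end - tail). */
static size_t tailroom(const struct skb_like *skb)
{
    return (size_t)(skb->end - skb->tail);
}
```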
  • the SKB structure further includes a MAA headroom, as shown in FIG. 1 .
  • the MAA headroom is used to store the information of the MAA.
  • after the modem receives a message, the modem applies for a memory slice to buffer the message. Then, the modem sends the buffered message to the central processing unit to realize the local forwarding of the message.
  • the specific process is shown by the dotted arrow in FIG. 2 .
  • the modem sends buffered messages to other devices through a universal serial bus (USB) interface, so as to realize the transmission of messages between different devices.
  • the specific process is shown by the solid arrows in FIG. 2 .
  • the packets transmitted in FIG. 2 all adopt the SKB structure.
  • the size of the data packet buffer area in the SKB structure can be 1024KB or 2048KB.
  • in this case, the modem needs more than 50,000 message field management data structures.
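Back-of-envelope arithmetic with the sizes quoted earlier (a 512-byte sk_buff and a 360-byte skb_shared_info per packet) makes the scale of the overhead concrete. The totals below are illustrative calculations, not figures stated in the patent:

```c
#include <assert.h>

/* With N buffered packets, separately allocated sk_buff and
   skb_shared_info structures consume N * (512 + 360) bytes of
   metadata on top of the packet buffers themselves. */
static long metadata_bytes(long packets)
{
    const long sk_buff_size     = 512;  /* per the description above */
    const long shared_info_size = 360;  /* per the description above */
    return packets * (sk_buff_size + shared_info_size);
}
```

For 50,000 packets this works out to roughly 43.6 MB of metadata, which is the kind of cost the slice-embedded layout is meant to avoid.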
  • an embodiment of the present application provides a message caching method, and the message caching method of the embodiment of the present application is applicable to various devices, such as mobile phones, tablet computers, desktop computers, laptop computers, ultra-mobile personal computers (UMPC), handheld computers, netbooks, personal digital assistants (PDA), servers, network equipment, and the like.
  • FIG. 3a is a schematic diagram of a hardware architecture of a message forwarding system 300 according to an embodiment of the present application.
  • the message forwarding system 300 may include a memory 301 and a memory allocator 302 . There is a communication connection between the memory 301 and the memory allocator 302 .
  • the memory 301 is mainly used to provide local memory, such as a memory slice for storing messages.
  • the memory 301 may be a read-only memory (read only memory, ROM) or a random access memory (random access memory, RAM).
  • the RAM may be synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), or the like.
  • the memory allocator 302 manages local memory mainly by running or executing software programs and/or application modules. For example, after the modem receives the message, the memory allocator 302 determines the memory slice in which to store the message. For another example, after the network interface card forwards the message, the memory allocator 302 reclaims the memory slice that stores the message. Illustratively, memory allocator 302 may be implemented as a MAA.
  • the above-mentioned memory 301 and memory allocator 302 may be separate devices, or may be combined.
  • the software program and/or application module for managing the local memory may run in the memory 301 to implement the function of managing the local memory.
  • in the following, the case in which "the memory 301 and the memory allocator 302 are separate devices" is used as an example for introduction.
  • FIG. 3b shows another schematic diagram of the hardware architecture of the message forwarding system 300 according to the embodiment of the present application.
  • the message forwarding system 300 further includes a forwarding engine 303 , a network interface card 304 , a modem 305 , a central processing unit 306 and a bus 307 .
  • the forwarding engine 303 is mainly used for forwarding packets. For example, the forwarding engine 303 forwards the packet to the network interface card 304, so as to realize the transmission of the packet between devices. Alternatively, the forwarding engine 303 forwards the message to the central processing unit 306, so as to realize the transmission of the message inside the device. Exemplarily, the forwarding engine 303 may be implemented as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or a network processor (NP).
  • the network interface card 304 is mainly used to convert the transmitted message into a format that can be recognized by other devices in the network, and then transmit the message to the corresponding device through the network medium.
  • the network interface card can also be described as a network card, a network interface controller (network interface controller, NIC), and the like.
  • the modem 305 is mainly used for sending and receiving various messages.
  • the modem 305 may be a modem that supports long term evolution (LTE) and new radio (NR) communication standards.
  • the central processing unit 306 is mainly used to run the operating system layer and the application layer in the software layer.
  • the operating system layer includes operating system program codes and protocol stacks.
  • the operating system may be a Linux operating system.
  • a protocol stack refers to a collection of program codes that are divided according to different levels involved in a communication protocol and that process data at the corresponding level.
  • the protocol stack may be a TCP/IP protocol stack.
  • the data structure handled by the TCP/IP protocol stack is the SKB structure.
  • the application layer includes at least one application.
  • the bus 307 is mainly used to connect the memory 301, the memory allocator 302, the forwarding engine 303, the network interface card 304, the modem 305 and the central processing unit 306.
  • the bus 307 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like.
  • the bus 307 can be divided into an address bus, a data bus, a control bus, and the like. For ease of presentation, only one thick line is used in FIG. 3b, but it does not mean that there is only one bus or only one type of bus.
  • FIG. 3a and FIG. 3b are only for illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute a limitation to the technical solutions provided by the embodiments of the present application.
  • Those of ordinary skill in the art know that with the evolution of the hardware architecture and the emergence of new business scenarios, the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • the embodiment of the present application provides a message caching method, and the method uses a first memory slice to store an unfragmented target message.
  • the first memory slice includes a data packet buffer area, a first data structure and a second data structure.
  • the packet buffer is used to carry the target packet.
  • the first data structure includes a first field and a second field, the first field indicates that the target packet is not fragmented, and the second field carries the second data structure.
  • the second data structure at least indicates the first address of the data packet buffer in the first memory slice.
  • the first field can be described as a "fragmentation status indication field", and the second field can be described as a "fragmentation information field".
  • the second field is the field that carries fragmentation information when the first field indicates packet fragmentation, as shown in Figure 1.
  • the second field is also the field that carries the second data structure when the first field indicates that the target packet is not fragmented, as shown in FIG. 4. That is, when the target packet is not fragmented, there is no fragmentation information.
  • the second field is reused to carry the second data structure, and no separate memory slice is requested for the second data structure, so as to save memory resources.
  • the second data structure may be a message field management data structure. For details, please refer to the introduction in the "SKB structure" section, which will not be repeated here.
  • the packet buffering method according to the embodiment of the present application is applied in the packet forwarding process.
  • the method includes the following steps:
  • the modem receives the target message.
  • the modem receives the target message from the access network device.
  • the target message received by the modem is a message that satisfies the LTE message format.
  • the target message received by the modem is a message that satisfies the NR message format.
  • the modem sends a target message to the memory allocator. Accordingly, the memory allocator receives the target message from the modem.
  • the target message is the message received in S501.
  • the memory allocator determines the first memory slice.
  • the first memory slice includes a data packet buffer area, a first data structure and a second data structure.
  • the first data structure includes a first field and a second field, the first field indicates the fragmentation state of the packet in the data packet buffer, and the second field is used to carry the second data structure, as shown in FIG. 4 .
  • the second field is a field that carries fragmentation information in the packet fragmentation state. The number of the second field may be one or more.
  • the location of the first data structure in the first memory slice can be flexibly set.
  • the location of the first data structure in the first memory slice includes one of the following items: before the data packet buffer area (not shown in FIG. 4 and FIG. 6 ), or after the data packet buffer area.
  • the first data structure may also be located in other locations of the first memory slice, which is not limited in this embodiment of the present application.
  • the storage space occupied by the data packet buffer area and the storage space occupied by the first data structure may be continuous or discontinuous.
  • the storage space between the data packet buffer area and the first data structure is greater than or equal to a preset value to support the subsequent evolution of the first data structure.
  • when a new field is added to the first data structure, the new field can be stored in the above-mentioned "storage space spaced between the data packet buffer area and the first data structure".
  • the unit of the preset value may be bits, bytes, or the like.
  • the preset value may be any value from 50 bytes to 100 bytes, or may be a corresponding number of bits.
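The reserved-gap layout can be sketched as follows. The buffer size and gap value are assumptions chosen for illustration (the gap is within the 50-100 byte range mentioned above); the names are not from the patent.

```c
#include <stddef.h>

/* Sketch: place the data packet buffer area first and the first data
 * structure after it, separated by a reserved gap, so that fields added
 * in later evolutions of the first data structure can live in the gap
 * without changing the overall slice layout. */

#define PKT_BUF_SIZE 2048u   /* illustrative packet buffer size */
#define RESERVED_GAP 64u     /* assumed preset value within 50..100 bytes */

struct slice_layout {
    size_t pkt_buf_off;      /* offset of the data packet buffer area */
    size_t first_ds_off;     /* offset of the first data structure */
    size_t total_size;       /* total size of the first memory slice */
};

static struct slice_layout layout_first_slice(size_t first_ds_size)
{
    struct slice_layout l;
    l.pkt_buf_off  = 0;                            /* buffer placed first */
    l.first_ds_off = PKT_BUF_SIZE + RESERVED_GAP;  /* gap >= preset value */
    l.total_size   = l.first_ds_off + first_ds_size;
    return l;
}
```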
  • the memory allocator determines the first memory slice in the local memory of the memory according to the message format of the target message and the data size of the target message.
  • the size of the data packet buffer in the first memory slice may be 1024KB.
  • the memory allocator fills the first memory slice with the target message.
  • the memory allocator uses the target message to fill the data packet buffer area of the first memory slice.
  • the first field of the first memory slice indicates that the target packet is not fragmented.
  • the second field carries a second data structure, and the second data structure at least indicates the first address of the data packet buffer area in the first memory slice.
  • the second data structure also indicates the first address of the header information of the message, such as the first address of the MAC header information, the first address of the IP header information, and the first address of the TCP header information, as shown by the boxes filled with slashes in the memory 301 of FIG. 7.
  • the memory allocator sends the first address of the header information to the forwarding engine.
  • the forwarding engine receives the first address of the header information from the memory allocator.
  • the header information may be, for example, but not limited to, MAC header information, IP header information, and TCP header information.
  • the first address of the header information may be indicated by the second data structure. For details, please refer to the relevant introduction in the "SKB structure" section, which will not be repeated here.
  • the forwarding engine determines the forwarding direction of the target packet according to the header information carried on the first address of the header information.
  • the forwarding engine reads the header information carried at the first address of the header information, for example, reads the MAC header information, and determines the forwarding direction of the target packet according to the MAC header information. If the target packet is a packet sent to another device, the forwarding engine determines that the forwarding direction is the port of the network interface card, such as a gigabit media access control (GMAC) port, and the forwarding engine executes S507. If the target packet is a packet sent to this device, the forwarding engine determines that the forwarding direction is the central processing unit, and the forwarding engine executes S511.
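The forwarding decision of S506 can be sketched as a comparison of the destination MAC address against the local device's address. The function and constant names are assumptions for illustration; a real forwarding engine would consult its forwarding tables rather than a single comparison.

```c
#include <stdint.h>
#include <string.h>

/* Sketch: decide the forwarding direction from the MAC header read at
 * the first address of the header information. A packet addressed to
 * this device goes to the CPU (S511); otherwise it goes out through the
 * network interface card, e.g. a GMAC port (S507). */

enum fwd_dir { FWD_TO_NIC, FWD_TO_CPU };

static enum fwd_dir forward_direction(const uint8_t *mac_hdr,
                                      const uint8_t local_mac[6])
{
    /* the first 6 bytes of an Ethernet header are the destination MAC */
    if (memcmp(mac_hdr, local_mac, 6) == 0)
        return FWD_TO_CPU;   /* packet sent to this device */
    return FWD_TO_NIC;       /* packet sent to another device */
}
```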
  • the forwarding engine sends the first address of the header information to the network interface card.
  • the network interface card receives the first address of the header information from the forwarding engine.
  • the network interface card obtains the target packet according to the first address of the header information.
  • the network interface card determines the storage address of the header information and the storage address of the data content according to the first address of the header information.
  • the network interface card reads the header information of the target message from the storage address of the header information, and reads the data content of the target message from the storage address of the data content. In this way, the network interface card acquires the target packet to be forwarded.
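The gather step of S508 can be sketched as two copies into a transmit buffer, one from the storage address of the header information and one from the storage address of the data content. Names and the flat-copy model are simplifying assumptions; real NICs typically gather via DMA descriptors.

```c
#include <stdint.h>
#include <string.h>

/* Sketch: reassemble the target packet to be forwarded by copying the
 * header information and then the data content into a transmit buffer,
 * given the addresses derived from the first address of the header. */

static size_t nic_gather_packet(const uint8_t *hdr_addr, size_t hdr_len,
                                const uint8_t *data_addr, size_t data_len,
                                uint8_t *tx_buf)
{
    memcpy(tx_buf, hdr_addr, hdr_len);             /* header information */
    memcpy(tx_buf + hdr_len, data_addr, data_len); /* data content */
    return hdr_len + data_len;                     /* bytes to transmit */
}
```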
  • the network interface card sends the target message to the target device through the network medium.
  • the target device receives the target packet from the network interface card through the network medium.
  • the target device is the device corresponding to the destination address of the target packet.
  • this embodiment of the present application further includes S509 and S510:
  • the network interface card sends indication information 1 to the memory allocator. Accordingly, the memory allocator receives indication information 1 from the network interface card.
  • the indication information 1 indicates that the target packet has been forwarded.
  • the memory allocator releases the first memory slice according to the indication information 1.
  • the memory allocator deletes the information stored in the first memory slice according to the indication information 1 and reclaims the first memory slice, so that the reclaimed memory resource can store other messages, thereby realizing the management of memory resources by the memory allocator.
  • the first memory slice after the deletion of the target message is shown as a box without diagonal lines in the memory 301 in FIG. 7 .
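The release-and-reclaim behavior of S510 can be sketched with a small free list. The fixed slice pool, sizes, and names below are assumptions for illustration; the point shown is that a released slice is wiped and becomes available for other messages.

```c
#include <stddef.h>
#include <string.h>

/* Sketch: releasing a first memory slice deletes its stored information
 * and pushes it onto the allocator's free list, so the reclaimed memory
 * resource can store other messages. */

#define SLICE_SIZE 256u
#define NUM_SLICES 4u

static unsigned char g_slices[NUM_SLICES][SLICE_SIZE];  /* slice pool */
static void *g_free_list[NUM_SLICES];                   /* reclaimed slices */
static size_t g_free_top;                               /* free-list depth */

static void release_slice(void *slice)
{
    memset(slice, 0, SLICE_SIZE);        /* delete the stored information */
    g_free_list[g_free_top++] = slice;   /* reclaim the slice for reuse */
}

static void *alloc_slice(void)
{
    return g_free_top ? g_free_list[--g_free_top] : NULL;
}
```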
  • the forwarding engine sends a request message to the central processing unit.
  • the central processor receives the request message from the forwarding engine.
  • the request message requests the target memory slice, so that the information stored in the first memory slice satisfies the SKB structure.
  • the request message may also carry the first address of the header information, so that after the central processing unit applies for the target memory slice, it can, according to the first address of the header information, copy the target message or part of the information of the target message, so as to store the target message using the SKB structure.
  • for the introduction of the SKB structure, reference may be made to the relevant description of FIG. 1, which will not be repeated here.
  • the central processing unit determines the target memory slice according to the request message.
  • the central processing unit may determine that the target memory slice is the second memory slice.
  • the second memory slice is used to store the message field management data structure, and does not store the data packet buffer area and the message sharing information data structure.
  • the message field management data structure is used to store the information of the second data structure. That is to say, the target memory slice that satisfies the SKB structure is composed of two memory slices: the data packet buffer area and the message sharing information data structure are distributed in the first memory slice, and the message field management data structure is distributed in the second memory slice.
  • the central processing unit may determine that the target memory slice is the third memory slice.
  • the third memory slice includes a data packet buffer area, a message field management data structure, and a message sharing information data structure.
  • the message field management data structure is used to store information of the second data structure. That is to say, the target memory slice that satisfies the SKB structure is a memory slice, that is, the third memory slice.
  • the central processing unit fills the target memory slice with target information.
  • the target information is information stored in the second data structure.
  • the central processor reads the information stored in the second data structure (or the second field) in the first memory slice according to the first address of the header information carried in the request message, and then stores the information in the second memory slice.
  • the target information is the information stored in the first memory slice.
  • the central processor reads the information stored in the first memory slice according to the first address of the header information carried in the request message, and then stores the information in the third memory slice.
  • the information stored in the data packet buffer area of the first memory slice is copied to the data packet buffer area of the third memory slice, the first data structure is restored to the message sharing information data structure and stored in the third memory slice, and the information stored in the second data structure is copied to the message field management data structure of the third memory slice.
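The copy into the third memory slice (S513) can be sketched with deliberately simplified layouts. The struct shapes, the single-pointer management info, and the stub sharing-info field are all assumptions; real SKB conversion would rebuild the full sk_buff and skb_shared_info contents.

```c
#include <stdint.h>
#include <string.h>

/* Sketch: the CPU copies the first slice's packet buffer into the third
 * slice's packet buffer, copies the second data structure's information
 * into the third slice's message field management data structure, and
 * restores a message sharing information area, yielding an SKB-style
 * layout in one memory slice. */

#define BUF_SZ 64u

struct mgmt_info { uint8_t *pkt_buf; };          /* second data structure */

struct first_slice {
    uint8_t buf[BUF_SZ];                         /* data packet buffer */
    struct mgmt_info mgmt;                       /* carried in second field */
};

struct third_slice {
    uint8_t buf[BUF_SZ];                         /* data packet buffer */
    struct mgmt_info field_mgmt;                 /* message field management */
    uint32_t shared_info;                        /* sharing info (stub) */
};

static void to_skb_layout(const struct first_slice *src,
                          struct third_slice *dst)
{
    memcpy(dst->buf, src->buf, BUF_SZ);          /* copy packet content */
    dst->field_mgmt = src->mgmt;                 /* copy management info */
    dst->field_mgmt.pkt_buf = dst->buf;          /* re-point to new buffer */
    dst->shared_info = 0;                        /* restored sharing info */
}
```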
  • the embodiment of the present application further includes S514 and S515:
  • the central processing unit sends instruction information 2 to the memory allocator.
  • the memory allocator receives the instruction information 2 from the central processing unit.
  • after the central processing unit copies the information stored in the second data structure to the second memory slice, the central processing unit sends indication information 2 to the memory allocator.
  • the indication information 2 indicates that the first data structure is restored to the message sharing information data structure.
  • after the central processing unit copies the information of the first memory slice to the third memory slice, the central processing unit sends the indication information 2 to the memory allocator.
  • the indication information 2 indicates to release the first memory slice.
  • the memory allocator processes the first memory slice according to the indication information 2.
  • the memory allocator deletes the second data structure according to the indication information 2, so that the second field carries the fragmentation information.
  • in this way, the first data structure is restored to a message sharing information data structure.
  • the memory allocator releases the first memory slice according to the indication information 2, that is, deletes the information stored in the first memory slice, thereby reclaiming the first memory slice, so that the reclaimed memory resource can store other target messages, thereby realizing the management of memory resources by the memory allocator.
  • the data packet buffer area after the deletion of the target message is shown as a block without diagonal lines in the memory 301 in FIG. 8 .
  • the first memory slice is used to store the unfragmented target message. Since the second field is a field that carries fragmentation information when the first field indicates packet fragmentation, the second field is also a field that carries the second data structure when the first field indicates that the target packet is not fragmented. That is to say, when the target packet is not fragmented, there is no fragmentation information, and the second field is used to carry the second data structure.
  • therefore, there is no need to separately apply for a memory slice for the second data structure, thereby saving memory resources.
  • an embodiment of the present application further provides a message buffering device, and the message buffering device may be the memory allocator in the above method embodiments, or a component that can be used for the memory allocator.
  • the message buffering apparatus includes corresponding hardware structures and/or software modules for executing each function.
  • the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein can be implemented in hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • FIG. 9 shows a schematic block diagram of a packet buffering apparatus provided in an embodiment of the present application.
  • the message buffering apparatus 900 may exist in the form of software, or may be a device, or a component in a device (such as a chip system).
  • the message buffering device 900 includes: a communication unit 901 and a processing unit 902 .
  • the communication unit 901 is an interface circuit of the message buffer device 900, and is used for receiving signals from or sending signals to other devices.
  • the communication unit 901 is an interface circuit used by the chip to receive signals from other chips or devices, or an interface circuit used by the chip to send signals to other chips or devices.
  • the communication unit 901 may include a communication unit for communicating with the memory and a communication unit for communicating with other devices, and these communication units may be integrated together or independently implemented.
  • the communication unit 901 may be used to support the message buffering apparatus 900 to perform S502, S509, and S514 in FIG. 5, and/or other processes for the solutions described herein.
  • the processing unit 902 may be configured to support the message buffering apparatus 900 to perform S503, S504, S510, S515 in FIG. 5, and/or other processes for the solutions described herein.
  • the message buffering apparatus 900 may further include a storage unit for storing program codes and data of the message buffering apparatus 900, and the data may include but is not limited to original data or intermediate data.
  • the processing unit 902 may be a processor or a controller, for example, a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA, or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules and circuits described in connection with this disclosure.
  • a processor may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
  • the storage unit may be a memory.
  • the memory may be the above-mentioned memory that provides the first memory slice, or may be different from the above-mentioned memory that provides the first memory slice.
  • when the processing unit 902 in the message buffering apparatus 900 is implemented as a memory allocator, the storage unit in the message buffering apparatus 900 is implemented as a memory, and the communication unit 901 in the message buffering apparatus 900 is implemented as a communication interface, the involved message forwarding system may be as shown in FIG. 3a or FIG. 3b.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from a website site, computer, server, or data center to another website site, computer, server, or data center by wired (eg, coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (eg, infrared, radio, microwave) means.
  • the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server, a data center, or the like that includes an integration of one or more available media.
  • the available media may be magnetic media (eg, floppy disk, hard disk, magnetic tape), optical media (eg, digital video disc (DVD)), or semiconductor media (eg, solid state disk (SSD)), etc.
  • the disclosed system, apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other division methods in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network devices. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each functional unit may exist independently, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or can be implemented in the form of hardware plus software functional units.
  • the present application can be implemented by means of software plus necessary general-purpose hardware, and of course can also be implemented by hardware, but in many cases the former is a better implementation manner.
  • the technical solutions of the present application, in essence, or the parts that make contributions to the prior art, can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, a hard disk, or an optical disk of a computer, and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the various embodiments of the present application.


Abstract

The present application relates to the field of communication technologies, and provides a message buffering method, a memory allocator, and a message forwarding system, capable of saving memory resources. The method includes the following steps: a memory allocator receives a target message from a modem, and then the memory allocator stores the target message in a data packet buffer area of a first memory slice, the first memory slice further including a first data structure; the first data structure includes a first field and a second field; the first field indicates that the target message is not fragmented; the second field carries a second data structure; the second field is a field that carries fragmentation information in a state in which the target message is fragmented; the second data structure at least indicates a first address of the data packet buffer area; and the first memory slice is provided by a memory.
PCT/CN2021/072495 2021-01-18 2021-01-18 Procédé de mise en mémoire tampon de messages, dispositif d'attribution de mémoire et système de transfert de message WO2022151475A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202180003831.2A CN115176453A (zh) 2021-01-18 2021-01-18 报文缓存方法、内存分配器及报文转发系统
PCT/CN2021/072495 WO2022151475A1 (fr) 2021-01-18 2021-01-18 Procédé de mise en mémoire tampon de messages, dispositif d'attribution de mémoire et système de transfert de message

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/072495 WO2022151475A1 (fr) 2021-01-18 2021-01-18 Procédé de mise en mémoire tampon de messages, dispositif d'attribution de mémoire et système de transfert de message

Publications (1)

Publication Number Publication Date
WO2022151475A1 true WO2022151475A1 (fr) 2022-07-21

Family

ID=82446823

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/072495 WO2022151475A1 (fr) 2021-01-18 2021-01-18 Procédé de mise en mémoire tampon de messages, dispositif d'attribution de mémoire et système de transfert de message

Country Status (2)

Country Link
CN (1) CN115176453A (fr)
WO (1) WO2022151475A1 (fr)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110855610A (zh) * 2019-09-30 2020-02-28 视联动力信息技术股份有限公司 一种数据包的处理方法、装置及存储介质
US20200296059A1 (en) * 2016-03-11 2020-09-17 Purdue Research Foundation Computer remote indirect memory access system
CN112231101A (zh) * 2020-10-16 2021-01-15 北京中科网威信息技术有限公司 内存分配方法、装置及可读存储介质


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116049122A (zh) * 2022-08-12 2023-05-02 荣耀终端有限公司 日志信息传输控制方法、电子设备和存储介质
CN116049122B (zh) * 2022-08-12 2023-11-21 荣耀终端有限公司 日志信息传输控制方法、电子设备和存储介质

Also Published As

Publication number Publication date
CN115176453A (zh) 2022-10-11


Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21918676

Country of ref document: EP

Kind code of ref document: A1