WO2022251998A1 - Communication method and system supporting multiple protocol stacks - Google Patents

Communication method and system supporting multiple protocol stacks

Info

Publication number
WO2022251998A1
Authority
WO
WIPO (PCT)
Prior art keywords
message
network port
port controller
memory
protocol stack
Application number
PCT/CN2021/097148
Other languages
English (en)
French (fr)
Inventor
屈明广
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2021/097148
Priority to CN202180090912.0A
Publication of WO2022251998A1

Description

  • the embodiments of the present application relate to the communication field, and in particular to a communication method and system supporting multiple protocol stacks.
  • an automatic driving system usually includes, but is not limited to, multiple sensors, a communication interconnection bus, a core processor, and an automatic driving software system running on the core processor.
  • various sensor data need to be input to the automatic driving software system through the corresponding communication bus.
  • data can optionally be classified into two types: data plane communication and management plane communication.
  • Data plane communication: data generated by the various sensors outside the SoC (System on Chip), such as Lidar and Radar (which may also be millimeter-wave radar).
  • This type of data can be directly used in various algorithms in the autonomous driving software system. Due to the large data volume of this type of data, the bandwidth requirement is generally high (for example, a bandwidth of more than 2 Gbps is required).
  • Management plane communication: the SoC also needs to run various management plane services such as device configuration, device status monitoring, and image compression transmission. Such services have relatively low requirements for communication performance, but this type of communication needs to be compatible with the POSIX (Portable Operating System Interface) user programming interface and network device management functions.
  • the real-time requirements of the automatic driving system are very high. Therefore, if the external sensor data cannot be transmitted to the automatic driving algorithm within a certain time, it will directly threaten the safety and reliability of automatic driving.
  • if the Ethernet protocol stack (also called the standard protocol stack) and the Ethernet port driver are used to receive the communication data on the data plane, each data packet needs to pass through the heavyweight Ethernet protocol stack, which requires data copying and incurs the complex packet processing overhead of the Ethernet protocol stack. Therefore, the Ethernet protocol stack cannot meet the hard requirement of the automatic driving system for deterministic transmission delay of data plane communication.
  • embodiments of the present application provide a communication method and system supporting multiple protocol stacks.
  • the network port controller can hand over different types of messages to corresponding protocol stacks for processing according to the types of the messages, so as to realize fast processing of data plane messages while ensuring the compatibility of management plane messages.
  • the embodiment of the present application provides a communication system supporting multiple protocol stacks.
  • the communication system includes a network port controller, an Ethernet protocol stack and a data plane protocol stack.
  • the network port controller is configured to determine that the received first packet is a management plane packet; and output the first packet to the Ethernet protocol stack.
  • the Ethernet protocol stack is configured to output the first message to the first application in response to the received first message.
  • the network port controller is further configured to determine that the received second message is a data plane message, and store the second message in the first memory.
  • the data plane protocol stack is configured to parse the second message in the first memory to obtain the position information of the specified field of the second message in the first memory; and output the position information of the specified field to the second application, so that the second application acquires the specified field from the first memory according to the location information of the specified field.
  • the communication system in the embodiment of the present application can hand over different types of messages to corresponding protocol stacks for processing according to the types of the messages.
  • the management plane packets are sent and received by the standard Ethernet protocol stack. For the data plane message, it is handed over to the data plane protocol stack for sending and receiving processing. In this way, the transmission requirements of different packets can be met.
  • for the management plane message, it is not sensitive to delay, but its compatibility needs to be satisfied. Therefore, processing the management plane message through the Ethernet protocol stack can meet the compatibility requirement of Ethernet messages.
  • for data plane messages, which are sensitive to delay, they are processed through the data plane protocol stack so that they reach the application quickly; the data plane protocol stack simplifies the processing of data plane messages, which can meet the delay requirements of data plane packets.
  • the communication system in the embodiment of the present application enables the same network port controller to support multiple protocol stacks, thereby improving the utilization of network port resources, and supporting dual communication stacks working concurrently through the same network port ensures compatibility while meeting the high performance requirements of the system.
  • the processing of the management message by the Ethernet protocol stack requires at least two data copies.
  • Ethernet protocol stack is compatible with POSIX user programming interfaces and network device management functions.
  • the data plane protocol stack may be the UIO protocol stack in the following embodiments.
  • the specified field is optionally a data field.
  • the location information optionally includes the start address and length information of the data field.
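  • As a concrete (non-claim) illustration, the location information could be represented by a small descriptor handed from the data plane protocol stack to the application; the structure and field names below are hypothetical.
```c
#include <stdint.h>

/* Hypothetical descriptor delivered by the data plane protocol stack to the
 * application: it tells the application where the specified (data) field of a
 * message sits inside the shared first memory, so the field can be read in
 * place without copying. */
struct field_location {
    uint64_t start_addr;  /* start address of the data field in the first memory */
    uint32_t length;      /* length of the data field in bytes */
};
```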
  • the communication system may include multiple data plane protocol stacks, and each data plane protocol stack corresponds to one or more applications.
  • the first message and the second message may come from the same external device, such as a radar, or may come from different external devices, which is not limited in this application.
  • the network port controller includes the correspondence between the feature field and the message type, and the network port controller is specifically configured to determine, based on the correspondence between the feature field and the message type, that the message type corresponding to the feature field of the first message is a management plane message.
  • the network port controller in the embodiment of the present application can split the packets according to the type of the packets, that is, hand over the management plane messages to the Ethernet protocol stack for processing, and hand over the data plane messages to the data plane protocol stack for processing.
  • the network port controller may include a hardware flow table, and the hardware flow table may record the correspondence between the aforementioned feature fields and packet types.
  • the network port controller includes the correspondence between the feature field and the message type, and the network port controller is specifically configured to determine, based on the correspondence between the feature field and the message type, that the message type corresponding to the feature field of the second message is a data plane message.
  • the network port controller in the embodiment of the present application can split the packets according to the type of the packets, that is, hand over the management plane messages to the Ethernet protocol stack for processing, and hand over the data plane messages to the data plane protocol stack for processing.
  • the network port controller includes a first hardware queue, the first hardware queue corresponds to the management plane protocol stack, and the network port controller is specifically configured to: after determining that the type of the first message is a management plane message, put the received first message into the first hardware queue.
  • the network port controller in the embodiment of the present application can place the message in the corresponding queue through the queues corresponding to different protocol stacks.
  • the multiplexing of the same network port is realized, that is, the same network port can support the sending and receiving of packets of multiple protocol stacks.
  • the network port controller may place the management plane packets in the first hardware queue, and hand over the packets in the first hardware queue to the Ethernet protocol stack bound to the first hardware queue for processing.
  • the network port controller is specifically configured to output the first packet in the first hardware queue to the Ethernet protocol stack. In this way, the network port controller can hand over the packets of the first hardware queue to the Ethernet protocol stack bound to the first hardware queue for processing.
  • the network port controller includes a second hardware queue, the second hardware queue corresponds to the data plane protocol stack, and the network port controller is specifically configured to: after determining that the type of the second message is a data plane message, put the received second message into the second hardware queue.
  • the network port controller in the embodiment of the present application can place the message in the corresponding queue through the queues corresponding to different protocol stacks.
  • the multiplexing of the same network port is realized, that is, the same network port can support the sending and receiving of messages of multiple protocol stacks.
  • the network port controller may place the data plane packets in the second hardware queue, and hand over the packets in the second hardware queue to the data plane protocol stack bound to the second hardware queue for processing.
  • the network port controller may include multiple hardware queues corresponding to multiple data plane protocol stacks.
  • the network port controller is specifically configured to output at least one message in the second hardware queue to the first memory, where the at least one message includes the second message.
  • the network port controller can also send and receive packets in batches. That is, the network port controller can hand over multiple packets in the queue to the data plane protocol stack for processing, thereby reducing the overhead when each packet traverses the protocol stack.
  • the network port controller in the embodiment of the present application can also implement the data zero-copy function, that is, output at least one message to the memory, so that the application can directly read data from the memory and avoid the overhead required for data copying.
  • the network port controller is further configured to write the position information, in the first memory, of each message in the at least one message into the second memory, and to report an interrupt to the data plane protocol stack.
  • the data plane protocol stack is specifically configured to, in response to the received interrupt, acquire the location information of each message in the at least one message from the second memory. Based on the obtained position information of each message in the at least one message, read at least one message in the first memory, and determine the position information of the designated field of each message in the at least one message.
  • the interrupt pass-through mechanism can be implemented in the embodiment of the present application, and the interrupt can be directly transparently transmitted to the user mode thread at one time, so as to avoid the performance loss caused by the uncertain delay of interrupt scheduling.
  • the data plane protocol stack can process one message at a time, that is, send the position information of the specified field corresponding to each message to the application one by one; each time the application receives the information of a specified field, it reads the field from the memory once.
  • the data plane protocol stack can also process multiple messages at one time, that is, send the location information of the specified field of each message to the application together, and the application can read the specified fields of multiple messages in the memory at the same time, thereby further reducing system overhead.
  • the embodiment of the present application provides a communication method supporting multiple protocol stacks.
  • the method is applied to a communication system supporting multiple protocol stacks.
  • the communication system includes a network port controller, an Ethernet protocol stack, and a data plane protocol stack; the network port controller determines that the received first message is a management plane message; the network port controller outputs the first message to the Ethernet protocol stack; the Ethernet protocol stack outputs the first message to the first application in response to the received first message; the network port controller determines that the received second message is a data plane message; the network port controller saves the second message in the first memory; the data plane protocol stack parses the second message in the first memory to obtain the location information of the specified field of the second message in the first memory; and the data plane protocol stack outputs the location information of the specified field to the second application, so that the second application obtains the specified field from the first memory according to the location information of the specified field.
  • the network port controller includes the corresponding relationship between the characteristic field and the message type, and that the network port controller determines that the received first message is a management plane message includes: determining, based on the corresponding relationship between the characteristic field and the message type, that the message type corresponding to the characteristic field of the first message is a management plane message.
  • the network port controller includes the corresponding relationship between the characteristic field and the message type, and that the network port controller determines that the received second message is a data plane message includes: determining, based on the corresponding relationship between the characteristic field and the message type, that the message type corresponding to the characteristic field of the second message is a data plane message.
  • the network port controller includes a first hardware queue, and the first hardware queue corresponds to the management plane protocol stack. After the network port controller determines that the received first message is a management plane message, the method further includes: putting the received first message into the first hardware queue.
  • the network port controller outputting the first packet to the Ethernet protocol stack includes: outputting the first packet in the first hardware queue to the Ethernet protocol stack.
  • the network port controller includes a second hardware queue, and the second hardware queue corresponds to the data plane protocol stack. After the network port controller determines that the received second message is a data plane message, the method includes: putting the received second message into the second hardware queue.
  • the network port controller storing the second message in the first memory includes: outputting at least one message in the second hardware queue to the first memory, wherein at least one message includes the second message.
  • after the network port controller saves the second message in the first memory, the method further includes: the network port controller writes the location information, in the first memory, of each message in the at least one message into the second memory; and the network port controller reports an interrupt to the data plane protocol stack. That the data plane protocol stack parses the second message in the first memory to obtain the location information of the specified field of the second message in the first memory includes: the data plane protocol stack obtains the location information of each message in the at least one message from the second memory in response to the received interrupt; and based on the obtained location information of each message in the at least one message, the data plane protocol stack reads the at least one message in the first memory and determines the location information of the specified field of each message in the at least one message.
  • the second aspect and any implementation manner of the second aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • for technical effects corresponding to the second aspect and any implementation manner of the second aspect, reference may be made to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, and details are not repeated here.
  • the embodiment of the present application provides a chip.
  • the chip includes at least one processor and a network port controller.
  • the network port controller and the processor can implement the first aspect and the method in any one of the implementation manners of the first aspect.
  • the third aspect and any implementation manner of the third aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • for technical effects corresponding to the third aspect and any implementation manner of the third aspect, reference may be made to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, and details are not repeated here.
  • the embodiment of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program, and when the computer program runs on the computer or the processor, the computer or the processor executes the method in the first aspect or any possible implementation manner of the first aspect.
  • the fourth aspect and any implementation manner of the fourth aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • for technical effects corresponding to the fourth aspect and any implementation manner of the fourth aspect, reference may be made to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, and details are not repeated here.
  • the embodiment of the present application provides a computer program product.
  • the computer program product includes a software program, and when the software program is executed by a computer or a processor, the method in the first aspect or any possible implementation manner of the first aspect is executed.
  • the fifth aspect and any implementation manner of the fifth aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • for technical effects corresponding to the fifth aspect and any implementation manner of the fifth aspect, reference may be made to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, and details are not repeated here.
  • FIG. 1 is a schematic structural diagram of a host computer provided in an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an exemplary data plane software stack
  • Fig. 3 is a schematic diagram of an initialization process schematically shown
  • Fig. 4a is a schematic diagram of processing received messages by the network port controller
  • Fig. 4b is a schematic diagram illustrating processing of received messages by the network port controller
  • FIG. 4c is a schematic diagram of processing received messages by the network port controller
  • Fig. 4d is a schematic diagram of the interaction flow of each module in the receiving direction shown by way of example;
  • Fig. 4e is a schematic diagram of the processing of the received message by the data plane software stack
  • Fig. 4f is a schematic diagram of the processing of the received message by the data plane software stack
  • Fig. 4g is a schematic diagram of the processing of the received message by the data plane software stack
  • FIG. 4h is a schematic diagram of an exemplary application processing a received message
  • FIG. 5 is a schematic structural diagram of a device provided in an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • first and second in the description and claims of the embodiments of the present application are used to distinguish different objects, rather than to describe a specific order of objects.
  • first target object, the second target object, etc. are used to distinguish different target objects, rather than describing a specific order of the target objects.
  • words such as “exemplary” or “for example” are used as examples, illustrations or illustrations. Any embodiment or design scheme described as “exemplary” or “for example” in the embodiments of the present application shall not be interpreted as being more preferred or more advantageous than other embodiments or design schemes. Rather, the use of words such as “exemplary” or “such as” is intended to present related concepts in a concrete manner.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • the communication method in the embodiment of the present application can be applied to an automatic driving system.
  • the automatic driving system may include a host computer and external devices.
  • the communication method in the embodiment of the present application can also be applied to other application scenarios that require compatibility and timeliness of data processing, which is not limited in this application.
  • FIG. 1 is a schematic diagram of a host structure provided by an embodiment of the present application. Please refer to FIG. 1.
  • the host includes, but is not limited to: an application layer, a kernel layer, a management plane software stack running on the kernel, at least one data plane software stack running on the kernel (for example, data plane software stacks 1 to n), a physical network port, and the like.
  • the application layer optionally includes one or more application programs.
  • the application programs include, for example: task scheduling, active/standby communication, an MDC (Mobile Data Center) perception algorithm, an authentication module (which may also be referred to as an authentication application), and one or more algorithm applications, which may also be called data plane applications in the embodiments of the present application, such as App0-Appn shown in Figure 1. An algorithm application may include, but is not limited to: a fusion algorithm application, a perception algorithm application, a planning and control algorithm, and the like.
  • the software stack can be divided into a management plane software stack and a data plane software stack (for example, the data plane protocol stacks 1-n shown in FIG. 1 ).
  • a management plane software stack and a data plane software stack are isolated from each other, do not affect each other, and can run concurrently.
  • the management plane software stack can be used to process communication data of the management plane.
  • the physical network port receives management plane data (also referred to as a management plane message, management plane data packet or management plane information, which is not limited in this application) input by an external device (such as a radar).
  • the physical network port can output the management plane data to the management plane protocol stack.
  • the management plane protocol stack processes the management plane data accordingly, and outputs the processed data to the application layer.
  • the management plane software stack optionally runs on the Linux kernel (also called the operating system kernel) at the kernel layer.
  • the management plane software stack includes, but is not limited to: Ethernet protocol stack, Ethernet port driver, and Ethernet driver framework.
  • the Ethernet protocol stack, the Ethernet port driver and the Ethernet driver framework belong to the kernel state.
  • management plane software stack can be compatible with the user programming interface of the POSIX standard.
  • there may be one or more data plane software stacks.
  • each data plane software stack can be bound to one or more applications in the application layer.
  • an example is taken in which each data plane software stack is bound to an application of the application layer.
  • the UIO (User Input Output, user-mode input/output) protocol stack in the data plane software stack can be used to parse the data plane message stored in the memory, to obtain the address and length in memory of the payload field (also called the valid data field) of the data plane message, and to send the address and length corresponding to the payload to the application layer, so that the application layer can directly read the payload field of the data plane message from the memory while ignoring other parts of the data plane message (such as the header). That is to say, in the embodiment of this application, the UIO protocol stack provides a simplified processing method for the data plane message: its decapsulation process only removes the header of the data plane message, so as to provide the application layer with a header-free payload field.
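  • As a rough sketch of the simplified decapsulation described above (not the patent's actual implementation), the function below strips a standard 14-byte Ethernet header to locate the payload; real messages may carry extra fields such as VLAN tags or a trailing CRC, which this example ignores.
```c
#include <stdint.h>

#define ETH_HDR_LEN 14u  /* destination MAC (6) + source MAC (6) + EtherType (2) */

/* Given the address and length of a raw data plane message in the MBUF memory,
 * compute the address and length of its payload by skipping the Ethernet
 * header. Returns 0 on success, -1 if the message is too short. */
static int uio_parse_payload(const uint8_t *msg, uint32_t msg_len,
                             const uint8_t **payload, uint32_t *payload_len)
{
    if (msg_len <= ETH_HDR_LEN)
        return -1;
    *payload = msg + ETH_HDR_LEN;        /* header-free payload for the application */
    *payload_len = msg_len - ETH_HDR_LEN;
    return 0;
}
```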
  • the UIO network port driver can include a module running on the Linux kernel of the kernel layer (such as UIO_K_DRV hereinafter) and a module running on the AOS (Automotive Operating System, automatic driving operating system) kernel of the kernel layer (such as UIO_AK_DRV hereinafter), as well as a user-mode module (such as UIO_U_DRV hereinafter).
  • the UIO network port driver can be used to abstract the underlying hardware (such as the network port controller), so that applications can access (or call) the underlying hardware.
  • the UIO network port driver is also used to send and receive messages, that is, to realize the receiving and processing of messages from external devices and the sending and processing of messages sent by upper-layer applications to external devices.
  • the UIO driver framework can be used to provide functions such as an underlying API (Application Programming Interface, application programming interface) compatible with both the AOS kernel and the Linux kernel.
  • API Application Programming Interface, application programming interface
  • the components included in the application layer, kernel layer and software stack shown in FIG. 1 do not constitute a specific limitation on the device.
  • the device may include more or fewer components than shown in the illustrations, or some components may be combined, or some components may be separated, or different component arrangements may be made.
  • Fig. 2 is a schematic structural diagram of a software stack exemplarily shown.
  • a physical network port may include multiple hardware queues, which may also be called packet queues.
  • multiple queues can be divided into management plane queues and data plane queues.
  • the management plane queue is used for caching management plane data.
  • Data plane queues are used to cache data plane data.
  • the data plane queue optionally includes at least one data plane sub-queue.
  • each data plane subqueue can be bound to a corresponding data plane software stack.
  • the physical network port includes queue 0 to queue n.
  • queue 0 corresponds to the management plane software stack
  • queue 0 may optionally be a management plane queue. That is, the packets in queue 0 will be sent and received by the management plane software stack.
  • queue 1 corresponds to data plane software stack 1
  • queue n corresponds to data plane software stack n
  • queues 1 to n are optionally data plane queues.
  • the packets in the queue 1 will be handed over to the data plane software stack 1 for sending and receiving processing.
  • the packets in the queue n will be sent and received by the data plane software stack n.
  • the management plane software stack may be bound to multiple APPs (applications, application programs).
  • Figure 2 only shows that the management plane software stack corresponds to APP0.
  • the management plane software stack can correspond to APP0 and multiple other APPs, so as to output the messages in queue 0 to the corresponding APPs, and to send the messages of these APPs to other devices.
  • a single data plane software stack corresponds to one APP.
  • data plane software stack 1 corresponds to APP1
  • data plane software stack n corresponds to APPn. That is, data plane software stack 1 can send and receive messages of APP1
  • data plane software stack n can send and receive messages of APPn.
  • Data plane software stack 1 is taken as an example.
  • Data plane software stack 1 includes, but is not limited to: UIO protocol stack, UIO network port driver, and UIO driver framework.
  • the UIO network port driver can be divided into two parts, including a user mode part and a kernel mode part.
  • the user mode part is referred to as UIO_U_DRV, and the kernel mode part is referred to as UIO_K_DRV.
  • UIO_K_DRV optionally includes two components: one component is UIO_K_DRV running on the Linux kernel at the kernel layer, and the other component is UIO_AK_DRV running on the AOS kernel.
  • FIG. 3 is a schematic diagram of an initialization process exemplarily shown. Exemplarily, the initialization process shown in FIG. 3 can also be understood as a preparation process. Please refer to Figure 3, including:
  • UIO_K_DRV creates a UIO device and creates a shared memory.
  • the UIO network port driver may also include NIC_DRV, which is a common network port driver running inside the Linux kernel.
  • NIC_DRV stores an initialization function
  • NIC_DRV runs the initialization function.
  • initialization may include memory allocation, data structure initialization, and so on.
  • NIC_DRV runs the initialization function, so that NIC_DRV calls UIO_K_DRV for initialization.
  • UIO_K_DRV runs the initialization function of UIO_K_DRV in response to the call of NIC_DRV, so that UIO_K_DRV executes to create a UIO device and create a shared memory.
  • the UIO device is optionally a hardware device.
  • it can be a network port controller.
  • UIO_K_DRV virtualizes the underlying hardware (such as the network port controller) into a UIO device, so that the underlying hardware is exposed to the UIO network port driver in the user mode.
  • the UIO network port driver in user mode can operate the underlying hardware. For example, through related instructions, open the UIO device, etc.
  • UIO_K_DRV optionally applies for MBUF memory from the MBUF (Memory buffer, shared memory) module.
  • the MBUF is used to provide services such as memory allocation and recovery of the shared memory pool for applications and the UIO communication stack, and is used to provide memory blocks with continuous physical addresses for the UIO network port driver. And, it is also used to provide an API interface for UIO drivers to convert virtual addresses into physical addresses.
  • MBUF memory is used to store data.
  • the network port controller can write data into the MBUF memory, and the application program can directly read the data from the MBUF memory.
  • the data of the application program can be written into the MBUF memory, and the network port controller can directly read and send the data from the MBUF memory to achieve zero copy of the data without CPU participation, so as to reduce CPU overhead.
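  • A minimal sketch of how such a shared buffer pool might be modeled in software; the structure and function names are hypothetical and only illustrate the allocation, recovery and virtual-to-physical conversion services the MBUF module is described as providing.
```c
#include <stdint.h>

/* Hypothetical MBUF block: a physically contiguous buffer shared between the
 * network port controller (which writes received messages into it) and the
 * application (which reads payloads from it in place). */
struct mbuf {
    void     *vaddr;     /* virtual address seen by the UIO driver/application */
    uint64_t  paddr;     /* physical address used by the hardware for DMA */
    uint32_t  size;      /* capacity of the buffer in bytes */
    uint32_t  data_len;  /* number of valid bytes currently stored */
};

/* Illustrative pool API (assumed names): allocate a buffer for the hardware to
 * fill, return it to the pool once the application has consumed the payload,
 * and translate a virtual address into a physical one for the hardware. */
struct mbuf *mbuf_alloc(void);
void         mbuf_free(struct mbuf *m);
uint64_t     mbuf_virt_to_phys(const void *vaddr);
```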
  • UIO_K_DRV can also apply for and create shared memory from the kernel.
  • the shared memory includes, but is not limited to: communication memory between the user state and the kernel state, hardware register memory, BD (buffer descriptor, buffer descriptor) memory, and the like.
  • UIO_K_DRV may call the UIO device registration function provided by the operating system kernel to register the UIO device with the operating system kernel. It can be understood that UIO_K_DRV returns the identification information of the created UIO device (such as the device name of the UIO device) to the UIO network port driver in the user mode. In this way, the UIO network port driver in the user mode can perform related operations on the UIO device, such as opening the UIO device, according to the identification information of the UIO device, such as the device name of the UIO device.
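  • The patent's UIO_K_DRV appears to be a custom driver, but as an analogy, registering a device through the standard Linux UIO framework looks roughly like the kernel-side sketch below; the register base, window size and IRQ number are placeholders, not values from the patent.
```c
#include <linux/module.h>
#include <linux/device.h>
#include <linux/uio_driver.h>

#define NIC_REG_BASE 0xfe000000UL  /* placeholder: physical register base of the controller */
#define NIC_REG_SIZE 0x10000UL     /* placeholder: size of the register window */
#define NIC_IRQ      42            /* placeholder: hardware interrupt line */

/* Expose the network port controller to user space as a UIO device, analogous
 * to what UIO_K_DRV is described as doing; the device then appears as /dev/uioX. */
static struct uio_info nic_uio_info = {
    .name    = "uio_nic",
    .version = "0.1",
    .irq     = NIC_IRQ,
};

static int nic_uio_probe(struct device *parent)
{
    nic_uio_info.mem[0].addr    = NIC_REG_BASE;
    nic_uio_info.mem[0].size    = NIC_REG_SIZE;
    nic_uio_info.mem[0].memtype = UIO_MEM_PHYS;  /* user space can mmap() this region */

    return uio_register_device(parent, &nic_uio_info);
}
```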
  • the UIO network port driver in the user mode can provide API interface functions to the application program.
  • the application program can perform corresponding operations on the UIO device through the API interface function.
  • the application program can input an open command through the API interface function provided by UIO_U_DRV, and the open command optionally includes a corresponding UIO device name, which is used to instruct to open the UIO device corresponding to the specified UIO device name.
  • UIO_U_DRV receives an operation instruction issued by an application program, such as an opening instruction.
  • UIO_U_DRV optionally authenticates the application to detect whether the application has permission to operate the UIO device.
  • the automatic driving operating system may include an authentication module, which may also be called an authority control module.
  • the authentication module can be located in the application layer shown in FIG. 1 .
  • UIO_U_DRV can call the authentication module to authenticate the application program, so as to check whether the application program is legal.
  • if the result returned by the authentication module indicates that the application program is not legal, UIO_U_DRV rejects the application program's access to the UIO device.
  • if the result returned by the authentication module indicates that the application program is legal, UIO_U_DRV allows the application program to access the UIO device.
  • UIO_U_DRV binds the process with the SMMU module.
  • the SMMU (System Memory Management Unit) is a hardware module dedicated to virtual-address-to-physical-address translation inside the SoC. It can be understood that this module can be used to provide the underlying hardware with a conversion function between the virtual address of the user mode and the physical address of the hardware. For example, when the network port controller needs to read data from the MBUF memory, what it obtains is the virtual address corresponding to the data in the MBUF memory. The network port controller can call the SMMU module, so that the SMMU module converts the user-mode virtual address to obtain the corresponding physical address. The SMMU module can send the physical address to the bus, so that the storage device extracts the corresponding data based on the physical address and transmits it to the network port controller.
  • in response to the received operation instruction of the application program to open the UIO device, UIO_U_DRV calls the interface provided by the SMMU module and outputs the process ID of the application program to the SMMU module.
  • the SMMU module returns the SSID (substream identifier) obtained after the process is bound to the SMMU module to UIO_U_DRV.
  • the SMMU module can be understood as checking the page table of the memory address to implement the address translation function, and the SSID can be used to identify the page table to which the virtual address belongs.
  • each application program has its own address page table in the operating system, and the SMMU module can find the page table corresponding to the application process based on the SSID, and retrieve the corresponding relationship between the virtual address and the physical address in the page table.
  • UIO_U_DRV receives the SSID returned by the SMMU module.
  • UIO_U_DRV assigns SSID to the network port controller. This enables the network port controller to call the SMMU module for address translation based on the SSID.
  • the function of the SMMU module is only briefly described, and the specific details can refer to the implementation process of the SMMU module in the prior art embodiment, and the present application will not repeat them.
  • UIO_U_DRV maps the shared memory in the kernel state to the user state space.
  • in S301, UIO_K_DRV in kernel mode creates the MBUF memory and multiple shared memories. It can be understood that each memory has a corresponding virtual address in the kernel state, and the UIO network port driver in the kernel state can read and write each memory based on its virtual address.
  • UIO_U_DRV can map the memory in the kernel state to the user state, so that the UIO driver in the user state can also read and write each memory.
  • UIO_U_DRV can call the mmap function provided by the operating system to map multiple memories created by UIO_K_DRV to the user state. It can be understood that each memory has a corresponding virtual address in the user state, and the UIO driver in the user state can access these memories based on the virtual address of each memory.
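  • For illustration only, a user-mode driver opening the UIO device by its device name and mapping a kernel-created shared region into its own address space with the standard mmap() call might look like the following; the device path and region size are assumptions.
```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Device node exported by the kernel-mode driver; the name is an assumption. */
    int fd = open("/dev/uio0", O_RDWR);
    if (fd < 0) {
        perror("open uio device");
        return 1;
    }

    /* Map one kernel-created shared region (e.g. the BD memory) into user space
     * so the user-mode driver can access it through a user-space virtual address. */
    size_t region_size = 64 * 1024;  /* assumed size of the shared region */
    void *bd_mem = mmap(NULL, region_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0 /* region offset, device-specific */);
    if (bd_mem == MAP_FAILED) {
        perror("mmap shared region");
        close(fd);
        return 1;
    }

    /* ... packet send/receive processing using bd_mem ... */

    munmap(bd_mem, region_size);
    close(fd);
    return 0;
}
```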
  • the communication memory between the user mode and the kernel mode can be used to store information such as the send/receive packet queue numbers, the hardware interrupt numbers, the thread numbers, and the base address of the BD (buffer descriptor) memory.
  • UIO_U_DRV can obtain corresponding information from the communication memory between the user state and the kernel state based on the mapping relationship, such as the hardware interrupt number.
  • Each queue corresponds to a hardware interrupt number. For example, as shown in FIG. 2 , queue 1 corresponds to hardware interrupt number 1, and queue n corresponds to hardware interrupt number n.
  • the hardware interrupt number is generated by the system, and will not be repeated in the following.
  • UIO_U_DRV performs interrupt registration with UIO_AK_DRV.
  • UIO_U_DRV can call the interrupt registration function provided by UIO_AK_DRV to perform interrupt registration after obtaining the sending and receiving interrupt number and other information from the communication memory between the user state and the kernel state.
  • UIO_U_DRV can output the acquired hardware interrupt number and other information to UIO_AK_DRV.
  • the UIO_AK_DRV applies for an interrupt to the operating system kernel and registers an interrupt processing function in response to the received hardware interrupt number and other information.
  • the operating system may assign a corresponding software interrupt number to the hardware interrupt number.
  • UIO_AK_DRV can register interrupt handling based on this interrupt number.
  • the network port controller can report the hardware interrupt number of the queue to the operating system, and the operating system can determine the corresponding software interrupt number based on the hardware interrupt number and deliver the interrupt event corresponding to the software interrupt number, so that the interrupt of the UIO device (such as the network port controller) can be sensed.
  • the purpose of assigning the software interrupt number is to ensure system security, so that the hardware interrupt number is only used for hardware transmission, and processing between cores can be based on the software interrupt number.
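  • As a kernel-side analogy for the interrupt registration described above (the patent's UIO_AK_DRV runs on the AOS kernel, so this Linux sketch is only illustrative), installing a handler with request_irq() that merely wakes the user-mode data plane thread could look like this; the structure and wake-up mechanism are assumptions.
```c
#include <linux/interrupt.h>
#include <linux/wait.h>

struct nic_queue {
    wait_queue_head_t wq;  /* woken when a send/receive interrupt arrives */
};

/* Handler for the queue's hardware interrupt: it only records the event and
 * wakes the waiting user-mode data plane thread; all packet processing stays
 * in user space. */
static irqreturn_t nic_queue_irq(int irq, void *dev_id)
{
    struct nic_queue *q = dev_id;

    disable_irq_nosync(irq);        /* mask further interrupts for this queue */
    wake_up_interruptible(&q->wq);  /* notify the waiting user-mode thread */
    return IRQ_HANDLED;
}

static int nic_queue_irq_init(struct nic_queue *q, int hw_irq)
{
    init_waitqueue_head(&q->wq);
    /* Register the handler for this queue's interrupt line. */
    return request_irq(hw_irq, nic_queue_irq, 0, "uio_nic_queue", q);
}
```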
  • UIO_U_DRV starts the thread and the UIO device.
  • UIO_U_DRV starts a data plane thread.
  • the data plane thread can be used to wait for the arrival of the sending and receiving packet event of the network port hardware, so as to execute the processing flow related to sending and receiving packets in the thread function.
  • UIO_U_DRV starts a management plane thread.
  • the management plane thread is used to call the poll function to block and wait for various management plane event messages of the operating system, for example, network port link down, link up, network port failure, and the like.
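  • A minimal user-space sketch of such a management plane thread blocking in poll(); the event file descriptor and the way events are encoded are assumptions, not details from the patent.
```c
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* Block until the kernel reports a management plane event (link up/down,
 * network port failure, ...) on an event descriptor exposed by the driver. */
static void management_plane_loop(int event_fd)
{
    struct pollfd pfd = { .fd = event_fd, .events = POLLIN };

    for (;;) {
        if (poll(&pfd, 1, -1) < 0) {   /* block indefinitely until an event arrives */
            perror("poll");
            break;
        }
        if (pfd.revents & POLLIN) {
            unsigned int event;
            if (read(event_fd, &event, sizeof(event)) == (ssize_t)sizeof(event))
                printf("management plane event: %u\n", event);  /* e.g. link down */
        }
    }
}
```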
  • the management plane thread may apply for an event ID and an interrupt number from the event scheduling module in UIO_AK_DRV.
  • the management plane thread can input the event ID and the interrupt number into UIO_U_DRV correspondingly.
  • UIO_U_DRV can correspondingly input the event ID and the interrupt number to the operating system kernel.
  • the data plane thread can output the event ID to the event management scheduler.
  • the event management scheduler may store the correspondence between event IDs and thread IDs of data plane threads.
  • UIO_U_DRV enables UIO devices (that is, network port controllers).
  • UIO_U_DRV can write information to the queue interrupt enable register of the network port controller to start the packet sending and receiving hardware interrupt function of the queue (such as queue 1 in Figure 2) corresponding to the protocol stack to which UIO_U_DRV belongs in the network port controller.
  • the network port controller can respond to the operation of UIO_U_DRV and start receiving and sending messages.
  • the data plane thread and the management plane thread are then suspended, entering a dormant state and waiting for the packet sending and receiving events to arrive.
  • FIG. 4 a is a schematic diagram of processing received packets by the network port controller exemplarily shown.
  • for example, an external device 1 (such as a radar) sends message 1, and message 1 is a management plane message.
  • the dotted line shown in Fig. 4a is a schematically shown transmission path of the management plane message.
  • the network port controller receives packet 1.
  • the network port controller can be pre-configured with a hardware flow table.
  • the hardware flow table may record the correspondence between address information and communication types (including data plane communication and management plane communication).
  • the network port controller can search the hardware flow table based on the address information carried in the message, such as quintuple information, to obtain the communication type corresponding to the successfully matched quintuple information, so as to determine whether the message is a data plane message or a management plane message.
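  • For illustration only, a software model of this hardware flow table lookup might look like the sketch below: match a received message's quintuple against configured entries and fall back to the management plane queue when nothing matches (the fallback policy and all names are assumptions).
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum msg_type { MSG_MANAGEMENT_PLANE, MSG_DATA_PLANE };

/* Quintuple carried by the message, used here as the characteristic field. */
struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* One flow table entry: characteristic field -> message type and target queue. */
struct flow_entry {
    struct five_tuple key;
    enum msg_type     type;
    unsigned int      queue_id;
};

static bool tuple_eq(const struct five_tuple *a, const struct five_tuple *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->protocol == b->protocol;
}

/* Look up the message type and queue for a received quintuple. Unmatched
 * traffic is treated as management plane traffic on queue 0 here, which is an
 * assumed default policy. */
static enum msg_type classify(const struct flow_entry *table, size_t n,
                              const struct five_tuple *key, unsigned int *queue_id)
{
    for (size_t i = 0; i < n; i++) {
        if (tuple_eq(&table[i].key, key)) {
            *queue_id = table[i].queue_id;
            return table[i].type;
        }
    }
    *queue_id = 0;  /* queue 0 is bound to the management plane software stack */
    return MSG_MANAGEMENT_PLANE;
}
```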
  • the network port controller determines that message 1 is a management plane message, and then the network port controller may place the message in a queue corresponding to the management plane protocol stack, that is, queue 0 in FIG. 2 .
  • the management plane protocol stack can process the packets in the queue 0 accordingly, and output the data to the corresponding application program, such as APP0.
  • FIG. 4 b is a schematic diagram of processing received packets by the network port controller exemplarily shown.
  • for example, an external device 1 (such as a radar) sends message 2, and message 2 is a data plane message.
  • the dotted line shown in Fig. 4b is a schematically shown transmission path of the data plane message.
  • the network port controller determines that packet 2 is a data plane packet.
  • the network port controller can place the message in the corresponding queue based on the binding relationship between the queue, the UIO protocol stack, and the application program. For example, after the network port controller receives the message from the external device 1 and detects that the message is a data plane message, the network port controller can place the data plane message from the external device 1 in queue 1 based on the correspondence between the external device 1 and queue 1.
  • the data plane message (for example, message 2 ) in the queue 1 will be sent and received by the data plane software stack 1 , so that APP1 can obtain the message 2 .
  • UIO_K_DRV creates MBUF memory and one or more shared memories (such as BD memory in FIG. 4c).
  • the network port controller can store message 2 in the MBUF memory.
  • the network port controller can read the virtual address of the MBUF memory from the BD memory based on the virtual address of the BD memory in the kernel state.
  • the network port controller can output the message 2, SSID (see above for the concept details) and the virtual address of the MBUF memory to the SMMU module.
  • the SMMU module may detect the page table identified by the SSID based on the SSID. Based on the virtual address, the SMMU can retrieve the corresponding hardware address in the page table. Exemplarily, the SMMU module can output data and hardware addresses to the bus, and transmit them to storage devices through the bus, such as DDR (Double Data Rate, double-rate synchronous dynamic random access memory). DDR can write message 2 into the MBUF memory indicated by the hardware address.
  • the network port controller updates the relevant information of the data recorded in the BD memory.
  • the network port controller will move the write pointer in the BD memory to indicate the number of packets currently written by the network port controller to the MBUF memory.
  • a BD is used to indicate that a message is stored in memory.
  • UIO_U_DRV can determine the number of messages written into the MBUF memory by the network port controller based on the pointer movement of the BD memory.
  • the relevant information corresponding to the message can also be obtained based on the write pointer.
  • relevant information includes but is not limited to: the start address of the message, the length of the message, and the like.
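  • One possible software view (field names assumed, not the actual hardware layout) of the BD memory described above is a ring of descriptors plus a write pointer advanced by the hardware and a read pointer advanced by the driver:
```c
#include <stdint.h>

/* One buffer descriptor (BD): records where a received message was written in
 * the MBUF memory and how long it is. */
struct buffer_desc {
    uint64_t mbuf_addr;  /* start address of the message in the MBUF memory */
    uint32_t msg_len;    /* length of the message in bytes */
    uint32_t flags;      /* e.g. valid/ownership bits */
};

/* BD ring shared between the network port controller and the UIO driver: the
 * controller advances wr_idx after writing each message, and the data plane
 * thread advances rd_idx as it consumes descriptors. */
struct bd_ring {
    struct buffer_desc *desc;   /* descriptor array located in the BD memory */
    uint32_t            size;   /* number of descriptors in the ring */
    volatile uint32_t   wr_idx; /* write pointer, owned by the hardware */
    volatile uint32_t   rd_idx; /* read pointer, owned by the driver */
};
```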
  • FIG. 4d is a schematic diagram of a message receiving processing flow exemplarily shown. Please refer to Figure 4d, specifically including:
  • the network port controller reports an interrupt to the operating system kernel.
  • the network port controller may generate a hardware interrupt to trigger the operating system kernel to perform subsequent steps. For example, as mentioned above, in the preparation stage, the network port controller obtains the hardware interrupt number corresponding to each queue. If the network port controller finishes writing the message 2 in the queue 1, the network port controller can report the hardware interrupt number 1 corresponding to the queue 1 to the operating system kernel (such as the Linux kernel in the kernel layer).
  • the network port controller can generate an interrupt after receiving multiple messages, for example, two or more messages, and trigger the other modules to perform the processing flow for the multiple messages, so that the UIO protocol stack can process multiple messages at one time, reducing the number of interrupts and further reducing the overhead required for processing each message.
  • the operating system kernel outputs a software interrupt number to UIO_K_DRV.
  • the operating system kernel responds to the received hardware interrupt number sent by the network port controller.
  • the operating system kernel maintains the corresponding relationship between hardware interrupt numbers and software interrupt numbers.
  • the operating system kernel can obtain the corresponding software interrupt number based on the received hardware interrupt number.
  • the operating system outputs the software interrupt number to UIO_K_DRV to call the interrupt processing function in UIO_K_DRV (for the concept, refer to the above).
  • UIO_K_DRV outputs the event ID to the event scheduling module.
  • UIO_K_DRV can obtain the event ID corresponding to the software interrupt number based on the interrupt processing function in response to the received software interrupt number.
  • UIO_K_DRV outputs the event ID to the event scheduling control module to indicate that there is currently an interruption event corresponding to the event ID.
  • UIO_K_DRV may instruct the operating system kernel to disable the interrupt of the queue. That is to say, no interrupt will be generated after a message is received in the queue, so as to prevent UIO_K_DRV and other modules from being interfered with by subsequent interrupts while processing the current interrupt. It can be understood that, in the process of processing the messages corresponding to each interrupt, the automatic driving system incurs a corresponding overhead. If the current processing flow is interrupted by a subsequent interrupt and the receiving flow is repeatedly executed, the scheduling overhead will increase. Therefore, disabling interrupts can effectively reduce scheduling overhead.
  • the event scheduling module determines a corresponding thread based on the event ID.
  • the event scheduling module records the correspondence between event IDs and threads.
  • one event ID corresponds to one thread, and a single thread can process packets of multiple queues.
  • the event scheduling module determines the thread corresponding to the event ID in response to the received event ID.
  • the event scheduling module wakes up the data plane thread.
  • the data plane thread in UIO_U_DRV is in a dormant state after preparing the process.
  • the event scheduling module determines the data plane thread that needs to be woken up, it can wake up the data plane thread, so that the data plane thread can send and receive packets.
  • the data plane thread performs sending and receiving packet processing.
  • the data plane thread can read the interrupt status register of the queue to determine whether it is a transmit (TX) interrupt or a receive (RX) interrupt.
  • for a receive (RX) interrupt, it can be determined that the network port controller has successfully received at least one message, which may be, for example, one message or two or more messages.
  • the network port controller writes the at least one message into the MBUF memory, and indicates, by moving the write pointer in the BD memory, information such as the number of received messages and the address and length of each message in the memory.
  • UIO_U_DRV has mapped the MBUF memory and the shared memory in the kernel state to the user state; that is, the data plane thread in UIO_U_DRV can read information in the MBUF memory and the shared memory based on their virtual addresses in the user state.
  • the data plane thread can read the relevant information indicated by the read pointer (including information such as the address and length of message 2 in the MBUF memory) by moving the read pointer in the BD memory, until the read pointer coincides with the write pointer.
  • the data plane thread outputs the acquired information such as the address and length of the message 2 in the MBUF memory to the UIO protocol stack.
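  • Continuing the hypothetical BD-ring model sketched earlier, the receive pass just described (walking the ring until the read pointer catches up with the write pointer and handing each message's address and length to the UIO protocol stack) could look like this; uio_stack_process is a stand-in for that hand-off.
```c
#include <stdint.h>

/* Hypothetical hand-off: the UIO protocol stack strips the header and passes
 * the payload's address and length on to the bound application. */
void uio_stack_process(uint64_t mbuf_addr, uint32_t msg_len);

/* Receive pass of the data plane thread over the bd_ring defined in the
 * earlier sketch: consume every descriptor written by the hardware. */
static void data_plane_rx(struct bd_ring *ring)
{
    while (ring->rd_idx != ring->wr_idx) {              /* until read meets write */
        struct buffer_desc *bd = &ring->desc[ring->rd_idx % ring->size];

        uio_stack_process(bd->mbuf_addr, bd->msg_len);  /* one message per BD */

        ring->rd_idx++;                                 /* advance the read pointer */
    }
}
```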
  • the UIO protocol stack can read the message 2 in the MBUF memory based on the obtained address and length of the message 2 in the MBUF memory.
  • the UIO protocol stack can detect the address and length of the payload field of message 2.
  • the message may optionally include header and payload fields, and may also include fields such as CRC, which are not limited in this application.
  • the data plane thread can read the address and length of the payload field of message 2 in the MBUF memory.
  • the UIO protocol stack may send the obtained address and length of the payload field of message 2 to APP1.
  • APP1 can read the payload field from the MBUF memory based on the address and length of the payload field input by the UIO protocol stack. That is to say, in the embodiment of the present application, the UIO protocol stack can strip the header of the message 2, so that the upper-layer application can directly obtain the data part in the message.
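  • From the application's side, consuming the payload in place (zero copy) could look like this minimal sketch; the notification structure and the buffer-release call are assumptions, not names from the patent.
```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical notification delivered by the UIO protocol stack to the
 * application: where the payload of one message lives in the MBUF memory. */
struct payload_notice {
    const uint8_t *payload;  /* user-space virtual address inside the MBUF memory */
    uint32_t       length;   /* payload length in bytes */
};

/* Hypothetical call asking the UIO network port driver to reclaim the buffer. */
void mbuf_region_free(const uint8_t *payload);

/* The application reads the payload directly from the shared memory (no copy)
 * and then releases the memory area so it can be reused. */
static void app_on_message(const struct payload_notice *n)
{
    for (uint32_t i = 0; i < n->length && i < 8; i++)
        printf("%02x ", n->payload[i]);   /* inspect the first bytes in place */
    printf("\n");

    mbuf_region_free(n->payload);
}
```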
  • the UIO protocol stack can negotiate with the APP in advance to determine which fields in the message the APP is interested in, and the UIO protocol stack can send the addresses and lengths of the fields that the APP is interested in to the APP.
  • the network port controller can report a hardware interrupt after receiving multiple messages, that is to say, the MBUF memory has already stored multiple messages.
  • the BD memory records information about each of the multiple messages (such as the address and length of the message in the MBUF memory, etc.).
  • the data plane thread and the UIO protocol stack can process the multiple messages in the MBUF memory one by one according to the processing method for message 2 above. For example, after the data plane thread obtains information such as the address and length of a packet, it sends that information to the UIO protocol stack. The UIO protocol stack parses the message, sends the address and length of the payload field to the APP, and then sequentially processes the other messages in the MBUF memory.
  • the data plane thread and the UIO protocol stack can simultaneously process multiple packets.
  • the data plane thread may acquire information such as the address and length of each of the multiple packets from the BD memory.
  • the data plane thread sends the addresses and lengths corresponding to multiple packets to the UIO protocol stack.
  • the UIO protocol stack reads the address and length of the payload field of each message in the multiple messages. And send the address and length of the payload field of each message to the APP.
  • the application can directly read the payload field from the MBUF memory based on the address and length, in the MBUF memory, of the payload field of the message input by the data plane software stack. In this way, zero-copy transmission of the message is realized, without requiring the multiple copies of the message that occur in the processing flow of the management plane protocol stack, that is, the Ethernet software protocol stack.
  • the application program can instruct the UIO network port driver to release the memory area in the MBUF memory storing the message, so as to reclaim the memory area, thereby saving memory resources.
  • when the network port controller reports an interrupt, the UIO network port driver responds to the interrupt with the highest priority and, inside the interrupt function, notifies the event scheduler, so that the corresponding thread in the user-state UIO network port driver handles the interrupt event in time. Through this interrupt pass-through (cut-through) mode, the requirement for latency-deterministic data communication in the vehicle-mounted automatic driving field is met.
  • after the current interrupt has been handled, the data plane thread may instruct the operating system kernel to re-enable the interrupt.
  • in response, the operating system kernel may allow the network port controller to continue reporting interrupts, and the above process is repeated; the wait/drain/re-enable loop is sketched below.
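  • The wait/drain/re-enable loop of the data plane thread can be sketched as follows. The patent wakes the user-state thread through its own event scheduler on the AOS kernel; this sketch uses the standard Linux UIO character-device interface (a blocking read() on /dev/uioX to wait for the queue interrupt, and writing 1 to unmask it again) purely as a stand-in, so the device path and the re-enable convention are assumptions rather than the patent's mechanism.

      #include <stdint.h>
      #include <unistd.h>
      #include <fcntl.h>

      /* Event-driven receive loop of the data plane thread.  read() blocks
       * until the queue interrupt fires; writing 1 re-enables (unmasks) it. */
      static void data_plane_thread(const char *uio_dev)
      {
          int fd = open(uio_dev, O_RDWR);   /* e.g. "/dev/uio0" (illustrative) */
          if (fd < 0)
              return;

          for (;;) {
              uint32_t irq_count, reenable = 1;

              /* Sleep until the interrupt path wakes this thread up. */
              if (read(fd, &irq_count, sizeof(irq_count)) != (ssize_t)sizeof(irq_count))
                  break;

              /* The queue interrupt stays masked while the backlog is drained,
               * so a burst of frames costs a single wake-up. */
              /* ... drain the BD ring as in the previous sketch ... */

              /* Processing done: allow the network port controller to report
               * the next interrupt (S408). */
              if (write(fd, &reenable, sizeof(reenable)) != (ssize_t)sizeof(reenable))
                  break;
          }
          close(fd);
      }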
  • Fig. 4d shows the interaction process of each module in the receiving direction.
  • for the sending direction, the application needs to send data to an external device.
  • the specific process may be as follows: in combination with FIG. 2, taking APP1 as an example, APP1 writes the data into the MBUF memory. Moreover, APP1 outputs relevant information, such as the address of the data in the MBUF memory and the data length, to the corresponding software stack, that is, data plane software stack 1.
  • the UIO protocol stack transparently transmits the obtained relevant information to the UIO network port driver.
  • the UIO network port driver can update the pointer in the BD memory based on relevant information.
  • for the specific update method, reference may be made to the prior art, which is not limited in this application.
  • the network port controller can obtain information such as the virtual address and data length of the data in the MBUF memory based on the pointer in the BD memory.
  • the network port controller can obtain the data from the MBUF memory through the SMMU module. The specific details are similar to the data receiving process and will not be repeated here.
  • the network port controller may process the data accordingly, for example, perform Ethernet encapsulation on the data, etc., so as to obtain the corresponding message.
  • the network port controller places the message in queue 1 and sends it.
  • after the message in queue 1 has been sent, the network port controller may report the queue interrupt number of queue 1 to the operating system kernel. For details, refer to the description of S402-S406.
  • after being woken up, UIO_U_DRV (specifically, the data plane thread) can determine that this interrupt event is a send interrupt and, further, that the data in the MBUF memory has been sent.
  • UIO_U_DRV can then release the cache area storing the data and re-enable the hardware interrupt, that is, execute S408. A sketch of this sending-direction hand-off and completion handling follows.
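  • For the sending direction, the BD hand-off and the completion handling can be sketched as below; the ring layout and the helper names (mbuf_virt_addr, nic_ring_tx_doorbell, mbuf_free_by_addr) are invented for illustration, not taken from the patent. The application is assumed to have produced its data directly in a buffer obtained from the MBUF pool, so publishing one descriptor is enough and no copy is needed; uio_tx_complete runs after the data plane thread is woken by the send interrupt.

      #include <stdint.h>

      #define TX_BD_READY 0x1u

      /* Hypothetical TX descriptor ring: software fills entries and advances
       * 'wr'; the network port controller consumes them, performs the Ethernet
       * encapsulation, and raises a send interrupt when transmission is done. */
      struct tx_bd   { uint64_t addr; uint32_t len; uint32_t flags; };
      struct tx_ring { struct tx_bd *bd; uint32_t size; uint32_t wr; uint32_t hw_done; };

      /* Assumed MBUF/NIC hooks, invented for illustration. */
      static uint64_t mbuf_virt_addr(const uint8_t *p) { return (uint64_t)(uintptr_t)p; }
      static void     mbuf_free_by_addr(uint64_t addr) { (void)addr; }
      static void     nic_ring_tx_doorbell(void)       { /* poke a doorbell register */ }

      /* Sending side: the APP has already written its data into a buffer that
       * lives in the MBUF memory, so publishing one BD is enough; no copy. */
      static void uio_send(struct tx_ring *r, uint8_t *mbuf_data, uint32_t len)
      {
          struct tx_bd *bd = &r->bd[r->wr % r->size];

          bd->addr  = mbuf_virt_addr(mbuf_data);  /* the NIC resolves it via the SMMU */
          bd->len   = len;
          bd->flags = TX_BD_READY;
          r->wr++;

          nic_ring_tx_doorbell();
      }

      /* Woken by the send interrupt: reclaim the buffers the NIC has finished. */
      static void uio_tx_complete(struct tx_ring *r, uint32_t hw_completed)
      {
          while (r->hw_done != hw_completed) {
              struct tx_bd *bd = &r->bd[r->hw_done % r->size];

              mbuf_free_by_addr(bd->addr);
              bd->flags = 0;
              r->hw_done++;
          }
      }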
  • FIG. 5 is a schematic structural diagram of a communication device provided by an embodiment of the present application.
  • the communication device 500 may include: a processor 501, a transceiver 505, and optionally a memory 502.
  • the transceiver 505 may be called a transceiver unit, a transceiver, or a transceiver circuit, etc., and is used to implement a transceiver function.
  • the transceiver 505 may include a receiver and a transmitter, and the receiver may be called a receiver or a receiving circuit for realizing a receiving function; the transmitter may be called a transmitter or a sending circuit for realizing a sending function.
  • Computer program or software code or instructions 504 may be stored in memory 502, which may also be referred to as firmware.
  • the processor 501 can control the MAC layer and the PHY layer by running the computer program or software code or instructions 503, or by calling the computer program or software code or instructions 504 stored in the memory 502, so as to implement the communication methods provided by the embodiments of the present application.
  • the processor 501 may be a central processing unit (CPU), and the memory 502 may be, for example, a read-only memory (ROM) or a random access memory (RAM).
  • the processor 501 and transceiver 505 described in this application can be implemented in an integrated circuit (IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed-signal IC, an application-specific integrated circuit (ASIC), a printed circuit board (PCB), electronic equipment, and the like.
  • the above-mentioned communication device 500 may further include an antenna 506, and each module included in the communication device 500 is only an example for illustration, and this application is not limited thereto.
  • the communication device described in the above embodiments may be an automatic driving system, but the scope of the communication device described in this application is not limited thereto, and the structure of the communication device may not be limited by FIG. 5 .
  • a communication device may be a stand-alone device or may be part of a larger device.
  • the implementation form of the communication device may be:
  • (1) an independent integrated circuit (IC), a chip, or a chip system or subsystem; (2) a set of one or more ICs, where, optionally, the IC set may also include a storage component for storing data and instructions; (3) a module that can be embedded in other devices; (4) a vehicle-mounted device; or (5) others.
  • the chip shown in FIG. 6 includes a processor 601 and an interface 602 .
  • the number of processors 601 may be one or more, and the number of interfaces 602 may be more than one.
  • the chip or chip system may include a memory 603 .
  • the embodiment of the present application also provides a computer-readable storage medium storing a computer program, where the computer program includes at least one piece of code, and the at least one piece of code can be executed by a computer to control the computer to implement the foregoing method embodiments.
  • the embodiments of the present application further provide a computer program, which is used to implement the foregoing method embodiments when the computer program is executed by a terminal device.
  • the program may be stored in whole or in part on a storage medium packaged with the processor, or stored in part or in whole in a memory not packaged with the processor.
  • the embodiment of the present application also provides a chip, including a network port controller and a processor.
  • the network port controller and the processor can implement the foregoing method embodiments.
  • the steps of the methods or algorithms described in connection with the disclosure of the embodiments of the present application may be implemented in the form of hardware, or may be implemented in the form of a processor executing software instructions.
  • the software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the functions described in the embodiments of the present application may be implemented by hardware, software, firmware or any combination thereof.
  • the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
  • Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.

Landscapes

  • Computer And Data Communications (AREA)

Abstract

Embodiments of this application provide a communication method and system supporting multiple protocol stacks. In the method, a network port controller hands over received messages to the corresponding protocol stack for processing according to the message type: management plane messages can be handed over to the Ethernet protocol stack, and data plane messages can be handed over to the data plane protocol stack. By supporting both the Ethernet protocol stack and the data plane protocol stack on the same network port, compatibility is guaranteed while the latency requirements of data plane messages are met, realizing high-performance transmission of data plane messages.

Description

支持多协议栈的通信方法及系统 技术领域
本申请实施例涉及通信领域,尤其涉及一种支持多协议栈的通信方法及系统。
背景技术
在自动驾驶领域中,自动驾驶系统通常包括但不限于多个传感器、用于通信互联总线以及核心处理器等硬件,以及,运行在核心处理器上的自动驾驶软件系统。在自动驾驶系统运行过程中,各类传感器数据需经过对应的通信总线输入至自动驾驶软件系统。示例性的,基于对通信性能的需求,可选地将数据分为两类:数据面通信和管理面通信。
数据面通信:SoC(System on Chip,系统级芯片)外部的各种传感器所生成的数据,如Lidar(激光雷达)和Radar(雷达,也可以成为毫米波雷达)。该类数据可直接用于自动驾驶软件系统中的各种算法。由于这类数据的数据量较大,通常带宽要求较高(例如,需要2Gbps以上带宽)。
管理面通信:SoC上还需要运行各类设备配置,设备状态监控,图像压缩传输等管理面业务,这类业务对通信性能要求相对较低。但是,该类通信需要能够兼容POSIX(Portable Operating System Interface,标准可移植操作系统接口)的用户编程接口和网络设备管理功能。
对于数据面的通信数据,由于自动驾驶系统对实时性要求非常高。因此,如果外部传感器数据不能在确定的时间内传输到自动驾驶算法,将直接威胁自动驾驶的安全性和可靠性。而如果采用以太网协议栈(也可以称为标准协议栈)和以太网网口驱动接收数据面的通信数据,则每个数据包都需要穿越厚重的以太网协议栈,并且还需付出数据拷贝及以太网协议栈复杂的报文处理开销。因此,以太网协议栈无法满足自动驾驶系统对数据面的通信数据的确定性传输时延的硬性指标要求。
发明内容
为了解决上述技术问题,本申请实施例提供一种支持多协议栈的通信方法及系统。在该方法中,网口控制器可根据报文的类型,将不同类型的报文交由对应的协议栈处理。以在保证管理面报文的兼容性的同时,实现对数据面报文的快速处理。
第一方面,本申请实施例提供一种支持多协议栈的通信系统。该通信系统包括网口控制器、以太网协议栈和数据面协议栈。网口控制器,用于确定接收的第一报文为管理面报文;将第一报文输出至以太网协议栈。以太网协议栈,用于响应于接收到的第一报文,将第一报文输出至第一应用。网口控制器,还用于确定接收的第二报文为数据面报文,以及,将第二报文保存至第一内存中。数据面协议栈,用于对第一内存中的第二报文进行解析,得到第二报文的指定字段在第一内存中的位置信息;并且,将指定字段的位置信息输出至第二应用,以使得第二应用根据指定字段的位置信息从第一内存中获取 指定字段。这样,本申请实施例中的通信系统可根据报文的类型,将不同类型的报文交由对应的协议栈进行处理。示例性的,对于管理面报文则交由标准以太网协议栈进行收发处理。对于数据面报文,则交由数据面协议栈进行收发处理。从而能够满足不同报文对传输的需求。例如,对于管理面报文,其对时延不敏感,但是需要满足其兼容性。因此,通过以太网协议栈对管理面报文进行处理,可满足以太网报文对兼容性的需求。再例如,对于数据面报文,其对时延敏感,为使得数据面报文可快速到达应用,通过数据面协议栈对数据包报文进行处理,数据面协议栈对数据面报文的精简处理,可满足数据面报文对时延的需求。以及,本申请实施例中的通信系统可实现同一个网口控制器支持多个协议栈,从而提升网口资源的利用率,通过同一个网口支持双通信栈并发工作的方式,可在保证兼容性的同时,满足系统的高性能需求。
示例性的,以太网协议栈对管理报文的处理需要进行至少两次数据拷贝。
示例性的,以太网协议栈能够兼容POSIX的用户编程接口和网络设备管理功能。
示例性的,数据面协议栈可以为下文实施例中的UIO协议栈。
示例性的,指定字段可选地为数据字段。
示例性的,位置信息可选地包括数据字段的起始地址与长度信息。
示例性的,通信系统可包括多个数据面协议栈,每个数据面协议栈与一个或多个应用对应。
示例性的,第一报文与第二报文可以来自同一个外部设备,例如雷达,也可以来自不同的外部设备,本申请不做限定。
在一种可能的实现方式中,网口控制器包含特征字段与报文类型的对应关系,网口控制器,具体用于:基于特征字段与报文类型的对应关系,确定与第一报文的特征字段对应的报文类型为管理面报文。这样,本申请实施例中的网口控制器可以根据报文的类型,对报文进行分流处理,即将管理面报文交由以太网协议栈进行处理,将数据面报文交由数据面协议栈进行处理。
示例性的,网口控制器可包含硬件流表,硬件流表可记录上述特征字段与报文类型的对应关系。
在一种可能的实现方式中,网口控制器包含特征字段与报文类型的对应关系,网口控制器,具体用于:基于特征字段与报文类型的对应关系,确定与第二报文的特征字段对应的报文类型为数据面报文。这样,本申请实施例中的网口控制器可以根据报文的类型,对报文进行分流处理,即将管理面报文交由以太网协议栈进行处理,将数据面报文交由数据面协议栈进行处理。
在一种可能的实现方式中,网口控制器包括第一硬件队列,第一硬件队列与管理面协议栈对应,网口控制器,具体用于确定第一报文的类型为管理面报文后,将接收到的第一报文置于第一硬件队列。这样,本申请实施例中的网口控制器可通过与不同协议栈对应的队列,将报文置于相应的队列中。从而实现同一个网口的复用,即同一个网口可 支持多个协议栈的报文的收发。例如,网口控制器可将管理面报文置于第一硬件队列,并将第一硬件队列中的报文交由与第一硬件队列绑定的以太网协议栈处理。
在一种可能的实现方式中,网口控制器,具体用于将第一硬件队列中的第一报文输出至以太网协议栈。这样,网口控制器可将第一硬件队列的报文交由与第一硬件队列绑定的以太网协议栈进行处理。
在一种可能的实现方式中,网口控制器包括第二硬件队列,第二硬件队列与数据面协议栈对应,网口控制器,具体用于确定第二报文的类型为数据面报文后,将接收到的第二报文置于第二硬件队列。这样,本申请实施例中的网口控制器可通过与不同协议栈对应的队列,将报文置于相应的队列中。从而实现同一个网口的复用,即同一个网口可支持多个协议栈的报文的收发。例如,网口控制器可将数据面报文置于第二硬件队列,并将第二硬件队列中的报文交由与第二硬件队列绑定的数据面协议栈处理。
示例性的,网口控制器可以包含对应于多个数据面协议栈的多个硬件队列。
在一种可能的实现方式中,网口控制器,具体用于将第二硬件队列中的至少一个报文输出至第一内存,其中,至少一个报文中包括第二报文。这样,网口控制器还可以对报文进行批量收发。即,网口控制器可将队列中的多个报文一起交由数据面协议栈进行处理,从而降低每次报文穿越协议栈时的开销。示例性的,本申请实施例中的网口控制器还可以实现数据零拷贝功能,即将至少一个报文输出至内存,可使得应用直接从内存中读取数据,以避免数据拷贝所需的开销。
在一种可能的实现方式中,网口控制器,还用于将至少一个报文中的每个报文在第一内存中的位置信息写入到第二内存中;以及,向数据面协议栈上报中断。数据面协议栈,具体用于响应于接收到的中断,从第二内存中获取至少一个报文中的每个报文的位置信息。基于获取到的至少一个报文中的每个报文的位置信息,在第一内存中读取至少一个报文,并确定至少一个报文中的每个报文的指定字段的位置信息。这样,本申请实施例中可实现中断直通机制,可一次性将中断直接透传到用户态线程,以避免中断调度不确定性时延的性能损失。
示例性的,数据面协议栈可一次对一个报文进行处理,即逐一将报文对应的指定字段的位置信息发给应用,应用每收到一个指定字段的信息,则从内存中读取一次。
示例性的,数据面协议栈可一次对多个报文进行处理,即将多个报文的每个报文的指定字段的位置信息一起发给应用,应用可在内存中同时读取多个报文的指定字段,从而进一步降低系统开销。
第二方面,本申请实施例提供一种支持多协议栈的通信方法。该方法应用于支持多协议栈的通信系统,通信系统包括网口控制器、以太网协议栈和数据面协议栈;网口控制器确定接收的第一报文为管理面报文;网口控制器将第一报文输出至以太网协议栈; 以太网协议栈响应于接收到的第一报文,将第一报文输出至第一应用;网口控制器确定接收的第二报文为数据面报文;网口控制器将第二报文保存至第一内存中;数据面协议栈对第一内存中的第二报文进行解析,得到第二报文的指定字段在第一内存中的位置信息;数据面协议栈将指定字段的位置信息输出至第二应用,以使得第二应用根据指定字段的位置信息从第一内存中获取指定字段。
在一种可能的实现方式中,网口控制器包含特征字段与报文类型的对应关系,网口控制器确定接收的第一报文为管理面报文,包括:基于特征字段与报文类型的对应关系,确定与第一报文的特征字段对应的报文类型为管理面报文。
在一种可能的实现方式中,网口控制器包含特征字段与报文类型的对应关系,网口控制器确定接收的第二报文为数据面报文,包括:基于特征字段与报文类型的对应关系,确定与第二报文的特征字段对应的报文类型为数据面报文。
在一种可能的实现方式中,网口控制器包括第一硬件队列,第一硬件队列与管理面协议栈对应,网口控制器确定接收的第一报文为管理面报文之后,还包括:将接收到的第一报文置于第一硬件队列。
在一种可能的实现方式中,网口控制器将第一报文输出至以太网协议栈,包括:将第一硬件队列中的第一报文输出至以太网协议栈。
在一种可能的实现方式中,网口控制器包括第二硬件队列,第二硬件队列与数据面协议栈对应,网口控制器确定接收的第二报文为数据面报文之后,包括:将接收到的第二报文置于第二硬件队列。
在一种可能的实现方式中,网口控制器将第二报文保存至第一内存中,包括:将第二硬件队列中的至少一个报文输出至第一内存,其中,至少一个报文中包括第二报文。
在一种可能的实现方式中,网口控制器将第二报文保存至第一内存中后,还包括:网口控制器将至少一个报文中的每个报文在第一内存中的位置信息写入到第二内存中;网口控制器向数据面协议栈上报中断;数据面协议栈对第一内存中的第二报文进行解析,得到第二报文的指定字段在第一内存中的位置信息,包括:数据面协议栈响应于接收到的中断,从第二内存中获取至少一个报文中的每个报文的位置信息;数据面协议栈基于获取到的至少一个报文中的每个报文的位置信息,在第一内存中读取至少一个报文,并确定至少一个报文中的每个报文的指定字段的位置信息。
第二方面以及第二方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第二方面以及第二方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第三方面,本申请实施例提供一种芯片。芯片包括至少一个处理器和网口控制器。网口控制器与处理器可实现第一方面以及第一方面的任意一种实现方式中的方法。
第三方面以及第三方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第三方面以及第三方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第四方面,本申请实施例提供一种计算机可读存储介质。计算机可读存储介质存储有计算机程序,当计算机程序运行在计算机或处理器上时,使得计算机或处理器执行第一方面或第一方面的任一种可能的实现方式中的方法。
第四方面以及第四方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第四方面以及第四方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第五方面,本申请实施例提供一种计算机程序产品。计算机程序产品包含软件程序,当软件程序被计算机或处理器执行时,使得第一方面或第一方面的任一种可能的实现方式中的方法被执行。
第五方面以及第五方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第五方面以及第五方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种主机结构示意图;
图2为示例性示出的数据面软件栈的结构示意图;
图3为示例性示出的初始化流程示意图;
图4a为示例性示出的网口控制器对接收到的报文的处理示意图;
图4b为示例性示出的网口控制器对接收到的报文的处理示意图;
图4c为示例性示出的网口控制器对接收到的报文的处理示意图;
图4d为示例性示出的接收方向的各模块交互流程示意图;
图4e为示例性示出的数据面软件栈对接收到的报文的处理示意图;
图4f为示例性示出的数据面软件栈对接收到的报文的处理示意图;
图4g为示例性示出的数据面软件栈对接收到的报文的处理示意图;
图4h为示例性示出的应用对接收到的报文的处理示意图;
图5为本申请实施例提供的一种装置的结构示意图;
图6为本申请实施例提供的一种芯片的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整 地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
本申请实施例的说明书和权利要求书中的术语“第一”和“第二”等是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一目标对象和第二目标对象等是用于区别不同的目标对象,而不是用于描述目标对象的特定顺序。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个或两个以上。例如,多个处理单元是指两个或两个以上的处理单元;多个系统是指两个或两个以上的系统。
本申请实施例中的通信方法可应用于自动驾驶系统中。示例性的,自动驾驶系统中可包括主机和外部设备等。本申请实施例中的通信方法还可以应用于其他对兼容性和数据处理的时效性有需求的应用场景中,本申请不做限定。
图1为本申请实施例提供的一种主机结构示意图。请参照图1,示例性的,主机包括但不限于:应用层、内核层以及运行在内核上的管理面软件栈、运行在内核上的至少一个数据面软件栈(例如数据面软件栈1~n)、物理网口等。
示例性的,应用层可选地包括一个或多个应用程序。例如:任务调度、主备通讯、MDC(Mobile Data Center,移动数据中心)感知算法、鉴权模块(也可以称为鉴权应用程序)以及一个或多个算法应用,在本申请实施例中也可以称为数据面应用,例如图1中所示的App0~Appn,算法应用可以包括但不限于:融合算法应用、感知算法应用、规控算法等。
示例性的,本申请实施例中,软件栈可分为管理面软件栈与数据面软件栈(例如图1中所示的数据面协议栈1~n)。示例性的,管理面软件栈与数据面软件栈互相隔离、互不影响,可以并发运行。
示例性的,管理面软件栈可用于处理管理面的通信数据。例如,物理网口接收到外部设备(例如雷达)输入的管理面数据(也可以称为管理面报文、管理面数据包或管理面信息,本申请不做限定)。物理网口可将管理面数据输出至管理面协议栈。管理面协议栈对管理面数据进行相应处理,并将处理后的数据输出至应用层。
示例性的,管理面软件栈可选地运行在内核层的Linux内核(也可以称为操作系统内核)上。管理面软件栈包括但不限于:以太网协议栈、以太网网口驱动以及以太网驱动框架。
示例性的,以太网协议栈、以太网网口驱动以及以太网驱动框架属于内核态。示例 性的,管理面软件栈能够兼容POSIX标准的用户编程接口。
示例性的,管理面软件栈对管理面数据的处理流程可参照已有协议标准中的描述。本申请不再赘述。
仍参照图1,示例性的,数据面软件栈可包括一个或多个数据面软件栈。示例性的,每个数据面软件栈可与应用层中的一个或多个应用绑定。在本申请实施例中,以每个数据面软件栈与应用层的一个应用绑定为例。
示例性的,数据面软件栈中的UIO(User Input Output,用户态输入输出)协议栈可用于对保存在内存中的数据面报文进行解析,以解析出数据面报文的载荷(payload)字段(也可以称为有效数据字段)在内存中的地址以及长度。并将payload对应的地址以及长度发送给应用层。以使得应用层可以直接从内存中读取数据面报文的paylaod字段,而忽略数据面报文的其它部分(例如报头)。也就是说,在本申请实施例中,UIO协议栈可为数据面报文提供简易的处理方式,其解封装过程仅是将数据面报文的报头去除,以为应用层提供不含报头的payload字段。
示例性的,UIO网口驱动可包含运行于内核层的Linux内核上的模块(例如下文中的UIO_K_DRV)以及运行在内核层的AOS(Automotive Operate System,自动驾驶操作系统)内核上的模块(例如下文中的UIO_U_DRV)。UIO网口驱动可用于将底层硬件(例如网口控制器)抽象化,以使得应用可访问(或调用)底层硬件。示例性的,UIO网口驱动还用于对报文进行收发处理,即实现对来自外部设备的报文的接收处理以及对上层应用发送给外部设备的报文进行发送处理。
示例性的,UIO驱动框架,可用于提供同时兼容AOS内核和Linux内核的底层API(Application Programming Interface,应用程序接口)等功能。
需要说明的是,图1示出的应用层、内核层以及软件栈所包含的部件,并不构成对设备的具体限定。在本申请另一些实施例中,设备可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。
图2为示例性示出的软件栈的结构示意图。请参照图2,示例性的,在本申请实施例中,物理网口可包括多个硬件队列,也可以称为报文队列等。示例性的,多个队列可以分为管理面队列和数据面队列。其中,管理面队列用于缓存管理面数据。数据面队列用于缓存数据面数据。示例性的,数据面队列可选地包括至少一个数据面子队列。示例性的,每个数据面子队列可绑定对应的数据面软件栈。
举例说明,如图2所示,物理网包括队列0~队列n。示例性的,本申请实施例中,单一队列与一个软件栈绑定。例如,队列0与管理面软件栈对应,队列0可选地为管理面队列。即,队列0中的报文将交由管理面软件栈进行收发处理。示例性的,队列1与数据面软件栈1对应,队列n与数据面软件栈n对应,队列1~队列n可选地为数据面队列。相应的,队列1中的报文将交由数据面软件栈1进行收发处理。队列n中的报文将交由数据面软件栈n进行收发处理。
示例性的,本申请实施例中,管理面软件栈可与多个APP(application,应用程序)绑定。图2中仅示出管理面软件栈与APP0对应,实际上,管理面软件栈可以与APP0及其他多个APP对应,以将队列0中的报文输出至对应的APP,并将多个APP的报文发 送给其他设备。
示例性的,本申请实施例中,单一数据面软件栈与一个APP对应。例如,请参照图2,数据面软件栈1与APP1对应,数据面软件栈n与APPn对应。即,数据面软件栈1可对APP1的报文进行收发处理,数据面软件栈n可对APPn的报文进行收发处理。
仍参照图2,示例性的,以数据面软件栈1为例。数据面软件栈1包括但不限于:UIO协议栈、UIO网口驱动以及UIO驱动框架。
示例性的,在本申请实施例中,UIO网口驱动可分为两部分,包括用户态部分和内核态部分。示例性的,为描述方便,下文中将用户态部分简称为:UIO_U_DRV,将内核态部分简称为UIO_K_DRV。
示例性的,UIO网口驱动运行在内核层的AOS内核上(为更清楚的表示物理网口与管理面软件栈和数据面软件栈之间的交互,图2以及下面的附图中不再单独示出内核层)。示例性的,UIO_K_DRV可选地包括两个组件:一个组件为运行在内核层的Linux内核上的UIO_K_DRV,另外一个组件为运行在AOS内核中的UIO_AK_DRV。
结合图2,图3为示例性示出的初始化流程示意图。示例性的,图3所示的初始化流程也可以理解为准备流程。请参照图3,具体包括:
S301,UIO_K_DRV创建UIO设备,并创建共享内存。
示例性的,本申请实施例中,UIO网口驱动还可以包括NIC_DRV,该模块为运行在Linux内核内部的普通网口驱动。示例性的,NIC_DRV保存有初始化函数,NIC_DRV运行初始化函数。例如,初始化可以包括内存分配、数据结构初始化等。并且,NIC_DRV运行初始化函数,使得NIC_DRV调用UIO_K_DRV进行初始化。
示例性的,UIO_K_DRV响应于NIC_DRV的调用,运行UIO_K_DRV的初始化函数,使得UIO_K_DRV执行创建UIO设备,并且创建共享内存。
示例性的,UIO设备可选地为硬件设备。例如,可以是网口控制器。可以理解为,在本申请实施例中,UIO_K_DRV将底层硬件(例如网口控制器)虚拟成UIO设备,使底层硬件暴露给用户态的UIO网口驱动。用户态的UIO网口驱动即可对底层硬件进行操作。例如通过相关指令,打开UIO设备等。
示例性的,UIO_K_DRV可选地向MBUF(Memroy buffer,共享内存)模块申请MBUF内存。
示例性的,MBUF用于为应用程序和UIO通信栈提供共享内存池的内存分配、回收等服务,以及,用于为UIO网口驱动提供物理地址连续的内存块。以及,还用于为UIO驱动提供将虚拟地址转换为物理地址的API接口。MBUF的具体作用可参照已有技术实施例,此处不再赘述。
示例性的,MBUF内存用于存储数据。在后续的过程中,网口控制器可将数据写入到MBUF内存中,应用程序可从MBUF内存中直接读取数据。以及,应用程序的数据可以写入到MBUF内存中,网口控制器可直接从MBUF内存中读取数据并发送,以实现数据零拷贝,无需CPU参与,以降低CPU开销。
示例性的,UIO_K_DRV还可向内核申请并创建共享内存。可选地,共享内存包括但不限于:用户态与内核态之间的通讯内存、硬件寄存器内存、BD(buffer descriptor, 缓存描述符)内存等。
示例性的,UIO_K_DRV可调用操作系统内核提供的UIO设备注册函数。使得UIO设备注册到操作系统内核。可以理解为,UIO_K_DRV将创建的UIO设备的标识信息(例如UIO设备的设备名称)返回给用户态的UIO网口驱动。这样,用户态的UIO网口驱动即可根据UIO设备的标识信息,例如UIO设备的设备名称,对UIO设备执行相关操作,例如打开该UIO设备。
S302,应用程序打开UIO设备。
示例性的,用户态的UIO网口驱动,即UIO_U_DRV可以向应用程序提供API接口函数。应用程序可通过API接口函数,对UIO设备执行相应操作。例如,应用程序可通过UIO_U_DRV提供的API接口函数,输入打开指令,打开指令中可选地包括对应的UIO设备名称,用于指示打开指定的UIO设备名称对应的UIO设备。
S303,UIO_U_DRV对应用程序进行鉴权。
示例性的,UIO_U_DRV接收到应用程序下发的操作指令,例如打开指令。为保证设备安全,UIO_U_DRV可选地对应用程序进行鉴权,以检测应用程序是否具有对该UIO设备操作的权限。
示例性的,自动驾驶操作系统中可包括鉴权模块,也可以称为权限控制模块。可选地,鉴权模块可位于图1中所示的应用层中。示例性的,UIO_U_DRV可调用鉴权模块对应用程序进行鉴权,以检查应用程序是否合法。可选地,若应用程序不合法,则UIO_U_DRV基于鉴权模块返回的结果,拒绝应用程序对UIO设备的访问。可选地,若应用程序合法,则UIO_U_DRV基于鉴权模块返回的结果,允许应用程序访问UIO设备。
S304,UIO_U_DRV将进程与SMMU模块绑定。
示例性的,SMMU(system memory manage unit,系统缓存管理单元)为SoC内部的一个专门用于虚拟地址与物理地址转换的硬件模块。可以理解为,该模块可用于为底层硬件提供用户态的虚拟地址与硬件的物理地址之间进行转换功能。例如,当网口控制器需要从MBUF内存读取数据时,其获取到的是数据在MBUF内存中对应的虚拟地址。网口控制器可调用SMMU模块,使得SMMU模块对用户态的虚拟地址进行转换,得到对应的物理地址。SMMU模块可将物理地址发送到总线上,以使得存储设备基于物理地址提取对应的数据,并传输给网口控制器。
下面对进程与SMMU模块的绑定过程进行简单说明。示例性的,UIO_U_DRV响应于接收到的应用程序打开UIO设备的操作指令,UIO_U_DRV调用SMMU模块提供的接口,将应用程序的进程ID输出至SMMU模块。
示例性的,SMMU模块向UIO_U_DRV返回该进程与SMMU模块绑定之后的SSID(sub stream identifier,数据流标识)。示例性的,SMMU模块可以理解为查内存地址的页表实现地址翻译功能,SSID可以用于标识虚拟地址所属的页表。示例性的,每个应用程序在操作系统中有自己的一张地址页表,SMMU模块可基于SSID找到应用进程对应的页表,并页表中检索虚拟地址和物理地址的对应关系。
示例性的,UIO_U_DRV接收SMMU模块返回的SSID。UIO_U_DRV将SSID分配给网口控制器。使得网口控制器可基于SSID调用SMMU模块进行地址翻译。需要说明 的是,本申请实施例中仅简要说明SMMU模块的作用,具体细节可参照已有技术实施例中的SMMU模块的实现过程,本申请不再赘述。
S305,UIO_U_DRV将内核态的共享内存映射到用户态空间。
示例性的,如上文所述,内核态的UIO_K_DRV在S301中创建有MBUF内存以及多个共享内存。可以理解为,每个内存在内核态有对应的虚拟地址,内核态的UIO网口驱动可基于各内存的虚拟地址,对内存进行读写。
在本申请实施例中,UIO_U_DRV可将内核态的内存映射到用户态,以使得用户态的UIO驱动也可以对各内存进行读写。具体的,UIO_U_DRV可调用操作系统提供的mmap函数,将UIO_K_DRV创建的多个内存映射到用户态。可以理解为,各内存在用户态有对应的虚拟地址,用户态的UIO驱动可基于各内存的虚拟地址访问这些内存。
示例性的,用户态与内核态之间的通讯内存可用于存储收发包队列数量、硬件中断号、线程数量、BD(buffer descriptor,缓存描述符)的基地址等。UIO_U_DRV即可基于映射关系,从用户态与内核态之间的通讯内存中获取到相应信息,例如硬件中断号等。其中,每个队列对应一个硬件中断号,例如,如图2所示,队列1对应硬件中断号1、队列n对应硬件中断号n。硬件中断号是系统生成的,下文中不再重复说明。
S306,UIO_U_DRV向UIO_AK_DRV进行中断注册。
示例性的,UIO_U_DRV从用户态与内核态之间的通讯内存获取到的收发中断号等信息后,可调用UIO_AK_DRV提供的中断注册函数,进行中断注册。
示例性的,UIO_U_DRV可将获取到的硬件中断号等信息输出至UIO_AK_DRV。示例性的,UIO_AK_DRV响应于接收到的硬件中断号等信息,向操作系统内核申请中断并注册中断处理函数。示例性的,操作系统可为该硬件中断号分配对应的软件中断号。UIO_AK_DRV可基于该中断号注册中断处理。在后续的报文收发过程中,当报文收发完成后,网口控制器可向操作系统上报队列的硬件中断号,操作系统可基于对应的硬件中断号,确定对应的软件中断号,以及与软件中断号对应的中断事件,以实现UIO设备(例如网口控制器)能够感知中断。需要说明的是,分配软件中断号的目的用于保证系统安全,以使得硬件中断号仅用于硬件传输,而内核间的处理可基于软件中断号进行处理。
S307,UIO_U_DRV启动线程和UIO设备。
示例性的,UIO_U_DRV启动一个数据面线程。该数据面线程可用于等待网口硬件的收发包事件的到达,以便在该线程函数中执行收发包相关的处理流程。
示例性的,UIO_U_DRV启动一个管理面线程。该管理面线程用于调用poll函数,以阻塞等待操作系统的各种管理面事件消息。例如网口Link down、Link up、网口故障等。示例性的,管理面线程可向UIO_AK_DRV中的事件调度模块申请事件ID以及中断号。管理面线程可将事件ID和中断号对应输入UIO_U_DRV。相应的,UIO_U_DRV可将时间ID和中断号对应输入至操作系统内核。
示例性的,数据面线程可将事件ID输出至事件管理调度器。示例性的,事件管理调度器可保存事件ID和数据面线程的线程ID之间的对应关系。
示例性的,UIO_U_DRV使能UIO设备(即网口控制器)。例如,UIO_U_DRV可向网口控制器队列中断使能寄存器写入信息,以启动网口控制器中与UIO_U_DRV所属 协议栈对应的队列(例如图2中的队列1)的收发包硬件中断功能。网口控制器可响应UIO_U_DRV的操作,开始接收和发送报文。
示例性的,图3中的准备流程结束后,UIO设备将数据面线程和管理面线程挂起,进入休眠状态,以等待收发包事件到达。
结合图2和图3,图4a为示例性示出的网口控制器对接收到的报文的处理示意图。请参照图4a,示例性的,外部设备1(例如为雷达)向主机发送报文1。其中,报文1为管理面报文。图4a中所示的虚线为示意性示出的管理面报文的传输路径。示例性的,网口控制器接收到报文1。网口控制器可预先配置有硬件流表。示例性的,硬件流表可记录有地址信息与通信类型(包括数据面通信和管理面通信)的对应关系。例如,网口控制器可基于报文中携带的地址信息,例如五元组等信息,在硬件流表中检索,以获取匹配成功的五元组信息对应的通信类型,以确定报文为数据面报文或是管理面报文。
示例性的,网口控制器确定报文1为管理面报文,则网口控制器可将报文至于管理面协议栈对应的队列中,即图2中的队列0。管理面协议栈可对队列0中的报文进行相应处理,并将数据输出至对应的应用程序,例如APP0。
结合图2和图3,图4b为示例性示出的网口控制器对接收到的报文的处理示意图。请参照图4b,示例性的,外部设备1(例如为雷达)向主机发送报文2。其中,报文2为数据面报文。图4b中所示的虚线为示意性示出的数据面报文的传输路径。
示例性的,网口控制器确定报文2为数据面报文。网口控制器可基于队列、UIO协议栈与应用程序之间的绑定关系,将该报文至于对应的队列,例如,网口控制器接收到外部设备1的报文,在检测到该报文为数据面报文后,网口控制器可基于外部设备1与队列1的对应关系,将来自外部设备1的数据面报文置于队列1中。
仍参照图4b,示例性的,队列1中的数据面报文(例如报文2)将由数据面软件栈1进行收发处理,以使得APP1获取到报文2。
下面结合具体实施例对数据面报文的处理方式进行详细说明。请参照图4c,示例性的,如图3中的步骤所述,在准备阶段,UIO_K_DRV创建MBUF内存以及一个或多个共享内存(例如图4c中的BD内存)。网口控制器接收到报文1的完整报文后,网口控制器可将报文2存储至MBUF内存中。举例说明,网口控制器可基于BD内存在内核态的虚拟地址,从BD内存中读取MBUF内存的虚拟地址。示例性的,网口控制器可将报文2、SSID(概念详见上文)与MBUF内存的虚拟地址输出至SMMU模块。示例性的,如上文所述,SMMU模块可基于SSID检测到SSID标识的页表。SMMU可基于虚拟地址在页表中检索对应的硬件地址。示例性的,SMMU模块可将数据以及硬件地址输出至总线,并通过总线传输至存储设备,例如DDR(Double Data Rate,双倍速率同步动态随机存储器)。DDR可将报文2写入硬件地址所指示的MBUF内存中。
示例性的,网口控制器完成报文2的写入后,网口控制器更新BD内存中所记录的数据的相关信息。例如,网口控制器将移动BD内存中的写入指针,以指示当前网口控制器写入到MBUF内存的报文的数量。例如,一个BD用于指示内存中存入了一个报文。相应的,在后续过程中,UIO_U_DRV可基于BD内存的指针移动,确定网口控制器写入到MBUF内存中的报文的数量。并且,还可以基于写入指针,获取到报文对应的相关信 息。例如,相关信息包括但不限于:报文的起始地址、报文的长度等。
结合图4c,图4d为示例性示出的报文接收处理流程示意图。请参照图4d,具体包括:
S401,网口控制器向操作系统内核上报中断。
示例性的,如图4c所示,网口控制器将至少一个报文写入内存后,可生成硬件中断,以触发操作系统内核执行后续的步骤。例如,如上文所述,在准备阶段,网口控制器获取到每个队列对应的硬件中断号。若网口控制器完成队列1中的报文2的写入操作,网口控制器可将队列1对应的硬件中断号1上报给操作系统内核(例如内核层中的Linux内核)。
可选地,在本申请实施例中,网口控制器可在接收到多个报文,例如,两个或两个以上报文之后,再生成中断,并触发其他模块执行对多个报文的处理流程。以使得UIO协议栈可以一次性对多个报文进行处理,以减少中断次数,并进一步降低每次对报文进行处理所需的开销。
S402,操作系统内核向UIO_K_DRV输出软件中断号。
示例性的,操作系统内核响应于接收到的网口控制器发送的硬件中断号。如上文所述,操作系统内核维护有硬件终端号、软件终端号之间的对应关系。示例性的,操作系统内核可基于接收到的硬件中断号,获取到对应的软件终端号。示例性的,操作系统将软件中断号输出至UIO_K_DRV,以调用UIO_K_DRV中的中断处理函数(概念可参照上文)。
S403,UIO_K_DRV将事件ID输出至事件调度模块。
示例性的,UIO_K_DRV响应于接收到的软件中断号,可基于中断处理函数获取与软件中断号对应的事件ID。
示例性的,UIO_K_DRV将事件ID输出至事件调度控制模块,以指示当前存在对应于该事件ID的中断事件。
S404,UIO_K_DRV关闭队列中断。
示例性的,UIO_K_DRV与其他模块对本次中断进行处理的过程中,UIO_K_DRV可指示操作系统内核关闭该队列的中断。也就是说,该队列中再接收到报文后,不会产生中断,以防止UIO_K_DRV及其他模块对本次终端进行处理的过程中,被后续的中断干扰当前处理流程。可以理解的是,自动驾驶系统对每次中断所对应的报文进行处理过程中,伴随有对应的开销。如果在本次处理流程中,被后续的中断打断,而重复执行接收流程,将增加调度开销。因此,关闭中断可有效降低调度开销。
S405,事件调度模块基于事件ID,确定对应的线程。
示例性的,如上文所述,事件调度模块记录有事件ID与线程之间的对应关系。示例性的,在本申请实施例中,一个事件ID对应一个线程,单一线程可以处理多个队列的报文。
示例性的,事件调度模块响应于接收到的事件ID,确定与该事件ID对应的线程。
S406,事件调度模块唤醒数据面线程。
示例性的,如上文所述,UIO_U_DRV中的数据面线程在准备流程后,处于休眠状 态。示例性的,事件调度模块在确定需要唤醒的数据面线程后,可将该数据面线程唤醒,以使得数据面线程进行收发包处理。
S407,数据面线程进行收发包处理。
示例性的,数据面线程可读取队列的中断状态寄存器,以判断是发送(TX)中断或接收(RX)中断。
举例说明,若数据面线程确定为接收中断(发送中断将在下面的实施例中说明),则可确定网口控制器已成功接收至少一个报文,例如可以是一个报文,也可以是两个或两个以上报文。示例性的,如上文所述,网口控制器将至少一个报文写入到MBUF内存中,并通过移动BD内存中的写入指针,指示接收到的报文的数量、报文在内存中的地址以及报文长度等信息
为更好地说明数据面软件栈对数据面报文的处理方式,下文中仅以数据面软件栈对报文2的处理方式进行说明。在其他实施例中,若MBUF内存中存储有其它报文,则与报文2的方式相同,本申请中不再赘述。
请参照图4e,示例性的,如上文S305所述,UIO_U_DRV将内核态的MBUF内存和共享内存映射到用户态,也就是说,UIO_U_DRV中的数据面线程可基于MBUF内存与共享内存在用户态中的虚拟地址,对读取MBFU内存与共享内存中的信息。
仍参照图4e,示例性的,数据面线程可通过移动BD内存中的读取指针,读取读取指针所指示的相关信息(包括报文2在MBUF内存中的地址和长度等信息),直至读取指针与写入指针重合。需要说明的是,对于BD内存中的指针读取方式,可参照已有技术实施例中的相关内容,本申请不再重复说明。
请继续参照图4e,示例性的,数据面线程将获取到的报文2在MBUF内存中的地址和长度等信息输出至UIO协议栈。
请参照图4f,示例性的,UIO协议栈可基于获取到的报文2在MBUF内存中的地址和长度,在MBUF内存中读取报文2。UIO协议栈可检测报文2的payload字段的地址和长度。示例性的,报文可选地包括报头和payload字段,还可以包括CRC等字段,本申请不做限定。数据面线程可读取报文2的payload字段在MBUF内存中的地址和长度。
请参照图4g,示例性的,UIO协议栈可将获取到的报文2的payload字段的地址和长度发送给APP1。
请参照图4h,示例性的,APP1可基于UIO协议栈输入的payload字段的地址和长度,从MBUF内存中读取payload字段。也就是说,在本申请实施例中,UIO协议栈可将报文2的报头剥离,以使得上层应用直接获取到报文中的数据部分。
需要说明的是,本申请实施例中仅以payload字段为例进行说明。在其他实施例中,UIO协议栈可与APP预先进行协商,以确定APP对报文中的哪些字段感兴趣,UIO协议栈可将APP感兴趣的字段的地址和长度发送给APP。
示例性的,在本申请实施例中,如上文所述,网口控制器可在接收到多个报文后,上报一次硬件中断,也就是说,MBUF内存中此时已存储有多个报文,相应的,BD内存中记录有多个报文中的每个报文的相关信息(例如报文在MBUF内存中的地址、长度等)。
一个示例中,数据面线程与UIO协议栈可按照上文中对报文2的处理方式,对MBUF 内存中的多个报文进行逐一处理。例如,数据面线程获取一个报文的地址和长度等信息后,将该报文的地址和长度等信息发送给UIO协议栈。UIO协议栈对报文进行解析,将paylaod字段的地址和长度发送给APP,并依次对MBUF内存中的其它报文进行处理。
另一个示例中,数据面线程与UIO协议栈可同时对多个报文进行处理。例如,数据面线程可从BD内存中获取多个报文中的每个报文的地址和长度等信息。数据面线程将多个报文对应的地址和长度发送给UIO协议栈。UIO协议栈读取多个报文中的每个报文的payload字段的地址和长度。并将每个报文的payload字段的地址和长度一起发送给APP。
在本申请实施例中,应用可基于数据面软件栈输入的报文的payload字段在MBUF内存中的地址和长度,从MBFU内存中直接读取payload字段。从而实现报文的零拷贝传输,而无需如管理面协议栈,即以太网软件协议栈的处理流程中需要对报文进行多次拷贝。
可选地,应用程序读取报文后,可指示UIO网口驱动将存储该报文的MBUF内存中的内存区域释放,以回收该内存区域,从而节约内存资源。
在本申请实施例中,网口控制器上报的中断的过程中,UIO网口驱动收在处理中断时,以最高优先级响应该中断,并且在该中断函数中通知事件调度器,以通知用户态UIO网口驱动中的对应线程对该中断事件进行及时处理。从而通过该种中断直通方式,满足车载自动驾驶领域对时延确定性数据通信的要求。
S408,数据面线程使能硬件中断。
示例性的,数据面线程对本次中断处理完成后,可指示操作系统内核重新使能中断。操作系统内核响应于数据面线程的指示,可允许网口控制器继续上报中断,并重复执行上述流程。
示例性的,图4d中所示的为接收方向的各模块交互过程。对于发送方向,即应用程序需要将数据发送给外部设备。具体流程可以为:结合图2,以APP1为例,APP1将数据写入到MBUF内存中。并且,APP1将数据在MBUF内存中的地址以及数据长度等相关信息,输出至对应的软件栈,即数据面软件栈1。UIO协议栈将获取到的相关信息透传至UIO网口驱动。示例性的,UIO网口驱动可基于相关信息,更新BD内存中的指针。具体更新方式可参照已有技术,本申请不做限定。
示例性的,网口控制器可基于BD内存中的指针,获取到数据在MBUF内存中的虚拟地址、数据长度等信息。示例性的,网口控制器可通过SMMU模块从MBUF内存中获取到该数据。具体细节与数据接收过程类似,此处不再赘述。
示例性的,网口控制器获取到数据后,可将数据进行相应处理,例如对数据进行以太网封装等,以获取到对应的报文。网口控制器将报文置于队列1中,并进行发送。
示例性的,队列1中的报文发送后,网口控制器可向操作系统内核上报队列1的队列中断号,具体细节可参照S402~S406的描述。示例性的,UIO_U_DRV(具体为数据面线程)唤醒后,可判断本次中断事件为发送中断,并可进一步确定MBUF内存中的数据已发送完毕。示例性的,UIO_U_DRV可释放存储该数据的缓存区域。并重新使能硬件中断,即执行S408。
下面介绍本申请实施例提供的一种装置。如图5所示:
图5为本申请实施例提供的一种通信装置的结构示意图。如图5所示,该通信装置500可包括:处理器501、收发器505,可选的还包括存储器502。
所述收发器505可以称为收发单元、收发机、或收发电路等,用于实现收发功能。收发器505可以包括接收器和发送器,接收器可以称为接收机或接收电路等,用于实现接收功能;发送器可以称为发送机或发送电路等,用于实现发送功能。
存储器502中可存储计算机程序或软件代码或指令504,该计算机程序或软件代码或指令504还可称为固件。处理器501可通过运行其中的计算机程序或软件代码或指令503,或通过调用存储器502中存储的计算机程序或软件代码或指令504,对MAC层和PHY层进行控制,以实现本申请各实施例提供的通信方法。其中,处理器501可以为中央处理器(central processing unit,CPU),存储器502例如可以为只读存储器(read-only memory,ROM),或为随机存取存储器(random access memory,RAM)。
本申请中描述的处理器501和收发器505可实现在集成电路(integrated circuit,IC)、模拟IC、射频集成电路RFIC、混合信号IC、专用集成电路(application specific integrated circuit,ASIC)、印刷电路板(printed circuit board,PCB)、电子设备等上。
上述通信装置500还可以包括天线506,该通信装置500所包括的各模块仅为示例说明,本申请不对此进行限制。
如前所述,以上实施例描述中的通信装置可以是自动驾驶系统,但本申请中描述的通信装置的范围并不限于此,而且通信装置的结构可以不受图5的限制。通信装置可以是独立的设备或者可以是较大设备的一部分。例如所述通信装置的实现形式可以是:
(1)独立的集成电路IC,或芯片,或,芯片系统或子系统;(2)具有一个或多个IC的集合,可选的,该IC集合也可以包括用于存储数据,指令的存储部件;(3)可嵌入在其他设备内的模块;(4)车载设备等等;(5)其他等等。
对于通信装置的实现形式是芯片或芯片系统的情况,可参见图6所示的芯片的结构示意图。图6所示的芯片包括处理器601和接口602。其中,处理器601的数量可以是一个或多个,接口602的数量可以是多个。可选的,该芯片或芯片系统可以包括存储器603。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
基于相同的技术构思,本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质存储有计算机程序,该计算机程序包含至少一段代码,该至少一段代码可由计算机执行,以控制计算机用以实现上述方法实施例。
基于相同的技术构思,本申请实施例还提供一种计算机程序,当该计算机程序被终端设备执行时,用以实现上述方法实施例。
所述程序可以全部或者部分存储在与处理器封装在一起的存储介质上,也可以部分或者全部存储在不与处理器封装在一起的存储器上。
基于相同的技术构思,本申请实施例还提供一种芯片,包括网口控制器与处理器。网口控制器与处理器可实现上述方法实施例。
结合本申请实施例公开内容所描述的方法或者算法的步骤可以硬件的方式来实现,也可以是由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(Random Access Memory,RAM)、闪存、只读存储器(Read Only Memory,ROM)、可擦除可编程只读存储器(Erasable Programmable ROM,EPROM)、电可擦可编程只读存储器(Electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、只读光盘(CD-ROM)或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读介质中或者作为计算机可读介质上的一个或多个指令或代码进行传输。计算机可读介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
上面结合附图对本申请的实施例进行了描述,但是本申请并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本申请的启示下,在不脱离本申请宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本申请的保护之内。

Claims (16)

  1. 一种支持多协议栈的通信系统,其特征在于,包括网口控制器、以太网协议栈和数据面协议栈;
    所述网口控制器,用于:
    确定接收的第一报文为管理面报文;
    将所述第一报文输出至所述以太网协议栈;
    所述以太网协议栈,用于:
    响应于接收到的所述第一报文,将所述第一报文输出至第一应用;
    所述网口控制器,还用于:
    确定接收的第二报文为数据面报文;
    将所述第二报文保存至第一内存中;
    所述数据面协议栈,用于:
    对所述第一内存中的所述第二报文进行解析,得到所述第二报文的指定字段在所述第一内存中的位置信息;
    将所述指定字段的位置信息输出至第二应用,以使得所述第二应用根据所述指定字段的位置信息从所述第一内存中获取所述指定字段。
  2. 根据权利要求1所述的系统,其特征在于,所述网口控制器包含特征字段与报文类型的对应关系,所述网口控制器,具体用于:
    基于特征字段与报文类型的对应关系,确定与所述第一报文的特征字段对应的报文类型为管理面报文。
  3. 根据权利要求1所述的系统,其特征在于,所述网口控制器包含特征字段与报文类型的对应关系,所述网口控制器,具体用于:
    基于特征字段与报文类型的对应关系,确定与所述第二报文的特征字段对应的报文类型为数据面报文。
  4. 根据权利要求2所述的系统,其特征在于,所述网口控制器包括第一硬件队列,所述第一硬件队列与所述管理面协议栈对应,所述网口控制器,具体用于:
    确定所述第一报文的类型为管理面报文后,将接收到的所述第一报文置于所述第一硬件队列。
  5. 根据权利要求4所述的系统,其特征在于,所述网口控制器,具体用于:
    将所述第一硬件队列中的所述第一报文输出至所述以太网协议栈。
  6. 根据权利要求3所述的系统,其特征在于,所述网口控制器包括第二硬件队列,所述第二硬件队列与所述数据面协议栈对应,所述网口控制器,具体用于:
    确定所述第二报文的类型为数据面报文后,将接收到的所述第二报文置于所述第二硬件队列。
  7. 根据权利要求6所述的系统,其特征在于,所述网口控制器,具体用于:
    将所述第二硬件队列中的至少一个报文输出至所述第一内存,其中,所述至少一个报文中包括所述第二报文。
  8. 根据权利要求7所述的系统,其特征在于,
    所述网口控制器,还用于:
    将所述至少一个报文中的每个报文在所述第一内存中的位置信息写入到第二内存中;
    向所述数据面协议栈上报中断;
    所述数据面协议栈,具体用于:
    响应于接收到的所述中断,从所述第二内存中获取所述至少一个报文中的每个报文的位置信息;
    基于获取到的所述至少一个报文中的每个报文的位置信息,在所述第一内存中读取所述至少一个报文,并确定所述至少一个报文中的每个报文的指定字段的位置信息。
  9. 一种支持多协议栈的通信方法,其特征在于,应用于支持多协议栈的通信系统,所述通信系统包括网口控制器、以太网协议栈和数据面协议栈;
    所述网口控制器确定接收的第一报文为管理面报文;
    所述网口控制器将所述第一报文输出至所述以太网协议栈;
    所述以太网协议栈响应于接收到的所述第一报文,将所述第一报文输出至第一应用;
    所述网口控制器确定接收的第二报文为数据面报文;
    所述网口控制器将所述第二报文保存至第一内存中;
    所述数据面协议栈对所述第一内存中的所述第二报文进行解析,得到所述第二报文的指定字段在所述第一内存中的位置信息;
    所述数据面协议栈将所述指定字段的位置信息输出至第二应用,以使得所述第二应用根据所述指定字段的位置信息从所述第一内存中获取所述指定字段。
  10. 根据权利要求9所述的方法,其特征在于,所述网口控制器包含特征字段与报文类型的对应关系,所述网口控制器确定接收的第一报文为管理面报文,包括:
    基于特征字段与报文类型的对应关系,确定与所述第一报文的特征字段对应的报文类型为管理面报文。
  11. 根据权利要求9所述的方法,其特征在于,所述网口控制器包含特征字段与报文类型的对应关系,所述网口控制器确定接收的第二报文为数据面报文,包括:
    基于特征字段与报文类型的对应关系,确定与所述第二报文的特征字段对应的报文类型为数据面报文。
  12. 根据权利要求10所述的方法,其特征在于,所述网口控制器包括第一硬件队列,所述第一硬件队列与所述管理面协议栈对应,所述网口控制器确定接收的第一报文为管理面报文之后,还包括:
    将接收到的所述第一报文置于所述第一硬件队列。
  13. 根据权利要求12所述的方法,其特征在于,所述网口控制器将所述第一报文输出至所述以太网协议栈,包括:
    将所述第一硬件队列中的所述第一报文输出至所述以太网协议栈。
  14. 根据权利要求11所述的方法,其特征在于,所述网口控制器包括第二硬件队列,所述第二硬件队列与所述数据面协议栈对应,所述网口控制器确定接收的第二报文为数据面报文之后,包括:
    将接收到的所述第二报文置于所述第二硬件队列。
  15. 根据权利要求14所述的方法,其特征在于,所述网口控制器将所述第二报文保存至第一内存中,包括:
    将所述第二硬件队列中的至少一个报文输出至所述第一内存,其中,所述至少一个报文中包括所述第二报文。
  16. 根据权利要求15所述的方法,其特征在于,所述网口控制器将所述第二报文保存至第一内存中后,还包括:
    所述网口控制器将所述至少一个报文中的每个报文在所述第一内存中的位置信息写入到第二内存中;
    所述网口控制器向所述数据面协议栈上报中断;
    所述数据面协议栈对所述第一内存中的所述第二报文进行解析,得到所述第二报文的指定字段在所述第一内存中的位置信息,包括:
    所述数据面协议栈响应于接收到的所述中断,从所述第二内存中获取所述至少一个报文中的每个报文的位置信息;
    所述数据面协议栈基于获取到的所述至少一个报文中的每个报文的位置信息,在所述第一内存中读取所述至少一个报文,并确定所述至少一个报文中的每个报文的指定字段的位置信息。
PCT/CN2021/097148 2021-05-31 2021-05-31 支持多协议栈的通信方法及系统 WO2022251998A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/097148 WO2022251998A1 (zh) 2021-05-31 2021-05-31 支持多协议栈的通信方法及系统
CN202180090912.0A CN116803067A (zh) 2021-05-31 2021-05-31 支持多协议栈的通信方法及系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/097148 WO2022251998A1 (zh) 2021-05-31 2021-05-31 支持多协议栈的通信方法及系统

Publications (1)

Publication Number Publication Date
WO2022251998A1 true WO2022251998A1 (zh) 2022-12-08

Family

ID=84323748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/097148 WO2022251998A1 (zh) 2021-05-31 2021-05-31 支持多协议栈的通信方法及系统

Country Status (2)

Country Link
CN (1) CN116803067A (zh)
WO (1) WO2022251998A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116521603A (zh) * 2023-06-30 2023-08-01 北京大禹智芯科技有限公司 一种基于fpga实现mctp协议的方法
CN117395329A (zh) * 2023-12-13 2024-01-12 井芯微电子技术(天津)有限公司 收发以太二层协议报文的方法、装置及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108200086A (zh) * 2018-01-31 2018-06-22 四川九洲电器集团有限责任公司 一种高速网络数据包过滤装置
WO2019101332A1 (en) * 2017-11-24 2019-05-31 Nokia Solutions And Networks Oy Mapping of identifiers of control plane and user plane
CN110535813A (zh) * 2018-05-25 2019-12-03 网宿科技股份有限公司 内核态协议栈与用户态协议栈并存处理方法和装置
CN110753008A (zh) * 2018-07-24 2020-02-04 普天信息技术有限公司 基于dpaa的网络数据处理装置和方法
CN112422453A (zh) * 2020-12-09 2021-02-26 新华三信息技术有限公司 一种报文处理的方法、装置、介质及设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019101332A1 (en) * 2017-11-24 2019-05-31 Nokia Solutions And Networks Oy Mapping of identifiers of control plane and user plane
CN108200086A (zh) * 2018-01-31 2018-06-22 四川九洲电器集团有限责任公司 一种高速网络数据包过滤装置
CN110535813A (zh) * 2018-05-25 2019-12-03 网宿科技股份有限公司 内核态协议栈与用户态协议栈并存处理方法和装置
CN110753008A (zh) * 2018-07-24 2020-02-04 普天信息技术有限公司 基于dpaa的网络数据处理装置和方法
CN112422453A (zh) * 2020-12-09 2021-02-26 新华三信息技术有限公司 一种报文处理的方法、装置、介质及设备

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116521603A (zh) * 2023-06-30 2023-08-01 北京大禹智芯科技有限公司 一种基于fpga实现mctp协议的方法
CN117395329A (zh) * 2023-12-13 2024-01-12 井芯微电子技术(天津)有限公司 收发以太二层协议报文的方法、装置及存储介质
CN117395329B (zh) * 2023-12-13 2024-02-06 井芯微电子技术(天津)有限公司 收发以太二层协议报文的方法、装置及存储介质

Also Published As

Publication number Publication date
CN116803067A (zh) 2023-09-22

Similar Documents

Publication Publication Date Title
US10924483B2 (en) Packet validation in virtual network interface architecture
US7603429B2 (en) Network adapter with shared database for message context information
KR100555394B1 (ko) Ngio/infiniband 어플리케이션용 리모트 키검증을 위한 방법 및 메커니즘
US8990801B2 (en) Server switch integration in a virtualized system
US8635353B2 (en) Reception according to a data transfer protocol of data directed to any of a plurality of destination entities
US10467154B2 (en) Multi-port multi-sideband-GPIO consolidation technique over a multi-drop serial bus
CN112398817B (zh) 数据发送的方法及设备
WO2022251998A1 (zh) 支持多协议栈的通信方法及系统
CN113688072B (zh) 数据处理方法及设备
CN112639741A (zh) 用于控制联合共享的存储器映射区域的方法和装置
CN112243046B (zh) 通信方法和网卡
US11750418B2 (en) Cross network bridging
CN114201268B (zh) 一种数据处理方法、装置、设备及可读存储介质
US20110107347A1 (en) Generic Transport Layer Mechanism For Firmware Communication
US20040001470A1 (en) Method for controlling wireless network access through wired network access interface and associated computer system
CN108984324B (zh) Fpga硬件抽象层
CN110618962A (zh) Ft-m6678芯片的多核网络并发访问方法、系统及介质
JPWO2003014947A1 (ja) ホスト装置、電子装置及び伝送システムの制御方法
CN112231250B (zh) 存储设备的性能隔离
US10681616B2 (en) Wireless communication device, wireless communication method, computer device, and information processing method
CN112231250A (zh) 存储设备的性能隔离

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202180090912.0

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21943379

Country of ref document: EP

Kind code of ref document: A1