CN115729880A - Data processing method, device, equipment and storage medium

Info

Publication number: CN115729880A
Application number: CN202211504912.3A
Authority: CN (China)
Prior art keywords: space, user, address, task, memory
Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 曹颖, 钱远盼, 李兆耕
Current Assignee: Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The disclosure provides a data processing method, apparatus, device and storage medium, relating to the field of communications technology, and in particular to the field of remote direct memory access. The specific implementation scheme is as follows: a task to be processed is written, through a first user mode process, into a queue resource applied for in kernel space; when a second user mode process monitors a change in the queue resource, the task to be processed is acquired from the queue resource and processed; and a first memory space mapping corresponding to the queue resource is provided among the first user mode space corresponding to the first user mode process, the second user mode space corresponding to the second user mode process, and the kernel space. The disclosed technique improves data processing efficiency and reduces the influence of data processing on the system performance of the device.

Description

Data processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a method, an apparatus, a device, and a storage medium for data processing.
Background
RDMA (Remote Direct Memory Access) transfers data over a network directly into a memory area of a computer, quickly moving data from one system into remote system memory without involving either operating system.
RoCE (RDMA over Converged Ethernet) is a network protocol defined in the IBTA (InfiniBand Trade Association) standard that allows RDMA to be used over Ethernet.
Disclosure of Invention
The disclosure provides a data processing method, apparatus, device and storage medium.
According to an aspect of the present disclosure, there is provided a data processing method including:
writing a task to be processed into a queue resource applied in a kernel space through a first user mode process;
when monitoring that the queue resource changes through a second user mode process, acquiring and processing the task to be processed from the queue resource;
and a first memory space mapping corresponding to the queue resource is provided among a first user mode space corresponding to the first user mode process, a second user mode space corresponding to the second user mode process and the kernel space.
According to another aspect of the present disclosure, there is also provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein:
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the data processing methods provided by the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform any one of the data processing methods provided by the embodiments of the present disclosure.
According to the disclosed technology, data processing efficiency is improved, and the influence of data processing on the system performance of the device is reduced.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1A is a schematic diagram of a memory space distribution provided in the embodiment of the present disclosure;
fig. 1B is a flowchart of a data processing method provided by an embodiment of the present disclosure;
FIG. 2A is a SoftRDMA framework diagram provided by an embodiment of the disclosure;
fig. 2B is a schematic diagram of a memory management according to an embodiment of the disclosure;
fig. 2C is a schematic diagram of mapping a physical machine memory space according to an embodiment of the disclosure;
fig. 2D is a flowchart of a memory space mapping construction method provided in the embodiment of the present disclosure;
fig. 2E is a schematic diagram of mapping a memory space of a virtual machine according to an embodiment of the present disclosure;
fig. 2F is a flowchart of another memory space mapping construction method provided in the embodiment of the present disclosure;
fig. 3A is a schematic diagram of another memory space distribution provided in the embodiment of the present disclosure;
fig. 3B is a flowchart of another memory space mapping construction method provided in the embodiment of the present disclosure;
fig. 3C is a flowchart of another memory space mapping construction method provided in the embodiment of the present disclosure;
fig. 4A is a flowchart of a data access method provided by an embodiment of the present disclosure;
fig. 4B is a schematic diagram of an RDMA transmission flow at the transceiving end according to an embodiment of the present disclosure;
fig. 5 is a flowchart of a task completion confirmation method provided by an embodiment of the present disclosure;
FIG. 6 is a block diagram of a data processing apparatus provided in an embodiment of the present disclosure;
fig. 7 is a block diagram of an electronic device for implementing a data processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The data processing method and the data processing apparatus provided by the embodiments of the disclosure are suitable for Soft-RoCE (software-implemented RoCE) data communication scenarios. The data processing methods provided by the embodiments of the present disclosure may be executed by a data processing apparatus, which may be implemented by software and/or hardware and is specifically configured in a computing device with RDMA data transmission capability; the present disclosure does not limit this in any way.
For ease of understanding, before describing the data processing method in detail, a brief description will be given of the memory space distribution of the computing device executing the data processing method.
See fig. 1A for a schematic diagram of the memory space distribution. The memory space of the computing device includes a user space and a kernel space. The user space is used to run code logic outside the operating system and cannot directly access hardware or certain address spaces; the kernel space is used to run operating system code logic, and can execute any instruction, access the entire address space, and access any hardware without restriction.
A first user mode process and a second user mode process run in the user space. For ease of distinction, the user space corresponding to the first user mode process is referred to as the first user mode space, and the user space corresponding to the second user mode process is referred to as the second user mode space. Queue resources are correspondingly applied for in the kernel space. The first user mode space, the second user mode space and the kernel space share a first memory space mapping corresponding to the queue resources; through the first memory space mapping, communication between the first user mode process and the second user mode process, and their access to the queue resources applied for in kernel space, are realized.
Referring to fig. 1B, a data processing method includes:
s101, writing a task to be processed into a queue resource applied in a kernel space through a first user mode process.
S102, when the change of the queue resource is monitored through the second user mode process, the task to be processed is obtained and processed from the queue resource.
Wherein, a first memory space mapping corresponding to the queue resource is provided among the first user mode space corresponding to the first user mode process, the second user mode space corresponding to the second user mode process, and the kernel space.
It should be noted that, because the first user mode space, the second user mode space and the kernel space share the first memory space mapping corresponding to the queue resource, the queue resource in kernel space can serve as a shared memory accessed by the first user process and the second user process running in user space. That is, a channel is opened between the first user process and the second user process, and the queue resource applied for in kernel mode can be accessed without relying on system calls, thereby reducing the influence on the system performance of the device and improving data processing efficiency.
The queue resource is a memory resource which is applied in a kernel mode and is specially used for storing the task to be processed. The number of the queue resources is at least one, and different queue resources are used for sequentially storing the tasks to be processed with different functions.
In an alternative embodiment, the queue resources may comprise Work Queue (WQ) resources that store pending tasks during RDMA transfers. The work queue may be a Send Queue (SQ) resource that stores tasks to be sent; correspondingly, a task to be sent is an SQ WQE (Work Queue Element). The work queue may also be a Receive Queue (RQ) resource that stores tasks to be received; correspondingly, a task to be received is an RQ WQE.
In another optional embodiment, the queue resources may also include Completion Queue (CQ) resources that store work completion tasks; correspondingly, a work completion task is a CQE (Completion Queue Element). A work completion task may correspond to the completion of a task to be sent or of a task to be received.
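The patent does not give a concrete memory layout for these queue elements. Purely as an illustrative sketch, the C structures below show what SQ/RQ work queue elements and completion queue elements might look like; every field name and width here is an assumption, not the actual format.

```c
#include <stdint.h>

/* Hypothetical scatter/gather element: where the payload lives in user memory. */
struct sge {
    uint64_t addr;    /* user-space virtual address of the data buffer */
    uint32_t length;  /* buffer length in bytes */
    uint32_t lkey;    /* local key of the registered user memory region */
};

/* Hypothetical work queue element shared by SQ (tasks to be sent)
 * and RQ (tasks to be received). */
struct wqe {
    uint8_t  owner;   /* preset flag bit: toggled by the producer so the
                         consumer can tell a freshly written, valid WQE
                         from a stale slot */
    uint8_t  opcode;  /* e.g. SEND / RECV / RDMA_WRITE ... */
    uint16_t num_sge; /* number of valid entries in sg_list */
    uint32_t wr_id_lo, wr_id_hi; /* caller's work request id */
    struct sge sg_list[4];
};

/* Hypothetical completion queue element reporting that a WQE finished. */
struct cqe {
    uint64_t wr_id;   /* id of the completed work request */
    uint32_t status;  /* 0 = success, otherwise an error code */
    uint32_t qpn;     /* queue pair the completion belongs to */
};
```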
Due to the existence of the first memory space mapping, the first user mode process can write the task to be processed into the queue resource based on the first memory space mapping without system call; accordingly, the second user mode process is aware of changes in queue resources by way of real-time or timed monitoring (e.g., polling). When the writing of the tasks to be processed in the queue resources is sensed, the second user mode process can obtain the written tasks to be processed from the queue resources based on the first memory space mapping without system call, and processes the tasks to be processed according to the task attribute information of the tasks to be processed.
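As a hedged sketch of this monitoring step, the fragment below shows how a consumer process might poll the mmap-ed queue memory for freshly written tasks with no system call involved; the ring layout and the owner-flag convention are assumptions. The producer is assumed to fill a slot's payload first and flip the owner flag last with a release store, which is what makes the acquire load here sufficient.

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define RING_DEPTH 256  /* assumed queue depth (power of two) */

struct slot {
    _Atomic uint8_t owner;   /* producer flips this once per ring pass */
    uint8_t payload[120];    /* the WQE body, opaque here */
};

/* Poll the shared ring mapped from kernel space. Returns the next valid
 * slot, or NULL if nothing new has been produced. `ci` is the consumer
 * index; `expected_owner` tracks which flag value means "fresh" on the
 * current pass over the ring. */
static struct slot *poll_ring(struct slot *ring, uint32_t *ci,
                              uint8_t *expected_owner)
{
    struct slot *s = &ring[*ci & (RING_DEPTH - 1)];

    /* acquire: observe the producer's payload writes before the flag */
    if (atomic_load_explicit(&s->owner, memory_order_acquire) != *expected_owner)
        return NULL;              /* no new task to be processed yet */

    if ((++(*ci) & (RING_DEPTH - 1)) == 0)
        *expected_owner ^= 1;     /* the owner flag flips each full pass */
    return s;
}
```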
It can be understood that, since the processing operations on tasks to be processed are all executed in user mode and need not be implemented in kernel mode, management and maintenance of the task processing logic becomes more convenient, and kernel mode code need not be modified in specific situations (such as a protocol change or a function extension), which avoids system crashes and improves system stability.
In order to adapt the data processing method of the present disclosure to an RDMA transmission scenario, in the first user mode process and the second user mode process, one of the processes may be a user process for a user to operate, and the other process may be a protocol stack process for implementing protocol processing and the like. That is, if the first user mode process is a user process, the second user mode process is a protocol stack process; and if the first user mode process is the protocol stack process, the second user mode process is correspondingly the user process. For the convenience of distinguishing, the user state space corresponding to the user process is subsequently called a user process space, and the user state space corresponding to the protocol stack process is called a protocol stack space.
Correspondingly, a task to be processed can be written in the queue resource applied by the kernel space by the user process; and sensing whether the queue resource changes or not by the protocol stack process based on the first memory space mapping, and acquiring and processing the task to be processed from the queue resource by the protocol stack process when the change of the queue resource is monitored. Or, the protocol stack process can write the task to be processed in the queue resource applied by the kernel space; and sensing whether the queue resource changes or not by the user process based on the first memory space mapping, and acquiring and processing the task to be processed from the queue resource by the user process when the queue resource changes are monitored.
According to the above technical scheme, applying the data processing method to the RDMA transmission process means that neither the sender nor the receiver of a data transmission needs to rely on system calls to access the queue resources applied for in kernel mode, which reduces the influence on device performance and improves data processing efficiency during transmission. Meanwhile, the extensibility of the protocol stack process is improved, and system stability is improved. In addition, the scheme supports RDMA capability in software, reducing dependence on dedicated RDMA network cards and lowering the cost of applying RDMA.
On the basis of the above technical solutions, the present disclosure also provides an optional embodiment, in which the construction process of the first memory space map is described in detail. It should be noted that, for parts not described in detail in the embodiments of the present disclosure, reference may be made to related expressions in other embodiments, and details are not described herein again.
For ease of understanding, a detailed description of the SoftRDMA (software RDMA) implementation architecture is first provided before describing the construction process of the first memory space map.
See fig. 2A for a diagram of SoftRDMA architecture of a computing device executing the memory space mapping construction method (the same as the computing device executing the data processing method described above).
Under physical deployment, SoftRDMA pDevice (SoftRDMA Physical Device) simulates, in kernel mode and by software, an RDMA-capable device, which may be a character device or a miscellaneous device. On the basis of the SoftRDMA physical device, an IB Device (InfiniBand Device) is registered, and an IB Device Ops interface is provided to interface with the OFED (OpenFabrics Enterprise Distribution) framework. Queue resources required for SoftRDMA, as well as Mailbox resources used to communicate with the SoftRDMA Stack, may be maintained based on SoftRDMA pDevice. The user may call IB Verbs (a set of RDMA application programming interfaces promoted by OpenFabrics) interfaces within a physical machine of the computing device, based on the OFED framework, to issue RDMA tasks to be processed. The physical machine may run a User Process, which applies for a User Memory in user space for its own use.
Under virtual machine deployment, a SoftRDMA PCIE (Peripheral Component Interconnect Express) Virtual Device is simulated and realized through a Qemu Process (virtualization simulator process), corresponding to the SoftRDMA PCIE vDevice instance in the figure; SoftRDMA capability is carried on the basis of this virtual PCIE device. In a virtual machine of the computing device, an IB Device is registered on the basis of the SoftRDMA PCIE vDevice, an IB Device Ops interface is provided, and the OFED framework is interfaced. Queue resources required to support SoftRDMA within the virtual machine, which may include data storage resources such as SQ resources, RQ resources and CQ resources, as well as Mailbox resources used to communicate with the SoftRDMA protocol stack, may be maintained based on SoftRDMA vDevice. The user can call the IB Verbs interface in the virtual machine of the computing device to issue RDMA tasks to be processed. A User Process can run in the virtual machine and apply for a User Memory in user space for its own use. It should be noted that the simulation of SoftRDMA PCIE vDevice by the Qemu Process is only a specific example of implementing a virtual PCIE device and should not be construed as limiting the implementation manner.
The SoftRDMA Stack (SoftRDMA protocol stack) comprises a SoftRDMA data plane. The RDMA Stack performs RDMA transport layer protocol processing; the TX Process (transmitting process) handles message sending when RDMA transmission is executed based on the transport layer protocol; the RX Process (receiving process) handles message receiving when RDMA transmission is performed based on the transport layer protocol; the SoftRDMA Data Plane is carried on the software stack of the DPDK Stack (Data Plane Development Kit Stack), and the DPDK uses the HW (hardware) network card to complete packet transmit and receive tasks.
The following describes in detail the construction process of the first memory space mapping based on the SoftRDMA implementation architecture shown in fig. 2A, in conjunction with the memory management diagram shown in fig. 2B. It should be noted that the construction of the first memory space mapping applies to both the physical machine deployment scenario and the virtual machine deployment scenario.
The following describes in detail a first memory space mapping construction process in a physical machine deployment scenario with reference to the physical machine memory space mapping diagram shown in fig. 2C and the flow chart of the memory space mapping construction method shown in fig. 2D.
Referring to fig. 2D, a memory space mapping construction method includes:
S201A, sending a queue creation request, through a user process, to the kernel driver of the physical device simulated by kernel-mode software.
When a user needs to create a queue resource, the IB Verbs interface is called by the User Process to generate a queue creation request, which is transferred through an OFED framework call to the kernel driver of SoftRDMA pDevice, namely the physical device kernel driver (SoftRDMA pDevice Kernel Driver).
Alternatively, the Queue resources may include QP (Queue Pair) resources for SQ and RQ. Or optionally, the queue resources may also include CQ resources; or, optionally, the queue resource may also be an MB (Mailbox) resource for implementing heterogeneous inter-core communication, and is used to establish a communication channel between different processes through an interrupt mechanism.
S202A, responding to a queue creating request through a physical device kernel driver, applying a physical memory in a kernel space as a queue resource, applying a first device address space matched with the queue resource, and establishing memory mapping from the queue resource to the first device address space.
The physical device kernel driver applies for a continuous physical memory of the queue resource in a kernel mode, namely, applies for a corresponding physical page, applies for a first device address space matched with the queue resource in the device address space, and establishes memory mapping from the queue resource to the first device address space.
For example, the physical device kernel driver may determine, based on the size of the physical memory corresponding to the queue resource, the size of the first device address space that needs to be applied for. Meanwhile, to facilitate the establishment of subsequent memory mappings, first physical page offset information (page_off) is generated according to the size of the first device address space when the first device address space is applied for.
Specifically, the matched first device address space may be applied based on the number of physical pages of the physical memory corresponding to the queue resource.
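As a hedged Linux kernel-module sketch of S202A (not the patent's actual driver; the function names and the single-queue simplification are assumptions): allocate physically contiguous pages for the queue resource, then later service the processes' mmap() calls against the device so that the same pages back the user-side mappings.

```c
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/gfp.h>
#include <linux/io.h>
#include <linux/mm.h>

static unsigned long queue_kva;             /* kernel VA of the queue resource */
static size_t queue_size = 16 * PAGE_SIZE;  /* assumed queue size */

/* S202A: apply for contiguous physical memory as the queue resource. */
static int softrdma_alloc_queue(void)
{
    queue_kva = __get_free_pages(GFP_KERNEL | __GFP_ZERO, get_order(queue_size));
    return queue_kva ? 0 : -ENOMEM;
}

/* Later, when the user process / protocol stack process mmap()s the device
 * at the page_off handed back by the driver, remap the same physical pages
 * into that process's address space (User VA / Stack VA -> queue memory). */
static int softrdma_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long pfn = virt_to_phys((void *)queue_kva) >> PAGE_SHIFT;
    size_t len = vma->vm_end - vma->vm_start;

    if (len > queue_size)
        return -EINVAL;
    /* In a full driver, vma->vm_pgoff (page_off) would select which
     * queue resource to map; a single queue is assumed here. */
    return remap_pfn_range(vma, vma->vm_start, pfn, len, vma->vm_page_prot);
}

MODULE_LICENSE("GPL");
```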
S203A, generating a first protocol stack address matched with the queue resource in the protocol stack space through the protocol stack process, and establishing a memory mapping from the first protocol stack address to the first device address space.
The protocol stack process (SoftRDMA Stack Process) generates a first protocol stack address (Stack VA) matched with the queue resource in the protocol stack space by means of mmap (a mechanism for mapping files or devices into memory), and establishes a memory mapping from the Stack VA to the first device address space, that is, the mapping relationship from the Stack VA to the physical memory of the queue resource.
Specifically, the physical device kernel driver may transmit the first physical page offset information to the protocol stack process through an MB (Mailbox) message; and the protocol Stack process generates a Stack VA matched with the queue resources in an mmap mode based on the first physical page offset information.
S204A, generating a first user access address matched with the queue resource in the user process space through the user process, and establishing a memory mapping from the first user access address to the first device address space.
The user process generates a first user access address (User VA) matched with the queue resource by means of mmap, and establishes the mapping relationship from the User VA to the first device address space, that is, from the User VA to the physical memory of the queue resource.
Specifically, after the protocol Stack process generates the Stack VA, a queue identifier of the queue resource may also be allocated, and the queue identifier is fed back to the physical device kernel driver through the Mailbox message. And the physical device kernel driver feeds back the queue identification and the first physical page offset information generated by the physical device kernel driver to the user process by calling the OFED frame. And the User process generates a User VA matched with the queue resource under the queue identification in a mmap mode based on the first physical page offset information, and establishes a mapping relation from the User VA to the first equipment address space. For example, if the Queue resource is a QP resource, the Queue identifier may be a QPN (Queue Pair Number).
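On the user side, S203A and S204A both reduce to an mmap() of the device file at the page_off handed back by the driver. A minimal userspace sketch follows; the device path /dev/softrdma, the size, and the offset are illustrative assumptions (in the scheme, page_off reaches the protocol stack process via a Mailbox message and the user process via an OFED framework call).

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/softrdma", O_RDWR);  /* hypothetical device node */
    if (fd < 0) { perror("open"); return 1; }

    size_t queue_size = 16 * 4096;
    off_t page_off = 0;  /* would be the driver-provided page_off */

    /* User VA / Stack VA: both processes map the same queue resource. */
    void *va = mmap(NULL, queue_size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, page_off * 4096);
    if (va == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Reads and writes through `va` now hit the kernel-allocated queue
     * memory directly, with no further system calls. */
    munmap(va, queue_size);
    close(fd);
    return 0;
}
```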
It can be understood that through the above process, memory sharing of the queue resource applied by the kernel mode in the user process and the protocol stack process can be realized, so that when one of the user process and the protocol stack process writes the queue resource, the other one of the user process and the protocol stack process can sense memory change of the queue resource to perform subsequent processing. For example, in an RDMA transfer scenario, the protocol stack process may perceive that the user process is performing a task issue operation (Post Send) for SQ WQE, and a task issue operation (Post Recv) for RQ WQE.
According to the above technical scheme, under physical machine deployment, the queue resource is applied for in kernel mode through interaction among the user process, the physical device kernel driver and the protocol stack process, and is mapped to the user process space and the protocol stack space respectively, thereby constructing the first memory space mapping. This realizes memory sharing of the kernel mode queue resource between the user process and the protocol stack process and opens a channel between them, so that data processing based on the first memory space mapping requires no system call between user mode and kernel mode. The influence on system performance is thus reduced, while association and extension of the protocol stack process's processing logic are facilitated and system stability is improved.
The following describes in detail a first memory space mapping construction process in a deployment scenario of a virtual machine with reference to a virtual machine memory space mapping schematic diagram shown in fig. 2E and a flow chart of a memory space mapping construction method shown in fig. 2F.
Referring to fig. 2F, another memory space mapping construction method includes:
S201B, sending a queue creating request to the virtual device kernel driver through the user process.
When a user process needs to create a queue resource, an IB Verbs interface is called by the user process to generate a queue creation request, and the queue creation request is called by an OFED framework and is transmitted to a Kernel Driver of SoftRDMA vDevice, namely a virtual device Kernel Driver.
S202B, sending a queue creation request to a physical device kernel driver simulated by kernel-mode software through a virtual device kernel driver and a virtualization simulator process.
The virtual device kernel driver transfers the queue creation request to the virtualization simulator (Qemu Process) through the PMB (PCIE Mailbox) communication channel between SoftRDMA vDevice and the Qemu Process that implements the SoftRDMA PCIE vDevice. The Qemu Process then transfers the queue creation request to the physical device kernel driver by calling the RDMA Mgmt (Management) out-of-band interface.
S203B, responding to the queue creation request through the physical device kernel driver, applying for a physical memory in the kernel space as a queue resource, applying for a second device address space matched with the queue resource, and establishing a memory mapping from the queue resource to the second device address space.
The physical device kernel driver applies for a continuous physical memory of the queue resource in a kernel mode, namely, applies for a corresponding physical page, applies for a second device address space matched with the queue resource in the device address space, and establishes memory mapping from the queue resource to the second device address space.
For example, the physical device kernel driver may determine, based on the size of the physical memory corresponding to the queue resource, the size of the second device address space that needs to be applied for. Meanwhile, to facilitate the establishment of subsequent memory mappings, second physical page offset information (page_off) is generated according to the size of the second device address space when the second device address space is applied for.
Specifically, the matched second device address space may be applied based on the number of physical pages of the physical memory corresponding to the queue resource.
S204B, generating a second protocol stack address matched with the queue resource in the protocol stack space through the protocol stack process, and establishing a memory mapping from the second protocol stack address to the second device address space.
The protocol stack process generates a second protocol stack address (Stack VA) matched with the queue resource in the protocol stack space by means of mmap, and establishes a memory mapping from the Stack VA to the second device address space, thereby establishing the mapping relationship from the Stack VA to the physical memory of the queue resource.
Specifically, the physical device kernel driver may transmit the second physical page offset information to the protocol stack process through the MB message; and the protocol Stack process generates a Stack VA matched with the queue resource in an mmap mode based on the second physical page offset information.
S205B, generating a virtualization access address matched with the queue resource in the virtualization simulator space through the virtualization simulator process, and establishing memory mapping from the virtualization access address to the second equipment address space.
The virtualization simulator Process (Qemu Process) generates a virtualization access address (Qemu VA) matched with the queue resource in the virtualization simulator space by means of mmap, and establishes a memory mapping from the Qemu VA to the second device address space, thereby establishing the mapping relationship from the Qemu VA to the physical memory of the queue resource.
Specifically, after the Stack VA is generated by the protocol stack process, the queue identifier (QPN) corresponding to the allocated queue resource is sent to the physical device kernel driver through an MB message. The physical device kernel driver transfers the queue identifier and the second physical page offset information to the virtualization simulator process by calling the RDMA Mgmt out-of-band interface, to instruct the virtualization simulator process to execute the subsequent mapping relationship construction operation.
S206B, generating a virtual device address matched with the queue resource in the virtual device address space through the virtual device kernel driver, and establishing a memory mapping from the virtualization access address to the virtual device address.
The virtualization simulator process, called through the PMB communication channel, sends the mount information of the virtualization access address in SoftRDMA PCIE vDevice and the queue identifier (QPN) corresponding to the queue resource to the virtual device kernel driver. The mount information is used to identify the memory address mapping space pointed to by the virtual machine, that is, the address space pointed to by the virtualization access address.
Correspondingly, the virtual device kernel driver generates a virtual device address matched with the queue resource in the virtual device address space according to the mount information, and establishes the memory mapping from the virtualization access address to the virtual device address.
Optionally, the mount mode of the virtualization access address in SoftRDMA PCIE vDevice may be a BAR (Base Address Register) space mapping mode; correspondingly, the mount information may be BAR space offset information (BAR Offset). Or, optionally, the mount mode of the virtualization access address in SoftRDMA PCIE vDevice may be a file binding mode; correspondingly, the mount information may be the file identifier of the bound file.
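For the BAR space mapping mount mode, the following is a heavily hedged sketch of how a virtualization simulator might expose its Qemu VA mapping as a BAR of the virtual PCIE device, using QEMU's public device APIs; the device type, BAR number and all names are assumptions, not the patent's implementation.

```c
/* Sketch only; assumes QEMU's internal device API and a hypothetical
 * SoftRDMAPCIEDevice type. qemu_va is the Qemu VA obtained by mmap()ing
 * the physical device's address space (second device address space). */
#include "qemu/osdep.h"
#include "hw/pci/pci.h"

typedef struct SoftRDMAPCIEDevice {
    PCIDevice parent_obj;
    MemoryRegion queue_bar;   /* BAR backed by the queue resource pages */
} SoftRDMAPCIEDevice;

static void softrdma_vdevice_mount_queue(SoftRDMAPCIEDevice *dev,
                                         void *qemu_va, uint64_t size)
{
    /* Wrap the already-mapped Qemu VA as RAM backing a memory region... */
    memory_region_init_ram_ptr(&dev->queue_bar, OBJECT(dev),
                               "softrdma-queue", size, qemu_va);
    /* ...and mount it as BAR 2 of the virtual PCIE device, so the guest's
     * virtual device kernel driver can generate a virtual device address
     * for it (the BAR Offset being the mount information). */
    pci_register_bar(&dev->parent_obj, 2,
                     PCI_BASE_ADDRESS_SPACE_MEMORY, &dev->queue_bar);
}
```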
S207B, generating a second user access address matched with the queue resource in the user process space through the user process, and establishing a memory mapping from the second user access address to the virtual device address.
The User process generates a second User access address (User VA) matched with the queue resource in a mmap mode, and establishes a mapping relation from the User VA to the virtual equipment address, namely establishes a mapping relation from the User VA to a mounting space of SoftRDMA PCIE vDevice, thereby establishing a mapping relation from the User VA to the Qemu VA and further establishing a mapping relation from the User VA to a physical memory of the queue resource.
Specifically, after the virtual device kernel driver generates the virtual device address matching the queue resource, it may further generate virtual device address offset information (page_off) and return the page_off and the QPN to the user process through an OFED framework call, to instruct the user process to perform the subsequent mapping relationship construction operation.
According to the above technical scheme, under virtual machine deployment, the queue resource is applied for in kernel mode through interaction among the user process, the virtual device kernel driver, the physical device kernel driver, the virtualization simulator process and the protocol stack process, and is mapped to the user process space and the protocol stack space respectively, thereby constructing the first memory space mapping. This realizes memory sharing of the kernel mode queue resource between the user process and the protocol stack process and opens a channel between them, so that data processing based on the first memory space mapping requires no system call between user mode and kernel mode. The influence on system performance is thus reduced, while association and extension of the protocol stack process's processing logic are facilitated and system stability is improved.
On the basis of the above technical solutions, the present disclosure also provides an alternative embodiment. In this optional embodiment, the data processing process is optimized and refined for tasks to be processed that transmit and receive data to be transmitted during RDMA transmission. It should be noted that, for parts not described in detail in the embodiments of the present disclosure, reference may be made to relevant descriptions in other embodiments, which are not repeated here.
For convenience of understanding, the memory space distribution according to the present embodiment will be briefly described.
See fig. 3A for a schematic diagram of memory space distribution. The memory space of the computing device includes a user space and a kernel space. Wherein, a user process and a protocol stack process run in the user space. For the convenience of distinguishing, a user space corresponding to a user process is called a user process space; the user space corresponding to the protocol stack process is called protocol stack space. Queue resources are applied in the kernel space and used for storing tasks to be processed; and applying for a user memory space in the user process space, wherein the user memory space is used for storing the data to be transmitted corresponding to the task to be processed.
A first memory space mapping corresponding to the queue resource is provided among the user process space, the protocol stack space and the kernel space; through the first memory space mapping, communication between the user process and the protocol stack process, and their access to the queue resource applied for in kernel space, are realized.
A second memory space mapping corresponding to the user memory space is likewise provided among the user process space, the protocol stack space and the kernel space; through the second memory space mapping, communication between the user process and the protocol stack process, and access of the protocol stack process to the user memory space, are realized.
The present disclosure also provides an optional embodiment of a memory space mapping construction method; the construction process of the second memory space mapping will be described in detail below on the basis of the SoftRDMA implementation architecture shown in fig. 2A, in conjunction with the memory management diagram shown in fig. 2B. It should be noted that the construction of the second memory space mapping applies to both the physical machine deployment scenario and the virtual machine deployment scenario.
The second memory space mapping construction process in the physical machine deployment scenario will be described in detail below with reference to the physical machine memory space mapping diagram shown in fig. 2C and the flowchart of the memory space mapping construction method shown in fig. 3B.
Referring to fig. 3B, a memory space mapping construction method in a physical machine deployment scenario includes:
S301A, applying for user memory resources in a user process space through a user process to obtain a third user access address, and sending a user memory registration request including the third user access address to a kernel driver of a kernel-state physical device.
Applying for User memory resources in a User process space through a User process to obtain a third User access address (User VA); and calling the IB Verbs interface to generate a user memory registration request comprising a third user access address, calling through the OFED framework, and transmitting the user memory registration request to a Kernel Driver of SoftRDMA pDevice, namely a physical device Kernel Driver (SoftRDMA pDevice Kernel Driver), so as to start a user memory resource registration process.
In order to facilitate the differentiation and positioning of the user Process by the kernel driver of the physical device, a Process Identification (PID) of the user Process may be further included in the user memory registration request.
S302A, responding to the user memory registration request through the physical device kernel driver, applying for a third device address space matched with the user memory resource, and establishing memory mapping from the user memory resource to the third device address space.
And responding to the user memory registration request by the physical device kernel driver, applying a third device address space matched with the user memory resource in the device address space, and establishing memory mapping from the user memory resource to the third device address space.
For example, the physical device kernel driver may determine, based on the size of the physical memory corresponding to the user memory resource, the size of the third device address space that needs to be applied for. Meanwhile, to facilitate the establishment of subsequent memory mappings, third physical page offset information (page_off) is generated according to the size of the third device address space when the third device address space is applied for, and the memory mapping from the third user access address to the third device address space, that is, from the user memory resource to the third device address space, is established.
Specifically, the Physical device kernel driver may call a user Physical page obtaining interface, obtain a Host Physical Address (HPA) Physical page table corresponding to the user memory resource, and apply for a third device Address space matched with the user memory resource based on the number of Physical pages corresponding to the third user access Address.
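The "user physical page obtaining interface" is not named in the patent; on Linux, get_user_pages_fast() is the stock mechanism for this kind of pinning. A hedged kernel-side sketch of S302A under that assumption:

```c
#include <linux/mm.h>
#include <linux/slab.h>

/* Pin the user memory resource starting at user_va (third user access
 * address) and collect its physical pages, from which the driver can size
 * and fill the third device address space mapping. */
static struct page **softrdma_pin_user_mem(unsigned long user_va,
                                           size_t len, long *npinned)
{
    long n = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
    struct page **pages = kvmalloc_array(n, sizeof(*pages), GFP_KERNEL);

    if (!pages)
        return NULL;

    *npinned = get_user_pages_fast(user_va & PAGE_MASK, n, FOLL_WRITE, pages);
    if (*npinned <= 0) {
        kvfree(pages);
        return NULL;  /* a full driver would also handle partial pinning */
    }
    /* pages[] now gives the HPA page table: page_to_phys(pages[i]) is the
     * host physical address backing each page of the user buffer. */
    return pages;
}
```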
S303A, generating a third protocol stack address matched with the user memory resource in the protocol stack space through the protocol stack process, and establishing a memory mapping from the third protocol stack address to the third device address space.
The protocol stack process generates a third protocol stack address (Stack VA) matched with the user memory resource in the protocol stack space by means of mmap, and establishes a memory mapping from the Stack VA to the third device address space, thereby establishing the mapping relationship from the Stack VA to the physical memory of the user memory resource.
Specifically, the physical device kernel driver may transmit the offset information of the third physical page and the related information of the user memory resource to the protocol stack process through the Mailbox message; and the protocol Stack process generates a Stack VA matched with the user memory resource in an mmap mode based on the third physical page offset information.
It can be understood that, through the above process, under physical machine deployment, the third protocol stack address in the protocol stack space and the third user access address in the user process space point to the same physical memory corresponding to the user memory resource. A data write operation through the third protocol stack address into the user memory resource can therefore be perceived by the user process without any system call between user mode and kernel mode, which reduces the influence on system performance, facilitates association and extension of the protocol stack process's processing logic, and helps improve system stability.
The second memory space mapping construction process in the virtual machine deployment scenario will be described in detail below with reference to the virtual machine memory space mapping diagram shown in fig. 2E and the flow chart of the memory space mapping construction method shown in fig. 3C.
Referring to fig. 3C, a memory space mapping construction method in a virtual machine deployment scenario includes:
S301B, applying for user memory resources in a user process space through a user process to obtain a fourth user access address, and sending a user memory registration request including the fourth user access address to a kernel driver of the kernel-state virtual device.
And applying for User memory resources in a User process space through a User process to obtain a fourth User access Address (User VA), which corresponds to a GVA (Guest Virtual Address, virtual Address of a Guest operating system) in a Virtual machine scene. And generating a user memory registration request comprising a fourth user access address by calling an IB Verbs interface, and transmitting the user memory registration request to a Kernel-mode virtual device Kernel Driver (SoftRDMA vDevice Kernel Driver) by calling an OFED framework, so as to start a user memory resource registration process.
In order to facilitate the virtual device kernel driver to distinguish and locate the user processes, a Process Identifier (PID) of the user process may be further included in the user memory registration request.
S302B, responding to the user memory registration request through the virtual device kernel driver, and determining a client physical address corresponding to the fourth user access address.
The virtual device kernel driver responds to the user memory registration request, calls a user Physical page acquisition interface based on a fourth user access Address and a process identification of a user process, and determines a GPA (Guest Physical Address) page table corresponding to the fourth user access Address.
S303B, determining a host virtual address corresponding to the physical address of the client in the virtual simulator space through the virtual simulator process.
The virtualization simulator Process (Qemu Process) queries the virtualization simulator space for the HVA (Host Virtual Address) corresponding to the GPA.
Specifically, the virtual device kernel driver may transfer the GPA information of the user memory resource registration operation to the virtualization simulator Process (Qemu Process) through the PMB (PCIE Mailbox) communication channel between SoftRDMA vDevice and the Qemu Process that implements the SoftRDMA PCIE vDevice, in preparation for the HVA query of the user memory resource.
S304B, determining a host physical address corresponding to the host virtual address through a kernel driver of the physical device simulated by kernel-mode software, applying for a fourth device address space matched with the host physical address, and establishing memory mapping from the host virtual address to the fourth device address space.
A Physical device Kernel Driver (SoftRDMA pDevice Kernel Driver) determines a Host Physical Address (HPA) corresponding to an HVA of a user memory resource, applies for a fourth device Address space matched with the HPA in a device Address space, and establishes a memory mapping from the HVA to the fourth device Address space.
Illustratively, the Qemu Process initiates a system call and transfers the HVA information of the user memory resource and the process identifier of the Qemu Process to the physical device kernel driver through the RDMA Mgmt (Management) channel. The physical device kernel driver may determine the size of the fourth device address space to be applied for based on the size of the physical memory corresponding to the user memory resource. Meanwhile, to facilitate the establishment of subsequent memory mappings, fourth physical page offset information (page_off) is generated according to the size of the fourth device address space when the fourth device address space is applied for, and the memory mapping from the user memory resource corresponding to the HVA to the fourth device address space is established.
Specifically, a physical device kernel driver calls a user physical page acquisition interface to acquire an HPA page table corresponding to an HVA of a user memory resource; and the physical device kernel driver applies for a fourth device address space matched with the user memory resource in the device address space according to the number of HPA page tables corresponding to the user memory resource.
S305B, the protocol stack process generates a fourth protocol stack address matched with the user memory resource in the protocol stack space, and establishes memory mapping from the fourth protocol stack address to a fourth device address space.
The protocol Stack Process generates a fourth protocol Stack address (Stack VA) matched with the user memory resource in the protocol Stack space in an mmap mode, and establishes memory mapping from the Stack VA to a fourth equipment address space, so that the mapping relation from the Stack VA to the HVA of the user memory resource in the Qemu Process is established.
Specifically, the physical device kernel driver may transmit the fourth physical page offset information and the information related to the user memory resource to the protocol stack process through the Mailbox message; and the protocol Stack process generates a Stack VA matched with the user memory resource in an mmap mode based on the fourth physical page offset information.
It can be understood that, through the above process, under virtual machine deployment, the fourth protocol stack address in the protocol stack space and the HVA of the Qemu Process point to the same physical memory, while the fourth user access address (GVA) in the user process space and the HVA of the Qemu Process also point to the same physical memory. A data write operation through the fourth protocol stack address into the user memory resource can therefore be perceived by the user process without any system call between user mode and kernel mode, which reduces the influence on system performance, facilitates association and extension of the protocol stack process's processing logic, and contributes to improving system stability.
On the basis of the second memory space mapping constructed above, the present disclosure also provides another alternative embodiment for implementing the data processing method. In this optional embodiment, the data processing method is refined into a data access method, and a to-be-processed task for transmitting and receiving data to be transmitted in the RDMA transmission process and an access operation for the data to be transmitted are described in detail.
Referring to fig. 4A, a data access method includes:
S401, writing a task to be processed for transceiving data to be transmitted into a work queue resource applied for in the kernel space, through a user process.
S402, when the change of the work queue resource is monitored through the protocol stack process, the task to be processed is obtained from the work queue resource, and the access operation of the data to be transmitted corresponding to the task to be processed is carried out in the user memory resource applied by the user process space.
Wherein, a first memory space mapping corresponding to the work queue resource is provided among the user process space, the protocol stack space and the kernel space, and a second memory space mapping corresponding to the user memory resource is provided among the user process space, the protocol stack space and the kernel space.
The user memory resource is applied for in the user process space corresponding to the user process and is used to store the data to be transmitted during RDMA transfer between the transceiving ends. Correspondingly, the access operation on the data to be transmitted may be the sending end reading the data to be transmitted from its own user memory resource for subsequent RDMA transmission to the receiving end, adapting to the RDMA data sending scenario; or the receiving end, after obtaining the data to be transmitted from the sending end through RDMA, writing the data to be transmitted into its own user memory resource, adapting to the RDMA data receiving scenario.
In an optional embodiment, in order to ensure the accuracy of the access operation of the data to be transmitted, the task to be processed can be analyzed through a protocol stack process to obtain a data access address of the data to be transmitted in a user process space; and performing access operation of the data to be transmitted in the user memory resource through the protocol stack process according to the data access address of the user process space.
Optionally, in an RDMA data transmission scenario, a protocol stack process of the transmitting end parses a task to be transmitted (corresponding to the task to be processed), and obtains a data read address (corresponding to the data access address) of a user memory resource of data to be transmitted in a user process space of the transmitting end, where the data to be transmitted corresponds to the task to be transmitted; and reading the data to be transmitted from the memory resource of the user of the protocol stack process according to the data reading address based on the second memory space mapping.
Or optionally, in an RDMA data receiving scenario, the protocol stack process at the receiving end parses a task to be received (i.e., a task to be processed) to obtain the data write address (corresponding to the data access address) in the user memory resource, within the receiving end's user process space, to which the data to be transmitted corresponding to the task to be received should be written; according to the data write address, the protocol stack process writes the data to be transmitted into its user memory resource based on the second memory space mapping.
Determining the data access address in the user process space from the task to be processed locates the data to be transmitted within the user memory resource of the user process space, which avoids erroneous access to the data to be transmitted and improves the accuracy of data access.
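Because the second memory space mapping gives the protocol stack its own virtual address for the same physical buffer, the translation performed in S402 reduces to bounds-checked offset arithmetic. A sketch under assumed bookkeeping structures (all names are illustrative):

```c
#include <stdint.h>

/* One registered user memory region, as the protocol stack might record it
 * after the second memory space mapping is built. */
struct mem_region {
    uint64_t user_va;   /* base of the region in the user process space */
    uint64_t stack_va;  /* base of the same region in the protocol stack space */
    uint64_t length;
};

/* Translate the data access address carried in a WQE (a user process space
 * address) into the protocol stack space address reaching the same physical
 * memory. Returns 0 if the address falls outside the registered region. */
static uint64_t user_to_stack_va(const struct mem_region *mr,
                                 uint64_t data_access_addr, uint64_t len)
{
    if (data_access_addr < mr->user_va ||
        data_access_addr + len > mr->user_va + mr->length)
        return 0;  /* would be an erroneous access; reject */
    return mr->stack_va + (data_access_addr - mr->user_va);
}
```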
It should be noted that, because the second memory space mapping corresponding to the user memory resource is provided between the user process space, the protocol stack space and the kernel space, the protocol stack process can perform the access operation of the data to be transmitted from the user memory resource in the user memory space without relying on system call, thereby reducing the influence on the performance of the device and improving the data processing efficiency.
For ease of understanding, the data transmission procedure in the RDMA sending scenario and the RDMA receiving scenario will be described in detail below with reference to the RDMA transmission flow at the transceiving end shown in fig. 4B.
In an alternative embodiment, the work queue resource may be a SQ resource during an RDMA transfer; the task to be processed can be a task to be sent (also called SQ WQE) for sending data to be transmitted to the receiving end; the access operation to the data to be transferred may be a read operation to the data to be transferred. Through the further limitation on the work queue resources, the tasks to be processed and the access operation of the data to be transmitted, the data processing process can be adapted to the RDMA sending scene of the sending end.
The RDMA sending flow of the sending end shown in fig. 4B is specifically as follows: the user process of the sending end writes a task to be sent, for sending data to be transmitted to the receiving end, into the SQ resource applied for in its kernel space; when the protocol stack process of the sending end senses a change in the SQ resource, it obtains the task to be sent from the SQ resource and reads the data to be transmitted corresponding to the task to be sent from the user memory resource applied for in the sending end's user process space.
Specifically, the user process of the sending end calls the IB Verbs Post Send interface and writes the task to be sent (SQ WQE) at the user access address corresponding to the SQ resource in the user process space, based on the first memory space mapping, thereby realizing the writing of the kernel mode SQ resource. The protocol stack process of the sending end senses changes of the SQ resource based on the first memory space mapping, by polling or other means; when a change of the SQ resource is sensed, it acquires the SQ WQE and judges whether the SQ WQE is valid based on a preset flag bit (Owner bit) in the SQ WQE. If valid, it determines the protocol stack address (Stack VA) in the protocol stack space based on the data read address (SGE) in the SQ WQE, and reads the data to be transmitted (Payload) from the user memory resource based on the second memory space mapping, according to the protocol stack address.
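A hedged userspace sketch of this Post Send path: the user process fills the next SQ slot through its User VA mapping and publishes it by writing the Owner bit last, which is exactly the flag the polling protocol stack process checks. The WQE layout is an illustrative assumption, as before.

```c
#include <stdatomic.h>
#include <stdint.h>

struct sq_wqe {
    _Atomic uint8_t owner;  /* preset flag bit checked by the consumer */
    uint8_t  opcode;
    uint16_t num_sge;
    uint64_t sge_addr;      /* data read address (SGE) in user process space */
    uint32_t sge_len;
};

/* Post Send without a system call: fill the next SQ slot through the
 * User VA mapping, then flip the Owner bit with release semantics so the
 * protocol stack process observes a fully written WQE. */
static void post_send(struct sq_wqe *sq, uint32_t *pi, uint32_t depth,
                      uint64_t buf_va, uint32_t len, uint8_t owner_val)
{
    struct sq_wqe *w = &sq[*pi % depth];

    w->opcode   = 0;        /* assumed SEND opcode value */
    w->num_sge  = 1;
    w->sge_addr = buf_va;   /* where the payload sits in user memory */
    w->sge_len  = len;

    atomic_store_explicit(&w->owner, owner_val, memory_order_release);
    (*pi)++;
}
```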
Further, processing the task to be processed may further include: and encapsulating the data to be transmitted through a protocol stack process to obtain a message to be transmitted, and transmitting the message to be transmitted to a receiving end.
Specifically, the protocol stack process of the sender encapsulates the data to be transmitted based on the RDMA Stack transmission protocol to obtain a packet to be transmitted (PKT), and sends it to the receiver by calling the DPDK Send interface, so that the receiver receives the data to be transmitted.
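A minimal DPDK sketch of the send action is given below; it assumes an already-initialized port and mbuf mempool, and the header construction is elided since the actual SoftRDMA wire format is not specified in the patent.

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <string.h>

/* Encapsulate the payload read from user memory into a packet to be
 * transmitted (PKT) and hand it to the NIC via the DPDK send interface. */
static int softrdma_send(uint16_t port, struct rte_mempool *mp,
                         const void *payload, uint16_t len)
{
    struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
    if (m == NULL)
        return -1;

    /* Real code would prepend Ethernet/IP/UDP and the RDMA transport
     * header here; only the payload copy is shown. */
    char *dst = rte_pktmbuf_append(m, len);
    if (dst == NULL) {
        rte_pktmbuf_free(m);
        return -1;
    }
    memcpy(dst, payload, len);

    if (rte_eth_tx_burst(port, 0 /* tx queue */, &m, 1) != 1) {
        rte_pktmbuf_free(m);  /* NIC queue full; drop or retry */
        return -1;
    }
    return 0;
}
```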
It can be understood that, because the data to be transmitted is encapsulated by the protocol stack process in user mode rather than in kernel mode, no system call is required during packet encapsulation, which reduces the impact on system performance and improves encapsulation efficiency. Meanwhile, the code logic of the protocol stack process becomes easier to extend or modify without affecting system stability.
In another alternative embodiment, the work queue resource may be an RQ resource during an RDMA transfer; the task to be processed may be a task to be received (that is, an RQ WQE) for receiving the data to be transmitted sent by the sending end; and the access operation on the data to be transmitted may be a write operation. By further specifying the work queue resource, the task to be processed and the access operation in this way, the data processing procedure is adapted to the RDMA receiving scenario at the receiving end.
The RDMA receiving flow at the receiving end shown in fig. 4B is specifically as follows: a user process at the receiving end writes a task to be received, for the data to be transmitted sent by the sending end, into the RQ resource applied for in the kernel space; when the protocol stack process of the receiving end senses a change of the RQ resource, it acquires the task to be received from the RQ resource and, once the data to be transmitted arrives, writes the corresponding data into the user memory resource applied for in the user process space of the receiving end.
Specifically, the user process at the receiving end calls the IB Verbs Post Recv interface and, based on the first memory space mapping, writes the task to be received (RQ WQE) at the user access address of the RQ resource corresponding to the user process space, thereby writing the kernel-state RQ resource. The protocol stack process of the receiving end senses changes of the RQ resource based on the first memory space mapping, acquires the RQ WQE when a change is sensed, and judges whether the RQ WQE is valid based on a preset flag bit (Owner bit) in the RQ WQE. If valid, it determines a protocol stack address (Stack VA) in the protocol stack space based on the data address (SGE) in the RQ WQE, and writes the data to be transmitted (Payload) into the user memory resource according to that protocol stack address based on the second memory space mapping.
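The receive-side counterpart can be sketched in the same hedged way with the standard IB Verbs Post Recv interface; as before, the setup objects are assumed to exist, and under the scheme above the call becomes a direct write of the RQ WQE through the first memory space mapping.

```c
#include <infiniband/verbs.h>
#include <stddef.h>
#include <stdint.h>

/* Post one receive WQE naming the buffer the payload should land in. */
int post_recv(struct ibv_qp *qp, struct ibv_mr *mr, void *buf, size_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,    /* where the payload is to be written */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };
    struct ibv_recv_wr wr = {
        .wr_id   = 2,                /* arbitrary identifier for the CQE */
        .sg_list = &sge,
        .num_sge = 1,
    };
    struct ibv_recv_wr *bad_wr = NULL;

    return ibv_post_recv(qp, &wr, &bad_wr);
}
```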
Further, processing the to-be-processed task may further include acquiring the to-be-transmitted data sent by the sending end through a protocol stack process of the receiving end.
Illustratively, the packet sent by the sending end is received through the protocol stack process of the receiving end, and the packet is parsed to obtain the data to be transmitted for writing.
Continuing with fig. 4B, specifically, the receiving end receives the packet to be transmitted (PKT) sent by the sending end through the DPDK Recv interface in the protocol stack process; the packet is parsed by the protocol stack process to obtain the data to be transmitted (Payload) and a queue identifier; the RQ resource is then accessed based on the queue identifier, the RQ WQE is acquired, and the data to be transmitted is written into the user memory resource.
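The polling receive path just described can be sketched as follows. This is illustrative only: parse_and_deliver() is a hypothetical helper standing in for the header parsing, queue-identifier lookup and payload write into the user memory resource, and the burst size and endless loop are simplifications.

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_BURST 32

/* Hypothetical helper: extract the queue identifier, look up the RQ WQE,
 * and copy the payload into the user memory resource. */
void parse_and_deliver(struct rte_mbuf *pkt);

void rx_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[RX_BURST];

    for (;;) {
        /* Busy-poll the NIC in user space; no system call per packet. */
        uint16_t n = rte_eth_rx_burst(port_id, queue_id, pkts, RX_BURST);
        for (uint16_t i = 0; i < n; i++) {
            parse_and_deliver(pkts[i]);
            rte_pktmbuf_free(pkts[i]);
        }
    }
}
```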
It can be understood that, because the packet is parsed (i.e. decapsulated) by the protocol stack process in user mode rather than in kernel mode, no system call is required during decapsulation, which reduces the impact on system performance and improves decapsulation efficiency. Meanwhile, the code logic of the protocol stack process becomes easier to extend or modify without affecting system stability.
On the basis of the first memory space mapping constructed above, the present disclosure also provides another alternative embodiment for implementing the data processing method. In this optional embodiment, the data processing method is refined into a task completion confirmation method, and a detailed description is given of whether the task to be processed is completed in the RDMA transmission process.
Referring to fig. 5, a task completion confirmation method includes:
S501, writing, through the protocol stack process, a to-be-processed task that confirms whether the to-be-received and sent task is completed into a completion queue resource applied for in the kernel space.
S502, when a change of the completion queue resource is monitored through the user process, acquiring the task to be processed from the completion queue resource, and determining whether the task is completed according to a preset state bit in the task to be processed.
Wherein, a first memory space mapping corresponding to the queue resource is provided among the user process space, the protocol stack space and the kernel space.
The to-be-received and sent task is a task to be sent that sends data to be transmitted to a receiving end, or a task to be received that receives data to be transmitted sent by the sending end.
Wherein, the completion queue resource (CQ resource) is used for storing the completion confirmation task (CQE).
In an alternative embodiment, the computing device executing the data processing method is the sending end of an RDMA transfer; the completion queue resource may be a send completion queue resource corresponding to the SQ resource; and the task to be processed may be a send completion confirmation task that confirms whether the task to be sent is completed, that is, the CQE corresponding to the SQ WQE.
Illustratively, the protocol stack process of the sending end acquires a data reception acknowledgement message fed back by the receiving end after the receiving end completes the task to be received, and writes a send completion confirmation task into the send completion queue resource applied for in the kernel space according to that message. Correspondingly, when the user process of the sending end monitors a change of the send completion queue resource, it acquires the send completion confirmation task from that resource and confirms whether the task to be sent is completed according to a preset state bit in the confirmation task.
Specifically, continuing to refer to fig. 4B, after writing the data to be transmitted of the task to be received corresponding to the task to be sent into the user memory resource of the receiving end, the protocol stack process of the receiving end generates a data reception acknowledgement message (ACK) based on the user process identifier of the task to be sent at the sending end, and transmits the ACK to the sending end. The protocol stack process of the sending end parses the acknowledgement message to obtain a queue identifier, determines the send completion queue resource applied for in the kernel space according to that identifier, and writes a send completion confirmation task (the CQE corresponding to the SQ WQE) into the send completion queue resource based on the first memory space mapping. The user process of the sending end senses the change of the send completion queue resource by polling or similar means based on the first memory space mapping, acquires the send completion confirmation task, and determines whether it is valid according to the Owner bit state bit in the task, thereby learning whether the task to be sent is completed.
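The owner-bit polling described here can be sketched as below. The layout of struct cqe is entirely hypothetical: the disclosure names an Owner bit state bit but fixes no format, and the memory barrier and ring-index arithmetic are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

struct cqe {
    uint64_t wr_id;     /* identifies the completed WQE (hypothetical field) */
    uint32_t status;
    uint8_t  owner_bit; /* flipped by the writer on each pass of the ring */
};

/* Poll the mapped completion queue; return true and copy out the entry
 * once the owner bit shows that a new CQE has been published. */
bool poll_cq(volatile struct cqe *ring, uint32_t *head, uint32_t size,
             uint8_t expected_owner, struct cqe *out)
{
    volatile struct cqe *e = &ring[*head % size];
    if (e->owner_bit != expected_owner)
        return false;            /* nothing new yet: keep polling */
    __sync_synchronize();        /* read the payload only after the owner bit */
    *out = *(struct cqe *)e;
    *head += 1;
    return true;
}
```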
According to the above technical solution, in the process of confirming completion of the task to be sent, the protocol stack process of the sending end directly performs, based on the first memory space mapping, the write operation directed at the send completion queue resource, and the user process directly senses that write and confirms completion of the corresponding task without any system call, which reduces the influence on device performance and improves the efficiency of confirming completion of the task to be sent. Meanwhile, the extensibility of the protocol stack process and the stability of the system are improved.
In another optional embodiment, the computing device executing the data processing method is the receiving end of an RDMA transfer; the completion queue resource may be a receive completion queue resource corresponding to the RQ resource; and the task to be processed may be a reception completion confirmation task that confirms whether the task to be received is completed, that is, the CQE corresponding to the RQ WQE.
Illustratively, after writing the data to be transmitted into the user memory resource, the protocol stack process at the receiving end writes a reception completion confirmation task into the receive completion queue resource applied for in the kernel space. Correspondingly, when the user process at the receiving end monitors a change of the receive completion queue resource, it acquires the reception completion confirmation task from that resource and confirms whether the task to be received is completed according to a preset state bit in the confirmation task.
Specifically, with reference to fig. 4B, after writing the data to be transmitted of the task to be received into the user memory resource of the receiving end, the protocol stack process of the receiving end writes a reception completion confirmation task (the CQE corresponding to the RQ WQE) into the receive completion queue resource based on the first memory space mapping. The user process of the receiving end, after sensing the change of the receive completion queue resource by polling or similar means based on the first memory space mapping, acquires the reception completion confirmation task and determines whether it is valid according to its Owner bit state bit, thereby learning whether the task to be received is completed.
According to the above technical solution, in the process of confirming completion of the task to be received, the protocol stack process of the receiving end directly performs, based on the first memory space mapping, the write operation directed at the receive completion queue resource, and the user process directly senses that write and confirms completion of the corresponding task without any system call, which reduces the influence on device performance and improves the efficiency of confirming completion of the task to be received. Meanwhile, the extensibility of the protocol stack process and the stability of the system are improved.
As an implementation of the above data processing methods, the present disclosure also provides an optional embodiment of an execution device that implements the above data processing methods. Referring to fig. 6, a data processing apparatus 600 is shown, comprising: a pending task writing module 601 and a pending task processing module 602. Wherein,
a to-be-processed task writing module 601, configured to write a to-be-processed task into a queue resource applied in a kernel space through a first user mode process;
a to-be-processed task processing module 602, configured to obtain and process the to-be-processed task from the queue resource when it is monitored that the queue resource changes through a second user state process;
and a first memory space mapping corresponding to the queue resource is provided among a first user mode space corresponding to the first user mode process, a second user mode space corresponding to the second user mode process and the kernel space.
Because the first user mode space, the second user mode space and the kernel space share the first memory space mapping corresponding to the queue resource, the queue resource in the kernel space can serve as a shared memory accessed by both the first user mode process and the second user mode process running in user space; that is, the two processes are connected and can access the queue resource applied for in kernel mode without relying on system calls, which reduces the influence on the system performance of the device and improves data processing efficiency. And because the processing of the task to be processed is executed in user mode rather than through kernel mode, the task processing logic becomes easier to manage and maintain, and kernel-mode code need not be modified in specific situations (such as a protocol change or a function extension), which avoids system crashes and helps improve system stability.
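To make the shared-mapping idea concrete, the following sketch shows how a user-mode process could map the queue resource allocated in kernel space into its own address space with mmap on a character device. The device node name "/dev/urdma0" and the zero offset are placeholders, not names taken from the disclosure; a real driver would hand out per-queue offsets.

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the kernel-allocated queue into this process. With MAP_SHARED,
 * writes by one mapper (e.g. the user process posting a WQE) are visible
 * to the other (the protocol stack process) without any system call on
 * the data path. */
void *map_queue(size_t queue_bytes)
{
    int fd = open("/dev/urdma0", O_RDWR);   /* hypothetical device node */
    if (fd < 0)
        return NULL;

    void *q = mmap(NULL, queue_bytes, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    close(fd);                              /* the mapping survives the close */
    return q == MAP_FAILED ? NULL : q;
}
```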
In an optional embodiment, the first user mode process is a user process, and the first user mode space is a user process space; the second user mode process is a protocol stack process, and the second user mode space is a protocol stack space; or,
the first user mode process is a protocol stack process, and the first user mode space is a protocol stack space; the second user mode process is a user process, and the second user mode space is a user process space.
In an optional embodiment, the apparatus 600 further includes a first memory space map building module, configured to build a first memory space map;
the first memory space mapping construction module includes:
a queue creation request sending unit, configured to send a queue creation request to a kernel driver of a physical device simulated by kernel-mode software through the user process;
a first device address space application unit, configured to respond to the queue creation request through the physical device kernel driver, apply for a physical memory in the kernel space as the queue resource, apply for a first device address space matched with the queue resource, and establish memory mapping from the queue resource to the first device address space;
a first protocol stack address generating unit, configured to generate, in the protocol stack space, a first protocol stack address matching the queue resource through the protocol stack process, and establish memory mapping from the first protocol stack address to the first device address space;
and the first user access address generating unit is used for generating a first user access address matched with the queue resource in the user process space through the user process and establishing memory mapping from the first user access address to the first equipment address space.
In an optional embodiment, the apparatus 600 further includes a first memory space map building module, configured to build a first memory space map;
the first memory space mapping construction module includes:
the device also comprises a first memory space mapping construction module used for constructing a first memory space mapping;
a queue creation request sending unit, configured to send a queue creation request to a virtual device kernel driver through the user process;
the queue creation request transmission unit is used for transmitting the queue creation request to a physical device kernel driver simulated by kernel-mode software through a virtualization simulator process through the virtual device kernel driver;
a second device address space application unit, configured to respond to the queue creation request through the physical device kernel driver, apply for a physical memory in the kernel space as the queue resource, apply for a second device address space matched with the queue resource, and establish memory mapping from the queue resource to the second device address space;
a second protocol stack address generating unit, configured to generate, by the protocol stack process, a second protocol stack address matching the queue resource in the protocol stack space, and establish memory mapping from the second protocol stack address to the second device address space;
the virtualization access address generating unit is used for generating a virtualization access address matched with the queue resource in a virtualization simulator space through the virtualization simulator process and establishing memory mapping from the virtualization access address to the second equipment address space;
a virtual device address generating unit, configured to generate, through the virtual device kernel driver, a virtual device address matching the queue resource in a virtual device address space, and establish memory mapping from the virtualized access address to the virtual device address;
and the second user access address generating unit is used for generating a second user access address matched with the queue resource in the user process space through the user process and establishing memory mapping from the second user access address to the virtual device address.
In an optional embodiment, the pending task writing module 601 includes:
a data receiving and transmitting task writing unit, configured to write a to-be-processed task for receiving and transmitting data to be transmitted into a work queue resource applied by the kernel space through the user process;
the to-be-processed task processing module 602 includes:
the data access unit is used for performing access operation of data to be transmitted in the user memory resource applied by the user process space through the protocol stack process;
and a second memory space mapping corresponding to the user memory resource is provided among the user process space, the protocol stack space and the kernel space.
In an optional embodiment, the apparatus 600 further includes a second memory space mapping construction module, configured to construct a second memory space mapping;
the second memory space mapping construction module specifically includes:
a third user access address obtaining unit, configured to apply for a user memory resource in the user process space through the user process to obtain a third user access address, and send a user memory registration request including the third user access address to a kernel-state physical device kernel driver;
a third device address space generating unit, configured to respond to the user memory registration request through the physical device kernel driver, apply for a third device address space matching the user memory resource, and establish memory mapping from the user memory resource to the third device address space;
and the third protocol stack address generating unit is used for generating a third protocol stack address matched with the user memory resource in the protocol stack space through a protocol stack process and establishing memory mapping from the third protocol stack address to the third device address space.
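For orientation, the registration step that these units perform on behalf of the user process can be sketched with the standard IB Verbs API. This is a sketch under assumptions, not the disclosed implementation: pd is an existing protection domain, len is assumed to be a multiple of the 4096-byte page size, and the access flags are illustrative.

```c
#include <infiniband/verbs.h>
#include <stdlib.h>

/* Allocate a page-aligned user buffer and register it so the kernel
 * driver can build the device-side mapping; the returned lkey/rkey let
 * WQEs reference this buffer later. */
struct ibv_mr *register_user_memory(struct ibv_pd *pd, size_t len,
                                    void **buf_out)
{
    void *buf = aligned_alloc(4096, len);   /* len: multiple of 4096 assumed */
    if (buf == NULL)
        return NULL;

    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (mr == NULL) {
        free(buf);
        return NULL;
    }
    *buf_out = buf;
    return mr;
}
```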
In an optional embodiment, the apparatus 600 further includes a second memory space mapping construction module, configured to construct a second memory space mapping;
the second memory space mapping construction module specifically includes:
a fourth user access address obtaining unit, configured to apply for a user memory resource in the user process space through the user process to obtain a fourth user access address, and send a user memory registration request including the fourth user access address to a kernel driver of a kernel-state virtual device;
a guest physical address determining unit, configured to determine, by the virtual device kernel driver, a guest physical address corresponding to the fourth user access address in response to the user memory registration request;
a host virtual address determining unit, configured to determine, through a virtualization emulator process, a host virtual address corresponding to the guest physical address in a virtualization emulator space;
a fourth device address space generation unit, configured to determine, through a kernel driver of a physical device simulated by kernel-mode software, a host physical address corresponding to the host virtual address, apply for a fourth device address space matched with the host physical address, and establish memory mapping from the host virtual address to the fourth device address space;
and the fourth protocol stack address generating unit is used for generating a fourth protocol stack address matched with the user memory resource in the protocol stack space through the protocol stack process and establishing memory mapping from the fourth protocol stack address to the fourth device address space.
In an alternative embodiment, the data access unit comprises:
the data access address determining subunit is used for analyzing the task to be processed through the protocol stack process to obtain a data access address of the data to be transmitted in the user process space;
and the data access subunit is used for performing access operation on the data to be transmitted in the user memory resource according to the data access address of the user process space through the protocol stack process.
In an optional embodiment, if the to-be-processed task is a to-be-sent task that sends the to-be-transmitted data to a receiving end, and the access operation is a read operation on the to-be-transmitted data, the to-be-processed task processing module further includes:
and the message packaging unit is used for packaging the data to be transmitted through the protocol stack process to obtain a message to be transmitted and transmitting the message to be transmitted to a receiving end.
In an optional embodiment, if the to-be-processed task is a to-be-received task that receives the to-be-transmitted data sent by the sending end, and the access operation is a write operation on the to-be-transmitted data, the to-be-processed task processing module further includes:
and the message analysis unit is used for receiving the message to be transmitted by the transmitting end through the protocol stack process and analyzing the message to be transmitted to obtain the data to be transmitted.
In an optional embodiment, the task to be processed is a completion confirmation task; the to-be-processed task writing module 601 includes:
a completion confirmation task writing unit, configured to write a to-be-processed task that confirms whether the to-be-received and sent task is completed in a completion queue resource applied to the kernel space by using the protocol stack process;
the to-be-processed task processing module 602 includes:
a task completion confirming unit, configured to determine, through the user process, whether the task to be processed is completed according to a preset state bit in the task to be processed;
the to-be-received and sent task is a task to be sent that sends data to be transmitted to a receiving end, or a task to be received that receives data to be transmitted sent by the sending end.
In an optional embodiment, the task to be processed is a transmission completion confirmation task that confirms whether the task to be transmitted is completed; the completion queue resource is a transmission completion queue resource;
the completion confirmation task writing unit includes:
and the sending completion confirmation task writing subunit is used for acquiring a data receiving confirmation message fed back by the receiving end after the receiving end completes the task to be received through the protocol stack process, and writing the sending completion confirmation task into the sending completion queue resource applied by the kernel space according to the data receiving confirmation message.
In an optional embodiment, the task to be processed is a reception completion confirmation task that confirms whether the task to be received is completed; the completion queue resource is a receive completion queue resource;
the completion confirmation task writing unit includes:
and the receiving completion confirmation task writing subunit is used for writing the receiving completion confirmation task into the receiving completion queue resource applied by the kernel space after the data to be transmitted is written into the user memory resource through the protocol stack process.
The data processing device can execute the data processing method provided by any embodiment of the disclosure, and has functional modules and beneficial effects corresponding to the execution of each data processing method.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other handling of the data to be transmitted involved all comply with the provisions of relevant laws and regulations and do not violate public order or good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 701 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 executes the respective methods and processes described above, such as the data processing method. For example, in some embodiments, the data processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the data processing method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the data processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
Artificial intelligence is the discipline that studies making computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking and planning), and it spans both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing and knowledge graph technologies.
Cloud computing refers to a technology system that accesses a flexibly scalable shared pool of physical or virtual resources through a network, where the resources may include servers, operating systems, networks, software, applications, storage devices and the like, and may be deployed and managed in an on-demand, self-service manner. Cloud computing technology can provide efficient and powerful data processing capabilities for technical applications and model training in artificial intelligence, blockchain and other fields.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in this disclosure may be performed in parallel or sequentially or in a different order, as long as the desired results of the technical solutions provided by this disclosure can be achieved, and are not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (29)

1. A method of data processing, comprising:
writing a task to be processed into a queue resource applied in a kernel space through a first user mode process;
when monitoring that the queue resource changes through a second user mode process, acquiring and processing the task to be processed from the queue resource;
and a first memory space mapping corresponding to the queue resource is provided among a first user mode space corresponding to the first user mode process, a second user mode space corresponding to the second user mode process and the kernel space.
2. The method of claim 1, wherein the first user-state process is a user process and the first user-state space is a user process space; the second user mode process is a protocol stack process, and the second user mode space is a protocol stack space; or,
the first user state process is a protocol stack process, and the first user state space is a protocol stack space; the second user mode process is a user process, and the second user mode space is a user process space.
3. The method of claim 2, wherein the first memory space mapping is constructed based on:
sending a queue creation request to a kernel driver of a physical device simulated by kernel-mode software through the user process;
responding to the queue creating request through the physical device kernel driver, applying a physical memory in the kernel space as the queue resource, applying a first device address space matched with the queue resource, and establishing memory mapping from the queue resource to the first device address space;
generating a first protocol stack address matched with the queue resource in the protocol stack space through the protocol stack process, and establishing memory mapping from the first protocol stack address to the first equipment address space;
and generating a first user access address matched with the queue resource in the user process space through the user process, and establishing memory mapping from the first user access address to the first equipment address space.
4. The method of claim 2, wherein the first memory space mapping is constructed based on:
sending a queue creation request to a virtual device kernel driver through the user process;
sending the queue creation request to a physical device kernel driver simulated by kernel-mode software through a virtualization simulator process by the virtual device kernel driver;
responding to the queue creation request through the physical device kernel driver, applying for a physical memory in the kernel space as the queue resource, applying for a second device address space matched with the queue resource, and establishing memory mapping from the queue resource to the second device address space;
generating a second protocol stack address matched with the queue resource in the protocol stack space through the protocol stack process, and establishing memory mapping from the second protocol stack address to the second equipment address space;
generating a virtualization access address matched with the queue resource in a virtualization simulator space through the virtualization simulator process, and establishing memory mapping from the virtualization access address to the second equipment address space;
generating a virtual device address matched with the queue resource in a virtual device address space through the virtual device kernel driver, and establishing memory mapping from the virtualization access address to the virtual device address;
and generating a second user access address matched with the queue resource in the user process space through the user process, and establishing memory mapping from the second user access address to the virtual device address.
5. The method of any one of claims 2-4, wherein the writing, through the first user mode process, of the task to be processed into the queue resource applied for in the kernel space comprises:
writing a task to be processed for receiving and transmitting data to be transmitted into a work queue resource applied by the kernel space through the user process;
the processing the task to be processed comprises the following steps:
performing access operation of data to be transmitted in a user memory resource applied by a user process space through the protocol stack process;
and a second memory space mapping corresponding to the user memory resource is provided among the user process space, the protocol stack space and the kernel space.
6. The method of claim 5, wherein the second memory space mapping is constructed based on:
applying for user memory resources in the user process space through the user process to obtain a third user access address, and sending a user memory registration request including the third user access address to a kernel driver of a physical device in a kernel state;
responding to the user memory registration request through the physical device kernel driver, applying for a third device address space matched with the user memory resource, and establishing memory mapping from the user memory resource to the third device address space;
and generating a third protocol stack address matched with the user memory resource in the protocol stack space through a protocol stack process, and establishing memory mapping from the third protocol stack address to the third device address space.
7. The method of claim 5, wherein the second memory space mapping is constructed based on:
applying for user memory resources in the user process space through the user process to obtain a fourth user access address, and sending a user memory registration request including the fourth user access address to a kernel driver of a kernel-state virtual device;
responding to the user memory registration request through the virtual device kernel driver, and determining a client physical address corresponding to the fourth user access address;
determining a host virtual address corresponding to the guest physical address in a virtualization simulator space through a virtualization simulator process;
determining a host physical address corresponding to the host virtual address through a kernel driver of a physical device simulated by kernel-mode software, applying for a fourth device address space matched with the host physical address, and establishing memory mapping from the host virtual address to the fourth device address space;
and generating, by the protocol stack process, a fourth protocol stack address matched with the user memory resource in the protocol stack space, and establishing memory mapping from the fourth protocol stack address to the fourth device address space.
8. The method of claim 5, wherein performing, by the protocol stack process, an access operation on data to be transmitted in a user memory resource applied for by a user process space comprises:
analyzing the task to be processed through the protocol stack process to obtain a data access address of the data to be transmitted in the user process space;
and performing access operation of the data to be transmitted in the user memory resource according to the data access address of the user process space through the protocol stack process.
9. The method according to claim 5, wherein if the task to be processed is a task to be sent for sending the data to be transmitted to a receiving end, and the access operation is a read operation on the data to be transmitted, the processing the task to be processed further comprises:
and packaging the data to be transmitted through the protocol stack process to obtain a message to be transmitted, and transmitting the message to be transmitted to a receiving end.
10. The method according to claim 5, wherein if the to-be-processed task is a to-be-received task that receives the to-be-transmitted data sent by a sending end, and the access operation is a write operation on the to-be-transmitted data, the processing of the to-be-processed task further comprises:
and receiving a message to be transmitted by a transmitting end through the protocol stack process, and analyzing the message to be transmitted to obtain the data to be transmitted.
11. The method according to any one of claims 2-4, wherein the task to be processed is a completion confirmation task; the writing of the to-be-processed task into the queue resource applied in the kernel space through the first user mode process includes:
writing a task to be processed for confirming whether the task to be received and sent is completed or not into a completion queue resource applied to the kernel space through the protocol stack process;
the processing the task to be processed comprises the following steps:
determining whether the task to be processed is completed or not according to a preset state bit in the task to be processed through the user process;
the to-be-received and sent task is a task to be sent that sends data to be transmitted to a receiving end, or a task to be received that receives data to be transmitted sent by the sending end.
12. The method according to claim 11, wherein the task to be processed is a transmission completion confirmation task that confirms whether the task to be transmitted is completed; the completion queue resource is a transmission completion queue resource;
the writing of the to-be-processed task confirming whether the to-be-received and sent task is completed in the completion queue resource applied to the kernel space by the protocol stack process includes:
and acquiring a data receiving confirmation message fed back by a receiving end after the receiving end completes the task to be received through the protocol stack process, and writing the sending completion confirmation task into sending completion queue resources applied by the kernel space according to the data receiving confirmation message.
13. The method according to claim 11, wherein the task to be processed is a reception completion confirmation task that confirms whether the task to be received is completed; the completion queue resource is a receive completion queue resource;
the writing of the to-be-processed task into the queue resource applied in the kernel space through the first user mode process includes:
and writing a receiving completion confirmation task into the receiving completion queue resource applied by the kernel space after the data to be transmitted is written into the user memory resource through the protocol stack process.
14. A data processing apparatus comprising:
the to-be-processed task writing module is used for writing the to-be-processed task into the queue resource applied in the kernel space through the first user mode process;
the to-be-processed task processing module is used for acquiring and processing the to-be-processed task from the queue resource when the change of the queue resource is monitored through a second user mode process;
and a first memory space mapping corresponding to the queue resource is provided among a first user mode space corresponding to the first user mode process, a second user mode space corresponding to the second user mode process and the kernel space.
15. The apparatus of claim 14, wherein the first user state process is a user process and the first user state space is a user process space; the second user mode process is a protocol stack process, and the second user mode space is a protocol stack space; or,
the first user mode process is a protocol stack process, and the first user mode space is a protocol stack space; the second user mode process is a user process, and the second user mode space is a user process space.
16. The apparatus of claim 15, wherein the apparatus further comprises a first memory space map construction module to construct a first memory space map;
the first memory space mapping construction module includes:
a queue creation request sending unit, configured to send a queue creation request to a kernel driver of a physical device simulated by kernel-mode software through the user process;
a first device address space application unit, configured to respond to the queue creation request through the physical device kernel driver, apply for a physical memory in the kernel space as the queue resource, apply for a first device address space matching the queue resource, and establish memory mapping from the queue resource to the first device address space;
a first protocol stack address generating unit, configured to generate, in the protocol stack space, a first protocol stack address matching the queue resource through the protocol stack process, and establish memory mapping from the first protocol stack address to the first device address space;
and the first user access address generating unit is used for generating a first user access address matched with the queue resource in the user process space through the user process and establishing memory mapping from the first user access address to the first equipment address space.
17. The apparatus of claim 15, wherein the apparatus further comprises a first memory space map construction module to construct a first memory space map; the first memory space mapping construction module includes:
a queue creation request sending unit, configured to send a queue creation request to a virtual device kernel driver through the user process;
the queue creation request transmission unit is used for transmitting the queue creation request to a physical device kernel driver simulated by kernel-mode software through a virtualization simulator process by the virtual device kernel driver;
a second device address space application unit, configured to apply for responding to the queue creation request through the physical device kernel driver, apply for a physical memory in the kernel space as the queue resource, apply for a second device address space matched with the queue resource, and establish memory mapping from the queue resource to the second device address space;
a second protocol stack address generating unit, configured to generate, in the protocol stack space, a second protocol stack address matching the queue resource through the protocol stack process, and establish memory mapping from the second protocol stack address to the second device address space;
the virtualization access address generating unit is used for generating a virtualization access address matched with the queue resource in a virtualization simulator space through the virtualization simulator process and establishing memory mapping from the virtualization access address to the second equipment address space;
a virtual device address generating unit, configured to generate, through the virtual device kernel driver, a virtual device address matching the queue resource in a virtual device address space, and establish memory mapping from the virtualized access address to the virtual device address;
and the second user access address generating unit is used for generating a second user access address matched with the queue resource in the user process space through the user process and establishing memory mapping from the second user access address to the virtual device address.
18. The apparatus according to any one of claims 15-17, wherein the pending task writing module comprises:
a data receiving and transmitting task writing unit, configured to write a to-be-processed task for receiving and transmitting data to be transmitted into a work queue resource applied by the kernel space through the user process;
the task processing module to be processed comprises:
the data access unit is used for performing access operation on data to be transmitted in the user memory resource applied by the user process space through the protocol stack process;
and a second memory space mapping corresponding to the user memory resource is provided among the user process space, the protocol stack space and the kernel space.
19. The apparatus of claim 18, wherein the apparatus further comprises a second memory space map construction module to construct a second memory space map;
the second memory space mapping construction module specifically includes:
a third user access address obtaining unit, configured to apply for a user memory resource in the user process space through the user process to obtain a third user access address, and send a user memory registration request including the third user access address to a kernel-state physical device kernel driver;
a third device address space generating unit, configured to respond to the user memory registration request through the physical device kernel driver, apply for a third device address space matching the user memory resource, and establish memory mapping from the user memory resource to the third device address space;
and the third protocol stack address generating unit is used for generating a third protocol stack address matched with the user memory resource in the protocol stack space through a protocol stack process and establishing memory mapping from the third protocol stack address to a third equipment address space.
20. The apparatus of claim 18, wherein the apparatus further comprises a second memory space map construction module to construct a second memory space map;
the second memory space mapping construction module specifically includes:
a fourth user access address obtaining unit, configured to apply for a user memory resource in the user process space through the user process to obtain a fourth user access address, and send a user memory registration request including the fourth user access address to a kernel driver of a kernel-state virtual device;
a guest physical address determining unit, configured to determine, by the virtual device kernel driver, a guest physical address corresponding to the fourth user access address in response to the user memory registration request;
a host virtual address determining unit, configured to determine, through a virtualization emulator process, a host virtual address corresponding to the guest physical address in a virtualization emulator space;
a fourth device address space generation unit, configured to determine, through a kernel driver of a physical device simulated by kernel-mode software, a host physical address corresponding to the host virtual address, apply for a fourth device address space matched with the host physical address, and establish memory mapping from the host virtual address to the fourth device address space;
and the fourth protocol stack address generating unit is used for generating a fourth protocol stack address matched with the user memory resource in the protocol stack space through the protocol stack process and establishing memory mapping from the fourth protocol stack address to the fourth device address space.
21. The apparatus of claim 18, wherein the data access unit comprises:
the data access address determining subunit is used for analyzing the task to be processed through the protocol stack process to obtain a data access address of the data to be transmitted in the user process space;
and the data access subunit is used for performing access operation on the data to be transmitted in the user memory resource according to the data access address of the user process space through the protocol stack process.
22. The apparatus of claim 18, wherein if the to-be-processed task is a to-be-sent task that sends the to-be-transmitted data to a receiving end, and the access operation is a read operation on the to-be-transmitted data, the to-be-processed task processing module further includes:
and the message packaging unit is used for packaging the data to be transmitted through the protocol stack process to obtain a message to be transmitted and transmitting the message to be transmitted to a receiving end.
23. The apparatus according to claim 18, wherein if the to-be-processed task is a to-be-received task that receives the to-be-transmitted data sent by a sending end, and the access operation is a write operation on the to-be-transmitted data, the to-be-processed task processing module further includes:
and the message analysis unit is used for receiving the message to be transmitted by the transmitting end through the protocol stack process and analyzing the message to be transmitted to obtain the data to be transmitted.
24. The apparatus according to any of claims 14-17, wherein the task to be processed is a completion confirmation task; the module for writing the task to be processed comprises:
a completion confirmation task writing unit, configured to write a to-be-processed task that confirms whether the to-be-received and sent task is completed in a completion queue resource applied to the kernel space by using the protocol stack process;
the task processing module to be processed comprises:
a task completion confirming unit, configured to determine, through the user process, whether the task to be processed is completed according to a preset state bit in the task to be processed;
the to-be-received and sent task is a task to be sent that sends data to be transmitted to a receiving end, or a task to be received that receives data to be transmitted sent by the sending end.
25. The apparatus according to claim 24, wherein the task to be processed is a sending completion confirmation task that confirms whether the task to be sent is completed; the completion queue resource is a sending completion queue resource;
the completion confirmation task writing unit comprises:
a sending completion confirmation task writing subunit, configured to acquire, through the protocol stack process, a data reception confirmation message fed back by the receiving end after the receiving end completes the task to be received, and to write the sending completion confirmation task into the sending completion queue resource applied for in the kernel space according to the data reception confirmation message.
26. The apparatus according to claim 24, wherein the task to be processed is a receiving completion confirmation task that confirms whether the task to be received is completed; the completion queue resource is a receiving completion queue resource;
the completion confirmation task writing unit comprises:
a receiving completion confirmation task writing subunit, configured to write, through the protocol stack process, the receiving completion confirmation task into the receiving completion queue resource applied for in the kernel space after the data to be transmitted is written into the user memory resource.
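Claims 25 and 26 differ only in what triggers the completion write: the receiving end's data reception confirmation message on the sending side, and the payload landing in the user memory resource on the receiving side. A minimal C sketch of both triggers feeding one producer routine, with the queue ring layout assumed for illustration:

```c
/* Hypothetical illustration of the protocol stack process posting
 * entries to sending and receiving completion queue resources. */
#include <stdatomic.h>
#include <stdint.h>

struct cqe {
    uint64_t    task_id;
    atomic_uint state; /* 0 = pending, 1 = done */
};

struct comp_queue {
    struct cqe *ring; /* memory-mapped into the sharing spaces     */
    uint32_t    size; /* number of entries, assumed a power of two */
    uint32_t    tail; /* producer index owned by the stack process */
};

/* Publish one completion entry: fill it, then set the state bit with
 * release ordering so the consumer sees a fully written entry. */
static void post_completion(struct comp_queue *cq, uint64_t task_id)
{
    struct cqe *e = &cq->ring[cq->tail++ & (cq->size - 1)];
    e->task_id = task_id;
    atomic_store_explicit(&e->state, 1, memory_order_release);
}

/* Sending completion: driven by the peer's reception confirmation. */
static void on_data_ack(struct comp_queue *send_cq, uint64_t task_id)
{
    post_completion(send_cq, task_id);
}

/* Receiving completion: driven by the payload landing in user memory. */
static void on_data_written(struct comp_queue *recv_cq, uint64_t task_id)
{
    post_completion(recv_cq, task_id);
}

int main(void)
{
    struct cqe entries[4] = {0};
    struct comp_queue cq = { .ring = entries, .size = 4, .tail = 0 };
    on_data_ack(&cq, 42);     /* sending completion for task 42    */
    on_data_written(&cq, 43); /* receiving completion for task 43  */
    return 0;
}
```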
27. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the data processing method of any one of claims 1-13.
28. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the data processing method of any one of claims 1-13.
29. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the data processing method of any one of claims 1-13.
CN202211504912.3A 2022-11-28 2022-11-28 Data processing method, device, equipment and storage medium Pending CN115729880A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211504912.3A CN115729880A (en) 2022-11-28 2022-11-28 Data processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115729880A 2023-03-03

Family

ID=85298836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211504912.3A Pending CN115729880A (en) 2022-11-28 2022-11-28 Data processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115729880A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination