CN112650558B - Data processing method and device, readable medium and electronic equipment


Info

Publication number
CN112650558B
CN112650558B (application CN202011595902.6A)
Authority
CN
China
Prior art keywords
data
processed
notification
descriptors
data processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011595902.6A
Other languages
Chinese (zh)
Other versions
CN112650558A (en)
Inventor
董伸
黄朝波
刘禄仁
邱模炯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ucloud Technology Co ltd
Original Assignee
Ucloud Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ucloud Technology Co ltd filed Critical Ucloud Technology Co ltd
Priority to CN202011595902.6A priority Critical patent/CN112650558B/en
Publication of CN112650558A publication Critical patent/CN112650558A/en
Application granted granted Critical
Publication of CN112650558B publication Critical patent/CN112650558B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/546 Message passing systems or structures, e.g. queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45579 I/O management, e.g. providing access to device drivers or storage
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/54 Indexing scheme relating to G06F 9/54
    • G06F 2209/548 Queue

Abstract

The present application relates to the field of information technologies, and in particular to a data processing method, an apparatus, a readable medium, and an electronic device. The data processing method of the present application is used in a data processing system that includes a scheduling module and an execution module. The method includes: the scheduling module generates data to be processed and a descriptor corresponding to the data to be processed, where the descriptor describes information about the data to be processed; the scheduling module sends a data processing notification; the execution module receives and stores the data processing notification and acquires a preset number of descriptors based on the notification; and the execution module sets a notification enable flag bit to valid when it determines that the preset number of descriptors include an invalid descriptor. The method effectively solves the problem that bandwidth resources are heavily occupied because the scheduling module sends a notification every time it generates data, and thereby avoids wasting data transmission bandwidth.

Description

Data processing method and device, readable medium and electronic equipment
Technical Field
The present application relates to the field of information technologies, and in particular, to a data processing method, an apparatus, a readable medium, and an electronic device.
Background
In the field of hardware virtualization, one host device may be virtualized into a plurality of virtual devices, each of which may process data independently using the resources of the host device.
For example, taking a virtual device as the execution module, in the existing data processing method the scheduling module 100 sends a notification to the execution module each time new to-be-processed data is generated, and the notification tells the execution module the exact number of to-be-processed data items. As shown in fig. 1, when 2 to-be-processed data items, namely to-be-processed data 1 and to-be-processed data 2, are generated at a time and need to be processed by queue 1 of the execution module 2001, the scheduling module 100 sends notification 1 carrying the data count 2 to the execution module 2001; when 1 to-be-processed data item, namely to-be-processed data 3, needs to be processed by queue 1 of the execution module 2001, the scheduling module 100 sends notification 2 carrying the data count 1 to the execution module 2001; and when 3 to-be-processed data items, namely to-be-processed data 4, to-be-processed data 5 and to-be-processed data 6, need to be processed by queue 1 of the execution module 2002, the scheduling module 100 sends notification 3 to the execution module 2002. As long as new to-be-processed data keeps being generated, the scheduling module 100 keeps sending notifications to the execution module 2001 or the execution module 2002. In other words, every time the scheduling module 100 generates to-be-processed data it must send a notification that also tells the execution module how many data items were generated. In a high-performance, high-traffic scenario this wastes data transmission bandwidth and adds extra overhead on the scheduling module 100.
Disclosure of Invention
The embodiment of the application provides a data processing method, a data processing device, a readable medium and electronic equipment.
In a first aspect, an embodiment of the present application provides a data processing method for use in a data processing system, where the data processing system includes a scheduling module and an execution module. The data processing method includes: the scheduling module generates data to be processed and a descriptor corresponding to the data to be processed, where the descriptor describes information about the data to be processed; the scheduling module sends a data processing notification, where the data processing notification is used to notify that the data to be processed needs to be processed; the execution module receives and stores the data processing notification and acquires a preset number of descriptors based on the data processing notification, where the data to be processed is acquired and processed according to the acquired descriptors; when the execution module determines that all of the preset number of descriptors are valid descriptors, it sets a notification enable flag bit to invalid and continues to acquire the preset number of descriptors; and when the execution module determines that the preset number of descriptors include an invalid descriptor, it sets the notification enable flag bit to valid.
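To make the flow of the first aspect concrete, the following minimal C sketch walks through it: fetch a batch of descriptors, process the valid ones, keep notifications disabled while every fetched descriptor is valid, and re-enable notifications once an invalid descriptor is encountered. The helper names, the ring layout and the batch size are illustrative assumptions for the sketch and are not defined by the patent.

    /* notify_once.c - illustrative sketch of the first-aspect control flow (assumed names). */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define BATCH 4                      /* "preset number" of descriptors per fetch (assumed) */
    #define RING  10

    struct desc { bool valid; int payload; };

    static struct desc ring[RING];       /* stand-in for the descriptors in host memory */
    static size_t head;                  /* stand-in for the saved index of the last valid descriptor */
    static bool notify_enable = true;    /* the notification enable flag bit */

    static void fetch_descriptors(struct desc *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)   /* in reality a PCIe/DMA read from the scheduling module */
            out[i] = ring[(head + i) % RING];
    }

    static void process_descriptor(const struct desc *d)
    {
        printf("processed data %d\n", d->payload);
    }

    /* Invoked once for each stored data processing notification. */
    static void handle_notification(void)
    {
        struct desc batch[BATCH];
        for (;;) {
            fetch_descriptors(batch, BATCH);

            size_t valid = 0;
            while (valid < BATCH && batch[valid].valid)
                process_descriptor(&batch[valid++]);
            head += valid;               /* remember where the next fetch should resume */

            if (valid == BATCH) {        /* all descriptors valid: keep notifications off, keep fetching */
                notify_enable = false;
                continue;
            }
            notify_enable = true;        /* invalid descriptor found: re-enable notifications and stop */
            break;
        }
    }

    int main(void)
    {
        for (int i = 0; i < 6; i++)      /* six pending data items, the remaining slots invalid */
            ring[i] = (struct desc){ .valid = true, .payload = i + 1 };
        handle_notification();
        printf("notify_enable=%d, resume index=%zu\n", notify_enable, head);
        return 0;
    }

Running the sketch, the single stored notification is enough to drain all six pending items; only when the invalid seventh descriptor is reached does the execution module ask the scheduling module to notify it again.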
For example, the electronic device includes a scheduling module and an execution module connected through a bus, where the bus may be a PCIe bus. The scheduling module includes a processor, a controller, a memory, and the like, and in a specific implementation the execution module may be a system on a chip. For example, the execution module may be a physical network card, and the physical network card may virtualize n virtual network cards using the SR-IOV technique, where the physical network card and each virtual network card include a plurality of queues, and each queue may be used to process data to be processed.
For example, the scheduling module is connected to the physical network card through a PCIe bus, and the scheduling module may generate to-be-processed data that needs to be processed by a queue of the multi-queue physical network card and/or the multi-queue virtual network card, together with a descriptor corresponding to the to-be-processed data. The queue of the multi-queue physical network card may be queue 1, queue 2, ... or queue m of the physical network card, and the queue of the multi-queue virtual network card may be queue 1, queue 2, ... or queue m of virtual network card 1, up to queue m of virtual network card n. The scheduling module sends the data processing notification to the physical network card through the PCIe bus, and the execution module receives and stores the data processing notification and acquires a preset number of descriptors based on it. When the descriptor parsing module of the execution module determines that all of the preset number of descriptors are valid descriptors, it continues to acquire the preset number of descriptors; when it determines that the preset number of descriptors include an invalid descriptor, the execution module sets the notification enable flag bit to valid. It can be understood that, for example, when the scheduling module generates to-be-processed data that needs to be processed by queue 1 of virtual network card 1, the scheduling module sends a notification to queue 1 of virtual network card 1; once the notification enable flag of queue 1 of virtual network card 1 is set to invalid, the scheduling module no longer sends notifications to that queue, and it sends a notification to queue 1 of virtual network card 1 again only when the notification enable flag of that queue is valid and new to-be-processed data needs to be processed by it. In other words, the scheduling module only needs to notify queue 1 of virtual network card 1 once; the virtual network card then actively processes the data to be processed and does not stop until it encounters an invalid descriptor. According to the method of the present application, this effectively solves the problem that bandwidth resources are heavily occupied because the scheduling module sends a notification every time it generates data, avoids wasting data transmission bandwidth, reduces the extra clock cycles consumed when the processor of the scheduling module repeatedly reads and writes the execution module, and in some scenarios reduces process context switching, thereby improving the performance of the scheduling module.
In a possible implementation manner of the first aspect, the information of the data to be processed includes address information of the data to be processed, length information of the data to be processed, and read-write information of the data to be processed.
For example, the scheduling module generates to-be-processed data to be processed by queue 1 of virtual network card 1, and the descriptor corresponding to the to-be-processed data is used to describe its information. The descriptor may describe the following information of the to-be-processed data: the address of the to-be-processed data that needs to be processed by queue 1 of virtual network card 1, the length of the to-be-processed data, whether the to-be-processed data is read data or write data, and so on. It will be appreciated that when the descriptor is an invalid descriptor there is no data pending, and when the descriptor is a valid descriptor there is data pending.
In a possible implementation manner of the first aspect, the notification enable flag bit is used to control the sending of data processing notifications, where the scheduling module sends a data processing notification only when the notification enable flag bit is valid and to-be-processed data is generated.
It is understood that the notification enable flag bit may be defined as valid when set to 1 and invalid when set to 0, or alternatively as valid when set to 0 and invalid when set to 1.
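As an illustration of how the scheduling module side might honour this flag, the short C sketch below sends a notification only while the flag is valid; the function names and the flag variable are assumptions for the sketch, not interfaces defined by the patent.

    /* driver_notify.c - illustrative scheduling-module-side use of the notification enable flag. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool notify_enable = true;    /* mirrors the flag exposed by the execution module */

    static void send_notification(int nic_id, int queue_id)
    {
        printf("notify nic %d queue %d\n", nic_id, queue_id);   /* in reality a PCIe doorbell write */
    }

    static void on_new_pending_data(int nic_id, int queue_id)
    {
        if (notify_enable)               /* flag valid: notify the execution module once */
            send_notification(nic_id, queue_id);
        /* flag invalid: the queue is already draining descriptors on its own, no notification needed */
    }

    int main(void)
    {
        on_new_pending_data(2, 1);       /* first data item: a notification is sent */
        notify_enable = false;           /* execution module switched the queue to self-start mode */
        on_new_pending_data(2, 1);       /* later data items: no further notification is sent */
        return 0;
    }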
In a possible implementation manner of the first aspect, the method further includes: the data processing notification is saved in the execution module, and the self-starting flag bit is set to be valid under the condition that the data processing notification is saved in the execution module.
It can be understood that the execution module includes a notification module, a self-starting module, and a descriptor processing module. The data processing notification may be stored in the notification module or in the self-starting module, and when a data processing notification is stored in the notification module and/or the self-starting module, the self-starting flag bit is set to valid. When no data processing notification is stored in the notification module or the self-starting module, the self-starting flag bit is set to invalid. It can be understood that when the self-starting flag bit is valid there is a data processing notification that needs to be processed, and when it is invalid there is no data processing notification to be processed or the descriptor processing module is currently processing a data processing notification.
It can be understood that the execution module may treat the self-starting flag bit as valid when it is set to 1 and invalid when it is set to 0, or alternatively as valid when it is set to 0 and invalid when it is set to 1.
In a possible implementation manner of the first aspect, the method further includes: the data processing notification includes identification information, and the preset number of descriptors are acquired based on the identification information, where the identification information uniquely identifies the execution module. For example, when the scheduling module generates to-be-processed data to be processed by queue 1 of virtual network card 1, the data processing notification includes the identification information of queue 1 of virtual network card 1, which distinguishes it from the queues of other multi-queue physical network cards or multi-queue virtual network cards and uniquely identifies queue 1 of virtual network card 1. In other words, the data processing notification is a notification containing the queue 1 information (i.e., the identification information) of virtual network card 1. The identification information of queue 1 of virtual network card 1 may be a combination of the ID of virtual network card 1 and the ID of queue 1; for example, it may be 21, where 2 represents the ID of the virtual network card and 1 represents the ID of queue 1.
In a possible implementation manner of the first aspect, the method further includes: when all of the preset number of descriptors are valid descriptors, storing the index of the last valid descriptor, where the index indicates the position of the last valid descriptor, i.e. its location in the memory. It is to be understood that the descriptor indexing module stores the index of the last valid descriptor, so that the next time the descriptor engine acquires the preset number of descriptors it can determine, from the previously stored index, the exact position from which to start. For example, for to-be-processed data whose identification contains the ID of virtual network card 1 and the ID of queue 1, the position from which the preset number of descriptors are acquired next time is determined by the index of the descriptor describing that data, which helps ensure that the data generated first is processed first and that the to-be-processed data is processed in order.
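A possible way to keep one saved index per queue is sketched below in C; the table size, the key encoding and the function names are assumptions made for illustration only.

    /* index_save.c - illustrative per-queue index saving (assumed encoding and sizes). */
    #include <stdio.h>

    #define MAX_QUEUES 64                      /* assumed capacity of the index saving module */

    static unsigned saved_index[MAX_QUEUES];   /* index of the last valid descriptor per queue */

    /* Assumed key: identification information = network card ID * 10 + queue ID (e.g. 21). */
    static unsigned slot(unsigned ident) { return ident % MAX_QUEUES; }

    static void save_last_valid(unsigned ident, unsigned idx)
    {
        saved_index[slot(ident)] = idx;
    }

    static unsigned next_fetch_position(unsigned ident, unsigned ring_size)
    {
        /* the next batch starts right after the last valid descriptor */
        return (saved_index[slot(ident)] + 1) % ring_size;
    }

    int main(void)
    {
        save_last_valid(21, 9);                /* identification 21 stopped at descriptor index 9 */
        printf("next fetch starts at %u\n", next_fetch_position(21, 256));
        return 0;
    }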
In a possible implementation manner of the first aspect, the method further includes: when it is determined that the preset number of descriptors include an invalid descriptor and the notification enable flag bit is valid, acquiring the preset number of descriptors again based on the data processing notification; and when it is determined that the re-acquired preset number of descriptors still include an invalid descriptor, clearing the data processing notification and setting the self-starting flag bit to invalid.
Take the case where the scheduling module generates to-be-processed data that needs to be processed by queue 1 of virtual network card 1. The execution module receives a notification containing the queue 1 information of virtual network card 1 and stores the data processing notification in the notification module, and at the same time sets the self-starting flag bit of queue 1 of virtual network card 1 to valid, meaning that a notification for queue 1 of virtual network card 1 needs to be processed. When the descriptor processing module fetches the notification containing the queue 1 information of virtual network card 1 from the notification module, it sets the self-starting flag bit of queue 1 of virtual network card 1 to invalid, meaning that the notification for queue 1 of virtual network card 1 is being processed. Based on this notification, the descriptor processing module continues to acquire the preset number of descriptors and determines whether they are valid; when they are all valid descriptors, it stores the notification into the self-starting module and sets the self-starting flag bit to valid, meaning that the notification for queue 1 of virtual network card 1 still needs to be processed. When the execution module determines that the preset number of descriptors include an invalid descriptor, it sets the notification enable flag bit to valid; at the same time the notification containing the queue 1 information of virtual network card 1 is stored in the self-starting module and the self-starting flag bit of queue 1 of virtual network card 1 is set to valid, meaning that the notification for queue 1 of virtual network card 1 needs to be processed. When the preset number of descriptors that the descriptor processing module then acquires based on the notification in the self-starting module still include an invalid descriptor while the self-starting flag bit of queue 1 of virtual network card 1 is in the valid state, the data processing notification (i.e., the notification containing the queue 1 information of virtual network card 1) is cleared and the self-starting flag bit of queue 1 of virtual network card 1 is cleared.
It can be understood that, when the notification enable flag bit of queue 1 of virtual network card 1 is valid and the scheduling module generates new data to be processed by queue 1 of virtual network card 1, the scheduling module sends a data processing notification (i.e., a notification containing the queue 1 information of virtual network card 1).
It can be understood that after the preset number of descriptors are first found to include an invalid descriptor and the notification enable flag bit of queue 1 of virtual network card 1 is set to valid, the preset number of descriptors are acquired once more and checked again for validity; if they still include an invalid descriptor, the data processing notification (i.e., the notification containing the queue 1 information of virtual network card 1) is cleared and the self-starting flag bit of queue 1 of virtual network card 1 is cleared. It can be understood that performing this second check after the notification enable flag bit of queue 1 of virtual network card 1 has been set to valid helps eliminate a race condition, namely the case where a new valid descriptor is generated after the descriptor engine has read an invalid descriptor but before the notification enable flag bit of queue 1 of virtual network card 1 has been set to valid.
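The re-check described above can be sketched as follows; the helper names are assumptions, and the sketch only illustrates the order of operations (enable notifications first, then look at the descriptors once more) that closes the race window.

    /* recheck.c - illustrative second check after re-enabling notifications (assumed helpers). */
    #include <stdbool.h>
    #include <stdio.h>

    static bool notify_enable;
    static bool self_start_flag;
    static bool pending_notification = true;    /* a stored notification for this queue */
    static int  new_valid_descriptors;          /* descriptors the driver added concurrently */

    static bool batch_has_invalid(void)         /* stand-in for fetching and parsing a batch */
    {
        return new_valid_descriptors == 0;      /* invalid descriptor found when nothing new arrived */
    }

    static void on_invalid_descriptor(void)
    {
        notify_enable = true;                   /* 1. let the scheduling module notify us again   */
        self_start_flag = true;                 /* 2. keep the notification pending               */
        if (batch_has_invalid()) {              /* 3. look once more after enabling               */
            pending_notification = false;       /*    still nothing to do: drop the notification  */
            self_start_flag = false;            /*    and clear the self-starting flag bit        */
        }
        /* otherwise: a descriptor became valid in the race window and will now be processed */
    }

    int main(void)
    {
        new_valid_descriptors = 1;              /* driver slipped in new data during the window */
        on_invalid_descriptor();
        printf("pending=%d self_start=%d\n", pending_notification, self_start_flag);
        return 0;
    }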
In a second aspect, an embodiment of the present application discloses a data processing apparatus, including: a generating module, configured to generate data to be processed and a descriptor corresponding to the data to be processed, where the descriptor describes information about the data to be processed; a sending module, configured to send a data processing notification, where the data processing notification is used to notify that the data to be processed needs to be processed; a receiving and storing module, configured to receive and store the data processing notification and to acquire a preset number of descriptors based on the data processing notification, where the data to be processed is acquired and processed according to the acquired descriptors; a judging module, configured to set the notification enable flag bit to invalid and continue acquiring the preset number of descriptors when all of the preset number of descriptors are determined to be valid descriptors; and an executing module, configured to set the notification enable flag bit to valid when the preset number of descriptors are determined to include an invalid descriptor. The information of the data to be processed includes address information, length information, and read-write information of the data to be processed. The notification enable flag bit is used to control the sending of data processing notifications, where the scheduling module sends a data processing notification when the notification enable flag bit is valid and to-be-processed data is generated. The data processing notification is saved in the execution module, and the self-starting flag bit is set to valid when a data processing notification is saved in the execution module. The data processing notification includes identification information, and the preset number of descriptors are acquired based on the identification information, where the identification information uniquely identifies the execution module. When all of the preset number of descriptors are valid descriptors, the index of the last valid descriptor is stored, where the index indicates the position of the last valid descriptor. When it is determined that the preset number of descriptors include an invalid descriptor and the notification enable flag bit is valid, the preset number of descriptors are acquired again based on the data processing notification, and when the re-acquired descriptors are determined to still include an invalid descriptor, the data processing notification and the self-starting flag bit are cleared.
In a third aspect, an embodiment of the present application discloses a machine-readable medium, on which instructions are stored, and when the instructions are executed on a machine, the instructions cause the machine to execute the data processing method of the first aspect.
In a fourth aspect, an embodiment of the present application discloses an electronic device, including: a memory to store instructions; a processor, the processor being coupled to the memory, the electronic device performing the data processing method of the first aspect described above when the program instructions stored in the memory are executed by the processor.
Drawings
FIG. 1 illustrates a block schematic diagram of the structure of data processing of a conventional execution module, according to some embodiments of the present application;
FIG. 2 illustrates an information diagram of to-be-processed data described by descriptors in the Virtio protocol, according to some embodiments of the present application;
FIG. 3 illustrates a block schematic diagram of the structure of data processing of an execution module, according to some embodiments of the present application;
FIG. 4 illustrates an architectural diagram of a device interaction over a PCIe bus based on SR-IOV technology devices, according to some embodiments of the present application;
FIG. 5 illustrates a flow diagram of a method for virtual network card queue data processing, according to some embodiments of the present application;
FIG. 6 illustrates a data processing apparatus according to some embodiments of the present application;
FIG. 7 illustrates a block diagram of an electronic device, in accordance with some embodiments of the present application;
FIG. 8 illustrates a block diagram of a system on a chip (SoC), according to some embodiments of the present application.
Detailed Description
The illustrative embodiments of the present application include, but are not limited to, data processing methods, apparatuses, readable media, and electronic devices.
To solve the problems in the prior art, the data processing method provided by the present application describes the data to be processed through descriptors and sends a notification to the execution module only once; when new data is generated later, no further notification needs to be sent to the execution module, and the execution module simply determines whether there is data to process according to whether the descriptor acquired and parsed by its descriptor parsing module is a valid descriptor.
For the convenience of understanding the technical solutions provided by the embodiments of the present application, the following key terms used in the embodiments of the present application are explained:
Multi-queue network card: a network card that provides multiple queues, with the CPU of the server distributing network card queue information across the queues. Multiple queues are mainly used for Quality of Service (QoS) traffic classes: transmit queues can be allocated to different traffic classes so that the network card can schedule traffic on the transmit side, and receive queues can be allocated to different traffic classes so that flow-based rate limiting can be achieved. The multi-queue network card is the mainstream form of today's high-speed network cards; it can fully utilize the network card bandwidth in cases where a single queue cannot, and thus meets high-performance, high-traffic scenarios.
Descriptor: the descriptor is used for describing the information of the data to be processed in the memory, and the descriptor is stored in a continuous memory. The descriptor has a size of 16 bytes (byte) or 128 bits (bit).
Fig. 2 illustrates the information of the data to be processed described by descriptors in the virtualized input/output (Virtio) protocol, according to some embodiments of the present application. As shown in fig. 2, Address represents the storage address of the data to be processed in the memory; Length represents the length of the data to be processed; Id is the descriptor ID used to identify the descriptor; the Next flag describes the index number of the next descriptor that is logically adjacent to this descriptor; the Write flag indicates whether the descriptor is used to transmit data or to receive data; the to-be-processed data index flag is used to point to larger to-be-processed data; the Reserved flag marks reserved, unused bits; and the Avail flag is used together with the Used flag to determine whether the descriptor is a valid descriptor.
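For reference, the 16-byte descriptor layout of the virtio 1.1 packed virtqueue, which matches the fields listed above, can be written in C as below. The struct and flag values follow the published virtio 1.1 specification; mapping the patent's field names onto these bits (for example, treating the "to-be-processed data index flag" as the INDIRECT bit) is an assumption of this sketch.

    /* virtq_desc.c - the virtio 1.1 packed virtqueue descriptor layout (16 bytes). */
    #include <stdint.h>
    #include <stdio.h>

    struct virtq_packed_desc {
        uint64_t addr;    /* Address: where the data to be processed lives in memory   */
        uint32_t len;     /* Length: size of the data to be processed                  */
        uint16_t id;      /* Id: buffer/descriptor identifier                          */
        uint16_t flags;   /* Next, Write, indirect, Avail and Used flag bits           */
    };

    #define VIRTQ_DESC_F_NEXT      1          /* more descriptors follow in this chain     */
    #define VIRTQ_DESC_F_WRITE     2          /* device writes (receive) vs reads (send)   */
    #define VIRTQ_DESC_F_INDIRECT  4          /* points to a table of further descriptors  */
    #define VIRTQ_DESC_F_AVAIL     (1u << 7)  /* used with USED to mark a valid descriptor */
    #define VIRTQ_DESC_F_USED      (1u << 15)

    int main(void)
    {
        printf("descriptor size: %zu bytes\n", sizeof(struct virtq_packed_desc));  /* prints 16 */
        return 0;
    }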
Single root I/O virtualization (SR-IOV) is a basic technology for I/O hardware virtualization. The SR-IOV standard allows PCIe devices to be shared efficiently, so that, based on SR-IOV, a physical electronic device containing a Physical Function (PF) can be virtualized over a Peripheral Component Interconnect express (PCIe) bus into virtual electronic devices with up to hundreds of associated Virtual Functions (VFs). Fig. 3 is a schematic structural diagram illustrating data processing performed by an execution module according to an embodiment of the present application.
As shown in fig. 3, the interaction diagram includes a scheduling module 100 and an execution module 200, which are connected by a bus 300. The scheduling module 100 includes a processor, a controller, a memory, and the like. In an embodiment of the present application, the bus 300 may be a PCIe bus. In some embodiments of the present application, the execution module 200 may virtualize a plurality of virtual execution modules through the SR-IOV technique; these include 1 physical execution module 200 containing a physical function and n virtual execution modules containing n virtual functions. Each virtual function or physical function may use multiple queues, e.g., m queues, and any two virtual functions are isolated from each other. The execution module 200 presents the n virtual functions to the scheduling module 100, and each virtual function may be responsible for managing a block of storage space in the scheduling module 100; at this point the n virtual functions are equivalent to n virtual network cards.
For example, through the SR-IOV technique the execution module 200 presents to the scheduling module 100 a plurality of execution modules containing virtual functions, such as the execution module 2001, the execution module 2002, the execution module 2003, ..., and the execution module n. The functions that the execution modules 2001, 2002, 2003, ..., n present to the scheduling module 100 are the same as the functions that the execution module 200 presents to the scheduling module 100.
In some embodiments of the present application, the execution module 200 may support communication with a front-end driver developed based on the Virtio protocol, where the front-end driver runs on the scheduling module 100. It can be understood that a front-end driver generates the data that needs to be processed by one execution module, i.e., 1 front-end driver generates the data that 1 execution module needs to process. As shown in fig. 3, when the execution module 200 virtualizes n virtual execution modules through the SR-IOV technique, n+1 front-end drivers run on the scheduling module 100.
For example, in the embodiment of the present application, as shown in fig. 3, when 1 piece of to-be-processed data is generated at a time, i.e., to-be-processed data 1 needs to be processed by queue 1 of the execution module 2001, the scheduling module 100 sends notification 1 to the execution module 2001, and notification 1 does not need to carry the number of to-be-processed data items. When to-be-processed data 2 and to-be-processed data 3 subsequently need to be processed by queue 1 of the execution module 2001, the scheduling module 100 does not send a notification to the execution module 2001 again; after queue 1 of the execution module 2001 finishes processing to-be-processed data 1, it actively processes the newly generated data that needs to be processed by queue 1 of the execution module 2001, such as to-be-processed data 2 and to-be-processed data 3, in the order in which the data was generated. When 2 pieces of to-be-processed data are generated at a time, i.e., to-be-processed data 4 and to-be-processed data 5 need to be processed by queue 1 of the execution module 2002, the scheduling module 100 sends notification 2 to the execution module 2002; similarly, when new to-be-processed data needs to be processed by queue 1 of the execution module 2002, the scheduling module 100 does not need to send a notification to the execution module 2002 again, and after queue 1 of the execution module 2002 finishes processing to-be-processed data 4 and to-be-processed data 5, it actively processes the newly generated to-be-processed data in the order in which it was generated. It can be understood that when new to-be-processed data needs to be processed by a queue of an execution module, the scheduling module 100 sends a notification to that execution module once, and the execution module then actively processes the to-be-processed data that continues to be generated. Therefore, the problem that notifications heavily occupy bandwidth resources because the scheduling module 100 sends a notification every time it generates data can be effectively solved, waste of data transmission bandwidth is avoided, the extra clock cycles consumed when the processor of the scheduling module 100 repeatedly reads and writes the execution module are reduced, process context switching can be reduced in some scenarios, and the performance of the scheduling module 100 is improved. The queue of the execution module that processes the to-be-processed data may be queue 1 of the execution module 2001, queue m of the execution module 2001, or queue m of the execution module n, depending on the actual work requirement, and is not limited here.
In some embodiments of the present application, the electronic device including the execution module 200 and the scheduling module 100 may be a Personal Computer (PC), a notebook computer, a server, or the like. The server may be an independent physical server, may also be a server cluster formed by a plurality of physical servers, and may also be a server providing basic cloud computing services such as a cloud database, a cloud storage, and a CDN, which is not limited in this embodiment of the present application.
In some embodiments of the present application, the execution module 200 may be implemented as a system on chip (SoC), where the SoC may be a separate PCIe card that is disposed on the scheduling module 100 or integrated directly on the motherboard of the scheduling module 100. Specifically, the execution module 200 may be a network card, a video card, a storage device, a sound card, or the like, which is not limited in this embodiment of the present application.
As shown in fig. 4, the execution module 200 includes a descriptor processing module 203, an index saving module 201, a flag bit saving module 202, a self-starting module 204, a notification module 205, and a data handling module 206.
The descriptor processing module 203: used to process the notifications stored in the self-starting module 204 and the notification module 205, to parse the acquired descriptors, and to determine whether the acquired descriptors are valid descriptors.
The notification module 205: for saving notifications sent by the scheduling module 100 over the PCIe bus 300.
The self-starting module 204: used to save notifications. If all of the preset number of descriptors parsed by the descriptor processing module 203 are valid descriptors, the notification obtained from the notification module 205 or the self-starting module 204 is saved back into the self-starting module 204.
The index saving module 201: when the descriptors acquired by the descriptor processing module 203 in one batch are all valid descriptors, the index saving module 201 is used to save the index of the last valid descriptor.
The flag bit saving module 202: used for saving the self-starting flag bit.
The data handling module 206: for saving the valid descriptors and for retrieving the data to be processed according to the content of the descriptors.
It is understood that each module included in the execution module 200 in fig. 4 may be a hardware module or a software module, and the present application is not limited thereto.
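One way to picture the state held by the blocks in fig. 4 is the C struct below: per-queue flag bits and indices next to two notification stores. The field names, widths and capacities are assumptions for illustration and do not reproduce actual hardware registers.

    /* exec_state.c - illustrative state corresponding to the modules of fig. 4 (assumed layout). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_QUEUES  64      /* assumed number of physical/virtual network card queues */
    #define FIFO_DEPTH  128     /* assumed depth of the notification and self-start stores */

    struct exec_module_state {
        /* flag bit saving module 202: one self-starting flag bit per queue */
        bool     self_start_flag[MAX_QUEUES];
        /* index saving module 201: index of the last valid descriptor per queue */
        uint32_t last_valid_index[MAX_QUEUES];
        /* notification module 205: notifications (queue identifications) saved in arrival order */
        uint16_t notify_fifo[FIFO_DEPTH];
        /* self-starting module 204: notifications saved back for queues still in self-start mode */
        uint16_t self_start_fifo[FIFO_DEPTH];
    };

    int main(void)
    {
        static struct exec_module_state s;
        s.notify_fifo[0] = 21;                    /* a notification for queue 1 of a virtual network card */
        s.self_start_flag[21 % MAX_QUEUES] = true;
        printf("queue 21 self-start flag: %d\n", s.self_start_flag[21 % MAX_QUEUES]);
        return 0;
    }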
For convenience of description, the following takes the case where the scheduling module 100 includes a processor, a controller, a memory, etc., and the execution module 200 is a network card as an example to describe the content of the embodiments of the present application in detail. Next, according to some embodiments of the present application, the technical solution of the present application is described with reference to fig. 4. Fig. 5 shows a flowchart of a data processing method; as shown in fig. 5, the scheduling module 100 interacts with queue 1 of the virtual network card 2001, and the data processing method includes:
501: the scheduling module 100 performs data interaction with the queue 1 of the virtual network card 2001.
In the embodiment of the present application, the scheduling module 100 establishes a physical connection with the physical network card 200 through the PCIe bus 300, and the scheduling module 100 performs data interaction with the queues of the virtual network cards. Specifically, according to user requirements, the scheduling module 100 interacts with a virtual network card, where the physical network card 200 presents a plurality of virtual network cards containing virtual functions to the scheduling module 100 through the SR-IOV technique, for example the virtual network card 2001, the virtual network card 2002, the virtual network card 2003, ..., and the virtual network card n. The functions that a virtual network card presents to the scheduling module 100 are the same as those that the physical network card 200 presents to the scheduling module 100; for example, a virtual network card may also include a plurality of queues, and the queues of a virtual network card may also interact with the scheduling module 100 and process data. Next, the data processing method when the scheduling module 100 interacts with queue 1 of the virtual network card 2001 is described, taking as an example the case where the scheduling module 100 generates data to be processed by queue 1 of the virtual network card 2001.
502: the front-end driver of the scheduling module 100 generates data to be processed and adds descriptors.
In the embodiment of the present application, the scheduling module 100 establishes a physical connection with the physical network card 200 through a PCIe bus, and the scheduling module 100 performs data interaction with queue 1 of the virtual network card 2001. The front-end driver running on the scheduling module 100 generates the data that needs to be processed by queue 1 of the virtual network card 2001, i.e., the data to be processed, and at the same time the front-end driver of the scheduling module 100 adds descriptors. The added descriptors are used to describe the information of the to-be-processed data of queue 1 of the virtual network card 2001. Specifically, the data to be processed generated by the front-end driver running on the scheduling module 100 consists of one or more data blocks to be processed and includes the data that has already been generated as well as the to-be-processed data that is yet to be generated. The scheduling module 100 therefore temporarily stores the data to be processed in contiguous memory of the scheduling module 100 and notifies the virtual network card 2001 to process it. The information of the data to be processed in the memory may be described by a plurality of descriptors, where the description information includes the address of the data to be processed in the scheduling module 100, its length, whether it is data to be received or data to be transmitted, and so on; the specific content described by a descriptor has been explained in detail in the descriptor definition above. It can be understood that the front-end driver of the scheduling module 100 may also generate data to be processed by a queue of the physical network card 200 and add descriptors, and may also generate data to be processed by a queue of the virtual network card 2002 or the virtual network card n and add descriptors, where the queue may be queue 1, queue 2, or queue m. The specific method is consistent with the method by which the front-end driver of the scheduling module 100 generates the to-be-processed data of queue 1 of the virtual network card 2001 and adds descriptors.
It can be understood that adding descriptors enables the multiple queues of the physical network card or of a virtual network card to process data in the order in which the to-be-processed data was generated. For a given queue of a given network card, for example queue 1 of the virtual network card 2001, data can be processed on a first-generated, first-processed basis. This effectively avoids situations in which, because the scheduling module 100 keeps generating to-be-processed data over a period of time while queue 1 of the virtual network card 2001 does not process it in time, the to-be-processed data accumulates, later-generated data is processed first, the data generated first is never processed, data processing is delayed, and user experience suffers.
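As an illustration of step 502 on the driver side, the sketch below fills packed-ring descriptors for newly generated data blocks and marks them available; the ring size, the wrap-counter handling and the helper names are assumptions of the sketch rather than details taken from the patent.

    /* add_desc.c - illustrative front-end driver step: describe pending data with descriptors. */
    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 256
    #define VIRTQ_DESC_F_WRITE  2
    #define VIRTQ_DESC_F_AVAIL  (1u << 7)
    #define VIRTQ_DESC_F_USED   (1u << 15)

    struct virtq_packed_desc { uint64_t addr; uint32_t len; uint16_t id; uint16_t flags; };

    static struct virtq_packed_desc ring[RING_SIZE];  /* contiguous memory in the scheduling module */
    static uint16_t next_slot;
    static int driver_wrap = 1;                       /* driver ring wrap counter (starts at 1) */

    /* Describe one block of pending data and mark the descriptor available to the device. */
    static void add_descriptor(uint64_t data_addr, uint32_t data_len, int device_writes)
    {
        struct virtq_packed_desc *d = &ring[next_slot];
        d->addr  = data_addr;                         /* address of the data to be processed   */
        d->len   = data_len;                          /* length of the data to be processed    */
        d->id    = next_slot;
        d->flags = (device_writes ? VIRTQ_DESC_F_WRITE : 0)
                 | (driver_wrap ? VIRTQ_DESC_F_AVAIL : 0)
                 | (driver_wrap ? 0 : VIRTQ_DESC_F_USED);   /* AVAIL = wrap, USED = !wrap */
        if (++next_slot == RING_SIZE) { next_slot = 0; driver_wrap ^= 1; }
    }

    int main(void)
    {
        add_descriptor(0x10000, 1500, 0);             /* data to transmit */
        add_descriptor(0x20000, 1500, 1);             /* buffer for data to receive */
        printf("descriptors added: flags[0]=0x%x flags[1]=0x%x\n", ring[0].flags, ring[1].flags);
        return 0;
    }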
503: the scheduling module 100 transmits a notification including the queue 1 information of the virtual network card 2001 to the virtual network card 2001.
In the embodiment of the present application, the front-end driver of the scheduling module 100 generates data that needs to be processed by queue 1 of the virtual network card 2001, the scheduling module 100 temporarily stores the data in contiguous memory of the scheduling module 100, and the scheduling module 100 needs to notify queue 1 of the virtual network card 2001 to process the data. The scheduling module 100 sends a notification containing the queue 1 information of the virtual network card 2001 to the physical network card 200 through the PCIe bus 300. Specifically, the notification containing the queue 1 information of the virtual network card 2001 may be a notification containing the identification information of queue 1 of the virtual network card 2001, where the identification information may be a combination of the ID of the virtual network card 2001 and the ID of queue 1.
In the embodiment of the present application, suppose for example that the physical network card 200 virtualizes two virtual network cards, the virtual network card 2001 and the virtual network card 2002, through the SR-IOV technique, and each network card has 2 queues; the identification information may be a combination of the ID of the network card and the ID of the queue. For example, the identification information of the physical network card 200 may be 1, that of the virtual network card 2001 may be 2, and that of the virtual network card 2002 may be 3, while the identification information of queue 1 may be 1 and that of queue 2 may be 2. Then the identification information of queue 1 of the physical network card 200 is 11, that of queue 2 of the physical network card 200 is 12, that of queue 1 of the virtual network card 2001 is 21, that of queue 2 of the virtual network card 2001 is 22, that of queue 1 of the virtual network card 2002 is 31, and that of queue 2 of the virtual network card 2002 is 32. It can be understood that, based on a notification containing the queue 1 information of the virtual network card 2001, the physical network card 200 can determine that the data to be processed should be handled by queue 1 of the virtual network card 2001, which helps the physical network card 200 recognize and process the notification.
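The identification scheme in this example can be captured by a couple of helper functions; the decimal encoding below simply mirrors the numbers given in the text (e.g. 21 for queue 1 of the virtual network card 2001) and is an illustrative assumption rather than a mandated format.

    /* ident.c - illustrative encoding of the "network card ID + queue ID" identification. */
    #include <stdio.h>

    static unsigned encode_ident(unsigned nic_id, unsigned queue_id)
    {
        return nic_id * 10 + queue_id;        /* e.g. network card 2, queue 1 -> 21 */
    }

    static void decode_ident(unsigned ident, unsigned *nic_id, unsigned *queue_id)
    {
        *nic_id   = ident / 10;
        *queue_id = ident % 10;
    }

    int main(void)
    {
        unsigned nic, queue;
        unsigned ident = encode_ident(2, 1);  /* queue 1 of the virtual network card 2001 */
        decode_ident(ident, &nic, &queue);
        printf("ident=%u -> network card %u, queue %u\n", ident, nic, queue);
        return 0;
    }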
It can be understood that the scheduling module 100 may also send a notification including multi-queue information of the virtual network card 2002 to the virtual network card 2002, and the scheduling module 100 may also send a notification including multi-queue information of the virtual network card n to the virtual network card n, where the multi-queue may be a queue 1, a queue 2, or a queue m. The specific method is the same as the method for the scheduling module 100 to send the notification containing the queue 1 information of the virtual network card 2001 to the physical network card 200, and is not described herein again.
504: the physical network card 200 stores a notification including the queue 1 information of the virtual network card 2001 in the notification module 205.
In the embodiment of the present application, the virtual network card 2001 receives, over the PCIe bus 300, the notification containing the queue 1 information of the virtual network card 2001 sent by the scheduling module 100, the notification is queued in the notification module 205 of the physical network card 200, and queue 1 of the virtual network card 2001 switches to the self-start mode. It can be understood that the virtual network card 2001 confirms, from the notification containing queue 1 of the virtual network card 2001, that queue 1 of the virtual network card 2001 has data to process. After queue 1 of the virtual network card 2001 has switched to the self-start mode, the scheduling module 100 no longer needs to send a notification for newly generated data that needs to be processed by queue 1 of the virtual network card 2001, and queue 1 of the virtual network card 2001 can actively process the data to be processed.
Further, the physical network card 200 stores the notification containing the queue 1 information of the virtual network card 2001 in the notification module 205, turns off further notifications, and sets the self-starting flag bit of queue 1 of the virtual network card 2001; for example, the self-starting flag bit of the execution module is set to valid, i.e., set to 1, and stored in the flag bit saving module 202. The self-starting flag bit indicates that a notification has been stored in the notification module 205 and is in the pending state. It is understood that the notification module 205 is used not only to store the notification containing the queue 1 information of the virtual network card 2001 but also to store notifications containing the multi-queue information of other network cards; for example, the notification module 205 may store a notification containing the multi-queue information of the physical network card 200, of the virtual network card 2002, or of the virtual network card n, where the multiple queues may be queue 1, queue 2, or queue m, and the idea is the same as for queue 1 of the virtual network card 2001, which is not repeated here. Following the principle of first saved, first processed, the descriptor processing module 203 processes the data processing notifications in the order in which they were saved. It can be understood that temporarily storing the notifications containing network-card multi-queue information in the notification module 205 helps the physical network card 200, so that its descriptor processing module 203 can still process the notifications sequentially and in order when there are many notification items or the hardware configuration is limited. In other embodiments of the present application, the physical network card 200 may further add a plurality of descriptor processing modules 203, so that notifications are processed in parallel when there are many notification items, improving data interaction performance.
Specifically, the virtual network card 2001 receives, over the PCIe bus 300, the notification containing the queue 1 information of the virtual network card 2001 sent by the scheduling module 100, and queue 1 of the virtual network card 2001 switches to the self-start mode. Once the virtual network card 2001 has received the data processing notification and queue 1 of the virtual network card 2001 has switched to the self-start mode, the descriptor processing module 203 processes the notification items in order and queue 1 of the virtual network card 2001 actively processes data. The notification items specifically include the data processing notifications stored in the notification module 205 and the data processing notifications stored in the self-starting module 204. It is understood that after queue 1 of the virtual network card 2001 has switched to the self-start mode, the descriptor processing module 203 can efficiently process the notifications in the notification module 205 and the self-starting module 204 in a time-shared manner.
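A simple FIFO captures the "first saved, first processed" behaviour of the notification storage described above; the depth, the stored value (the queue identification) and the function names are assumptions for this sketch.

    /* notify_fifo.c - illustrative first-saved, first-processed notification storage. */
    #include <stdio.h>

    #define DEPTH 8

    static unsigned fifo[DEPTH];
    static unsigned head, tail;                 /* head: next to process, tail: next free slot */

    static int save_notification(unsigned ident)
    {
        if (tail - head == DEPTH) return -1;    /* notification store full */
        fifo[tail % DEPTH] = ident;
        tail++;
        return 0;
    }

    static int next_notification(unsigned *ident)
    {
        if (head == tail) return -1;            /* nothing pending */
        *ident = fifo[head % DEPTH];
        head++;
        return 0;
    }

    int main(void)
    {
        unsigned ident;
        save_notification(21);                  /* queue 1 of the virtual network card 2001 */
        save_notification(31);                  /* queue 1 of the virtual network card 2002 */
        while (next_notification(&ident) == 0)
            printf("processing notification for queue %u\n", ident);
        return 0;
    }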
505: the descriptor processing module 203 of the physical network card 200 acquires and processes the notification containing the queue 1 information of the virtual network card 2001.
In an embodiment of the present application, the descriptor processing module 203 fetches a notification stored in the notification module 205. For example, the descriptor processing module 203 fetches the notification containing the queue 1 information of the virtual network card 2001 from the notification module 205 and clears the self-starting flag bit of queue 1 of the virtual network card 2001 stored in the flag bit saving module 202, for example by setting it to 0. The descriptor processing module 203 then processes the notification, i.e., performs step 506. Specifically, the descriptor processing module 203 acquires the notification containing the queue 1 information of the virtual network card 2001 stored in the notification module 205, and, according to the identification information of queue 1 of the virtual network card 2001, acquires the context information of queue 1 of the virtual network card 2001 and performs step 506. The context information of queue 1 of the virtual network card 2001 includes the base address of queue 1 of the virtual network card 2001 in the memory, the size of queue 1 of the virtual network card 2001, the preset number of descriptors to fetch, and so on. It can be understood that after queue 1 of the virtual network card 2001 enters the self-start mode, the descriptor processing module 203 acquires and processes the notifications containing network-card multi-queue information in the order in which the data processing notifications were saved by the notification module 205. The descriptor processing module 203 may likewise acquire and process notifications containing the multi-queue information of the physical network card, of the virtual network card 2002, or of the virtual network card n, where the multiple queues may be queue 1, queue 2, or queue m; the process is the same as that for queue 1 of the virtual network card 2001 and is not repeated here.
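A queue-context lookup such as the one used in step 505 might look like the following; the context fields are taken from the list above, while the table layout, the example values and the function names are assumptions of the sketch.

    /* queue_ctx.c - illustrative lookup of per-queue context by identification information. */
    #include <stdint.h>
    #include <stdio.h>

    struct queue_context {
        unsigned ident;          /* identification information, e.g. 21          */
        uint64_t desc_base;      /* base address of the queue's descriptors      */
        uint32_t queue_size;     /* number of descriptor slots in the queue      */
        uint32_t preset_count;   /* how many descriptors to fetch in one batch   */
    };

    static struct queue_context contexts[] = {
        { .ident = 21, .desc_base = 0x100000, .queue_size = 256, .preset_count = 10 },
        { .ident = 22, .desc_base = 0x200000, .queue_size = 256, .preset_count = 10 },
    };

    static const struct queue_context *lookup_context(unsigned ident)
    {
        for (size_t i = 0; i < sizeof(contexts) / sizeof(contexts[0]); i++)
            if (contexts[i].ident == ident)
                return &contexts[i];
        return NULL;
    }

    int main(void)
    {
        const struct queue_context *ctx = lookup_context(21);
        if (ctx)
            printf("queue 21: base=0x%llx size=%u batch=%u\n",
                   (unsigned long long)ctx->desc_base, ctx->queue_size, ctx->preset_count);
        return 0;
    }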
506: the descriptor processing module 203 of the physical network card 200 sends a request for reading the descriptors, and the descriptors with the preset number can be read at one time.
In the embodiment of the present application, according to the context information of queue 1 of the virtual network card 2001, the descriptor processing module 203 of the physical network card 200 initiates a descriptor read request to the scheduling module 100 through the PCIe bus 300 and reads the preset number of descriptors at a time from the memory of the scheduling module 100. Specifically, based on the virtio 1.1 protocol, descriptors are stored in the memory of the scheduling module 100, where the memory storing the descriptors is represented by a descriptor table. The descriptor processing module 203 therefore reads the descriptors describing the to-be-processed data of queue 1 of the virtual network card 2001 from contiguous memory through the PCIe bus 300, reading the preset number of descriptors describing the to-be-processed data information of queue 1 of the virtual network card 2001 from the memory of the scheduling module 100 at a time. It can be understood that the descriptors read by the descriptor processing module 203 of the physical network card 200 may also be a preset number of descriptors describing the to-be-processed data information of the multiple queues of the virtual network card 2002 or of the virtual network card n, which is not repeated here.
507: the descriptor processing module 203 of the physical network card 200 parses whether the descriptors of the acquired preset number are all valid descriptors.
In the embodiment of the present application, the descriptor processing module 203 of the physical network card 200 parses the preset number of descriptors read for the to-be-processed data of queue 1 of the virtual network card 2001 and determines whether each descriptor is a valid descriptor. If all of the parsed preset number of descriptors are valid descriptors, step 508 is performed. For example, with a preset number of 10, the descriptor processing module 203 reads 10 descriptors describing the to-be-processed data of queue 1 of the virtual network card 2001; if it parses all 10 as valid descriptors, step 508 is executed. If, for instance, the 5th descriptor is parsed as an invalid descriptor, the remaining 5 descriptors may be determined to be invalid as well. It can be understood that, whether or not the descriptor processing module 203 encounters an invalid descriptor, it will continue to process the data processing notifications concerning other network card multi-queues in the notification module 205 or the self-start module 204.
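The validity check of step 507 may be pictured as a single pass over the batch that stops at the first invalid entry, as in the sketch below. The flag bit positions follow the virtio 1.1 packed-ring convention (AVAIL at bit 7, USED at bit 15); treating the AVAIL/USED wrap-counter test as the validity criterion is an assumption made for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

#define DESC_F_AVAIL (1u << 7)    /* bit positions as in the virtio 1.1 packed ring */
#define DESC_F_USED  (1u << 15)

struct vring_desc { uint64_t addr; uint32_t len; uint16_t id; uint16_t flags; };

/* A descriptor is treated as valid (data waiting to be processed) when its
 * AVAIL bit matches the expected wrap counter and its USED bit does not. */
static bool desc_is_valid(const struct vring_desc *d, bool wrap)
{
    bool avail = (d->flags & DESC_F_AVAIL) != 0;
    bool used  = (d->flags & DESC_F_USED)  != 0;
    return avail == wrap && used != wrap;
}

/* Step 507 (sketch): count the leading valid descriptors in the batch.
 * Once one descriptor is invalid, the rest of the batch is not expected
 * to be valid either, so the scan stops there. */
static uint32_t count_valid(const struct vring_desc *batch, uint32_t n, bool wrap)
{
    uint32_t i = 0;
    while (i < n && desc_is_valid(&batch[i], wrap))
        i++;
    return i;   /* i == n means all valid, so step 508 follows */
}
```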
In the embodiment of the present application, for example, the descriptor processing module 203 of the physical network card 200 parses the acquired preset number of descriptors describing the to-be-processed data information of queue 1 of the virtual network card 2001 to determine whether they are all valid descriptors. If the preset number of descriptors includes invalid descriptors, the physical network card 200 re-enables the notification containing the queue 1 information of the virtual network card 2001 and sets the self-start flag bit of queue 1 of the virtual network card 2001 in the flag bit storage module 202 to 1. According to the self-start working mechanism, control returns to the descriptor processing module 203 to process the enabled notification of the descriptors of queue 1 of the virtual network card 2001, and the contents of step 506 and step 507 are executed again. If the preset number of descriptors obtained by the descriptor processing module 203 contains no valid descriptor, the self-start flag bit of queue 1 of the virtual network card stored in the flag bit storage module 202 is cleared; that is, the descriptor processing module 203 no longer actively processes the notification containing the queue 1 information of the virtual network card 2001, and resumes processing it only when the scheduling module 100 transmits a notification of the descriptors of queue 1 of the virtual network card 2001. It can be understood that the descriptor processing module 203 of the physical network card 200 may likewise parse the acquired preset number of descriptors describing the to-be-processed data information of the multiple queues of the virtual network card 2002, or of the virtual network card n, to determine whether they are all valid descriptors; if the preset number of descriptors includes invalid descriptors, the handling is consistent with the method described above for the descriptors of queue 1 of the virtual network card 2001 and is not repeated here.
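One possible reading of the fallback path described above is sketched below with invented names: when the batch contains invalid descriptors the notification toward the scheduling module is re-enabled, the queue is retried through the self-start path if at least one descriptor was valid, and the self-start flag is cleared so the queue waits for a fresh notification if none was valid.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical per-queue control bits; names are illustrative only. */
struct queue_state {
    bool notify_enable;    /* 1: the scheduling module may send notifications          */
    bool self_start_flag;  /* 1: a pending notification sits in the self-start store   */
};

/* Sketch of the branch taken when step 507 finds invalid descriptors.
 * 'valid' is the count returned by the step-507 scan, 'batch' the preset number. */
static void on_incomplete_batch(struct queue_state *q, uint32_t valid, uint32_t batch)
{
    if (valid < batch) {
        q->notify_enable = true;         /* re-enable ("open") the notification           */
        if (valid == 0)
            q->self_start_flag = false;  /* stop polling; wait for a new notification     */
        else
            q->self_start_flag = true;   /* re-run steps 506/507 via the self-start path  */
    }
}
```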
It is to be understood that the enabled notification of the descriptors of queue 1 of the virtual network card 2001 is consistent in content with the notification containing the queue 1 information of the virtual network card 2001 stored in the notification module 205 in step 504, that is, a notification carrying the identification information of queue 1 of the virtual network card 2001. The identification information may be a combination of the ID of the virtual network card 2001 and the ID of queue 1. In this step, setting the self-start flag bit of queue 1 of the virtual network card 2001 to 1 indicates that the notification has been saved in the self-start module 204 and is in a pending state.
508: The physical network card 200 saves the valid descriptors in the data carrying module 206, saves the notification containing the queue 1 information of the virtual network card 2001 in the self-start module 204, and the data carrying module 206 acquires and processes the to-be-processed data described by the descriptors.
In the embodiment of the present application, for example, the descriptor processing module 203 of the physical network card 200 acquires and parses the preset number of descriptors describing the to-be-processed data information of queue 1 of the virtual network card 2001 to determine whether they are all valid descriptors; if all of the preset number of descriptors are valid descriptors, the physical network card 200 stores the valid descriptors in the data carrying module 206.
In the embodiment of the present application, the physical network card 200 further stores in the index saving module 201 the index of the last valid descriptor among the preset number of descriptors describing the to-be-processed data information of queue 1 of the virtual network card 2001. It can be understood that storing this index in the index saving module 201 makes it possible to determine the position up to which the descriptor processing module 203 has processed descriptors in the descriptor table.
In the embodiment of the present application, the physical network card 200 stores the notification containing the queue 1 information of the virtual network card 2001 in the self-start module 204 and sets the self-start flag bit of queue 1 of the virtual network card 2001 stored in the flag bit storage module 202 to 1. It can be understood that the notification containing the queue 1 information of the virtual network card 2001 stored in the self-start module 204 is consistent in content with the notification stored in the notification module 205 in step 504, that is, it contains the identification information of queue 1 of the virtual network card 2001, which may be a combination of the ID of the virtual network card 2001 and the ID of queue 1. It can also be understood that the physical network card 200 may likewise store a notification containing the multi-queue information of the virtual network card 2002 in the self-start module 204 and set the corresponding self-start flag bit in the flag bit storage module 202 to 1, or store a notification containing the multi-queue information of the virtual network card n in the self-start module 204 and set the corresponding self-start flag bit to 1, which is not repeated here.
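Continuing the illustrative sketches, the bookkeeping of step 508 might look as follows: the valid descriptors are handed to a stand-in for the data carrying module 206, the index of the last valid descriptor is recorded for the index saving module 201, and the notification is parked in a stand-in for the self-start module 204 with the self-start flag set to 1. All structure and function names here are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

struct vring_desc { uint64_t addr; uint32_t len; uint16_t id; uint16_t flags; };
struct notif      { uint16_t nic_id; uint16_t queue_id; };

#define BATCH_MAX 32

/* Hypothetical stand-ins for modules 206 (data carrying), 201 (index saving)
 * and 204 (self-start) of the physical network card. */
struct carry_store     { struct vring_desc descs[BATCH_MAX]; uint32_t count; };
struct index_store     { uint32_t last_valid_idx; };
struct selfstart_store { struct notif pending; bool flag; };

/* Step 508 (sketch): commit a fully valid batch and arm the self-start path. */
static void commit_batch(const struct vring_desc *batch, uint32_t n,
                         uint32_t start_idx, const struct notif *note,
                         struct carry_store *carry, struct index_store *idx,
                         struct selfstart_store *ss)
{
    for (uint32_t i = 0; i < n && i < BATCH_MAX; i++)
        carry->descs[i] = batch[i];            /* hand the descriptors to module 206 */
    carry->count = n;

    idx->last_valid_idx = start_idx + n - 1;   /* where the next read resumes */

    ss->pending = *note;                       /* same content as the step-504 notification */
    ss->flag = true;                           /* self-start flag bit set to 1 */
}
```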
Further, in the embodiment of the present application, according to the mechanism by which the physical network card 200 actively acquires data to be processed, when the turn comes for the descriptor processing module 203 to acquire and process the notification containing the queue 1 information of the virtual network card 2001 in the self-start module 204, the method is the same as that in step 505 for acquiring and processing the notification containing the queue 1 information of the virtual network card 2001 from the notification module 205, and the self-start flag bit of queue 1 of the virtual network card 2001 in the flag bit storage module 202 is set to 0. The descriptor processing module 203 then acquires the descriptors of the to-be-processed data of queue 1 of the virtual network card 2001 from the memory of the scheduling module 100 and executes the contents of step 507 and step 508.
In the embodiment of the present application, the data carrying module 206 processes the to-be-processed data according to the descriptors describing the to-be-processed data information of queue 1 of the virtual network card 2001. It can be understood that a descriptor is used to describe the address, length, read-write state, and other information of the to-be-processed data in the memory of the scheduling module 100, and the write flag of the descriptor describes the transfer direction of the to-be-processed data: when the write flag of a descriptor describing the to-be-processed data information of queue 1 of the virtual network card 2001 is 1, the data carrying module 206 carries the to-be-processed data received by queue 1 of the virtual network card 2001, and when the write flag is 0, the data carrying module 206 carries the to-be-processed data sent by queue 1 of the virtual network card 2001. The data carrying module 206 may also process the data of the multiple queues of the virtual network card 2002 or of the virtual network card n; the specific method is the same as that in step 508 and is not repeated here.
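The direction dispatch performed by the data carrying module 206 can be summarised as below; the write-flag convention (1 for data received by the queue, 0 for data sent by the queue) follows the paragraph above, while the DESC_F_WRITE bit position is borrowed from virtio and the helper names are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define DESC_F_WRITE (1u << 1)   /* virtio write flag: the device writes into the buffer */

struct vring_desc { uint64_t addr; uint32_t len; uint16_t id; uint16_t flags; };

/* Hypothetical data-carrying routines standing in for module 206. */
static void carry_rx(uint64_t addr, uint32_t len)
{
    printf("receive %u bytes at 0x%llx\n", (unsigned)len, (unsigned long long)addr);
}

static void carry_tx(uint64_t addr, uint32_t len)
{
    printf("send %u bytes at 0x%llx\n", (unsigned)len, (unsigned long long)addr);
}

/* Sketch: choose the transfer direction from the descriptor's write flag. */
static void handle_data(const struct vring_desc *d)
{
    if (d->flags & DESC_F_WRITE)
        carry_rx(d->addr, d->len);   /* write flag 1: data received by the queue */
    else
        carry_tx(d->addr, d->len);   /* write flag 0: data sent by the queue */
}
```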
Fig. 6 illustrates a block diagram of a data processing apparatus 600 according to some embodiments of the present application. As shown in fig. 6, the apparatus specifically includes:
a generating module (602), configured to generate the data to be processed and a descriptor corresponding to the data to be processed, wherein the descriptor is used for describing information of the data to be processed;
a sending module (604), configured to send a data processing notification, wherein the data processing notification is used for notifying the processing of the data to be processed;
a receiving and storing module (606), configured to receive and store the data processing notification, acquire a preset number of descriptors based on the data processing notification, and acquire and process the data to be processed according to the acquired descriptors; and
a judging module (608), configured to set a notification enable flag bit to be invalid and continue to acquire the preset number of descriptors when it is judged that the preset number of descriptors are all valid descriptors, and to set the notification enable flag bit to be valid when it is judged that the preset number of descriptors contain invalid descriptors.
In an embodiment of the present application, the data processing apparatus 600 further has the following features. The information of the data to be processed includes the address information, the length information, and the read-write information of the data to be processed. The notification enable flag bit governs the sending of data processing notifications: the scheduling module sends a data processing notification when the notification enable flag bit is valid and data to be processed is generated. The data processing notification is saved in the execution module, and the self-start flag bit is set to be valid when the data processing notification has been saved in the execution module. The data processing notification contains identification information, based on which the preset number of descriptors is acquired, where the identification information is used to uniquely identify the execution module. When the preset number of descriptors are all valid descriptors, the index of the last valid descriptor is saved, the index indicating the position information of the last valid descriptor. When it is determined that the preset number of descriptors contains invalid descriptors and the notification enable flag bit is valid, the preset number of descriptors is acquired again based on the data processing notification; and when the preset number of descriptors is determined to contain invalid descriptors, the data processing notification and the self-start flag bit are cleared.
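Read together, the notification enable flag bit and the self-start flag bit form a small two-bit state machine. The sketch below, with invented names, illustrates how an all-valid batch keeps the execution module polling on its own, while a batch containing invalid descriptors hands control back to the scheduling module; it is an illustration, not the claimed implementation.

```c
#include <stdbool.h>

/* Hypothetical per-queue flag bits of the execution module (names invented). */
struct flag_bits {
    bool notify_enable;  /* valid: the scheduling module sends data processing notifications  */
    bool self_start;     /* valid: a saved notification is pending inside the execution module */
};

/* All descriptors in the batch were valid: keep polling on the execution
 * module side, no notification is needed from the scheduling module. */
static void on_all_valid(struct flag_bits *f)
{
    f->notify_enable = false;  /* notification enable flag bit set to invalid */
    f->self_start    = true;   /* keep acquiring the preset number of descriptors */
}

/* The batch contained invalid descriptors: hand control back to the scheduler. */
static void on_has_invalid(struct flag_bits *f)
{
    f->notify_enable = true;   /* notification enable flag bit set to valid again */
}
```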
It can be understood that the data processing apparatus 600 shown in fig. 6 corresponds to the data processing method provided in the present application; the technical details in the above detailed description of the data processing method remain applicable to the data processing apparatus 600 shown in fig. 6 and are not repeated here.
The present application also provides a machine-readable medium having stored thereon instructions which, when executed on a machine, cause the machine to perform the above-described data processing method.
Fig. 7 is a block diagram illustrating an example electronic device 700 according to some embodiments of the present application. The electronic device 700 may include an execution module 200, a scheduling module 100, and a bus 300.
In some embodiments, electronic device 700 may include one or more processors 704, system control logic 708 coupled to at least one of the processors 704, system memory 712 coupled to the system control logic 708, non-volatile memory (NVM) 716 coupled to the system control logic 708, and a network interface 720 coupled to the system control logic 708.
In some embodiments, processor 704 may include one or more single-core or multi-core processors. In some embodiments, the processor 704 may include any combination of general-purpose processors and special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.).
In some embodiments, system control logic 708 may include any suitable interface controllers to provide any suitable interface to at least one of processors 704 and/or any suitable device or component in communication with system control logic 708.
In some embodiments, system control logic 708 may include one or more memory controllers to provide an interface to system memory 712. System memory 712 may be used to load and store data and/or instructions. In some embodiments, the system memory 712 of the electronic device 700 may include any suitable volatile memory, such as a suitable dynamic random access memory (DRAM).
NVM/storage 716 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions. In some embodiments, the NVM/storage 716 may include any suitable non-volatile memory, such as flash memory, and/or any suitable non-volatile storage device, such as at least one of an HDD (Hard Disk Drive), a CD (Compact Disc) drive, or a DVD (Digital Versatile Disc) drive.
NVM/storage 716 may include a portion of the storage resources of the apparatus on which the electronic device 700 is installed, or it may be accessible by, but not necessarily a part of, the device. For example, the NVM/storage 716 may be accessed over a network via the network interface 720.
In particular, system memory 712 and NVM/storage 716 may each include: a temporary copy and a permanent copy of the instructions 724. The instructions 724 may include: instructions that, when executed by at least one of the processors 704, cause the electronic device 700 to implement a method as shown in fig. 5. In some embodiments, the instructions 724, hardware, firmware, and/or software components thereof may additionally/alternatively be located in the system control logic 708, the network interface 720, and/or the processor 704.
Network interface 720 may include a transceiver to provide a radio interface for electronic device 700 to communicate with any other suitable device (e.g., front end module, antenna, etc.) over one or more networks. In some embodiments, the network interface 720 may be integrated with other components of the electronic device 700. For example, the network interface 720 may be integrated into at least one of the processors 704, the system memory 712, the NVM/storage 716, and a firmware device (not shown) having instructions that, when executed by at least one of the processors 704, cause the electronic device 700 to implement the data processing method shown in fig. 5.
Network interface 720 may further include any suitable hardware and/or firmware to provide a multiple-input multiple-output radio interface. For example, network interface 720 may be a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.
In one embodiment, at least one of the processors 704 may be packaged together with logic for one or more controllers of system control logic 708 to form a System In Package (SiP). In one embodiment, at least one of the processors 704 may be integrated on the same chip with logic for one or more controllers of system control logic 708 to form a system on a chip (SoC).
The electronic device 700 may further include input/output (I/O) devices 732. The I/O devices 732 may include a user interface designed to enable a user to interact with the electronic device 700, and a peripheral component interface designed to enable peripheral components to interact with the electronic device 700 as well. In some embodiments, the electronic device 700 further includes a sensor for determining at least one of environmental conditions and location information associated with the electronic device 700.
Fig. 8 shows a block diagram of a SoC (System on Chip) 800 according to an embodiment of the present application. The execution module 200 may be the system on chip 800. In fig. 8, similar components bear the same reference numerals, and the dashed boxes indicate optional features of more advanced SoCs. In fig. 8, the SoC 800 includes: an interconnect unit 850 coupled to the application processor 810; a system agent unit 870; a bus controller unit 880; an integrated memory controller unit 840; a set of one or more coprocessors 820, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 830; and a direct memory access (DMA) unit 860. In one embodiment, the coprocessor 820 includes a special-purpose processor, such as a network or communication processor, a compression engine, a GPU, a high-throughput MIC processor, an embedded processor, or the like.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application Specific Integrated Circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. Program code may also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in this application are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used to transmit information over the Internet in the form of electrical, optical, acoustical, or other propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some features of the structures or methods may be shown in a particular arrangement and/or order. However, it is to be understood that such specific arrangement and/or ordering may not be required. Rather, in some embodiments, the features may be arranged in a manner and/or order different from that shown in the illustrative figures. In addition, the inclusion of a structural or methodical feature in a particular figure is not meant to imply that such feature is required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the apparatus embodiments of the present application, each unit/module is a logical unit/module. Physically, one logical unit/module may be one physical unit/module, may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules; the physical implementation of the logical unit/module itself is not what matters most, and the combination of the functions implemented by these logical units/modules is the key to solving the technical problem addressed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above apparatus embodiments do not introduce units/modules that are less closely related to solving the technical problem addressed by the present application, which does not mean that no other units/modules exist in the above apparatus embodiments.
It is noted that, in the examples and description of this patent, relational terms such as first and second are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.

Claims (14)

1. A data processing method for use in a data processing system, the data processing system comprising: the system comprises a scheduling module and an execution module, wherein the execution module includes but is not limited to a physical network card and a virtual network card, and the method comprises the following steps:
the scheduling module generates data to be processed and a descriptor corresponding to the data to be processed, wherein the descriptor is used for describing information of the data to be processed;
the scheduling module sends a data processing notification, wherein the data processing notification is used for notifying the processing of the to-be-processed data;
the execution module receives and stores a data processing notification, and acquires a preset number of descriptors based on the data processing notification, wherein the to-be-processed data is acquired and processed according to the acquired descriptors;
the execution module sets a notification enable flag bit to be invalid and continues to acquire the preset number of descriptors when judging that the preset number of descriptors are all valid descriptors, wherein the valid descriptors are used for indicating that data to be processed is available, and the notification enable flag bit being invalid is used for indicating that the scheduling module does not send a data processing notification;
the execution module sets the notification enable flag bit to be valid when judging that the preset number of descriptors include invalid descriptors, wherein the invalid descriptors are used for indicating that there is no data to be processed waiting to be processed, and the scheduling module sends the data processing notification when the notification enable flag bit is valid and the data to be processed is generated.
2. The data processing method according to claim 1, wherein the information of the data to be processed includes address information of the data to be processed, length information of the data to be processed, and read-write information of the data to be processed.
3. The data processing method of claim 1, wherein the data processing notification is saved in the execution module,
and a self-starting flag bit is set to be valid under the condition that the data processing notification is saved in the execution module, wherein the self-starting flag bit being set to be valid is used for indicating that the data processing notification needs to be processed.
4. The data processing method according to claim 1, wherein the data processing notification contains identification information, and a preset number of the descriptors are acquired based on the identification information, wherein the identification information is used for uniquely identifying the execution module.
5. The data processing method according to claim 1, wherein in case that all of a preset number of said descriptors are judged to be said valid descriptors,
and saving an index of the last effective descriptor, wherein the index is used for indicating the position information of the last effective descriptor.
6. The data processing method of claim 3, further comprising:
in case that it is judged that a preset number of descriptors include the invalid descriptor, and in case that the notification enable flag bit is valid,
and acquiring preset value numbers of the descriptors again based on the data processing notification, and clearing the data processing notification and the self-starting zone bit when the preset value numbers of the descriptors are judged to contain the invalid descriptors.
7. A data processing apparatus, comprising:
the generating module is used for generating data to be processed and descriptors corresponding to the data to be processed, wherein the descriptors are used for describing information of the data to be processed;
the sending module is used for sending a data processing notification, wherein the data processing notification is used for notifying the processing of the data to be processed;
the receiving and storing module is used for receiving and storing a data processing notice and acquiring a preset number of descriptors based on the data processing notice, wherein the to-be-processed data is acquired and processed according to the acquired descriptors;
the system comprises a judging module, a scheduling module and a sending module, wherein the judging module is used for setting a notification enabling flag bit to be invalid when judging that the descriptors with preset number are all valid descriptors, continuously acquiring the descriptors with preset number, and setting the notification enabling flag bit to be valid when judging that the descriptors with preset number contain invalid descriptors, wherein the valid descriptors are used for representing to-be-processed data waiting for processing, the notification enabling flag bit is invalid and used for representing that the scheduling module does not send a data processing notification, the invalid descriptors are used for representing the to-be-processed data not waiting for processing, and the scheduling module sends the data processing notification when the notification enabling flag bit is valid and the to-be-processed data is generated.
8. The data processing apparatus according to claim 7, wherein the information of the data to be processed includes address information of the data to be processed, length information of the data to be processed, and read-write information of the data to be processed.
9. The data processing apparatus of claim 7, wherein the data processing notification is saved in an execution module,
and a self-starting flag bit is set to be valid under the condition that the data processing notification is saved in the execution module, wherein the self-starting flag bit being set to be valid is used for indicating that the data processing notification needs to be processed.
10. The data processing apparatus of claim 7, wherein the data processing notification contains identification information, and a preset number of the descriptors are obtained based on the identification information, wherein the identification information is used for uniquely identifying the execution module.
11. The data processing apparatus according to claim 7, wherein in a case where it is determined that a preset number of the descriptors are all the valid descriptors,
and saving an index of the last effective descriptor, wherein the index is used for indicating the position information of the last effective descriptor.
12. The data processing apparatus of claim 9, further comprising:
in case it is judged that a preset number of the descriptors include the invalid descriptor, and in case the notification enable flag is valid,
and acquiring preset value numbers of the descriptors again based on the data processing notification, and clearing the data processing notification and the self-starting zone bit when the preset value numbers of the descriptors are judged to contain the invalid descriptors.
13. A machine-readable medium having stored thereon instructions which, when executed on a machine, cause the machine to perform the data processing method of any one of claims 1 to 6.
14. An electronic device, comprising:
a memory to store instructions;
a processor coupled to the memory, wherein the electronic device performs the data processing method of any one of claims 1 to 6 when the program instructions stored in the memory are executed by the processor.
CN202011595902.6A 2020-12-29 2020-12-29 Data processing method and device, readable medium and electronic equipment Active CN112650558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011595902.6A CN112650558B (en) 2020-12-29 2020-12-29 Data processing method and device, readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011595902.6A CN112650558B (en) 2020-12-29 2020-12-29 Data processing method and device, readable medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN112650558A CN112650558A (en) 2021-04-13
CN112650558B true CN112650558B (en) 2022-07-05

Family

ID=75364473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011595902.6A Active CN112650558B (en) 2020-12-29 2020-12-29 Data processing method and device, readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112650558B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225577B (en) * 2022-09-20 2022-12-27 深圳市明源云科技有限公司 Data processing control method and device, electronic equipment and readable storage medium
CN116578234B (en) * 2023-04-27 2023-11-14 珠海妙存科技有限公司 Flash memory access system and method
CN117009265B (en) * 2023-09-28 2024-01-09 北京燧原智能科技有限公司 Data processing device applied to system on chip
CN117411842B (en) * 2023-12-13 2024-02-27 苏州元脑智能科技有限公司 Event suppression method, device, equipment, heterogeneous platform and storage medium


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101015187B (en) * 2004-07-14 2011-05-11 国际商业机器公司 Apparatus and method for supporting connection establishment in an offload of network protocol processing
CN101539902A (en) * 2009-05-05 2009-09-23 中国科学院计算技术研究所 DMA device for nodes in multi-computer system and communication method
US9141651B1 (en) * 2012-07-31 2015-09-22 Quantcast Corporation Adaptive column set composition
CN110049070A (en) * 2018-01-15 2019-07-23 华为技术有限公司 Event notification method and relevant device
CN110825485A (en) * 2018-08-07 2020-02-21 华为技术有限公司 Data processing method, equipment and server
CN109117288A (en) * 2018-08-15 2019-01-01 无锡江南计算技术研究所 A kind of message optimisation method of low latency bypass
CN111381946A (en) * 2018-12-29 2020-07-07 上海寒武纪信息科技有限公司 Task processing method and device and related product
CN111190842A (en) * 2019-12-30 2020-05-22 Oppo广东移动通信有限公司 Direct memory access, processor, electronic device, and data transfer method

Also Published As

Publication number Publication date
CN112650558A (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN112650558B (en) Data processing method and device, readable medium and electronic equipment
EP3754498B1 (en) Architecture for offload of linked work assignments
US20210216453A1 (en) Systems and methods for input/output computing resource control
EP3211530B1 (en) Virtual machine memory management method, physical main machine, pcie device and configuration method therefor, and migration management device
US7849214B2 (en) Packet receiving hardware apparatus for TCP offload engine and receiving system and method using the same
US20090055831A1 (en) Allocating Network Adapter Resources Among Logical Partitions
US8996774B2 (en) Performing emulated message signaled interrupt handling
EP3029912A1 (en) Remote accessing method for device, thin client, and virtual machine
US11074203B2 (en) Handling an input/output store instruction
CN115344226B (en) Screen projection method, device, equipment and medium under virtualization management
CN111290979B (en) Data transmission method, device and system
CN110851276A (en) Service request processing method, device, server and storage medium
CN116774933A (en) Virtualization processing method of storage device, bridging device, system and medium
CN109857553B (en) Memory management method and device
US8984179B1 (en) Determining a direct memory access data transfer mode
CN114020529A (en) Backup method and device of flow table data, network equipment and storage medium
CN115904259B (en) Processing method and related device of nonvolatile memory standard NVMe instruction
CN116954675A (en) Used ring table updating method and module, back-end equipment, medium, equipment and chip
CN207424866U (en) A kind of data communication system between kernel based on heterogeneous multi-nucleus processor
US10284501B2 (en) Technologies for multi-core wireless network data transmission
US20110258282A1 (en) Optimized utilization of dma buffers for incoming data packets in a network protocol
CN113296972A (en) Information registration method, computing device and storage medium
CN115297169B (en) Data processing method, device, electronic equipment and medium
CN117389685B (en) Virtual machine thermal migration dirty marking method and device, back-end equipment and chip thereof
CN115580644B (en) Method, apparatus, device and storage medium for communication between client systems in host

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant